Build a Recommendation Engine with altor-vec
What this pattern solves: Related-item retrieval for storefronts, feeds, and in-app suggestion modules.
Recommendation surfaces often need a 'more like this' answer before you have enough behavioral data for collaborative filtering. Vector similarity gives you a content-based recommendation layer that can ship with the product itself.
Running the retrieval step in the client keeps related-item modules fast and cheap, especially for offline-first catalogs or content collections that barely change during a session.
Install
npm install altor-vec
Concept explanation
In a recommendation workflow, the starting point is usually an item or a short free-text description rather than an exact keyword. That is why vector search works well here: each record is turned into an embedding, the embeddings are indexed once, and later queries retrieve the nearest semantic neighbors instead of relying only on exact tokens. In practice this means the surface can respond to paraphrases, shorthand, and partial descriptions far better than a literal-only match.
The browser is often the right place to do this when the corpus is moderate in size and safe to ship. The instant benefit is lower latency. The architectural benefit is that you remove a whole search service from the request path. That matters for keystroke-heavy interactions, offline-capable apps, and product surfaces where search should feel like a UI primitive rather than a network round trip.
This page uses a deterministic embedding helper so the sample is runnable with only altor-vec installed. That keeps the example honest and easy to paste into a demo project. Many teams start with text or metadata embeddings, then blend in clicks and sales signals later. altor-vec still handles the nearest-neighbor step after you compute those vectors.
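If you do add behavioral signals later, one simple option is to append a few signal dimensions to each content vector before indexing, as in the sketch below. This is illustrative only: blendWithSignals, the clicks and purchases fields, and the 0.25 weight are assumptions made for this page, not part of altor-vec.

// Illustrative only: append behavioral dimensions to a precomputed content
// vector, then re-normalize so blended and pure content vectors stay comparable.
// The signal names and the 0.25 weight are hypothetical stand-ins.
function blendWithSignals(contentVector, { clicks = 0, purchases = 0 } = {}, weight = 0.25) {
  const signals = [Math.log1p(clicks), Math.log1p(purchases)].map((value) => value * weight);
  const blended = [...contentVector, ...signals];
  const magnitude = Math.hypot(...blended) || 1;
  return blended.map((value) => value / magnitude);
}

// Note: if you index blended vectors, the vector length grows by two, so pass
// that larger dimension to from_vectors so the index and queries agree.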
Runnable JavaScript example
The following snippet indexes a small in-memory dataset, performs a semantic lookup for 'lightweight travel backpack', and prints the nearest matches. It uses the real altor-vec API, including init(), WasmSearchEngine.from_vectors(), and search().
import init, { WasmSearchEngine } from 'altor-vec';

const dims = 12;

const records = [
  { title: 'Urban commuter backpack', text: 'Slim weatherproof bag for laptops, cables, and daily train commutes.', meta: 'bags' },
  { title: 'Carry-on travel duffel', text: 'Soft-sided weekender with shoe pocket and overhead-bin size.', meta: 'bags' },
  { title: 'Minimal hiking daypack', text: 'Compact trail bag with hydration sleeve and lightweight straps.', meta: 'outdoor' },
  { title: 'Packing cube organizer set', text: 'Compression cubes for separating shoes, clothes, and toiletries.', meta: 'travel' },
  { title: 'RFID passport wallet', text: 'Travel wallet for cards, passport, and boarding passes.', meta: 'travel' },
  { title: 'Camera sling bag', text: 'Protective shoulder bag for mirrorless kits and accessories.', meta: 'photo' },
];

// Deterministic hashed bag-of-words embedding so the sample runs without a model.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    // An FNV-1a-style hash of the token picks a slot in the vector.
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  // Normalize to unit length so distances behave consistently across records.
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}

async function main() {
  await init();

  // Embed every record and flatten the vectors into one Float32Array.
  const flat = new Float32Array(
    records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
  );
  const engine = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);

  // Embed the query the same way and ask for the four nearest neighbors.
  const hits = JSON.parse(engine.search(new Float32Array(embedText('lightweight travel backpack')), 4));
  const results = hits.map(([id, distance]) => ({
    ...records[id],
    similarity: Number((1 - distance).toFixed(3)),
  }));

  console.table(results);
  engine.free();
}

main();
React component version
The React version keeps the same index build but wires it into component state so the UI can query on input changes. That is usually how teams introduce semantic retrieval into an existing product: initialize once, keep the engine in memory, and map nearest-neighbor hits back to the original records.
import { useEffect, useState } from 'react';
import init, { WasmSearchEngine } from 'altor-vec';

const dims = 12;

const records = [
  { title: 'Urban commuter backpack', text: 'Slim weatherproof bag for laptops, cables, and daily train commutes.', meta: 'bags' },
  { title: 'Carry-on travel duffel', text: 'Soft-sided weekender with shoe pocket and overhead-bin size.', meta: 'bags' },
  { title: 'Minimal hiking daypack', text: 'Compact trail bag with hydration sleeve and lightweight straps.', meta: 'outdoor' },
  { title: 'Packing cube organizer set', text: 'Compression cubes for separating shoes, clothes, and toiletries.', meta: 'travel' },
  { title: 'RFID passport wallet', text: 'Travel wallet for cards, passport, and boarding passes.', meta: 'travel' },
  { title: 'Camera sling bag', text: 'Protective shoulder bag for mirrorless kits and accessories.', meta: 'photo' },
];

// Same deterministic hashed embedding as the plain JavaScript example.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}

export function RecommendationEngineExample() {
  const [engine, setEngine] = useState(null);
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);

  // Build the index once on mount and free the WASM engine on unmount.
  useEffect(() => {
    let cancelled = false;
    let instance;
    (async () => {
      await init();
      const flat = new Float32Array(
        records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
      );
      instance = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
      if (!cancelled) setEngine(instance);
    })();
    return () => {
      cancelled = true;
      instance?.free();
    };
  }, []);

  // Re-run the nearest-neighbor lookup whenever the query changes.
  useEffect(() => {
    if (!engine || query.trim().length < 2) {
      setResults([]);
      return;
    }
    const hits = JSON.parse(engine.search(new Float32Array(embedText(query)), 5));
    setResults(
      hits.map(([id, distance]) => ({
        ...records[id],
        similarity: Number((1 - distance).toFixed(3)),
      }))
    );
  }, [engine, query]);

  return (
    <section>
      <input
        value={query}
        onChange={(event) => setQuery(event.target.value)}
        placeholder="Describe the item you want to match"
      />
      <ul>
        {results.map((result) => (
          <li key={result.title}>
            <strong>{result.title}</strong> — {result.meta} (score {result.similarity})
          </li>
        ))}
      </ul>
    </section>
  );
}
How this example works
The pattern has three moving parts. First, you choose what text represents each record: title, description, metadata, or a chunk of content. Second, you turn that text into vectors and flatten them into one Float32Array. Third, you build the HNSW graph and query it with a vector created from the user input. The library returns nearest-neighbor IDs and distances, and your app decides how to display or post-process them.
Because the retrieval step is approximate nearest-neighbor search, it stays fast even as the dataset grows beyond trivial linear scans. The most important quality lever is still the embedding model. Better vectors usually matter more than micro-optimizing ANN parameters, so teams should benchmark retrieval quality on real user phrasing before shipping the experience widely.
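A small labeled set of real phrasings makes that benchmark concrete. The sketch below is a minimal hit-rate check that reuses the engine and the embedText helper from the examples above; labeledQueries and its expected record IDs are invented for illustration.

// Hypothetical smoke test for retrieval quality: each entry pairs a realistic
// user phrasing with the record indices a reviewer would expect near the top.
const labeledQueries = [
  { query: 'bag for my laptop on the train', expected: [0] },
  { query: 'weekend carry-on', expected: [1, 3] },
];

// Fraction of labeled queries whose top-k results contain at least one expected ID.
function hitRateAtK(engine, embed, labeled, k = 3) {
  let hits = 0;
  for (const { query, expected } of labeled) {
    const returnedIds = JSON.parse(engine.search(new Float32Array(embed(query)), k)).map(([id]) => id);
    if (expected.some((id) => returnedIds.includes(id))) hits += 1;
  }
  return hits / labeled.length;
}

// Example: console.log('hit rate @3:', hitRateAtK(engine, embedText, labeledQueries));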
When to use this pattern
This is a practical fit when the search corpus is small to medium, shipped with the app, and searched frequently enough that backend latency would be noticeable. Common examples include docs portals, embedded support widgets, local-first assistants, and curated catalogs.
- Related products
- More like this buttons
- Editorial recommendation blocks
- Session-based content suggestions
Limitations
Content-based recommendations can over-index on similarity and ignore business rules, margin, availability, or diversity. You may also need post-filters to prevent showing out-of-stock items or recommendations from the wrong collection.
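A lightweight post-filter over the returned hits usually covers those rules. The sketch below assumes a hypothetical inStock flag on each record; only the id and distance pairs come from altor-vec, everything else is application logic.

// Illustrative post-filter: drop out-of-stock items, restrict to a collection,
// and enforce a minimum similarity before anything reaches the UI.
function filterHits(hits, records, { collection, minSimilarity = 0.2 } = {}) {
  return hits
    .map(([id, distance]) => ({ record: records[id], similarity: 1 - distance }))
    .filter(({ record, similarity }) =>
      record.inStock !== false &&
      (!collection || record.meta === collection) &&
      similarity >= minSimilarity
    );
}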
Be especially careful about corpus size, update frequency, and data sensitivity. Browser vector search is excellent when those three constraints are favorable, but it is not the right answer when the dataset is huge, private, or changing constantly for every user.
FAQ
Is altor-vec a full recommendation platform?
No. It is the retrieval layer. Ranking logic, merchandising rules, and experiments still belong in your application.
Can this work without user history?
Yes. This pattern is specifically useful before you have meaningful collaborative signals because the vectors come from item content instead.
When should I avoid client-side recommendations?
Avoid shipping the full index when the catalog is sensitive, huge, or updated so often that a local asset quickly becomes stale.
Get started: npm install altor-vec · GitHub