Build Autocomplete with altor-vec
What this pattern solves: Semantic typeahead for support intents, command palettes, and documentation search boxes.
Autocomplete breaks down when users remember the goal but not the exact label. Semantic retrieval lets the interface recover from fuzzy prefixes such as 'forgot login' or 'team invite setup' without waiting on a backend request.
Keeping the HNSW index inside the browser removes network jitter from every keystroke, which is exactly where perceived quality lives. Local search also means you can keep half-written queries on device instead of sending them to an API.
Install
npm install altor-vec
Concept explanation
In an autocomplete workflow, users usually describe intent in their own words. That is why vector search works well here: each record is turned into an embedding, the embeddings are indexed once, and later queries retrieve the nearest semantic neighbors instead of relying only on exact tokens. In practice, this means the interface can respond to paraphrases, shorthand, and partial descriptions far better than a literal-only search box.
The browser is often the right place to do this when the corpus is moderate in size and safe to ship. The instant benefit is lower latency. The architectural benefit is that you remove a whole search service from the request path. That matters for keystroke-heavy interactions, offline-capable apps, and product surfaces where search should feel like a UI primitive rather than a network round trip.
This page uses a deterministic embedding helper so the sample is runnable with only altor-vec installed. That keeps the example honest and easy to paste into a demo project. Use a real embedding model such as MiniLM or multilingual-e5 for production ranking quality, but keep the altor-vec index build and query flow exactly the same.
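As one concrete illustration of that swap, the sketch below uses transformers.js (the @xenova/transformers package, which is not an altor-vec dependency; the model name and embedWithModel helper are assumptions for the sketch). The vectors it returns feed into the same index build and query flow shown below.

// Sketch only: assumes the @xenova/transformers package (transformers.js)
// and the Xenova/all-MiniLM-L6-v2 model. Not part of altor-vec itself.
import { pipeline } from '@xenova/transformers';

const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

async function embedWithModel(text) {
  // Mean-pool token embeddings and L2-normalize, mirroring the demo helper's output contract.
  const output = await extractor(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data);
}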
Runnable JavaScript example
The following snippet indexes a small in-memory dataset, performs a semantic lookup for 'forgot my password', and prints the nearest matches. It uses the real altor-vec API, including init(), WasmSearchEngine.from_vectors(), and search().
import init, { WasmSearchEngine } from 'altor-vec';
const dims = 12;
const records = [
  { title: 'Reset password', text: 'Help a user recover account access after forgetting a password.', meta: 'account' },
  { title: 'Invite teammate', text: 'Add a coworker to the workspace with the right permissions.', meta: 'team' },
  { title: 'Update billing card', text: 'Change the payment method for a subscription or invoice.', meta: 'billing' },
  { title: 'Enable SSO', text: 'Configure enterprise single sign-on with SAML.', meta: 'security' },
  { title: 'Export analytics report', text: 'Download weekly metrics as CSV or PDF.', meta: 'reporting' },
  { title: 'Change notification email', text: 'Update where invoices and alerts are delivered.', meta: 'account' },
];
// Deterministic hash-based embedding: buckets tokens into a fixed-size
// vector and normalizes it. Demo-only; swap in a real model for production.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    // FNV-1a-style hash so the same token always lands in the same slot.
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  // L2-normalize so cosine-style comparisons are well behaved.
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}
async function main() {
  // Load the wasm module before constructing the engine.
  await init();
  // Flatten all record embeddings into one contiguous Float32Array.
  const flat = new Float32Array(
    records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
  );
  // Build the HNSW index once; every query reuses it.
  const engine = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
  // search() returns a JSON string of [id, distance] pairs.
  const hits = JSON.parse(engine.search(new Float32Array(embedText('forgot my password')), 4));
  const results = hits.map(([id, distance]) => ({
    ...records[id],
    // Convert distance to a rough similarity score for display.
    similarity: Number((1 - distance).toFixed(3)),
  }));
  console.table(results);
  // Release the wasm-side memory when done.
  engine.free();
}

main().catch(console.error);
React component version
The React version keeps the same index build but wires it into component state so the UI can query on input changes. That is usually how teams introduce semantic retrieval into an existing product: initialize once, keep the engine in memory, and map nearest-neighbor hits back to the original records.
import { useEffect, useState } from 'react';
import init, { WasmSearchEngine } from 'altor-vec';
const dims = 12;
const records = [
  { title: 'Reset password', text: 'Help a user recover account access after forgetting a password.', meta: 'account' },
  { title: 'Invite teammate', text: 'Add a coworker to the workspace with the right permissions.', meta: 'team' },
  { title: 'Update billing card', text: 'Change the payment method for a subscription or invoice.', meta: 'billing' },
  { title: 'Enable SSO', text: 'Configure enterprise single sign-on with SAML.', meta: 'security' },
  { title: 'Export analytics report', text: 'Download weekly metrics as CSV or PDF.', meta: 'reporting' },
  { title: 'Change notification email', text: 'Update where invoices and alerts are delivered.', meta: 'account' },
];
// Same deterministic demo embedding as the JavaScript example above.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    // FNV-1a-style hash so the same token always lands in the same slot.
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  // L2-normalize so cosine-style comparisons are well behaved.
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}
export function AutocompleteExample() {
  const [engine, setEngine] = useState(null);
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);

  // Build the index once on mount; free it on unmount.
  useEffect(() => {
    let cancelled = false;
    let instance;
    (async () => {
      await init();
      const flat = new Float32Array(
        records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
      );
      instance = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
      if (cancelled) {
        // Unmounted while the index was building: free it instead of leaking wasm memory.
        instance.free();
        instance = undefined;
        return;
      }
      setEngine(instance);
    })();
    return () => {
      cancelled = true;
      instance?.free();
    };
  }, []);

  // Re-query whenever the engine is ready and the input changes.
  useEffect(() => {
    if (!engine || query.trim().length < 2) {
      setResults([]);
      return;
    }
    const hits = JSON.parse(engine.search(new Float32Array(embedText(query)), 5));
    setResults(
      hits.map(([id, distance]) => ({
        ...records[id],
        similarity: Number((1 - distance).toFixed(3)),
      }))
    );
  }, [engine, query]);

  return (
    <section>
      <input
        value={query}
        onChange={(event) => setQuery(event.target.value)}
        placeholder="Type a support question"
      />
      <ul>
        {results.map((result) => (
          <li key={result.title}>
            <strong>{result.title}</strong> — {result.meta} (score {result.similarity})
          </li>
        ))}
      </ul>
    </section>
  );
}
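In a real input, you would usually debounce keystrokes so fast typing triggers one search instead of five. One possible variation of the query effect is sketched below; the 120 ms delay is an illustrative value, not a library recommendation.

// Hypothetical variation: debounce the query effect so the index is not
// hit on every keystroke. 120 ms is an illustrative delay.
useEffect(() => {
  if (!engine || query.trim().length < 2) {
    setResults([]);
    return;
  }
  const timer = setTimeout(() => {
    const hits = JSON.parse(engine.search(new Float32Array(embedText(query)), 5));
    setResults(
      hits.map(([id, distance]) => ({
        ...records[id],
        similarity: Number((1 - distance).toFixed(3)),
      }))
    );
  }, 120);
  return () => clearTimeout(timer);
}, [engine, query]);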
How this example works
The pattern has three moving parts. First, you choose what text represents each record: title, description, metadata, or a chunk of content. Second, you turn that text into vectors and flatten them into one Float32Array. Third, you build the HNSW graph and query it with a vector created from the user input. The library returns nearest-neighbor IDs and distances, and your app decides how to display or post-process them.
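For example, a simple post-processing step is a similarity floor that keeps weak matches out of the dropdown. The threshold below is an illustrative value, not an altor-vec default; tune it against real queries.

// Drop weak matches instead of always rendering k suggestions.
// MIN_SIMILARITY is an illustrative threshold, not a library default.
const MIN_SIMILARITY = 0.35;
const suggestions = hits
  .map(([id, distance]) => ({ ...records[id], similarity: 1 - distance }))
  .filter((suggestion) => suggestion.similarity >= MIN_SIMILARITY);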
Because the retrieval step is approximate nearest-neighbor search, it stays fast even as the dataset grows beyond trivial linear scans. The most important quality lever is still the embedding model. Better vectors usually matter more than micro-optimizing ANN parameters, so teams should benchmark retrieval quality on real user phrasing before shipping the experience widely.
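A minimal sketch of such a benchmark, assuming a small hand-labeled set of real user phrasings (evalPairs below is hypothetical), reuses the same search() call as the examples above:

// Recall@k over a hypothetical labeled set: how often does the expected
// record appear in the top k hits for real user phrasings?
const evalPairs = [
  { query: 'forgot login', expectedId: 0 },
  { query: 'add coworker to workspace', expectedId: 1 },
];

function recallAtK(engine, k = 3) {
  let found = 0;
  for (const { query, expectedId } of evalPairs) {
    const hits = JSON.parse(engine.search(new Float32Array(embedText(query)), k));
    if (hits.some(([id]) => id === expectedId)) found += 1;
  }
  return found / evalPairs.length;
}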
When to use this pattern
This is a practical fit when the search corpus is small to medium, shipped with the app, and searched frequently enough that backend latency would be noticeable. Common examples include docs portals, embedded support widgets, local-first assistants, and curated catalogs.
- Support intent suggestions
- Command palettes
- Documentation typeahead
- Settings search
Limitations
The deterministic embedding helper keeps the snippet dependency-free, but it is not a substitute for a learned embedding model. Very large suggestion catalogs still cost memory, and if labels change constantly you need a strategy for rebuilding or shipping fresh vectors.
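One simple rebuild strategy is to re-embed the updated catalog, construct a fresh engine, and free the old one. The sketch below assumes a hypothetical nextRecords array and reuses the demo's embedText helper and index parameters.

// Sketch: wholesale rebuild when the suggestion catalog changes.
// nextRecords is a hypothetical updated catalog.
function rebuildIndex(oldEngine, nextRecords) {
  const flat = new Float32Array(
    nextRecords.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
  );
  const nextEngine = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
  oldEngine?.free(); // Release the previous wasm-backed index.
  return nextEngine;
}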
Be especially careful about corpus size, update frequency, and data sensitivity. Browser vector search is excellent when those three constraints are favorable, but it is not the right answer when the dataset is huge, private, or changing constantly for every user.
FAQ
Can I use altor-vec for keystroke-by-keystroke autocomplete?
Yes. That is one of the strongest fits because query latency is local and predictable once the vectors are already on device.
Do I need a backend to make this semantic?
No. The search index can live entirely in the browser. You only need a backend if your corpus is private, too large to ship, or updated continuously.
What should I swap in for the demo embedText helper?
Use a real embedding model or precomputed vectors. The example keeps the same indexing API so you can upgrade quality without changing the surrounding search code.
Get started: npm install altor-vec · GitHub