Build Privacy-Preserving Search with altor-vec
What this pattern solves: Local semantic retrieval for sensitive notes, contracts, and on-device assistants.
Some search experiences lose value the moment queries leave the device. Employees searching compensation guidelines, users exploring personal notes, or legal teams reviewing contract clauses often need the retrieval layer to stay local.
Because altor-vec can run entirely in-browser, the query text and the search index do not need to transit a third-party search API. That changes the privacy story substantially for certain products.
Install
npm install altor-vec
Concept explanation
In a privacy-preserving search workflow, users usually describe intent in their own words. That is why vector search works well here: each record is turned into an embedding, the embeddings are indexed once, and later queries retrieve the nearest semantic neighbors instead of relying only on exact tokens. In practice this means the interface can respond to paraphrases, shorthand, and partial descriptions far better than a literal-only search box.
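Stripped to its essentials, that retrieval step is just ranking by vector similarity. Here is a minimal, self-contained illustration with made-up three-dimensional vectors standing in for real embeddings:

// Toy illustration of the core idea: embed once, then rank records by
// cosine similarity to the query vector. These vectors are hypothetical
// placeholders; a real embedding model would produce them.
const index = [
  { id: 0, text: 'notice period for termination', vector: [0.9, 0.1, 0.0] },
  { id: 1, text: 'meal expense limits', vector: [0.1, 0.2, 0.9] },
];

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

const queryVector = [0.8, 0.2, 0.1]; // would come from embedding the query text
const ranked = index
  .map((record) => ({ ...record, score: cosine(queryVector, record.vector) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].text); // 'notice period for termination'

altor-vec performs the same kind of ranking, but over an index built for fast approximate lookups rather than a linear scan.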
The browser is often the right place to do this when the corpus is moderate in size and safe to ship. The immediate benefit is lower latency. The architectural benefit is that you remove a whole search service from the request path. That matters for keystroke-heavy interactions, offline-capable apps, and product surfaces where search should feel like a UI primitive rather than a network round trip.
This page uses a deterministic embedding helper so the sample is runnable with only altor-vec installed. That keeps the example honest and easy to paste into a demo project. Privacy-preserving search still requires careful thought about how embeddings are generated, stored, and refreshed. Local retrieval is only one part of the privacy model.
Runnable JavaScript example
The following snippet indexes a small in-memory dataset, performs a semantic lookup for "termination clause with 30 day notice", and prints the nearest matches. It uses the real altor-vec API, including init(), WasmSearchEngine.from_vectors(), and search().
import init, { WasmSearchEngine } from 'altor-vec';

const dims = 12;

const records = [
  {
    title: 'Termination clause',
    text: 'Contract language covering notice periods and convenience termination rights.',
    meta: 'contracts',
  },
  {
    title: 'Data processing addendum',
    text: 'Terms about subprocessors, retention, and deletion obligations.',
    meta: 'privacy',
  },
  {
    title: 'Expense reimbursement note',
    text: 'Internal policy on meal limits, airfare class, and receipt rules.',
    meta: 'policy',
  },
  {
    title: 'Manager feedback journal',
    text: 'Private note summarizing performance conversations and follow-ups.',
    meta: 'notes',
  },
  {
    title: 'Offer approval checklist',
    text: 'Compensation review flow for recruiting and finance sign-off.',
    meta: 'people',
  },
  {
    title: 'M&A diligence memo',
    text: 'Summary of outstanding legal and data room questions.',
    meta: 'legal',
  },
];

// Deterministic stand-in for a real embedding model: FNV-1a-hash each token
// into a slot, then L2-normalize so distances behave like cosine distances.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}

async function main() {
  await init(); // load the altor-vec WebAssembly module

  // Flatten all record embeddings into one Float32Array for indexing.
  const flat = new Float32Array(
    records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
  );
  const engine = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);

  // The query stays on-device: embed the text and ask for the 4 nearest records.
  const hits = JSON.parse(engine.search(new Float32Array(embedText('termination clause with 30 day notice')), 4));
  const results = hits.map(([id, distance]) => ({
    ...records[id],
    similarity: Number((1 - distance).toFixed(3)),
  }));

  console.table(results);
  engine.free(); // release the WASM-side index when done
}

main();
React component version
The React version keeps the same index build but wires it into component state so the UI can query on input changes. That is usually how teams introduce semantic retrieval into an existing product: initialize once, keep the engine in memory, and map nearest-neighbor hits back to the original records.
import { useEffect, useState } from 'react';
import init, { WasmSearchEngine } from 'altor-vec';

const dims = 12;

const records = [
  {
    title: 'Termination clause',
    text: 'Contract language covering notice periods and convenience termination rights.',
    meta: 'contracts',
  },
  {
    title: 'Data processing addendum',
    text: 'Terms about subprocessors, retention, and deletion obligations.',
    meta: 'privacy',
  },
  {
    title: 'Expense reimbursement note',
    text: 'Internal policy on meal limits, airfare class, and receipt rules.',
    meta: 'policy',
  },
  {
    title: 'Manager feedback journal',
    text: 'Private note summarizing performance conversations and follow-ups.',
    meta: 'notes',
  },
  {
    title: 'Offer approval checklist',
    text: 'Compensation review flow for recruiting and finance sign-off.',
    meta: 'people',
  },
  {
    title: 'M&A diligence memo',
    text: 'Summary of outstanding legal and data room questions.',
    meta: 'legal',
  },
];

// Same deterministic embedding helper as the plain JavaScript example.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}

export function PrivacyPreservingSearchExample() {
  const [engine, setEngine] = useState(null);
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);

  // Build the index once on mount and keep the engine in memory.
  useEffect(() => {
    let cancelled = false;
    let instance;
    (async () => {
      await init();
      const flat = new Float32Array(
        records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
      );
      instance = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
      if (cancelled) {
        // The component unmounted while the index was still building; free the
        // engine here because the cleanup below ran before `instance` existed.
        instance.free();
        return;
      }
      setEngine(instance);
    })();
    return () => {
      cancelled = true;
      instance?.free();
    };
  }, []);

  // Re-run the nearest-neighbor query whenever the input changes.
  useEffect(() => {
    if (!engine || query.trim().length < 2) {
      setResults([]);
      return;
    }
    const hits = JSON.parse(engine.search(new Float32Array(embedText(query)), 5));
    setResults(
      hits.map(([id, distance]) => ({
        ...records[id],
        similarity: Number((1 - distance).toFixed(3)),
      }))
    );
  }, [engine, query]);

  return (
    <section>
      <input
        value={query}
        onChange={(event) => setQuery(event.target.value)}
        placeholder="Search locally on this device"
      />
      <ul>
        {results.map((result) => (
          <li key={result.title}>
            <strong>{result.title}</strong> — {result.meta} (score {result.similarity})
          </li>
        ))}
      </ul>
    </section>
  );
}
How this example works
The pattern has three moving parts. First, you choose what text represents each record: title, description, metadata, or a chunk of content. Second, you turn that text into vectors and flatten them into one Float32Array. Third, you build the HNSW graph and query it with a vector created from the user input. The library returns nearest-neighbor IDs and distances, and your app decides how to display or post-process them.
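As one illustration of that last step, here is a hypothetical post-processing helper (the 0.2 similarity floor is an arbitrary example value, not a recommendation) that drops weak hits and groups the rest by their metadata tag before rendering:

// Hypothetical post-processing step: filter out weak matches and group the
// remainder by metadata tag. Tune the floor against real user queries.
function postProcess(results, minSimilarity = 0.2) {
  const grouped = {};
  for (const result of results) {
    if (result.similarity < minSimilarity) continue;
    (grouped[result.meta] ??= []).push(result);
  }
  return grouped;
}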
Because the retrieval step is approximate nearest-neighbor search, it stays fast even as the dataset grows beyond trivial linear scans. The most important quality lever is still the embedding model. Better vectors usually matter more than micro-optimizing ANN parameters, so teams should benchmark retrieval quality on real user phrasing before shipping the experience widely.
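If you want real semantic vectors while keeping everything on-device, one option (our assumption here, not part of altor-vec) is an in-browser model via Transformers.js. A minimal sketch:

// Sketch: swapping the deterministic helper for a local embedding model.
// Assumes the @xenova/transformers package; the model downloads once and then
// runs entirely in the browser, so query text still never leaves the device.
import { pipeline } from '@xenova/transformers';

const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

async function embedText(text) {
  const output = await extractor(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data); // this model produces 384-dimensional vectors
}

With a model like this, dims becomes 384 and embedText becomes async, so the indexing and query calls in the examples above need an await.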
When to use this pattern
This is a practical fit when the search corpus is small to medium, shipped with the app, and searched frequently enough that backend latency would be noticeable. Common examples include docs portals, embedded support widgets, local-first assistants, and curated catalogs. It is also a natural fit for:
- Personal knowledge bases
- Client-side legal review
- Private journals
- On-device enterprise assistants
Limitations
If you compute embeddings in a remote service, the pipeline is no longer fully local. You also need to protect the downloaded corpus itself; client-side search is not appropriate if the underlying material should never be present on the device.
Be especially careful about corpus size, update frequency, and data sensitivity. Browser vector search is excellent when those three constraints are favorable, but it is not the right answer when the dataset is huge, too sensitive to ship to the client, or changing constantly for every user.
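A quick back-of-the-envelope estimate makes the corpus-size constraint concrete. Raw vectors alone cost records × dims × 4 bytes of Float32 storage, before any index graph overhead (the numbers below are illustrative):

// Illustrative footprint check: 50,000 records embedded at 384 dimensions.
const recordCount = 50_000;
const embeddingDims = 384;
const bytes = recordCount * embeddingDims * 4; // 4 bytes per Float32
console.log(`${(bytes / 1024 / 1024).toFixed(1)} MB of raw vectors`); // ~73.2 MB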
FAQ
Does browser search automatically make a product private?
No. It reduces query exposure, but you still need to consider where vectors come from, how files are cached, and whether the corpus itself is safe to download.
Can I keep both queries and content on device?
Yes, if you ship the vectors locally and avoid remote embedding generation at query time.
When should I choose a server instead?
Choose a server when the dataset should not be distributed to clients or when permissions differ per user.
Get started: npm install altor-vec · GitHub