Benchmark comparison

altor-vec vs Voyager

Browser-first vector search alternatives with different algorithm and packaging tradeoffs.

When both options are already browser-aware, the decision usually becomes empirical. Small differences in packaging, defaults, and recall/latency tradeoffs matter more than sweeping architectural arguments.

These numbers are representative, not universal. Bundle size, query latency, and memory usage all vary with vector dimensions, index parameters, browser runtime, hardware, and whether embeddings are generated on device or ahead of time.

Comparison table

| Category | altor-vec | Voyager |
| --- | --- | --- |
| Runtime model | Client-side WASM ANN focused on a small integration surface. | Browser-capable vector search alternative with its own algorithm and packaging assumptions. |
| Bundle size / delivery | ~54 KB gzipped representative payload. | Often somewhat larger, depending on packaging and chosen build output. |
| Query latency | Fast local lookups intended for interactive frontend UX. | Also optimized for local search; benchmark directly, because browser performance is corpus-specific. |
| Memory usage | Index lives in client memory. | Similar client-memory story with implementation-specific overhead. |
| Features | Compact ANN API and serialized local index workflow. | Competing browser vector search capabilities with different integration choices. |
| Dataset sweet spot | Moderate corpora bundled into the app. | Similar browser-delivered dataset sizes. |
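To make the "index lives in client memory" row concrete, here is a minimal sketch of what local retrieval means in practice. This is not altor-vec's or Voyager's API; it is a self-contained brute-force cosine-similarity scan over vectors bundled with the app, the baseline that any browser ANN index is trying to beat at larger corpus sizes.

```typescript
// Illustrative in-memory vector search: brute-force cosine similarity.
// The Doc type, cosine(), and search() are hypothetical names for this
// sketch only, not part of either library's API.

type Doc = { id: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function search(index: Doc[], query: number[], k: number): Doc[] {
  // Rank every document against the query and keep the top k.
  return [...index]
    .sort((x, y) => cosine(y.vector, query) - cosine(x.vector, query))
    .slice(0, k);
}

// Usage: three tiny 3-d "embeddings" shipped with the product.
const index: Doc[] = [
  { id: "a", vector: [1, 0, 0] },
  { id: "b", vector: [0.9, 0.1, 0] },
  { id: "c", vector: [0, 0, 1] },
];
console.log(search(index, [1, 0, 0], 2).map(d => d.id)); // → ["a", "b"]
```

An ANN index replaces the full scan in `search()` with an approximate structure, which is where the recall/latency tradeoffs mentioned above come from.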

Where altor-vec wins

Where Voyager wins

Honest decision guide

At this tier, benchmark on your own corpus. Developer experience, bundle budget, and how predictable the results feel in the browser often decide more than theory alone.
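A corpus-specific benchmark does not need heavy tooling. The sketch below times repeated nearest-neighbor queries over synthetic vectors and reports a median latency; `dims`, `corpusSize`, and the `bruteForce` stand-in are illustrative assumptions, and you would swap `bruteForce` for the query call of whichever library you are evaluating, ideally using your real embeddings.

```typescript
// Tiny latency harness for corpus-specific benchmarking.
// bruteForce() is a stand-in for the library call under test.

const dims = 64;
const corpusSize = 1000;

const rand = (): number[] =>
  Array.from({ length: dims }, () => Math.random());
const corpus: number[][] = Array.from({ length: corpusSize }, rand);

function bruteForce(query: number[]): number {
  // Returns the index of the nearest corpus vector by dot product.
  let best = -1, bestScore = -Infinity;
  for (let i = 0; i < corpus.length; i++) {
    let dot = 0;
    for (let j = 0; j < dims; j++) dot += corpus[i][j] * query[j];
    if (dot > bestScore) { bestScore = dot; best = i; }
  }
  return best;
}

// Median of repeated runs is more stable than a single timing,
// especially in browsers with JIT warm-up and timer clamping.
const runs = 50;
const times: number[] = [];
for (let r = 0; r < runs; r++) {
  const q = rand();
  const t0 = performance.now();
  bruteForce(q);
  times.push(performance.now() - t0);
}
times.sort((a, b) => a - b);
const median = times[Math.floor(runs / 2)];
console.log(`median query latency: ${median.toFixed(3)} ms`);
```

Run the same harness against each candidate library with your own vector dimensions and index parameters; the resulting medians are the numbers that actually decide this tier.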

The honest pattern across all of these benchmark pages is simple: if the search corpus should stay on the server, choose server-oriented infrastructure. If the search corpus is intentionally shipped with the product and the UX benefit of local retrieval matters more than backend scale, altor-vec is usually the more natural fit.

FAQ

Is one clearly better than the other?

Not without a corpus-specific benchmark. Browser ANN comparisons are highly dependent on vector shape and index parameters.

What matters more than theoretical speed?

How easy it is to ship, load, and operate in the product bundle.

Why can a smaller library win even if raw performance is similar?

Because frontend teams also pay in startup time, build complexity, and maintenance overhead.

Get started: npm install altor-vec · GitHub