Benchmark comparison
altor-vec vs Voyager
Browser-first vector search alternatives with different algorithm and packaging tradeoffs.
When both options are already browser-aware, the decision usually becomes empirical. Small differences in packaging, defaults, and recall/latency tradeoffs matter more than sweeping architectural arguments.
Comparison table
| Category | altor-vec | Voyager |
|---|---|---|
| Runtime model | Client-side WASM ANN focused on a small integration surface. | Browser-capable vector search alternative with its own algorithm and packaging assumptions. |
| Bundle size / delivery | Representative payload of ~54KB gzipped. | Often somewhat larger, depending on packaging and the chosen build output. |
| Query latency | Fast local lookups intended for interactive frontend UX. | Also optimized for local search; benchmark directly because browser performance is corpus-specific. |
| Memory usage | Index lives in client memory. | Similar client-memory story with implementation-specific overhead. |
| Features | Compact ANN API and serialized local index workflow. | Competing browser vector search capabilities with different integration choices. |
| Dataset sweet spot | Moderate corpora bundled into the app. | Similar browser-delivered dataset sizes. |
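The client-memory model in the table can be sketched concretely. The code below is not the altor-vec API; it is a minimal, library-agnostic stand-in showing how a serialized vector index can ship as one flat binary payload and be searched entirely in client memory. All names (`FlatIndex`, `deserializeIndex`, `search`) are illustrative.

```typescript
// Minimal sketch of a client-memory vector index (NOT the altor-vec API).
// Vectors live in one flat Float32Array, the shape a serialized payload
// fetched alongside the app bundle might take.

interface FlatIndex {
  dim: number;
  count: number;
  data: Float32Array; // count * dim values, row-major
}

// "Deserialize" a raw buffer into an index view (zero-copy).
function deserializeIndex(buf: ArrayBuffer, dim: number): FlatIndex {
  const data = new Float32Array(buf);
  return { dim, count: data.length / dim, data };
}

function cosine(a: Float32Array, b: Float32Array): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Exact top-k search over the in-memory index (a real ANN library
// would use an approximate structure instead of this full scan).
function search(index: FlatIndex, query: Float32Array, k: number): number[] {
  const scored: Array<[number, number]> = [];
  for (let i = 0; i < index.count; i++) {
    const row = index.data.subarray(i * index.dim, (i + 1) * index.dim);
    scored.push([i, cosine(row, query)]);
  }
  scored.sort((a, b) => b[1] - a[1]);
  return scored.slice(0, k).map(([id]) => id);
}

// Example: four 3-dimensional vectors.
const raw = new Float32Array([1, 0, 0,  0, 1, 0,  0, 0, 1,  0.9, 0.1, 0]);
const index = deserializeIndex(raw.buffer, 3);
const top2 = search(index, new Float32Array([1, 0, 0]), 2);
// top2 → [0, 3]: the identical vector first, then the near-duplicate.
```

Whatever library you pick, the memory story is the same: the whole index is resident on the client, so payload size and vector count bound what you can ship.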
Where altor-vec wins
- Small payload and simple API shape.
- Strong fit when browser delivery friction is the first concern.
- Good option for teams that want minimal surface area.
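If delivery friction is the first concern, the bundle budget can be enforced mechanically in CI. A minimal sketch, assuming a ~54KB gzip budget as in the table above; the synthetic payload stands in for a real built artifact (e.g. your `dist/` output), which this sketch does not assume exists.

```typescript
import { gzipSync } from "node:zlib";

// Sketch of a gzip bundle-budget check. In CI you would read the built
// artifact from disk; here a synthetic payload keeps the sketch
// self-contained.
const BUDGET_BYTES = 54 * 1024; // ~54KB gzipped, per the table above

function gzippedSize(payload: Buffer | string): number {
  return gzipSync(payload).length;
}

// Highly repetitive stand-in "bundle"; compresses far below budget.
const fakeBundle = "export const x = 1;".repeat(5000);
const size = gzippedSize(fakeBundle);
if (size > BUDGET_BYTES) {
  throw new Error(`bundle is ${size}B gzipped, over the ${BUDGET_BYTES}B budget`);
}
```

Failing the build when the budget is exceeded keeps the "small payload" advantage from silently eroding as dependencies accrete.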
Where Voyager wins
- May achieve better recall or lower latency on some corpora.
- Alternative algorithmic tradeoffs could suit certain embedding distributions better.
- Existing user familiarity may reduce switching costs.
Honest decision guide
At this tier, benchmark on your own corpus. Developer experience, bundle budget, and how predictable the results feel in the browser often decide the choice more than theory alone.
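A corpus-specific benchmark reduces to two numbers per library: recall@k against an exact baseline, and time per query. A library-agnostic harness sketch follows; `exactTopK` and `annTopK` are illustrative placeholders where you would plug in real calls to altor-vec or Voyager (the toy functions at the bottom exist only so the sketch runs).

```typescript
// Sketch of a corpus-specific recall@k / latency check (library-agnostic).
type SearchFn = (query: number[], k: number) => number[]; // returns result ids

// recall@k: fraction of the exact top-k that the ANN result recovered.
function recallAtK(exact: number[], approx: number[]): number {
  const truth = new Set(exact);
  let hits = 0;
  for (const id of approx) if (truth.has(id)) hits++;
  return hits / exact.length;
}

function benchmark(
  queries: number[][],
  k: number,
  exactTopK: SearchFn, // ground truth, e.g. a brute-force scan
  annTopK: SearchFn,   // the library under evaluation
) {
  let totalRecall = 0;
  const start = performance.now();
  for (const q of queries) {
    totalRecall += recallAtK(exactTopK(q, k), annTopK(q, k));
  }
  const elapsed = performance.now() - start;
  return {
    meanRecall: totalRecall / queries.length,
    msPerQuery: elapsed / queries.length,
  };
}

// Toy stand-ins so the harness is demonstrable: the "ANN" recovers
// 2 of the exact top 3 for every query, so meanRecall is 2/3.
const exact: SearchFn = () => [1, 2, 3];
const ann: SearchFn = () => [1, 2, 9];
const res = benchmark([[0], [0]], 3, exact, ann);
```

Run the same harness with your own embeddings and query set against each candidate; the pair (meanRecall, msPerQuery) is the comparison this page argues you actually need.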
The honest pattern across all of these benchmark pages is simple: if the search corpus should stay on the server, choose server-oriented infrastructure. If the search corpus is intentionally shipped with the product and the UX benefit of local retrieval matters more than backend scale, altor-vec is usually the more natural fit.
FAQ
Is one clearly better than the other?
Not without a corpus-specific benchmark. Browser ANN comparisons are highly dependent on vector shape and index parameters.
What matters more than theoretical speed?
How easy it is to ship, load, and operate in the product bundle.
Why can a smaller library win even if raw performance is similar?
Because frontend teams also pay in startup time, build complexity, and maintenance overhead.
Get started: `npm install altor-vec` · GitHub