
Build Image Search with altor-vec

What this pattern solves: Caption- and tag-based image retrieval for curated media libraries.

Image libraries are often searched by intent, mood, or scene description rather than exact filenames. Vector search lets a user type 'sunset city skyline' or 'happy team workshop' and retrieve the nearest captioned assets.

For design systems, marketing asset bundles, or offline media explorers, local retrieval removes API round trips and keeps browsing fluid while the user scans many variations.

Install

npm install altor-vec

Concept explanation

In an image search workflow, users usually describe intent in their own words. That is why vector search works well here: each record is turned into an embedding, the embeddings are indexed once, and later queries retrieve the nearest semantic neighbors instead of relying only on exact tokens. In practice this means the interface can respond to paraphrases, shorthand, and partial descriptions far better than a literal-only search box.
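
To make "nearest semantic neighbors" concrete, here is a minimal sketch, independent of altor-vec, of how closeness between two embeddings is typically scored with cosine similarity:

// Minimal sketch: cosine similarity between two embedding vectors.
// For unit-length vectors this reduces to a dot product, which is why
// many pipelines normalize embeddings before indexing.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

A query like 'sunset city skyline' scores high against a skyline caption's embedding and low against an unrelated one, which is exactly the ordering a vector index exploits.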

The browser is often the right place to do this when the corpus is moderate in size and safe to ship. The instant benefit is lower latency. The architectural benefit is that you remove a whole search service from the request path. That matters for keystroke-heavy interactions, offline-capable apps, and product surfaces where search should feel like a UI primitive rather than a network round trip.

This page uses a deterministic embedding helper so the sample is runnable with only altor-vec installed. That keeps the example honest and easy to paste into a demo project. The example indexes text captions because that keeps the snippet lightweight. In production you can use CLIP-style embeddings or any pipeline that projects both text and images into the same vector space.
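
A hedged sketch of what that swap might look like: embedImageClip and embedTextClip below are hypothetical stand-ins for whatever CLIP-style pipeline you adopt, not part of altor-vec. The only hard contract is that indexing and querying use vectors of the same dimensionality from the same embedding space, fed through the same from_vectors() and search() calls shown in the runnable example below.

// Hypothetical: embedImageClip/embedTextClip are placeholders for a real
// CLIP-style pipeline that projects images and text into one vector space.
async function buildImageIndex(assets, dims) {
  const vectors = [];
  for (const asset of assets) {
    vectors.push(await embedImageClip(asset.url)); // image side of the space
  }
  const flat = new Float32Array(vectors.flat());
  return WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
}

async function queryImageIndex(engine, phrase, k) {
  const queryVector = await embedTextClip(phrase); // text side of the space
  return JSON.parse(engine.search(new Float32Array(queryVector), k));
}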

Representative browser benchmark: ~54KB gzipped library payload, sub-millisecond local query time on a moderate corpus, and no per-query API dependency. Exact numbers depend on vector dimensions, index parameters, and device class.
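
If you want numbers for your own corpus and hardware, a tiny timing loop is enough. This sketch assumes the engine and embedText helper from the example below are in scope:

// Rough micro-benchmark: average local query latency over many runs.
// Results vary with dims, index parameters, corpus size, and device.
const queryVector = new Float32Array(embedText('sunset city skyline'));
const runs = 1000;
const start = performance.now();
for (let i = 0; i < runs; i++) {
  engine.search(queryVector, 4);
}
console.log(`avg query: ${((performance.now() - start) / runs).toFixed(3)} ms`);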

Runnable JavaScript example

The following snippet indexes a small in-memory dataset, performs a semantic lookup for 'cozy coffee shop interior', and prints the nearest matches. It uses the real altor-vec API, including init(), WasmSearchEngine.from_vectors(), and search().

import init, { WasmSearchEngine } from 'altor-vec';

const dims = 12;

const records = [
  {
    title: 'Morning cafe counter',
    text: 'Warm wooden coffee bar with pastries, espresso machine, and soft window light.',
    meta: 'interior',
  },
  {
    title: 'Remote team workshop',
    text: 'Colleagues around a table covered in sticky notes and laptops.',
    meta: 'team',
  },
  {
    title: 'Downtown skyline at dusk',
    text: 'City skyline with purple sunset, office towers, and river reflections.',
    meta: 'city',
  },
  {
    title: 'Trail runner in forest',
    text: 'Athlete sprinting through a pine trail with dramatic side light.',
    meta: 'outdoor',
  },
  {
    title: 'Minimal product flat lay',
    text: 'Neutral tabletop composition with skincare bottles and shadows.',
    meta: 'product',
  },
  {
    title: 'Warehouse fulfillment line',
    text: 'Workers packing boxes beside shelves and barcode scanners.',
    meta: 'operations',
  },
];

// Deterministic toy embedder: hashes each token (FNV-1a) into a fixed-size
// vector, then L2-normalizes. It stands in for a real embedding model so the
// example runs with only altor-vec installed.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}

async function main() {
  await init();

  // Flatten all record embeddings into one row-major Float32Array.
  const flat = new Float32Array(
    records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
  );

  const engine = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
  const hits = JSON.parse(engine.search(new Float32Array(embedText('cozy coffee shop interior')), 4));

  // Hits are [id, distance] pairs; map IDs back to records and convert
  // distance into a rough similarity score for display.
  const results = hits.map(([id, distance]) => ({
    ...records[id],
    similarity: Number((1 - distance).toFixed(3)),
  }));

  console.table(results);
  engine.free();
}

main();

React component version

The React version keeps the same index build but wires it into component state so the UI can query on input changes. That is usually how teams introduce semantic retrieval into an existing product: initialize once, keep the engine in memory, and map nearest-neighbor hits back to the original records.

import { useEffect, useState } from 'react';
import init, { WasmSearchEngine } from 'altor-vec';

const dims = 12;

const records = [
  {
    title: 'Morning cafe counter',
    text: 'Warm wooden coffee bar with pastries, espresso machine, and soft window light.',
    meta: 'interior',
  },
  {
    title: 'Remote team workshop',
    text: 'Colleagues around a table covered in sticky notes and laptops.',
    meta: 'team',
  },
  {
    title: 'Downtown skyline at dusk',
    text: 'City skyline with purple sunset, office towers, and river reflections.',
    meta: 'city',
  },
  {
    title: 'Trail runner in forest',
    text: 'Athlete sprinting through a pine trail with dramatic side light.',
    meta: 'outdoor',
  },
  {
    title: 'Minimal product flat lay',
    text: 'Neutral tabletop composition with skincare bottles and shadows.',
    meta: 'product',
  },
  {
    title: 'Warehouse fulfillment line',
    text: 'Workers packing boxes beside shelves and barcode scanners.',
    meta: 'operations',
  },
];

// Same deterministic toy embedder as the plain JavaScript example.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}

export function ImageSearchExample() {
  const [engine, setEngine] = useState(null);
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);

  // Build the index once on mount; free the WASM engine on unmount.
  useEffect(() => {
    let cancelled = false;
    let instance;

    (async () => {
      await init();
      const flat = new Float32Array(
        records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
      );
      instance = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
      if (cancelled) {
        // Unmounted while the index was still building: free it right away
        // instead of leaking an engine that was never handed to state.
        instance.free();
      } else {
        setEngine(instance);
      }
    })();

    return () => {
      cancelled = true;
      instance?.free();
    };
  }, []);

  // Re-query whenever the input changes; local search is cheap enough
  // to run on every keystroke.
  useEffect(() => {
    if (!engine || query.trim().length < 2) {
      setResults([]);
      return;
    }

    const hits = JSON.parse(engine.search(new Float32Array(embedText(query)), 5));
    setResults(
      hits.map(([id, distance]) => ({
        ...records[id],
        similarity: Number((1 - distance).toFixed(3)),
      }))
    );
  }, [engine, query]);

  return (
    <section>
      <input
        value={query}
        onChange={(event) => setQuery(event.target.value)}
        placeholder="Describe the image you need"
      />
      <ul>
        {results.map((result) => (
          <li key={result.title}>
            <strong>{result.title}</strong> — {result.meta} (score {result.similarity})
          </li>
        ))}
      </ul>
    </section>
  );
}

How this example works

The pattern has three moving parts. First, you choose what text represents each record: title, description, metadata, or a chunk of content. Second, you turn that text into vectors and flatten them into one Float32Array. Third, you build the HNSW graph and query it with a vector created from the user input. The library returns nearest-neighbor IDs and distances, and your app decides how to display or post-process them.
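
One detail worth making explicit is the layout contract of that flattened array: record i occupies the slice [i * dims, (i + 1) * dims), and the IDs that come back from search() index into that same order. A minimal sketch, reusing embedText and records from the example above:

// The engine treats the flat Float32Array as row-major: one dims-length
// row per record, in insertion order, so hit IDs map straight back to records.
const flat = new Float32Array(records.length * dims);
records.forEach((record, i) => {
  const vector = embedText(`${record.title} ${record.text} ${record.meta}`);
  flat.set(vector, i * dims); // record i lives at [i * dims, (i + 1) * dims)
});
// A hit ID of 2 therefore means records[2] produced the matching vector.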

Because the retrieval step is approximate nearest-neighbor search, it stays fast even as the dataset grows beyond trivial linear scans. The most important quality lever is still the embedding model. Better vectors usually matter more than micro-optimizing ANN parameters, so teams should benchmark retrieval quality on real user phrasing before shipping the experience widely.
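
One lightweight way to run that benchmark is a small set of real user phrases, each labeled with the record it should retrieve, checked against the top k results. A sketch, assuming the engine, records, and embedText from the example above are in scope:

// Minimal retrieval-quality check: the fraction of labeled queries whose
// expected record appears in the top k hits (hit rate @ k).
const labeledQueries = [
  { phrase: 'cozy coffee shop interior', expectedTitle: 'Morning cafe counter' },
  { phrase: 'sunset city skyline', expectedTitle: 'Downtown skyline at dusk' },
];

function hitRateAtK(engine, k) {
  let found = 0;
  for (const { phrase, expectedTitle } of labeledQueries) {
    const hits = JSON.parse(engine.search(new Float32Array(embedText(phrase)), k));
    if (hits.some(([id]) => records[id].title === expectedTitle)) found += 1;
  }
  return found / labeledQueries.length;
}

console.log('hit rate @ 4:', hitRateAtK(engine, 4));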

When to use this pattern

This is a practical fit when the search corpus is small to medium, shipped with the app, and searched frequently enough that backend latency would be noticeable. Common examples include docs portals, embedded support widgets, local-first assistants, and curated catalogs.

Limitations

This pattern assumes you have useful captions or multimodal embeddings. Pure filename indexing is not enough, and very large image collections still challenge browser memory and download budgets.

Be especially careful about corpus size, update frequency, and data sensitivity. Browser vector search is excellent when those three constraints are favorable, but it is not the right answer when the dataset is huge, private, or changing constantly for every user.

FAQ

Do I need multimodal embeddings for image search?

For strong production quality, yes. The lightweight example uses captions only, which is runnable but less powerful than a shared text-image embedding space.

Can I search a private asset bundle entirely offline?

Yes. If the assets and their embeddings are safe to ship, altor-vec can keep retrieval fully local.

Where does the browser approach stop working well?

At very large media-library sizes or when the source assets cannot be exposed to the client.

Get started: npm install altor-vec · GitHub