Build Chat Memory with altor-vec

What this pattern solves: Semantic recall of past turns, notes, and facts for assistants that run in the browser.

Chat history gets long quickly, but only a few facts matter to the next answer. Vector search lets an assistant recall semantically related memories instead of replaying the entire transcript every turn.

Local memory retrieval is compelling for privacy-sensitive assistants and low-latency chat UIs because recall does not require a network round trip. It can also reduce token usage by selecting only the most relevant snippets.

Install

npm install altor-vec

Concept explanation

In a chat memory workflow, users usually describe intent in their own words. That is why vector search works well here: each record is turned into an embedding, the embeddings are indexed once, and later queries retrieve the nearest semantic neighbors instead of relying only on exact tokens. In practice this means the interface can respond to paraphrases, shorthand, and partial descriptions far better than a literal-only search box.
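
As a minimal sketch of that idea (the three-dimensional vectors below are hand-picked toys, not real embeddings), ranking stored snippets by cosine similarity surfaces the closest paraphrase even though the query shares no words with the stored text:

// Toy illustration of nearest-neighbor recall. Real systems use learned
// embeddings with hundreds of dimensions; these vectors are hand-picked.
const memory = [
  { text: 'opening an office in Austin', vec: [0.9, 0.1, 0.2] },
  { text: 'contract review in September', vec: [0.1, 0.9, 0.3] },
];
const queryVec = [0.8, 0.2, 0.1]; // "which city are they expanding to"

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const ranked = memory
  .map((m) => ({ text: m.text, score: cosine(queryVec, m.vec) }))
  .sort((a, b) => b.score - a.score);
console.log(ranked[0].text); // "opening an office in Austin"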

The browser is often the right place to do this when the corpus is moderate in size and safe to ship. The instant benefit is lower latency. The architectural benefit is that you remove a whole search service from the request path. That matters for keystroke-heavy interactions, offline-capable apps, and product surfaces where search should feel like a UI primitive rather than a network round trip.

This page uses a deterministic embedding helper so the sample is runnable with only altor-vec installed. That keeps the example honest and easy to paste into a demo project. In production you may combine recency, conversation IDs, and memory types with semantic similarity so the assistant retrieves both relevant and current context.
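
As a hedged sketch of that combination (the conversationId and type fields, and the 1.2 boost, are illustrative assumptions for this example; altor-vec itself only returns id and distance pairs):

// Illustrative post-processing: filter raw hits by thread and boost
// durable facts. The metadata fields and weights are assumptions,
// not part of the altor-vec API.
function refineHits(hits, records, activeConversationId) {
  return hits
    .map(([id, distance]) => ({ ...records[id], similarity: 1 - distance }))
    .filter((r) => r.conversationId === activeConversationId) // stay in-thread
    .map((r) => ({ ...r, score: r.type === 'fact' ? r.similarity * 1.2 : r.similarity }))
    .sort((a, b) => b.score - a.score);
}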

Representative browser benchmark: ~54KB gzipped library payload, sub-millisecond local query time on a moderate corpus, and no per-query API dependency. Exact numbers depend on vector dimensions, index parameters, and device class.

Runnable JavaScript example

The following snippet indexes a small in-memory dataset, performs a semantic lookup for "what city did the customer say they were opening in", and prints the nearest matches. It uses the real altor-vec API, including init(), WasmSearchEngine.from_vectors(), and search().

import init, { WasmSearchEngine } from 'altor-vec';

const dims = 12;
const records = [
  { title: 'Customer expansion plan', text: 'The customer plans to open a new office in Austin next quarter.', meta: 'account' },
  { title: 'Preferred renewal month', text: 'Procurement wants contract review to start in September.', meta: 'renewal' },
  { title: 'Feature request', text: 'Asked for weekly exports to S3 instead of manual CSV downloads.', meta: 'product' },
  { title: 'Stakeholder note', text: 'Security approval depends on SSO and audit log documentation.', meta: 'security' },
  { title: 'Support blocker', text: 'User cannot complete MFA because they changed phones.', meta: 'support' },
  { title: 'Budget guidance', text: 'Team expects to stay under a twenty seat plan this quarter.', meta: 'commercial' },
];

// Deterministic toy embedder: hashes each token (FNV-1a) into one of
// `dims` buckets, adds a small length-based signal, and L2-normalizes
// so dot products behave like cosine similarity.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    let hash = 2166136261; // FNV-1a offset basis
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619); // FNV-1a prime
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}

async function main() {
  await init(); // load the WASM module once before any engine calls

  // Flatten all record embeddings into one contiguous Float32Array.
  const flat = new Float32Array(
    records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
  );

  const engine = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
  const hits = JSON.parse(
    engine.search(new Float32Array(embedText('what city did the customer say they were opening in')), 4)
  );

  // Map (id, distance) pairs back onto the source records.
  const results = hits.map(([id, distance]) => ({
    ...records[id],
    similarity: Number((1 - distance).toFixed(3)),
  }));

  console.table(results);
  engine.free(); // release WASM memory when done
}

main();

React component version

The React version keeps the same index build but wires it into component state so the UI can query on input changes. That is usually how teams introduce semantic retrieval into an existing product: initialize once, keep the engine in memory, and map nearest-neighbor hits back to the original records.

import { useEffect, useState } from 'react';
import init, { WasmSearchEngine } from 'altor-vec';

const dims = 12;
const records = [
  { title: 'Customer expansion plan', text: 'The customer plans to open a new office in Austin next quarter.', meta: 'account' },
  { title: 'Preferred renewal month', text: 'Procurement wants contract review to start in September.', meta: 'renewal' },
  { title: 'Feature request', text: 'Asked for weekly exports to S3 instead of manual CSV downloads.', meta: 'product' },
  { title: 'Stakeholder note', text: 'Security approval depends on SSO and audit log documentation.', meta: 'security' },
  { title: 'Support blocker', text: 'User cannot complete MFA because they changed phones.', meta: 'support' },
  { title: 'Budget guidance', text: 'Team expects to stay under a twenty seat plan this quarter.', meta: 'commercial' },
];

// Same deterministic toy embedder as in the plain JavaScript example:
// hash tokens into buckets, then L2-normalize the vector.
function embedText(text) {
  const vector = new Float32Array(dims);
  for (const token of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    let hash = 2166136261;
    for (const char of token) {
      hash = Math.imul(hash ^ char.charCodeAt(0), 16777619);
    }
    const slot = Math.abs(hash) % dims;
    vector[slot] += 1;
    vector[(slot + token.length) % dims] += token.length / 10;
  }
  const magnitude = Math.hypot(...vector) || 1;
  return Array.from(vector, (value) => value / magnitude);
}

export function ChatMemoryExample() {
  const [engine, setEngine] = useState(null);
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);

  // Build the index once on mount and free the WASM memory on unmount.
  useEffect(() => {
    let cancelled = false;
    let instance;

    (async () => {
      await init();
      const flat = new Float32Array(
        records.flatMap((record) => embedText(`${record.title} ${record.text} ${record.meta}`))
      );
      instance = WasmSearchEngine.from_vectors(flat, dims, 16, 200, 64);
      if (cancelled) {
        // Unmounted while initializing: release immediately instead of leaking.
        instance.free();
        instance = undefined;
        return;
      }
      setEngine(instance);
    })();

    return () => {
      cancelled = true;
      instance?.free();
    };
  }, []);

  // Re-query whenever the engine is ready and the input changes.
  useEffect(() => {
    if (!engine || query.trim().length < 2) {
      setResults([]);
      return;
    }

    const hits = JSON.parse(engine.search(new Float32Array(embedText(query)), 5));
    setResults(
      hits.map(([id, distance]) => ({
        ...records[id],
        similarity: Number((1 - distance).toFixed(3)),
      }))
    );
  }, [engine, query]);

  return (
    <section>
      <input
        value={query}
        onChange={(event) => setQuery(event.target.value)}
        placeholder="Search prior conversation memory"
      />
      <ul>
        {results.map((result) => (
          <li key={result.title}>
            <strong>{result.title}</strong> — {result.meta} (score {result.similarity})
          </li>
        ))}
      </ul>
    </section>
  );
}

How this example works

The pattern has three moving parts. First, you choose what text represents each record: title, description, metadata, or a chunk of content. Second, you turn that text into vectors and flatten them into one Float32Array. Third, you build the HNSW graph and query it with a vector created from the user input. The library returns nearest-neighbor IDs and distances, and your app decides how to display or post-process them.

Because the retrieval step is approximate nearest-neighbor search, it stays fast even as the dataset grows beyond trivial linear scans. The most important quality lever is still the embedding model. Better vectors usually matter more than micro-optimizing ANN parameters, so teams should benchmark retrieval quality on real user phrasing before shipping the experience widely.
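
One lightweight way to run that benchmark is a recall@k check over labeled query/record pairs, sketched below under the assumption that you wrap the engine in a small searchByText(queryText, k) helper built from the examples above; the labeled pairs are placeholders for real phrasing pulled from logs.

// Sketch of a recall@k check. searchByText and the labeled examples are
// assumptions for this page, not altor-vec APIs.
const labeled = [
  { query: 'which city are they expanding to', expected: 'Customer expansion plan' },
  { query: 'when should we start the renewal conversation', expected: 'Preferred renewal month' },
];

function recallAtK(searchByText, k = 3) {
  let found = 0;
  for (const { query, expected } of labeled) {
    const results = searchByText(query, k); // -> [{ title, similarity }, ...]
    if (results.some((result) => result.title === expected)) found += 1;
  }
  return found / labeled.length; // 1.0 means every expected record surfaced
}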

When to use this pattern

This is a practical fit when the search corpus is small to medium, shipped with the app, and searched frequently enough that backend latency would be noticeable. Common examples include docs portals, embedded support widgets, local-first assistants, and curated catalogs.

Limitations

Memory retrieval needs guardrails. Old facts can become stale, and purely semantic recall may surface the wrong turn unless you layer in recency or thread filters.
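
A hedged sketch of one such guardrail, assuming each stored memory carries an ageDays field (the 30-day half-life is an arbitrary starting point, not a recommendation):

// Illustrative recency guardrail: exponentially decay similarity by age,
// so a semantically close but stale memory ranks below a fresher one.
function rerankByRecency(hits, halfLifeDays = 30) {
  return hits
    .map((hit) => ({
      ...hit,
      score: hit.similarity * Math.pow(0.5, hit.ageDays / halfLifeDays),
    }))
    .sort((a, b) => b.score - a.score);
}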

Be especially careful about corpus size, update frequency, and data sensitivity. Browser vector search is excellent when those three constraints are favorable, but it is not the right answer when the dataset is huge, too sensitive to ship to the client, or changing constantly for every user.

FAQ

Can altor-vec store long-term memory for a chat app?

It can support the retrieval step, but you still need your own policy for what gets saved, updated, or forgotten.
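
For illustration, such a policy can start as a couple of predicates plus an upsert rule; everything below (the kind and topic fields, the length threshold) is a hypothetical starting point, not an altor-vec feature.

// Hypothetical write policy: decide what enters long-term memory and
// replace older notes on the same topic instead of duplicating them.
function shouldRemember(turn) {
  if (turn.text.trim().length < 12) return false; // skip trivial turns
  if (turn.kind === 'smalltalk') return false;    // skip filler
  return true;
}

function upsertMemory(store, turn) {
  if (!shouldRemember(turn)) return store;
  const rest = store.filter((memory) => memory.topic !== turn.topic);
  return [...rest, { ...turn, savedAt: Date.now() }];
}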

Why use vector recall instead of raw transcript search?

Because users usually ask for a concept or fact, not an exact phrase from a past turn.

What extra ranking signals help most?

Recency, conversation boundaries, and memory type metadata usually make memory retrieval much more reliable.

Get started: npm install altor-vec · GitHub