Search
Aerostack Search provides vector similarity search. Store embeddings alongside your content, then find the most semantically similar results for any query.
Quick start
```ts
import { sdk } from '@aerostack/sdk'

// 1. Store content with embeddings
const text = 'Cloudflare Workers run V8 isolates at 300+ edge locations.'
const vector = await sdk.ai.embed(text)

await sdk.db.query(
  'INSERT INTO knowledge (id, text, embedding) VALUES (?, ?, ?)',
  [crypto.randomUUID(), text, JSON.stringify(vector)]
)

// 2. Search
const queryVector = await sdk.ai.embed('How does Cloudflare edge work?')
const results = await sdk.search.query(queryVector, {
  table: 'knowledge',
  limit: 5,
  threshold: 0.7, // minimum similarity score (0–1)
})

results.forEach(r => {
  console.log(`Score: ${r.score.toFixed(3)} — ${r.text}`)
})
```

Methods
| Method | Description |
|---|---|
| `sdk.search.query(vector, options)` | Find similar items by vector |
| `sdk.search.upsert(id, vector, metadata?)` | Insert or update a vector |
| `sdk.search.delete(id)` | Remove a vector |
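To make the upsert and delete semantics concrete, here is a minimal in-memory model (a hypothetical illustration, not the Aerostack implementation): upserting an existing `id` overwrites the stored record rather than adding a duplicate.

```typescript
// Hypothetical in-memory model of vector upsert/delete semantics.
type VectorRecord = { vector: number[]; metadata?: Record<string, unknown> }

class VectorStore {
  private records = new Map<string, VectorRecord>()

  // Insert a new record, or overwrite the existing one with the same id.
  upsert(id: string, vector: number[], metadata?: Record<string, unknown>) {
    this.records.set(id, { vector, metadata })
  }

  delete(id: string) {
    this.records.delete(id)
  }

  get(id: string) {
    return this.records.get(id)
  }

  size() {
    return this.records.size
  }
}
```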
Full RAG pipeline
```ts
app.post('/api/ask', async (c) => {
  const { question } = await c.req.json()

  // 1. Embed the question
  const queryVector = await sdk.ai.embed(question)

  // 2. Find relevant context
  const results = await sdk.search.query(queryVector, {
    table: 'knowledge',
    limit: 5,
  })

  // 3. Format context
  const context = results.map(r => r.text).join('\n\n')

  // 4. Generate answer with context
  const answer = await sdk.ai.complete({
    model: 'gpt-4o-mini',
    system: 'Answer questions based only on the provided context.',
    prompt: `Context:\n${context}\n\nQuestion: ${question}`,
  })

  return c.json({
    answer: answer.text,
    sources: results.map(r => ({ id: r.id, score: r.score })),
  })
})
```

Search options
```ts
await sdk.search.query(vector, {
  table: 'documents', // database table with an embedding column
  limit: 10,          // max results (default: 10)
  threshold: 0.5,     // min cosine similarity (default: none)
  filter: {           // metadata filters
    category: 'engineering',
    published: true,
  },
})
```

The embedding column in your database table stores vectors as JSON arrays. Aerostack automatically indexes them for vector search — no extra configuration needed.
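The score that `threshold` is compared against is cosine similarity. As a sketch of the underlying math, independent of how Aerostack computes it internally:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|). The result ranges from -1 to 1,
// and stays in 0–1 for the non-negative embeddings most models produce.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

Identical directions score 1, orthogonal vectors score 0, which is why a `threshold` of 0.7 keeps only results whose embeddings point in nearly the same direction as the query.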
Use cases
Knowledge base search
Let users search your help center or documentation by meaning instead of keywords. Embed each article on creation, then match user queries against the full corpus. A search for “billing not working” surfaces articles about payment issues, invoicing errors, and subscription troubleshooting.
```ts
const queryVector = await sdk.ai.embed(userQuery)
const articles = await sdk.search.query(queryVector, {
  table: 'help_articles',
  limit: 5,
  threshold: 0.7,
  filter: { published: true },
})
```

Similar product recommendations
Show “customers also liked” suggestions by embedding product descriptions and finding the nearest neighbors. When a user views a product, query for the 5 most similar items and display them as recommendations — no collaborative filtering infrastructure required.
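The nearest-neighbor ranking can be sketched as a brute-force top-k over cosine similarity. Aerostack does this server-side via `sdk.search.query`; the pure function below (names and the `k = 5` default are illustrative) only shows the ranking logic, including the detail that the viewed product must be excluded from its own recommendations.

```typescript
type Product = { id: string; embedding: number[] }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

// Rank every other product by similarity to the viewed one, keep the top k.
function recommend(viewed: Product, catalog: Product[], k = 5): Product[] {
  return catalog
    .filter(p => p.id !== viewed.id) // never recommend the product itself
    .map(p => ({ p, score: cosine(viewed.embedding, p.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(x => x.p)
}
```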
FAQ matching
Match incoming support questions against your existing FAQ database. When a user submits a ticket, find the most similar FAQ entry. If the similarity score is above a threshold, auto-reply with the FAQ answer; otherwise, route the ticket to a human agent.
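The routing decision reduces to a threshold check on the best match's score. A minimal sketch, where the `0.85` cutoff is an illustrative assumption to tune against your own ticket data:

```typescript
type FaqMatch = { id: string; answer: string; score: number }
type Routing =
  | { action: 'auto-reply'; reply: string; faqId: string }
  | { action: 'escalate' }

// Auto-answer only when the best FAQ match clears the confidence bar;
// anything weaker (or no match at all) goes to a human agent.
function routeTicket(best: FaqMatch | undefined, threshold = 0.85): Routing {
  if (best && best.score >= threshold) {
    return { action: 'auto-reply', reply: best.answer, faqId: best.id }
  }
  return { action: 'escalate' }
}
```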
Content discovery
Power a “related articles” or “more like this” feature on blogs, news sites, or learning platforms. Embed each piece of content on publish, and at read time, find the nearest neighbors to suggest further reading without manual tagging or curation.