Search on Liners uses a hybrid approach — we run traditional keyword matching and AI-powered semantic search at the same time, then merge the results. The idea is simple: keywords are great when you know exactly what you're looking for, but semantic and natural language search catches the products you'd miss otherwise.
QA Quinn keeps an eye on search quality to make sure results stay relevant.
When you type a query, two things happen in parallel:

- Keyword search matches your terms against the text in product listings.
- Semantic search compares an embedding of your query against each product's embedding.

Results from both are merged and ranked into a single list.
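Here's a minimal sketch of that flow in TypeScript, assuming hypothetical `keywordSearch` and `semanticSearch` helpers and a deliberately simple merge (the real ranking logic may weigh things differently):

```typescript
interface ScoredProduct {
  id: string;
  name: string;
  score: number; // higher means more relevant
}

// Hypothetical helpers -- the real implementations live elsewhere.
declare function keywordSearch(query: string): Promise<ScoredProduct[]>;
declare function semanticSearch(query: string): Promise<ScoredProduct[]>;

async function hybridSearch(query: string): Promise<ScoredProduct[]> {
  // Run keyword matching and semantic search at the same time.
  const [keywordHits, semanticHits] = await Promise.all([
    keywordSearch(query),
    semanticSearch(query),
  ]);

  // Merge: deduplicate by product id, keeping the better score for each product.
  const merged = new Map<string, ScoredProduct>();
  for (const hit of [...keywordHits, ...semanticHits]) {
    const existing = merged.get(hit.id);
    if (!existing || hit.score > existing.score) {
      merged.set(hit.id, hit);
    }
  }

  // Rank the combined list, best match first.
  return Array.from(merged.values()).sort((a, b) => b.score - a.score);
}
```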
Searching for "send money abroad" will surface products related to international transfers and remittances — even if they don't contain those exact words anywhere in their listing.
Each product is turned into a numerical vector that captures what it does, who it's for, and where it operates. The embedding is built from the text of the product's listing.
We use Google's text-embedding-005 model (768 dimensions) to generate these embeddings. When a new product is added, it gets embedded automatically; when a product is updated, it triggers a re-embed.
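For illustration, here's a rough sketch of embedding a listing through the Vertex AI REST endpoint for text-embedding-005. The project ID, region, and `getAccessToken` helper are placeholders, not our actual setup:

```typescript
const PROJECT_ID = "your-project"; // placeholder
const LOCATION = "us-central1";    // placeholder
const MODEL = "text-embedding-005";

// Placeholder for real auth, e.g. via google-auth-library.
declare function getAccessToken(): Promise<string>;

async function embedProduct(listingText: string): Promise<number[]> {
  const url =
    `https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}` +
    `/locations/${LOCATION}/publishers/google/models/${MODEL}:predict`;

  const response = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${await getAccessToken()}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ instances: [{ content: listingText }] }),
  });

  if (!response.ok) {
    throw new Error(`Embedding request failed: ${response.status}`);
  }

  const data = await response.json();
  // text-embedding-005 returns a 768-dimensional vector per instance.
  return data.predictions[0].embeddings.values;
}
```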
Semantic search costs money (each query hits an embedding API), so we have daily usage limits to keep things sustainable:
| User Type | Daily Limit |
|---|---|
| Guest (not logged in) | 10 smart searches |
| Logged-in user | 50 smart searches |
| Admin | Unlimited |
When you've used up your quota, smart search takes a break until the daily limit resets; regular keyword search keeps working.
To be fair, I see no reason why anyone should want to search more than 50 times in 24 hours, but I know humans can be unpredictable.
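For the curious, here's one way a daily cap like this could be enforced. The in-memory counter and user-type keys below are illustrative stand-ins for whatever store actually tracks usage:

```typescript
// Daily limits per user type, matching the table above.
const DAILY_LIMITS: Record<string, number> = {
  guest: 10,
  user: 50,
  admin: Infinity,
};

// Illustrative in-memory counter keyed by user id + UTC date.
// A real setup would use a persistent store.
const usage = new Map<string, number>();

function todayKey(userId: string): string {
  const today = new Date().toISOString().slice(0, 10); // e.g. "2025-06-01"
  return `${userId}:${today}`;
}

function canUseSmartSearch(userId: string, userType: string): boolean {
  const used = usage.get(todayKey(userId)) ?? 0;
  return used < (DAILY_LIMITS[userType] ?? 0);
}

function recordSmartSearch(userId: string): void {
  const key = todayKey(userId);
  usage.set(key, (usage.get(key) ?? 0) + 1);
}
```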
If the embedding API is ever unavailable (timeout, error, or anything unexpected), search gracefully falls back to keyword-only results. You won't see an error — it just works, minus the semantic layer.
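In code terms, that fallback amounts to a try/catch around the semantic path. This sketch reuses the hypothetical helpers from the earlier example:

```typescript
// Reuses the hypothetical names from the hybrid search sketch above.
type ScoredProduct = { id: string; name: string; score: number };
declare function keywordSearch(query: string): Promise<ScoredProduct[]>;
declare function hybridSearch(query: string): Promise<ScoredProduct[]>;

async function searchWithFallback(query: string): Promise<ScoredProduct[]> {
  try {
    // Normal path: keyword + semantic, merged and ranked.
    return await hybridSearch(query);
  } catch {
    // Embedding API timed out, errored, or did something unexpected:
    // quietly serve keyword-only results instead of showing an error.
    return keywordSearch(query);
  }
}
```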
LGTM Larry monitors for these fallbacks to make sure they're rare.