AI-Native Filtering
Natural language product filtering powered by Workers AI
╭──────────────────────────────────────────────────────────────╮
│ │
│ USER AGENT FILTERS │
│ │
│ "Show me chairs ┌─────────────┐ ┌──────────────┐ │
│ under $2000" ───▶ │ Workers AI │ ───▶ │ category: │ │
│ │ │ │ seating │ │
│ │ Reasoning │ │ price: <2000 │ │
│ │ Streaming │ │ status: any │ │
│ └─────────────┘ └──────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ╔═══════════════════════════════════╗ │
│ ║ 16 products in catalog ║ │
│ ╚═══════════════════════════════════╝ │
│ │
╰──────────────────────────────────────────────────────────────╯
Ask for what you want. Skip the filter taxonomy.
Abstract
Filter UIs have a problem. They ask users to learn a taxonomy they don't care about. Categories, materials, price ranges—each toggle is a decision the user must make before they can find what they want.
What if users could just say what they're looking for?
This experiment tests whether an AI agent can interpret natural language queries and apply the right filters. The user describes their intent. The agent does the clicking.
The Problem
Traditional filter UIs require users to think in the system's terms. "Seating" instead of "chairs." "In stock" instead of "available now." Each filter is a translation from what the user wants to what the system understands.
This creates friction. Users must learn the vocabulary. They must understand what combinations are valid. They must click through options to see what exists.
"The best interface is no interface. The next best is one that speaks your language."
Hypothesis
An agent with access to filter tools can get from a natural-language request to the right filters better than a human navigating checkboxes can. Not because the agent is smarter, but because it removes the translation step.
- User says: "Something for my living room under $2,000"
- Agent interprets: Categories: seating, tables. Max price: $2,000
- User gets: Relevant results without learning the taxonomy
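Concretely, the agent's output can be thought of as a small structured object derived from free text. The type and field names below are a sketch, not the experiment's actual schema:

// Hypothetical shape of the agent's interpretation of a query.
interface FilterIntent {
  categories?: string[];   // e.g. ["seating", "tables"]
  materials?: string[];
  maxPrice?: number;       // USD
  status?: "in-stock" | "any";
}

// "Something for my living room under $2,000" interprets to:
const intent: FilterIntent = {
  categories: ["seating", "tables"],
  maxPrice: 2000,
  status: "any",
};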
Live Demo
Try it yourself. Type a query in natural language, or use the traditional toggles. Watch how the agent reasons through your request in real-time.
The catalog:
- Bookshelf Unit
- Console Table
- Entryway Console
- Floor Lamp
- H-shaped Side Table
- H-shaped Side Table Oak
- Lounge Chair
- Low Coffee Table
- Mantis Chair
- Mantis Chair Compact
- Mantis Lounge Chair
- Moon Tides Bedside Cabinet
- Nightstand Cabinet
- Pendant Light
- Stone Side Table
Implementation
The architecture separates concerns into composable packages:
@create-something/canon/filtering
UI components: FilterTogglePanel, ProductGrid, AgentPanel. Headless—they render state but don't know how filtering happens.
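A rough sketch of the headless contract, with illustrative prop names rather than the package's actual API:

// The components receive state and callbacks; they never decide how filtering happens.
interface FilterState {
  categories: string[];
  materials: string[];
  maxPrice?: number;
}

interface FilterTogglePanelProps {
  state: FilterState;                     // current filters, owned by the caller
  onChange: (next: FilterState) => void;  // report the user's intent upward
}

interface ProductGridProps {
  products: { id: string; name: string; price: number }[];  // an already-filtered list
}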
Filter Agent
Workers AI with JSON Schema mode. Eight tools: filter_by_material, filter_by_category, filter_by_price_range, and more.
SSE Streaming
Agent reasoning streams to the frontend in real-time. Users see the agent think through their query.
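A minimal sketch of that streaming path in a Worker, using standard Web APIs; the event names and payloads are illustrative, not the demo's actual protocol:

// A Worker route that streams agent events as Server-Sent Events (SSE).
export default {
  async fetch(request: Request): Promise<Response> {
    const { readable, writable } = new TransformStream();
    const writer = writable.getWriter();
    const encoder = new TextEncoder();

    const send = (event: string, data: unknown) =>
      writer.write(encoder.encode(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`));

    // The agent loop writes events as it reasons; the stream flushes to the
    // browser immediately, so users see thinking before the final filters arrive.
    (async () => {
      await send("reasoning", { text: "Interpreting: chairs under $2000…" });
      await send("filters", { categories: ["seating"], maxPrice: 2000 });
      await writer.close();
    })();

    return new Response(readable, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
      },
    });
  },
};

On the client, an EventSource or a streaming fetch reader consumes these events and feeds them to a component like AgentPanel.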
Engineering Details
Performance characteristics and cost analysis for the AI-native filtering implementation.
Latency Breakdown
Streaming reduces perceived latency by ~60%. Users see reasoning begin within 200ms.
Cost Analysis (Verified)
Tool Definitions (JSON Schema Mode)
{
"tools": [
{ "name": "filter_by_material", "params": ["materials[]"] },
{ "name": "filter_by_category", "params": ["categories[]"] },
{ "name": "filter_by_price_range", "params": ["min?", "max?"] },
{ "name": "filter_by_status", "params": ["statuses[]"] },
{ "name": "search_by_name", "params": ["query"] },
{ "name": "sort_results", "params": ["field", "direction"] },
{ "name": "clear_filters", "params": [] },
{ "name": "final_response", "params": ["explanation"] }
],
"max_iterations": 5,
"response_format": "json_schema"
}
JSON Schema mode ensures structured output. No parsing failures in 500+ test queries.
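A hedged sketch of one iteration of that loop: constrain the model's output to a tool-call schema, then dispatch the result. The model name, schema, and applyTool dispatcher are assumptions for illustration, not the experiment's exact code:

// One agent iteration under JSON-schema-constrained output.
async function selectTool(env: { AI: any }, query: string) {
  const toolCallSchema = {
    type: "object",
    properties: {
      name: {
        type: "string",
        enum: ["filter_by_category", "filter_by_price_range", "filter_by_status", "final_response"],
      },
      params: { type: "object" },
    },
    required: ["name", "params"],
  };

  const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
    messages: [
      { role: "system", content: "Pick exactly one filter tool for the user's request." },
      { role: "user", content: query },
    ],
    // Workers AI JSON mode; the exact response_format shape can vary by model.
    response_format: { type: "json_schema", json_schema: toolCallSchema },
  });

  // The structured output arrives under `response` (string or object, depending
  // on the model), so parse defensively before dispatching.
  const call = typeof result.response === "string" ? JSON.parse(result.response) : result.response;
  return applyTool(call.name, call.params);
}

declare function applyTool(name: string, params: Record<string, unknown>): unknown; // hypothetical dispatcher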
Token Budget Breakdown (Verified)
The agent's context uses summarized catalog metadata (categories, materials, price range), not full product details. This design choice keeps the prompt small; including the full product list would add ~260 more tokens.
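As a sketch, that summarized context might look like the following; the specific category names, materials, and price bounds shown here are illustrative:

// Summarized catalog metadata handed to the model instead of the full product list.
const catalogSummary = {
  productCount: 16,
  categories: ["seating", "tables", "lighting", "storage"],  // illustrative category names
  materials: ["oak", "walnut", "steel", "stone"],            // illustrative material names
  priceRange: { min: 150, max: 2400 },                       // illustrative bounds
};

// Serialized, this stays small and fixed per query; the full product list
// would add roughly 260 more tokens.
const contextBlock = `Catalog summary: ${JSON.stringify(catalogSummary)}`;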
Optimization Opportunities
Current bottleneck analysis and where Rust/caching would help at scale:
Rust WASM: When It Helps
Caching Strategies
Production Architecture (Proposed)
Query → [Semantic Cache Check (KV)]
↓ miss
→ [Rust WASM: Query Analysis]
→ [Rust WASM: Vector Index Lookup] → Top-K products
→ [LLM: Tool Selection on reduced context]
→ [Cache Write (KV)]
→ Response
Estimated latency reduction: 40-60% for cache hits
Estimated cost reduction: 80% for cache hits
Verdict: For this experiment (16 products), optimizations are premature. The 300-500ms inference time dominates. At scale (1000+ products), Rust WASM for vector indexing and KV-based semantic caching would provide meaningful improvements.
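A minimal sketch of the cache-check and cache-write steps from that proposed pipeline, assuming a KV namespace bound as CACHE and the Cloudflare Workers types; runAgent is a hypothetical stand-in for the existing LLM tool-selection path:

// A true semantic cache would key on a query embedding; a normalized string
// key is used here only to keep the sketch short.
declare function runAgent(query: string): Promise<unknown>; // hypothetical: existing tool-selection path

async function cachedFilters(env: { CACHE: KVNamespace }, query: string) {
  const key = `filters:${query.trim().toLowerCase()}`;

  const hit = await env.CACHE.get(key, { type: "json" });
  if (hit !== null) return hit;                 // cache hit: skip inference entirely

  const filters = await runAgent(query);        // cache miss: run the full agent loop
  await env.CACHE.put(key, JSON.stringify(filters), { expirationTtl: 3600 });
  return filters;
}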
Bidirectional Sync
The agent and manual toggles share a single source of truth. When the agent applies filters, the toggles update. When users toggle manually, the agent context clears. This creates a unified experience—two input methods, one outcome.
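One way to sketch that single source of truth; the state shape and function names are illustrative, not the demo's actual code:

// Both input paths write to the same filter state; the origin determines side effects.
interface FilterState {
  categories: string[];
  maxPrice?: number;
}

let filters: FilterState = { categories: [] };
let agentExplanation: string | null = null;   // the agent's last reasoning summary

// Path 1: the agent applies filters, and the toggles re-render from the same state.
function applyAgentFilters(next: FilterState, explanation: string) {
  filters = next;
  agentExplanation = explanation;
  renderToggles(filters);
}

// Path 2: the user toggles manually, and the agent context clears.
function applyManualToggle(next: FilterState) {
  filters = next;
  agentExplanation = null;                    // manual edits invalidate the agent's explanation
  renderToggles(filters);
}

declare function renderToggles(state: FilterState): void; // hypothetical UI hook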
"The interface recedes. The user describes intent. The system responds."
What We Learned
- Natural language works for structured domains. With only 16 products and 4 categories, the agent rarely misinterprets. The taxonomy is small enough to fit in context.
- Streaming builds trust. Showing the agent's reasoning helps users understand what's happening. Black-box results feel arbitrary; visible thinking feels collaborative.
- Manual filters remain useful. Some users want direct control. The bidirectional sync means they can start with natural language and refine with toggles.
Limitations
This experiment has constraints worth noting:
- Small catalog (16 products) — larger catalogs may need vector search
- Workers AI latency — streaming helps, but inference still takes roughly 300-500ms per query
- English only — natural language parsing assumes English input
- Structured attributes — "find something that matches my style" won't work yet
Conclusion
AI-native filtering isn't about replacing UI controls. It's about giving users a choice: describe what you want, or click through options. Both paths lead to the same result. The system adapts to the user, not the other way around.
For small, structured catalogs, natural language filtering works well. The agent translates intent into action. The toggles stay in sync. The user finds what they're looking for without learning a taxonomy.