Better search. Smarter agents.

The first web search tool truly engineered for AI agents.

Works with
Claude Code
Codex
Cursor
Copilot
Gemini CLI
OpenCode
OpenClaw

Stop drowning your agents in web data.

Same answer. 30x less noise.

Without webref

Set up Stripe metered billing for my Next.js project

AI

I'll research how to implement Stripe metered billing in Next.js...

web_search · +2.1k

"stripe metered billing nextjs setup"

web_fetch · +8.4k

stripe.com/docs/billing/subscriptions/usage-based

web_fetch · +12.2k

nextjs.org/docs/app/api-routes

web_fetch · +6.8k

stackoverflow.com/questions/stripe-metered...

AI

I need more details on the usage records API...

web_search · +1.9k

"stripe usage records API create node"

web_fetch · +9.1k

stripe.com/docs/api/usage_records/create

context window: 81% used
degraded attention ("dumb zone")
40.5k tokens · ~$0.36

research $0.01 + context $0.35 · Opus 4.6, 5 cached turns

With webref

Set up Stripe metered billing for my Next.js project

AI

I'll research how to implement Stripe metered billing in Next.js...

webref · +1.4k

"How to implement Stripe metered billing in Next.js with usage-based pricing"

3 searches · 15 sources analyzed

Stripe metered billing charges customers based on actual usage during a billing period. In Next.js, create an API route at /api/webhooks/stripe, configure a metered price in your dashboard, then report usage with stripe.subscriptionItems.createUsageRecord()...

AI

I have everything I need. Let me implement the billing integration...

context window: 3% used
1.4k tokens · ~$0.09

research $0.08 + context $0.01 · Opus 4.6, 5 cached turns

Deep research in under 20 seconds

Every query hits the live web — not a pre-built index.

Live

Always fresh

Every query searches and scrapes the live web. Not an index from last week. Not a cached snapshot. The actual web, right now.

docs.stripe.com · 2s ago
github.com · 3s ago
stackoverflow.com · 5s ago
developer.mozilla.org · 4s ago
reddit.com · 6s ago

Deep, not shallow

Multiple search strategies fan out in parallel, scrape full page content, then cross-reference and deduplicate into one cohesive answer.
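The fan-out idea can be sketched in a few lines. This is a toy illustration under stated assumptions, not webref's implementation: the strategy names and the stand-in `strategy` function are hypothetical, and real strategies would perform network search and scraping.

```python
# Toy sketch of parallel search fan-out (not webref's actual pipeline):
# run several hypothetical search strategies concurrently, then collect
# their results for a later cross-reference and dedup step.
import asyncio

async def strategy(name: str, query: str) -> str:
    # Stand-in for a real search + scrape step.
    await asyncio.sleep(0)
    return f"{name}:{query}"

async def fan_out(query: str) -> list[str]:
    names = ["keyword", "semantic", "site-specific"]
    # gather() runs all strategies concurrently and preserves order.
    return list(await asyncio.gather(*(strategy(n, query) for n in names)))

print(asyncio.run(fan_out("stripe metered billing")))
```

In a real pipeline the merge step after `gather` is where the cross-referencing and deduplication happens; the concurrency pattern itself is the same.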


Evidence, not summaries

Your agent sees why each claim is true — not just the conclusion.

What your agent actually gets

Click a query and watch it work.

What are the best LLMs right now?

Fits into your stack

Three ways to connect — from one-command agent setup to direct API access.

Skills

Recommended

One command gives every coding agent on your machine web research superpowers.

Claude Code
Codex
Cursor
Copilot
Gemini CLI
OpenCode
OpenClaw

Common questions

Is it only for research?

No. You can also pass any URL directly and get a clean, synthesized summary of that page. Useful when your agent already knows where to look — documentation pages, GitHub READMEs, specific articles. Just include the URL in your query and webref handles the rest.

Why not use cheaper tools?

Tools like Perplexity are cheap per call, but the real cost is what happens after. Their noisy output fills your agent's context, and by query #5 it's forgetting your original instructions. webref outputs are optimized for LLMs: your agent stays sharp even after dozens of queries.

Isn't this just summarization?

No. Summarization loses information. webref restructures it. Search results often repeat the same facts across multiple sources with slightly different wording — we merge that into one cohesive document. The same technical details, just without the redundancy. Nothing useful gets cut.
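The restructuring idea can be shown with a toy example. This is a deliberately simplified sketch, not webref's actual algorithm: it merely drops sentences whose normalized word set has already been seen, whereas real cross-source merging is far more involved.

```python
# Toy illustration (not webref's pipeline): merge facts that several
# sources repeat with slightly different wording by keying each
# sentence on its normalized set of word tokens.
import re

def merge_sources(snippets: list[str]) -> str:
    seen = set()
    merged = []
    for snippet in snippets:
        for sentence in re.split(r"(?<=[.!?])\s+", snippet.strip()):
            # Normalize: lowercase word tokens, order-insensitive.
            key = frozenset(re.findall(r"[a-z0-9]+", sentence.lower()))
            if sentence and key not in seen:
                seen.add(key)
                merged.append(sentence)
    return " ".join(merged)

sources = [
    "Metered billing charges by usage. Report usage with the API.",
    "Metered billing charges by usage.",  # exact repeat: dropped
    "Report usage with the API!",         # same words, reworded: dropped
]
print(merge_sources(sources))
# → Metered billing charges by usage. Report usage with the API.
```

The point of the sketch: nothing stated only once is removed, so the output keeps every distinct fact while the redundant repetitions disappear.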

What sources can it access?

The full web — docs, Stack Overflow, GitHub issues, blog posts, and sources that typical scrapers can't touch: YouTube transcripts, Reddit threads, forum discussions. All synthesized into the same clean format.

How do I verify accuracy?

Source links are embedded directly in the text as markdown. Your agent can follow any link to read the full page if it needs more depth. We're not just giving surface-level answers — we're giving your agent a clear path to dig deeper.
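Since the links arrive as standard markdown, an agent (or any script) can pull them out with an ordinary pattern. A minimal sketch, assuming plain `[label](url)` links; the example answer text below is invented for illustration.

```python
# Toy illustration: extract the embedded markdown source links from a
# webref-style answer so an agent can pick which pages to read in full.
import re

def extract_links(text: str) -> list[tuple[str, str]]:
    # Matches [label](http...) pairs in order of appearance.
    return re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", text)

answer = (
    "Create a metered price ([Stripe docs](https://stripe.com/docs/billing)) "
    "and report usage from an API route ([Next.js docs](https://nextjs.org/docs))."
)
for label, url in extract_links(answer):
    print(label, "->", url)
```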

Is my data private?

By default, research and read history is recorded in your account so you can review activity and debug workflows. You can enable Privacy Mode from the usage dashboard to stop saving detailed history and traces while keeping aggregate usage counts. We never use your queries to train models or share them with third parties. Your API keys are hashed with bcrypt and never stored in plain text.

Have more questions? Read the docs

Let your agent research like you do

Install in seconds. 50 free credits. No subscription.

Works with your favorite coding agents

Claude Code
Codex
Cursor
Copilot
Gemini CLI
OpenCode
OpenClaw