Quickstart
You.com gives you real-time web intelligence through three core APIs: Search, Contents, and Research. This quickstart will get you searching the web and answering questions within minutes. Then, we’ll show you how to evaluate us.
What You.com offers
Search API: Returns real-time web and news results as structured, LLM-ready JSON. Use it to ground your AI in fresh information — feed search results directly into your prompt to answer questions without hallucination. Add the livecrawl parameter and each result comes back with its full page content, not just a snippet.
Contents API: Give it a list of URLs, get back clean Markdown or HTML. No browser automation, no HTML parsing. One idea: pass your competitors’ pricing page URLs to a daily job, get clean Markdown back, and feed that to an LLM to monitor what changed.
Research API: Ask a complex question, get a thorough, well-cited answer. Research runs multiple searches, reads through the sources, and synthesizes everything — so you don’t have to. Control the depth with research_effort, from lite to exhaustive.
Step 1: Get your API key
Sign in or create an account, then get an API key here: https://you.com/platform/api-keys. You’ll start with $100 in complimentary credits — no credit card required.
Step 2: Try the Search API
The Search API takes a natural language query and returns structured web and news results.
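Here’s a minimal sketch in Python using the requests library. The endpoint path, auth header, and environment variable below are placeholders, so confirm the exact values in the Search API reference:

```python
import os
import requests

# Hypothetical endpoint and auth header -- confirm both in the Search API reference.
response = requests.get(
    "https://api.ydc-index.io/v1/search",
    headers={"X-API-Key": os.environ["YOU_API_KEY"]},
    params={"query": "latest breakthroughs in solid-state batteries", "count": 10},
)
response.raise_for_status()
print(response.json())
```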
You’ll get back structured JSON like this:
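The authoritative schema lives in the Search API reference; the trimmed example below only illustrates the general shape, and the field names are for orientation rather than a contract:

```json
{
  "results": [
    {
      "url": "https://example.com/solid-state-batteries",
      "title": "Solid-state batteries: where the field stands",
      "snippets": [
        "Researchers reported a new sulfide electrolyte that ..."
      ]
    }
  ]
}
```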
Improve accuracy with livecrawl
Search results already include snippets — short, query-relevant text extracts from target pages. Use the livecrawl parameter to fetch full page content for each result as clean Markdown or HTML.
This naturally adds latency, but it substantially improves answer accuracy because the model grounds on full pages rather than snippets.
Results that support live crawling will include a contents.markdown field with the full page.
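A sketch building on the search example above; the livecrawl value shown is a placeholder, and the accepted values are listed in the Search API reference:

```python
import os
import requests

params = {
    "query": "latest breakthroughs in solid-state batteries",
    "count": 10,
    "livecrawl": "markdown",  # placeholder value; check the Search API reference for accepted options
}
response = requests.get(
    "https://api.ydc-index.io/v1/search",  # hypothetical endpoint, as above
    headers={"X-API-Key": os.environ["YOU_API_KEY"]},
    params=params,
)
response.raise_for_status()
for result in response.json().get("results", []):
    contents = result.get("contents", {})
    if "markdown" in contents:
        print(contents["markdown"][:500])  # full page content, trimmed for display
```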
For RAG pipelines that need deep context rather than surface-level snippets, this is the parameter to reach for.
Full Search API reference and all parameters
Step 3: Try the Contents API
The Contents API fetches content from the URLs you specify, as raw HTML, Markdown, or both.
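A minimal sketch assuming a JSON POST; the endpoint path, header, and request fields below are placeholders to illustrate the flow, and the exact names live in the Contents API reference:

```python
import os
import requests

# Hypothetical endpoint and payload shape -- confirm both in the Contents API reference.
response = requests.post(
    "https://api.ydc-index.io/v1/contents",
    headers={"X-API-Key": os.environ["YOU_API_KEY"]},
    json={
        "urls": [
            "https://example.com/pricing",
            "https://example.com/changelog",
        ],
        "formats": ["markdown"],  # illustrative: request Markdown, HTML, or both
    },
)
response.raise_for_status()
print(response.json())
```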
Each URL comes back as a structured object:
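The exact schema is documented in the Contents API reference; the trimmed example below only illustrates the general shape, with field names for orientation:

```json
{
  "results": [
    {
      "url": "https://example.com/pricing",
      "markdown": "# Pricing\n\nOur plans start at ..."
    }
  ]
}
```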
Full Contents API reference and all parameters
Step 4: Try the Research API
The Research API goes beyond a single web search. Give it a complex question and it runs multiple searches, reads through the sources, and synthesizes a thorough, citation-backed answer.
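A minimal sketch assuming a JSON POST; the endpoint path is a placeholder, and the research_effort values are described just below:

```python
import os
import requests

# Hypothetical endpoint -- confirm the exact path in the Research API reference.
response = requests.post(
    "https://api.ydc-index.io/v1/research",
    headers={"X-API-Key": os.environ["YOU_API_KEY"]},
    json={
        "query": "How have EU obligations for general-purpose AI models changed since 2024?",
        "research_effort": "standard",  # lite | standard | deep | exhaustive
    },
    timeout=300,  # research calls can take a while at higher effort levels
)
response.raise_for_status()
print(response.json())
```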
The response includes a Markdown-formatted answer with inline citations and the list of sources used:
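The authoritative schema is in the Research API reference; the trimmed example below only illustrates the general shape, with field names for orientation:

```json
{
  "answer": "## General-purpose AI obligations since 2024\n\nThe key changes are ... [1] ... [2]",
  "sources": [
    {
      "url": "https://example.eu/gpai-obligations",
      "title": "General-purpose AI obligations explained"
    }
  ]
}
```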
Use research_effort to control how deep the API digs — lite for quick answers, standard for a good balance, deep or exhaustive when thoroughness matters more than speed.
Full Research API reference and all parameters
More ways to explore
Explore the APIs interactively right here in the docs
Use the SDKs
Benefit from ergonomic API access, type safety, and readable code.
Use a coding agent to write your integration
There are two easy ways to create context for your agent:
- Add /llms-full.txt to any URL path on this site to get that page’s full content as plain text. For example, docs.you.com/llms-full.txt contains the full text of every page in these docs, including the complete API reference with raw OpenAPI specs and SDK code examples.
- Enable your agent to automatically discover and understand You.com APIs using our documentation-specific MCP server. Simply add the following wherever you store your MCP config:
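The rendered docs page provides the exact server entry to copy; the snippet below only sketches the typical shape of an MCP config, with a placeholder where the real server URL goes:

```json
{
  "mcpServers": {
    "you-com-docs": {
      "url": "<MCP server URL from this page>"
    }
  }
}
```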
Now your agent can automatically search the entirety of the You.com documentation as necessary.
Evaluate You.com
You’re now ready to evaluate. You.com provides an open-source evaluation framework and a trustworthy, reproducible methodology for benchmarking search APIs — so you can measure what actually matters: accuracy, latency, and information retrieval quality.
We’re the only search API provider with peer-reviewed evaluation research. Our methodology was presented at the Association for the Advancement of Artificial Intelligence (AAAI) 2026 conference and received the Best Paper Award, meaning the way we think about search evals has been independently validated by the research community. To read more about our research, see these articles:
- Stochasticity in Agentic Evaluations: Quantifying Inconsistency with Intraclass Correlation
- Randomness in AI Benchmarks: What Makes an Eval Trustworthy?
The open-source framework treats each search provider as a sampler: for every query, results are fetched from the API, synthesized into an answer by an LLM, and graded against ground truth (a sketch of this loop follows the list below). It supports multiple search providers, giving you an apples-to-apples comparison on benchmarks such as:
- SimpleQA — factual question answering
- FRAMES — deep research and multi-hop reasoning
- Latency profiling — end-to-end measurement under real conditions
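The framework has its own CLI and configuration, but the sampler idea reduces to a loop like this sketch; the names here are illustrative, not the framework’s actual API:

```python
def evaluate_provider(search_fn, llm, grader, benchmark):
    """Score one provider: fetch results, synthesize an answer, grade against ground truth."""
    scores = []
    for item in benchmark:                               # e.g., SimpleQA or FRAMES questions
        results = search_fn(item.question)               # sample the search provider
        answer = llm.answer(item.question, results)      # synthesize with the retrieved context
        scores.append(grader.grade(answer, item.ground_truth))
    return sum(scores) / len(scores)                     # mean score for the benchmark
```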
When starting your own evaluation, keep it simple: run count=10 with no filters on a representative query set, then layer in livecrawl if snippets aren’t providing enough context. The resources below take you further:
- How to Evaluate the Search API — methodology, dataset recommendations (SimpleQA, FRAMES, FreshQA), latency benchmarking, and a production checklist
- Agentic Web Search Playoffs — open-source benchmark comparing web search providers in agentic workflows
Our team can also design and run custom benchmarks tailored to your domain and quality bar. Talk to us
Pricing
Pricing is based on API calls, with additional costs for live crawling. See the full breakdown at you.com/platform/upgrade or reach out to [email protected].