Reasoning Layer · v1

Give your AI agents

context that's thought through,
not looked up.

Vrin is the retrieval-time reasoning layer for AI agents. Before a single token is generated, it curates the exact context an agent needs: reasoned, cited, and source-bound.

Model-agnostic
OpenAI · Anthropic · Google · xAI · Mistral
Outcomes · what teams feel on day one
01
Up to 0%

First-response resolution

Your agent lands the right answer on the first try.

02
0% fewer

Follow-up questions needed

Cited, complete context. Less back-and-forth.

03
0× faster

Research time cut

Weeks of reading collapsed into seconds of reasoning.

04
0× richer

Context for complex questions

Multi-hop reasoning where vector search gives up.

Manifesto

The model isn't the bottleneck. The context is. Today's LLMs can reason, synthesize, and analyze brilliantly, but only over what you feed them. Enterprise decisions demand connecting facts across teams, timelines, and formats. Three-plus hops that transformers can't reach natively and vector search doesn't even attempt. Vrin is the reasoning layer that curates context before the model ever sees it. Structured, cited, time-aware. Ready the moment your agent asks.

§ 001 · on context
How Vrin thinks

Three moves between a question
and a trustworthy answer.

01 · Connect

Every source, one surface,
without the migration tax.

Point Vrin at where your knowledge already lives: Drive, Notion, Slack, SharePoint, PDFs, databases. Your team keeps working. Vrin keeps reading.

fig. 01 · Vrin knowledge graph: every enterprise source connected into one reasoning core
02 · Structure

Documents become a living graph,
not a bucket of chunks.

Vrin extracts entities, relationships, and timestamped facts, each one cited back to its source. Your knowledge graph grows with every ingest and self-heals across conflicting versions.

fig. 02
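The fact shape described above can be sketched in a few lines of Python. This is a toy illustration, not the Vrin SDK: the `Fact` fields and `KnowledgeGraph` class are invented to show how entities, relationships, and timestamps stay bound to their source citation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    timestamp: str  # when the fact became true
    source: str     # citation back to the originating document

class KnowledgeGraph:
    """Toy graph: a growing list of cited, timestamped facts."""
    def __init__(self):
        self.facts: list[Fact] = []

    def ingest(self, fact: Fact) -> None:
        self.facts.append(fact)

    def about(self, entity: str) -> list[Fact]:
        """All facts whose subject is the given entity."""
        return [f for f in self.facts if f.subject == entity]

g = KnowledgeGraph()
g.ingest(Fact("Acme GmbH", "subsidiary_of", "Acme Corp", "2023-01-01", "org-chart.pdf"))
g.ingest(Fact("Acme Corp", "q4_revenue", "$12M", "2024-12-31", "q4-filing.pdf"))
```

Because every `Fact` carries its own `source`, any answer assembled from the graph can be traced back to the documents it came from.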
03 · Reason

Your agent asks. Vrin walks the graph.
Three hops. One answer.

At query time, Vrin reasons across the graph, gathers the exact facts the question demands, and hands your agent a curated context pack: cited, time-aware, ready to generate.

fig. 03 · your question → fact 1 → fact 2 → fact 3 → cited answer · 3 hops · 4 sources · 1.8s
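The three-hop walk amounts to a bounded graph traversal that collects cited facts along the way. A minimal sketch, assuming a flat edge list; the edges, entities, and field names here are invented for illustration and are not Vrin's internal representation:

```python
from collections import deque

# Each edge: (subject, predicate, object, source document)
EDGES = [
    ("Q4 revenue", "reported_in", "Q4 filing", "filing-2024-q4.pdf"),
    ("Q4 filing", "covers", "EU subsidiary", "org-chart.pdf"),
    ("EU subsidiary", "compared_in", "Q3 report", "q3-report.pdf"),
]

def walk(start: str, max_hops: int = 3) -> list[dict]:
    """Breadth-first walk up to max_hops, collecting cited facts."""
    pack: list[dict] = []
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for subj, pred, obj, src in EDGES:
            if subj == node and obj not in seen:
                pack.append({"fact": f"{subj} {pred} {obj}", "source": src})
                seen.add(obj)
                frontier.append((obj, depth + 1))
    return pack

context = walk("Q4 revenue")  # three cited facts, one per hop
```

The returned `context` is the "curated context pack" idea in miniature: each fact arrives with the source it was read from, so the agent can cite as it generates.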
See it work

A walkthrough of Vrin
curating live context.

Watch Vrin connect to a real knowledge base, extract facts into a living graph, and answer a multi-hop question with traced sources.

Surfaces

Use Vrin the way
your agent already works.

One reasoning engine, four surfaces. Swap between them without re-indexing a single document.

vrin · sdk · v1.2.0
from vrin import VRINClient

client = VRINClient(api_key="vrin_...")

# one call. graph traversal + vector + reasoning.
result = client.query(
    "How did Q4 revenue compare to Q3 across all subsidiaries?"
)

result.summary # cited, time-aware answer
result.sources # fact-level provenance
Drop Vrin into your agent in four lines.
Benchmarks

Reasoned context
outperforms
retrieved chunks.

Public, reproducible evaluations against the strongest systems on the leaderboard. Same documents, same questions. Every Vrin answer traceable back to source.

Reproducibility notes & run configs → blog

Leaderboard

MultiHop-RAG

metric: Semantic Accuracy (SA)

Vrin

95.1%

ChatGPT 5.2 (Thinking) [Oracle Context]

78.9%

Multi-Meta RAG (GPT-4)

63.0%

Multi-Meta RAG (Google PaLM)

61.0%

GPT-4 Baseline

56.0%

Vrin · Leaderboard competitors
For agent builders

The reasoning layer
inside every vertical agent.

One integration. Your customers get data sovereignty. You get a reasoning layer that keeps improving with every query.

Legal AI

Every clause, every precedent, traced.

Your agents cite the exact clause, version, and jurisdiction. Audit-grade by construction, not by hope.

Financial AI

Numbers you can defend.

Reasoning across filings, earnings, and market data with temporal versioning. Know what was true on any given day.

Healthcare AI

Evidence-bound clinical context.

Connect clinical notes, literature, and guidelines with fact-level provenance. Every recommendation traces to evidence.

Customer Support

Resolution on the first response.

Your agent walks the graph of tickets, docs, and changelogs, arriving at the right policy before anyone has to escalate.

Enterprise Intelligence

Ask across every department.

Cross-team questions that used to require a human analyst. Now answered with the full org context, in seconds.

Building a vertical agent? Let's plug in Vrin.

OEM licensing · BYOC deployment · revenue-share available.

Partner programme
Pricing

Start free.
Scale when your agent does.

Every plan ships with the full reasoning engine. You choose the capacity and the deployment surface.

Builder

For individuals exploring reasoning APIs.

$0 · free forever
  • 100k chunks / 100k edges
  • 5k queries / month
  • Shared reasoning infra
  • API key authentication
  • Community support
Most chosen

Team

For product teams shipping agents with context.

Custom · scales with usage
  • 2M chunks / 3M edges
  • 100k queries / month
  • Dedicated indices
  • Full CBOM & TTL
  • Connectors: Postgres, Drive, Notion
  • Email support · 48h SLA

Enterprise

For regulated orgs with data-sovereignty requirements.

Custom · BYOC · VPC · on-prem
  • Unlimited chunks / edges
  • Custom queries, SLA-backed
  • Private VPC or on-prem
  • SSO · SAML · SCIM
  • Data residency & audit packs
  • Dedicated TAM & DSE

Every plan includes

  • Multi-hop graph reasoning
  • Temporal fact versioning
  • Source-level provenance
  • Model-agnostic LLM routing
  • Cohere, Voyage, Bedrock embeddings

Security & deployment

  • Hybrid cloud · BYOC or shared
  • Enterprise keys never leave your VPC
  • SOC 2 Type II (in progress)
  • Air-gapped deployment available
  • EU, US, APAC data residency
Ready when your agent is

Give your agents a mind
to think with.

A few lines of code. A reasoning layer that stays with you from first query to enterprise-scale deployment.

Model agnostic · Data sovereign · Cited by construction · Built in CA

vrin