Source Channels
Blogs, filings, reports, feeds, data releases, notices, research pages, podcasts, and other watched surfaces.
Agents should not wake up, search, and guess what changed. They should listen to a live context graph. Synorb turns watched Source Channels into Streams of Manifests that agents can receive over REST API, MCP, webhooks, and S3.
REST API + MCP + webhooks + S3 · 1,000+ Streams · 10,000+ observed sources
Search starts after the user asks. A feed starts when the world changes. Synorb watches Source Channels, captures source events, and writes Manifests into Streams so an agent can keep state current before a prompt arrives.
Source Channels: Blogs, filings, reports, feeds, data releases, notices, research pages, podcasts, and other watched surfaces.
Manifests: Structured objects with Briefs / Signals / Records, stable IDs, provenance, Stream routing, and ontology tags.
Delivery: REST API for code-owned loops, MCP for agent-native calls, webhooks for events, and S3 for durable drops.
A Stream scopes what an agent listens to. It can represent an organization, person, dataset, topic, saved query, or source set. The agent gets the change feed without owning crawl, extraction, dedupe, classification, or provenance.
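The scoping described above can be sketched in a few lines of Python. This is a minimal illustration, not a Synorb client: it assumes each Manifest is a dict carrying a `stream_names` list, mirroring the example payload shown later on this page, and the function name is invented.

```python
def scoped(manifests, subscribed_streams):
    """Yield only the Manifests routed to Streams the agent subscribes to."""
    wanted = set(subscribed_streams)
    for manifest in manifests:
        # A Manifest can route to several Streams; any overlap counts.
        if wanted.intersection(manifest.get("stream_names", [])):
            yield manifest

feed = [
    {"manifest_id": "1", "stream_names": ["ai-infrastructure"]},
    {"manifest_id": "2", "stream_names": ["biotech-trials"]},
    {"manifest_id": "3", "stream_names": ["sec-filings", "ai-infrastructure"]},
]
matches = [m["manifest_id"] for m in scoped(feed, ["ai-infrastructure"])]
# matches == ["1", "3"]
```

The agent only ever sees Manifests inside its scope; everything upstream of that filter stays Synorb's problem.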
Provenance: Exact source surfaces with capture dates, published dates, source names, URLs, and lineage.
Streams: Durable subscriptions that keep the agent pointed at the entities, domains, and topics it needs.
Classification: Briefs / Signals / Records with stable IDs, typed tags, and the 12-domain ontology.
Streams are the right default when the agent has a scope. Firehose is for platforms that need every Manifest Synorb writes, delivered as the full WebSocket feed with archive access for replay and backfill.
{
  "manifest_id": "1777525429698648000",
  "stream_names": ["ai-infrastructure", "sec-filings"],
  "cadence": "live",
  "source": {
    "name": "Observed Source Channel",
    "media_format": "text",
    "published_date": "2026-05-05"
  },
  "delivery": {
    "interfaces": ["REST", "MCP", "webhook", "S3"],
    "firehose_available": true
  }
}
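The payload above loads with any JSON library. A quick Python sketch, assuming no fields beyond those shown (the real schema may carry more, such as the Briefs / Signals / Records body and ontology tags):

```python
import json

# The example Manifest, copied verbatim from the payload above.
payload = """{
  "manifest_id": "1777525429698648000",
  "stream_names": ["ai-infrastructure", "sec-filings"],
  "cadence": "live",
  "source": {
    "name": "Observed Source Channel",
    "media_format": "text",
    "published_date": "2026-05-05"
  },
  "delivery": {
    "interfaces": ["REST", "MCP", "webhook", "S3"],
    "firehose_available": true
  }
}"""

manifest = json.loads(payload)
streams = manifest["stream_names"]                              # routing
live = manifest["cadence"] == "live"                            # refresh tier
push_capable = "webhook" in manifest["delivery"]["interfaces"]  # can be pushed
```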
Agents can call Synorb directly, but production systems often need more than direct calls. Webhooks push fresh Manifests into event-driven workflows. S3 keeps durable files available for batch jobs, warehouses, audits, and replay.
Use MCP when Claude, Codex, Cursor, Windsurf, or a custom agent needs tool-native access.
Use webhooks when new Manifests should trigger routing, alerts, enrichment, or workflow runs.
Use S3 when the feed needs warehouse ingestion, batch replay, or long-lived archive access.
A real-time feed is only useful if the agent can trust it. Synorb keeps source URL, source name, published date, capture date, Source Channel, Stream routing, domain tags, and stable IDs attached to each Manifest.
The agent can trace a Manifest back to the Source Channel and source event that produced it.
The 12-domain ontology keeps events routeable across industries, entities, topics, and agent workflows.
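One way an agent can enforce that trust is a provenance gate before acting on a Manifest. The field names below extrapolate from the provenance list above and the example payload's `source` object; they are assumptions, not the published schema.

```python
# Provenance fields the gate requires; names are assumed, not documented.
REQUIRED_SOURCE_FIELDS = {"name", "url", "published_date", "capture_date"}

def has_full_provenance(manifest):
    """True only if the Manifest's source block carries every required field."""
    return REQUIRED_SOURCE_FIELDS.issubset(manifest.get("source", {}))

complete = has_full_provenance({
    "manifest_id": "1",
    "source": {
        "name": "Observed Source Channel",
        "url": "https://example.com/notice",
        "published_date": "2026-05-05",
        "capture_date": "2026-05-06",
    },
})
partial = has_full_provenance({"manifest_id": "2", "source": {"name": "x"}})
# complete is True, partial is False
```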
Streams scale by manifest volume and refresh cadence: monthly, weekly, daily, hourly, or live. Firehose is the Platform / Custom path for teams that need the complete feed and the archive behind it.
Is Synorb a search engine? It is a feed. Agents can still query with REST API or MCP, but the product is built around listening to Streams as the context graph changes.
Do we need Firehose? Not usually. Start with Streams when you know the scope. Use Firehose when the product needs full-volume access to every Manifest Synorb writes.
Can agents use it without extra infrastructure? Yes. MCP works well for agent-native calls. Webhooks and S3 work well when your infrastructure owns delivery and replay.
Free tier: 1,000 manifests per month on monthly delivery. Agents can self-provision credentials and MCP config with one request. Humans can use the credentials page and receive the same key, secret, MCP token, connector URL, and schema PDF by email.
curl -s https://synorb.com/connect
Returns: api_key, api_secret, mcp_token, connector_url.
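A sketch of consuming that response in code. The JSON key names are inferred from the "Returns:" line and are assumptions, as is the bearer-token header; check the schema PDF for the real auth scheme.

```python
import json

# Stand-in for the /connect response body; key names are inferred and the
# credential values are obviously fake placeholders.
response_body = json.dumps({
    "api_key": "YOUR_API_KEY",
    "api_secret": "YOUR_API_SECRET",
    "mcp_token": "YOUR_MCP_TOKEN",
    "connector_url": "https://synorb.com/mcp",
})

creds = json.loads(response_body)
# Assumed REST auth header, for illustration only.
headers = {"Authorization": f"Bearer {creds['api_key']}"}
# MCP clients typically want the connector URL plus a token.
mcp_config = {"url": creds["connector_url"], "token": creds["mcp_token"]}
```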