AI operating memory · phase zero

AI cannot run safely if your context is scattered.

Policies in drives. Owners in chat. Support paths in people's heads. OpenData turns that context into AI operating memory: the shared layer Forge reads from, Factory builds with, and Pulse writes back to every day.

Why this comes first

Automation without context is a fancy way to make the wrong thing fast.

Every customer comes to us with context scattered across drives, wikis, ticket comments, and people's heads. Policies are stale. Owners are implicit. Support paths live in chat history. AI tools amplify everything, including the confusion.

So we open every deployment with a knowledge inventory. We don't replace your existing systems; we rationalize them, link them, and make them queryable by the same layer that sets the course, builds safely, and runs the workflows your teams create with AI.

The four memory layers

One AI operating memory. Four audiences served.

Each layer is tuned to a different reader. The same policy, owner, or support fact gets presented at the altitude each role needs.

pulse.opendata.tools/docs/altitudes · Live
One fact, four altitudes
L3 · OAuth 2.0 token refresh handler, exponential backoff, 3 retries, dead-letter on 401.
L2 · Auth refresh: 3 attempts with backoff before re-auth flow surfaces to the user.
L1 · Stay signed in. We auto-refresh your session; you only re-enter credentials if it expires.
L0 · Single sign-on works. Set it up once.

Same fact, four readers. Marketing for the buyer · Customer for the user · Product for the team · Architecture for the engineer.
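The altitude idea above can be sketched as a data model: one canonical fact, stored once, rendered per audience. This is an illustrative sketch only; the layer names mirror the example, but the `AUDIENCE_LAYER` mapping and `render` helper are assumptions, not the product's actual schema.

```python
# Hypothetical sketch: one fact, four altitude renderings.
AUDIENCE_LAYER = {
    "marketing": "L0",     # the buyer
    "customer": "L1",      # the user
    "product": "L2",       # the team
    "architecture": "L3",  # the engineer
}

fact = {
    "id": "auth.token-refresh",
    "renderings": {
        "L3": "OAuth 2.0 token refresh handler, exponential backoff, 3 retries, dead-letter on 401.",
        "L2": "Auth refresh: 3 attempts with backoff before re-auth flow surfaces to the user.",
        "L1": "Stay signed in. We auto-refresh your session; you only re-enter credentials if it expires.",
        "L0": "Single sign-on works. Set it up once.",
    },
}

def render(fact: dict, audience: str) -> str:
    """Return the fact at the altitude this audience needs."""
    return fact["renderings"][AUDIENCE_LAYER[audience]]

print(render(fact, "customer"))
```

The point of the single store is that editing the fact once updates every altitude; the readers diverge only at render time.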

The knowledge inventory

What happens in week one.

1. Discover.

We crawl your existing documentation systems (Notion, Confluence, Google Docs, GitHub, internal wikis) and inventory what exists, where it lives, and how stale it is.

2. Classify.

Each fact is mapped to owners, policies, systems, support paths, and the audience layer that needs it.

3. Connect.

Operating context gets wired into Forge, Factory, and Pulse. When a workflow needs a policy, owner, or support path, it pulls from the right layer.

4. Keep current.

From here on, launched workflows update the memory. Stale entries get flagged. Documentation stops being a project and becomes part of the operating layer.

Context Co-Authoring

Forge keeps the memory current.

Policy updates, support notes, owner changes, and runbook edits are drafted in context. The AI suggests; the human approves. Every artifact lands in the right layer with proper provenance.

pulse.opendata.tools/skills/doc-co-authoring · Co-authoring
Live edit · runbook for incident triage

Human: when there's an alert, page the on-call

Forge: Page the on-call engineer via PagerDuty (not Slack; alerts shouldn't compete with chat traffic). Include: alert title, runbook URL, severity, and the most recent metric snapshot.

Human: if they don't ack in 10 min, escalate

Forge: Escalate to the secondary on-call after 10 minutes without acknowledgement. If no ack at 20 minutes, escalate to the engineering manager and open an incident channel #inc-YYYYMMDD.

Brand voice · "specific, action-led, no fluff" 3 corrections applied
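The escalation policy drafted in that exchange reduces to a small decision rule. A minimal sketch, assuming the timings from the runbook above; the function name and role labels are hypothetical:

```python
def escalation_target(minutes_since_page: int, acknowledged: bool) -> str:
    """Who holds the incident, per the drafted runbook."""
    if acknowledged:
        return "primary-on-call"
    if minutes_since_page < 10:
        # Still within the acknowledgement window.
        return "primary-on-call"
    if minutes_since_page < 20:
        # No ack after 10 minutes: escalate to the secondary.
        return "secondary-on-call"
    # No ack after 20 minutes: engineering manager, and open #inc-YYYYMMDD.
    return "engineering-manager"
```

Because the rule is a pure function of elapsed time and ack state, it can be unit-tested alongside the runbook text it was co-authored with.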
API reference

For the engineers building on top.

Pulse data is queryable via REST on Professional and Enterprise. Forge definitions and Factory handoffs are webhook-driven and Model Context Protocol-compatible.

REST API

Read access to outputs, owners, policies, support paths, telemetry, and value records. JSON, JWT auth, fair rate limits.
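A read call looks like any JSON-over-HTTPS request with a JWT bearer token. This is a hypothetical sketch: the base URL path, the `/owners` resource, and the placeholder token are assumptions for illustration, not the documented endpoints.

```python
import urllib.request

API_BASE = "https://pulse.opendata.tools/api/v1"  # assumed base URL

def build_request(path: str, jwt: str) -> urllib.request.Request:
    """Prepare an authenticated, JSON-accepting read request."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={
            "Authorization": f"Bearer {jwt}",  # JWT auth, per the plan tier
            "Accept": "application/json",
        },
    )

req = build_request("/owners", jwt="eyJ...")  # token elided
```

Sending the request (`urllib.request.urlopen(req)`) is left out so the sketch stays side-effect-free.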

Webhooks

Push notifications for output changes, detected anomalies, and connector health. Verified signatures, retry semantics, replay.
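Verified signatures are typically checked by recomputing an HMAC over the raw request body and comparing in constant time. A sketch of that pattern; the header name, secret format, and hex-digest scheme are assumptions, so check the actual delivery docs for your plan:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Recompute HMAC-SHA256 over the raw body; compare in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"whsec_example"  # shared secret from the dashboard (assumed name)
body = b'{"event":"output.changed","id":"out_123"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

verify_signature(secret, body, sig)        # genuine delivery: verifies
verify_signature(secret, body[:-1], sig)   # tampered body: fails
```

Always verify against the raw bytes as received, before any JSON parsing, or re-serialization differences will break the comparison.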

Custom Connectors

Build via the Model Context Protocol, the same standard our AI-native integrations use. Templates, test harness, and validation included.
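An MCP connector exposes tools described by a name, a description, and a JSON Schema for inputs. A minimal sketch of one such descriptor; the `lookup_owner` tool itself is hypothetical, invented here to illustrate the shape:

```python
# Hypothetical MCP-style tool descriptor a custom connector might expose.
OWNER_LOOKUP_TOOL = {
    "name": "lookup_owner",
    "description": "Return the accountable owner for a system or policy.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "system": {
                "type": "string",
                "description": "System or policy identifier",
            },
        },
        "required": ["system"],
    },
}
```

Because the descriptor is plain JSON Schema, the same validation harness can check every connector's tool surface before it ships.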

Want an AI operating-memory inventory?

We'll inventory your existing documentation systems and come back with a layered map: what to keep, what to connect, what to retire, and which AI workflows need it first. Free for qualified pilots.
