The discourse on AI is loud. Its implementation is invisible.

Ratios is a public, versioned registry where anyone — legislators, companies, researchers, advocacy groups, individuals — publishes structured opinions on how AI should be used. Subscribe to what matters. Compose what fits. Enforce what you commit to.

@mass-gen/diagnostic-ai v3.2
author: Massachusetts Gen. Hospital
domain: healthcare.diagnostic
jurisdiction: US-MA, US-federal
enforcement: runtime
threshold: confidence ≥ 0.82
"No AI diagnostic output above the confidence threshold shall be surfaced to a patient without physician review."

Everyone has an opinion on AI. Almost no one can see what anyone is doing about it.

Thousands of actors hold strong views on how AI should be built, deployed, and constrained. Those views live in formats that rarely interoperate with the systems they're meant to govern.

// format 01
PDFs & press releases
Unreadable by the systems they're meant to govern. No machine can subscribe to a blog post.
// format 02
Pledges & commitments
Unverifiable in practice. A signature on a letter tells you nothing about what runs in production.
// format 03
Regulations & guidance
Published on a 24-month lag. By the time they land, the technology has already moved on.

There is no shared substrate. No coordinate system. No way to tell who stands where — or whether anyone is actually doing what they said they would.

A ratio is a versioned, machine-readable policy object.

Every ratio is authored. A real entity — an agency, a company, a researcher, a union — puts its name on it. Anonymous ratios don't exist.

Every ratio is scoped. Domain, jurisdiction, and stakes are declared up front. Healthcare diagnostic AI is not social-media recommendation is not autonomous weapons targeting.

Every ratio is versioned. As evidence accumulates and context shifts, authors publish updates. Subscribers see the diff. The history is preserved.

Every ratio is enforceable. Not declarative. The SDK composes subscribed ratios into runtime policy — real constraints on real systems.
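The four properties above can be sketched as a plain data object. This is an illustrative shape only, not the actual SDK schema; the field names and the "3.2.0" version string are assumptions drawn from the card above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ratio:
    """A versioned, machine-readable policy object (illustrative shape)."""
    id: str                    # e.g. "@mass-gen/diagnostic-ai"
    version: str               # semantic version, e.g. "3.2.0"
    author: str                # a real entity; anonymous ratios don't exist
    domain: str                # scoped up front, e.g. "healthcare.diagnostic"
    jurisdiction: tuple        # e.g. ("US-MA", "US-federal")
    enforcement: str           # "runtime": composable into real constraints
    rules: tuple = ()          # the enforceable constraints themselves

r = Ratio(
    id="@mass-gen/diagnostic-ai",
    version="3.2.0",
    author="Massachusetts Gen. Hospital",
    domain="healthcare.diagnostic",
    jurisdiction=("US-MA", "US-federal"),
    enforcement="runtime",
)
```

Freezing the dataclass mirrors the registry's semantics: a published version is immutable, and change arrives only as a new version.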

# @apa/mental-health-ai-guidelines.v2
id: @apa/mental-health-ai-guidelines
version: 2.1.0
author: American Psychological Assn.
published: 2026-02-18T09:00:00Z
scope:
  domain: mental-health.conversational
  jurisdiction: US, CA, EU
  stakes: high
enforcement: runtime
signals:
  - escalate_to_human_on:
      pattern: distress
      threshold: 0.6
  - block:
      category: medication_advice
  - require:
      audit_log: all_sessions
supersedes: v2.0.0
depends_on:
  - @who/mh-digital-care-v1
  - @fda/samd-ai-v3
# 1,241 subscribers · live

Publish. Subscribe. Compose. Evolve.

01
Publish

Any entity authors a structured opinion as a versioned object. Real author, real scope, real enforcement semantics — not a manifesto.

$ ratios publish @lab-x/open-release-safety-v1
  published · live · 0 subscribers
02
Subscribe

Signal alignment publicly. Subscriptions are commitments, not endorsements. The graph of who-stands-where becomes visible.

$ ratios subscribe @eu/ai-act-high-risk-v4
  subscribed · enforced at runtime
03
Compose

Layer multiple ratios into a policy stack. Conflicts resolve by declared precedence. The composite is what actually runs.

$ ratios compose profile/hospital-deploy
  3 ratios → 1 runtime policy
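The conflict resolution above can be sketched as last-writer-wins over a precedence-ordered stack. This is a hypothetical model of the composition step, not the SDK's actual resolver; the rule keys shown are assumptions.

```python
def compose(ratios):
    """Merge rule sets from a precedence-ordered list of ratios.
    Later entries (higher declared precedence) override earlier
    ones on conflicting keys; non-conflicting rules accumulate."""
    policy = {}
    for ratio in ratios:  # ordered lowest → highest precedence
        policy.update(ratio["rules"])
    return policy

stack = [
    {"id": "@fda/diagnostic-ai-v3",
     "rules": {"confidence_threshold": 0.78, "audit_log": True}},
    {"id": "@hospital/ethics-2026",
     "rules": {"confidence_threshold": 0.82}},  # internal rule tightens the FDA floor
]

policy = compose(stack)
# policy == {"confidence_threshold": 0.82, "audit_log": True}
```

The composite, not any single ratio, is what actually runs.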
04
Evolve

Ratios version. Authors publish updates. Subscribers see the diff. The system adapts continuously — not on legislative cycles.

$ ratios diff @fda/diagnostic-ai v3.1..v3.2
  threshold: 0.78 → 0.82
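The diff shown above can be sketched as a field-level comparison between two versions. This is illustrative only, not the `ratios diff` algorithm; the version dicts are assumptions based on the CLI output.

```python
def ratio_diff(old, new):
    """Return {field: (old_value, new_value)} for every field
    that changed between two versions of a ratio."""
    keys = old.keys() | new.keys()
    return {k: (old.get(k), new.get(k))
            for k in keys
            if old.get(k) != new.get(k)}

v3_1 = {"threshold": 0.78, "enforcement": "runtime"}
v3_2 = {"threshold": 0.82, "enforcement": "runtime"}
# ratio_diff(v3_1, v3_2) == {"threshold": (0.78, 0.82)}
```

Because every version is preserved, a subscriber can replay the full history of a policy, not just its latest state.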

Three ratios. One enforced runtime profile.

A hospital deploying a diagnostic AI today would manually cross-reference FDA guidance, AMA position papers, and internal ethics rulings. With Ratios, it's a single composition.

@fda/diagnostic-ai-v3 · regulatory
@ama/clinical-decision-v2 · professional
@hospital/ethics-2026 · internal
  ↓ compose
// runtime profile
  • disclose uncertainty above 0.82
  • never override physician judgment
  • log every recommendation
  • human review: irreversible actions
  • audit trail: 7-year retention
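At runtime, the composed profile gates outputs directly. A minimal sketch of one rule from the profile, assuming the rule means above-threshold outputs require physician review before reaching a patient, and that below-threshold outputs are withheld entirely (both are assumptions; the function name is hypothetical):

```python
def may_surface(confidence: float, physician_reviewed: bool,
                threshold: float = 0.82) -> bool:
    """Gate a diagnostic output before it reaches a patient.
    Assumption: outputs at or above the confidence threshold need
    physician sign-off; outputs below it are never surfaced directly."""
    if confidence >= threshold:
        return physician_reviewed
    return False
```

The point is that the constraint is code, not prose: an unreviewed recommendation cannot reach a patient regardless of what any press release says.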

Every faction in the AI debate wants the same infrastructure — for different reasons.

Ratios is neutral plumbing. The opinions are opinionated — that's the point. But everyone shares the same structural problem today: they can express positions, they cannot verify implementation.

// for safety advocates
Verifiable adoption, not just pledges.
What they get
Count the enforcers, not the signatories. Publish a ratio, watch who actually runs it. Turn advocacy into auditable adoption.
// for accelerationists
Credible self-governance with domain nuance.
What they get
Move fast on code generation. Move carefully on autonomous targeting. Express the distinction, prove adoption, make the case against blunt regulation with data.
// for regulators
Machine-readable law, real-time feedback.
What they get
Publish regulation in a format systems can adopt immediately. See compliance as it happens. Compare ratios across jurisdictions in schema, not in lawyers' hours.
// for companies
Composable compliance across 30+ markets.
What they get
Subscribe to the relevant ratios per jurisdiction. The SDK composes them. Publish your subscription list as a public trust asset.
// for workers & unions
Enforceable limits, not aspirational ones.
What they get
Collective bargaining extended into the runtime layer. Publish ratios on pace-of-automation. Make the power dynamics legible, not hidden.
// for individuals
Delegated expertise, fully auditable.
What they get
Don't roll your own AI ethics. Subscribe to entities whose job it is to think about it. The delegation is visible and revisable.

From flying blind — to shared instruments.

static snapshots → a living graph
ideological debate → operational coordination
invisible compliance → public subscription
24-month legislative cycles → continuous versioning
unverifiable pledges → runtime enforcement
Start here

Publish your first ratio. Subscribe to what matters. See the landscape as it actually is.

open protocol · built to interoperate · v0.1