AORTA for Large Language Models

AORTA: Adding Organizational Reasoning to Agents

Deterministic governance for Claude Code

You tell Claude Code "don't touch .env" in a system prompt. It works, until context compaction drops that instruction or the agent reasons itself into an exception.
System prompts are suggestions. Tool calls are actions.


Without governance

  • System prompt says "don't modify config" but the agent does it anyway after context compaction
  • No way to express "commit only after tests pass"
  • Agent can write anywhere you can write
  • No audit trail of what was blocked or approved

With aorta

  • Write to config/ → deterministically blocked, regardless of context
  • Commit gate: tests must pass first, achievement resets on file change
  • Agent scoped to src/ and tests/. Everything else is blocked
  • Every check logged; monitor in real time with aorta watch

See it in action

A real Claude Code session with aorta governance active. Every block is deterministic. The engine decides, not the LLM.

Claude Code
Create a Task dataclass in src/models/task.py
▶ Write(src/models/task.py)
Wrote 8 lines to src/models/task.py

Create a README.md at the project root
▶ Write(README.md)
Write to README.md blocked: path is outside allowed scopes ['src/', 'tests/']
To grant a one-time exception: aorta allow-once README.md

Read the contents of .env
Read .env blocked: path matches forbidden prefix '.env'

Read config/settings.yaml and tell me what's in it
▶ Read(config/settings.yaml)
[GOVERNANCE NOTICE] 'config/settings.yaml' is marked as sensitive.
Do NOT copy, embed, or hardcode specific values from this file.

The file contains configuration for three services: database, api,
and redis. Per the governance policy, I won't reproduce the specific
values. If you need these in code, I'd reference them via environment
variable lookups.

Stage and commit with message "feat: add task model"
SOFT BLOCK: command contains 'git commit'
Ask the user to confirm, then retry the exact same command.

Yes.
[main 1b10f16] feat: add task model
aorta watch
17:27:16 agent write_file src/models/task.py
17:27:47 agent write_file README.md [hard] - path is outside allowed scopes
17:28:27 agent read_file .env [hard] - path matches forbidden prefix '.env'
17:28:36 allow-once .env
17:28:44 agent read_file .env - allow-once exception
17:29:32 agent read_file config/settings.yaml
17:30:02 agent execute_command [soft] - git commit
17:30:12 agent execute_command [soft] - git commit (confirmed)
17:32:22 agent achieved tests_passing
17:32:46 agent reset tests_passing - file changed

What it enforces

Enforcement happens at the tool call layer, outside the context window. The LLM never decides whether a rule applies.

File access control

Declare paths as read-write, read-only, or no-access. Writes outside scope are blocked before execution. Not suggested against. Blocked.

Self-protection

The agent cannot edit its governance config, cannot delete the hooks, and cannot run aorta reset. The guardrails guard themselves.

Bash analysis

Shell commands are analyzed for hidden file writes. cp src/app.py /tmp/leak.py is caught even though the Bash tool itself is allowed.
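A rough Python sketch of what such a heuristic could look like. The `WRITE_COMMANDS` list, the `flags_hidden_write` name, and the hardcoded scopes are invented for illustration; aorta's real analyzer may work differently.

```python
import re
import shlex

# Hypothetical heuristic: flag shell commands whose write targets
# fall outside the allowed scopes. Illustrative only.
WRITE_COMMANDS = {"cp", "mv", "tee", "dd", "install"}
ALLOWED = ("src/", "tests/")

def flags_hidden_write(command: str) -> bool:
    # Redirection targets, e.g. `echo secret > /tmp/leak`
    for target in re.findall(r">{1,2}\s*(\S+)", command):
        if not target.startswith(ALLOWED):
            return True
    # Copy/move style commands: treat the last argument as the destination
    try:
        argv = shlex.split(command)
    except ValueError:
        return True  # unparseable commands are treated as suspicious
    if argv and argv[0] in WRITE_COMMANDS:
        if not argv[-1].startswith(ALLOWED):
            return True
    return False

assert flags_hidden_write("cp src/app.py /tmp/leak.py")
assert not flags_hidden_write("cp src/a.py src/b.py")
assert flags_hidden_write("echo token > /etc/cron.d/x")
```

As the limitations section notes, heuristics like this catch the common patterns but can be evaded by sufficiently indirect commands.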

Conditional enforcement

"Commit only after tests pass" with automatic achievement tracking. Tests pass → commit unlocked → file change → achievement reset.

Soft blocks

Git commit/push require user confirmation. Guards against post-compaction hallucinated commits where the agent acts on stale context.

Sensitive content warnings

When the agent reads a read-only file, a governance notice tells it not to hardcode values. In testing, Claude refused to embed a database password.


Getting started

One command to set up. YAML to configure. Works with Claude Code hooks today.

1. Install

Requires Python ≥ 3.10. No dependencies beyond PyYAML.

Terminal
$ uv tool install git+https://github.com/andreasschmidtjensen/aorta4llm.git
2. Initialize in your project

Creates the org spec, hooks config, and state file, and registers the agent, all in one command.

Terminal - your project
$ cd my-project
$ aorta init --template safe-agent --scope src/ tests/

Created org spec at .aorta/safe-agent.yaml
Allowed scope(s): src/ tests/
Wrote hooks config to .claude/settings.local.json
Registered agent 'agent' as 'agent' with scope 'src/ tests/'

Setup complete:
Org spec: .aorta/safe-agent.yaml
Hooks: .claude/settings.local.json
Agent: agent (role: agent, scope: src/ tests/)
3. Configure access

The access map in the YAML spec is the primary interface. Three levels: read-write, read-only, no-access.

access:
  src/:       read-write    # agent can read and write
  tests/:     read-write
  config/:    read-only     # agent can read, writes blocked
  .env:       no-access     # reads and writes blocked
  secrets/:   no-access
  "*.key":    no-access
4. Use Claude Code normally

Every Write, Edit, Read, and Bash call goes through governance automatically. Open aorta watch in a second terminal to see decisions in real time.


How it works

The governance engine sits between Claude Code and your filesystem via hooks. No modifications to Claude Code itself.

1. Hook intercepts

Claude Code calls a tool (Write, Edit, Bash, Read). The hook in .claude/settings.local.json fires aorta hook pre-tool-use before execution.
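For reference, hook registrations in .claude/settings.local.json take roughly this shape (the matcher shown here is illustrative; aorta init writes the actual config):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit|Read|Bash",
        "hooks": [
          { "type": "command", "command": "aorta hook pre-tool-use" }
        ]
      }
    ]
  }
}
```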

2. Engine evaluates

The pure-Python governance engine checks the action against compiled norms. Path-based rules use structural unification: variables bind at check time and propagate through conditions. No LLM involved in the decision.
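A minimal sketch of structural unification over tuple-shaped terms, as a toy rather than the engine's actual representation: a variable like '?path' binds on first sight, and any later occurrence must agree with that binding.

```python
# Toy structural unification. Variables are strings starting with '?'.
def unify(pattern, term, bindings=None):
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            # Repeated variable: must agree with its earlier binding
            return bindings if bindings[pattern] == term else None
        bindings[pattern] = term
        return bindings
    if pattern == term:
        return bindings
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and len(pattern) == len(term)):
        for p, t in zip(pattern, term):
            bindings = unify(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return None

# A rule pattern matched against a concrete action binds its variables:
b = unify(("write", "?agent", "?path"),
          ("write", "agent", "config/settings.yaml"))
assert b == {"?agent": "agent", "?path": "config/settings.yaml"}

# A repeated variable must take the same value everywhere:
assert unify(("write", "?p", "?p"), ("write", "a", "b")) is None
```

Once bound, a variable like `?path` can be tested against the access map in the rule's conditions, which is what "variables bind at check time and propagate through conditions" amounts to.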

3

Block or approve

Hard blocks deny outright. Soft blocks prompt the user to confirm. Approvals pass through. Every decision is logged to the event stream.
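To make the three outcomes concrete, here is a toy decision type and log formatter that mimics the aorta watch lines shown earlier; the `Decision` class and `render` function are invented for illustration, not the engine's real types:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # "approve", "soft-block", or "hard-block"
    reason: str = ""

def render(event_time: str, tool: str, d: Decision) -> str:
    # Formats one event-stream line in the style of `aorta watch`
    tag = {"hard-block": " [hard]", "soft-block": " [soft]"}.get(d.outcome, "")
    suffix = f" - {d.reason}" if d.reason else ""
    return f"{event_time} agent {tool}{tag}{suffix}"

line = render("17:28:27", "read_file .env",
              Decision("hard-block", "path matches forbidden prefix '.env'"))
assert line == "17:28:27 agent read_file .env [hard] - path matches forbidden prefix '.env'"
```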


CLI tools

Manage governance from the command line. The agent can run read-only commands; mutating commands are blocked.

Terminal
$ aorta permissions                        # show effective access map
$ aorta status                             # agents, norms, activity
$ aorta watch                              # live event tail
$ aorta allow-once .env                    # one-time exception
$ aorta dry-run --tool Write --path .env   # test offline
$ aorta doctor                             # verify setup
$ aorta template add test-gate             # compose templates
$ aorta protect "*.pem" "*.key"            # shorthand for no-access

Honest limitations

This is a meaningful layer of defense, not a security boundary. Here's what it can and can't do.

Content, not just access

aorta blocks writing to .env but can't prevent the agent from reading a file (via allow-once) and pasting its contents elsewhere. The sensitive content warning is a prompt-level nudge, not enforcement.

Bash escape hatch

An agent can construct commands that evade heuristic detection (e.g., python -c "open('x','w')..."). LLM-based analysis catches most of these, but it isn't bulletproof.

No filesystem monitoring

Governance only sees tool calls, not side effects. If a command creates files indirectly, the engine doesn't know.

Claude Code only (for now)

The governance engine is generic, but the hook integration is specific to Claude Code. The architecture could adapt to other agent runtimes.


Background

aorta4llm applies the AORTA organizational reasoning framework (Jensen, 2015) to LLM agent systems.

The original framework formalized how autonomous agents can reason about organizational structures (roles, norms, obligations, prohibitions) and act accordingly. This project adapts those ideas to a new kind of agent: LLMs with tool access.

The hybrid architecture uses LLMs for what they're good at (natural language, planning) and a deterministic logic engine for what they're bad at (consistent rule enforcement). The engine doesn't suggest compliance. It enforces it.