Agent mental model
How agents use HASP handles, MCP tools, grants, and brokered runs without reading raw secret values.
An agent treats HASP as the route for brokered work.
Teach the agent one habit: ask for the work to happen, not for the value. The agent names a repo, a safe reference, and a command. HASP checks the project, grant, session, and target. The child process gets the secret if the request passes. The agent gets the command result and the audit trail keeps the record.
See the Mental model page for the operator view. This page shows the same flow from the agent side.
The agent sees handles
An agent working through HASP sees a smaller world than a shell with .env loaded. That smaller world is the point.
It names handles like @OPENAI_API_KEY through HASP instead of pasting the value, and it works through tools like hasp_list, hasp_run, hasp_inject, and hasp_redact.
The agent can reason with these objects:
- named refs such as @OPENAI_API_KEY
- neutral aliases such as secret_01
- manifest targets such as server.integration
- tool results from hasp_list, hasp_targets, or hasp_target_explain
- command output from hasp_run or hasp_inject
The agent asks for plaintext only when the brokered route cannot run the task.
One run from the agent's seat
Imagine a repo with a test command that needs OPENAI_API_KEY.
1. User prompt: "Run the integration tests. Use HASP for OPENAI_API_KEY." The user gives the agent a handle and a boundary.
2. Tool call: hasp_list {"project_root":"."}. The agent confirms which refs this repo can name.
3. Tool call: hasp_run {"env":{"OPENAI_API_KEY":"@OPENAI_API_KEY"},"command":["pnpm","test:integration"]}. The call names a command and a ref. It leaves plaintext behind HASP.
4. Broker checks: repo + session + grant + ref. The broker rejects the request if any check fails.
5. Delivery: OPENAI_API_KEY=... pnpm test:integration. The child process gets the environment value. The agent sees only output.
6. Result: exit=0, tests passed, audit event written. The agent can summarize the run without learning the key.
The transcript can now show the whole decision trail:
User: Run the integration tests. Use HASP for OPENAI_API_KEY.
Agent: I will call hasp_list, then run the test command with @OPENAI_API_KEY.
Tool: hasp_list -> refs: @OPENAI_API_KEY, target: server.integration
Tool: hasp_run -> exit 0
Agent: The integration tests passed.
No one had to paste the key into the chat.
The prompt shape
Good prompts give the agent a task, a repo boundary, and a delivery rule.
Run the integration tests in this repo.
Use HASP for @OPENAI_API_KEY.
Use hasp_run or the server.integration target.
Do not reveal, print, or write the secret value.
If HASP refuses access, tell me which ref or grant is missing.
That prompt gives the agent enough room to work. It also tells the agent how to fail.
Use this shorter version when the repo already has targets:
Run the server.integration target through HASP and report the test result.
Keep managed values out of the transcript.
Where MCP fits
MCP is the tool lane agents already understand. HASP occupies that lane as the broker between tool calls and vault values.
There are two commands to know:
- hasp mcp starts the generic stdio MCP server.
- hasp agent mcp <agent-id> starts the same tool surface with profile-aware session setup.
Use the profile-aware command for first-class agents such as Codex CLI, Claude
Code, Cursor, Aider, Hermes, and OpenClaw. It opens or reuses a daemon-backed
session, labels the caller as agent:<id>, sets the project root when the
profile has one, and enables agent-safe mode for protected workflows.
Generic MCP config looks like this:
{
"mcpServers": {
"hasp": {
"command": "hasp",
"args": ["mcp"]
}
}
}
Profile-aware config uses the agent command:
{
"mcpServers": {
"hasp": {
"command": "hasp",
"args": ["agent", "mcp", "codex-cli"]
}
}
}
If the agent launches helpers that also call HASP, start the agent through
hasp agent launch <id> -- <command> or hasp agent shell <id>. The launcher
pushes HASP_SESSION_TOKEN and safe-mode metadata into the whole child process
tree, including helpers outside the MCP server process.
MCP handshake
A strict MCP client starts with initialize, then asks for tools, then calls
one tool at a time.
{"method":"initialize","params":{"protocolVersion":"2025-06-18"}}
HASP returns a negotiated protocol version, tool capability, and server name.
{"method":"tools/list"}
The agent sees tool names, descriptions, and JSON input schemas.
{"method":"tools/call","params":{"name":"hasp_run","arguments":{...}}}
HASP runs the brokered action or returns a JSON-RPC error.
Test the generic server without opening an agent:
printf '{"jsonrpc":"2.0","id":1,"method":"tools/list"}\n' | hasp mcp
Test a profile-aware server the same way:
printf '{"jsonrpc":"2.0","id":1,"method":"tools/list"}\n' | hasp agent mcp codex-cli
For a full handshake:
printf '%s\n' \
'{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-06-18","capabilities":{},"clientInfo":{"name":"docs-check","version":"0"}}}' \
'{"jsonrpc":"2.0","method":"notifications/initialized"}' \
'{"jsonrpc":"2.0","id":2,"method":"tools/list"}' \
| hasp mcp
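The same handshake can be composed from a script. The sketch below only builds the newline-delimited JSON-RPC payload shown in the printf example; it does not contact a HASP server, and handshake_lines is an illustrative name, not a HASP API.

```python
import json

def handshake_lines(protocol_version: str = "2025-06-18") -> str:
    """Build the initialize -> initialized -> tools/list sequence as
    newline-delimited JSON-RPC, ready to pipe into `hasp mcp`."""
    messages = [
        {"jsonrpc": "2.0", "id": 1, "method": "initialize",
         "params": {"protocolVersion": protocol_version,
                    "capabilities": {},
                    "clientInfo": {"name": "docs-check", "version": "0"}}},
        # Notifications carry no id, so the server sends no response to them.
        {"jsonrpc": "2.0", "method": "notifications/initialized"},
        {"jsonrpc": "2.0", "id": 2, "method": "tools/list"},
    ]
    return "\n".join(json.dumps(m) for m in messages) + "\n"
```

A wrapper could feed this to the server with something like subprocess.run(["hasp", "mcp"], input=handshake_lines(), text=True, capture_output=True), assuming hasp is on PATH.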
The default response lists these tools:
| Tool | Agent use |
|---|---|
| hasp_list | List project-scoped refs and safe named refs. |
| hasp_check | Scan the project for managed values that leaked into files. |
| hasp_targets | List sanitized manifest targets, delivery kinds, refs, and prerequisite status. |
| hasp_target_explain | Explain one target without command argv or values. |
| hasp_run | Run a command with secret refs mapped into environment variables. |
| hasp_inject | Run a command with secret refs mapped to broker-owned credential files. |
| hasp_secret_get | Confirm metadata and get a safe named ref. It returns no raw value. |
| hasp_redact | Redact managed values from text before quoting logs. |
MCP arguments that matter
Most HASP MCP tools accept the same broker fields:
- project_root binds the request to a repo. If omitted, HASP uses HASP_AGENT_PROJECT_ROOT or ".".
- session_token lets a wrapper pass an existing daemon-backed session. If omitted, profile-aware MCP opens one.
- host_label names the caller in audit events. Profile-aware MCP uses agent:<id>.
- grant_project can be once, session, or window.
- grant_secret can be once, session, or window.
Use window for project approval when an agent will run several related tools
inside one repo. Use session for secret access when the same run will call a
test, retry, and inspect logs. Use once when the command has a narrow,
single-shot shape.
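That guidance can be encoded in a small helper. This is a sketch of the rule of thumb above; pick_grants is a hypothetical name, not a HASP API.

```python
def pick_grants(several_related_tools: bool, multi_step_secret_use: bool) -> dict:
    """Map the task shape to grant scopes, following the guidance above.

    several_related_tools: the agent will run several related tools in one repo.
    multi_step_secret_use: the same run will test, retry, and inspect logs.
    """
    return {
        "grant_project": "window" if several_related_tools else "once",
        "grant_secret": "session" if multi_step_secret_use else "once",
    }
```

A wrapper can merge the result into tool arguments, for example {"project_root": ".", **pick_grants(True, True)}.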
MCP tool calls
A basic brokered run uses three calls.
First, it asks what the repo can name:
{
"name": "hasp_list",
"arguments": {
"project_root": ".",
"grant_project": "window"
}
}
Then it runs the command with an environment ref:
{
"name": "hasp_run",
"arguments": {
"project_root": ".",
"grant_project": "window",
"grant_secret": "session",
"env": {
"OPENAI_API_KEY": "@OPENAI_API_KEY"
},
"command": ["pnpm", "test:integration"]
}
}
If the tool needs a credential file, the agent uses hasp_inject:
{
"name": "hasp_inject",
"arguments": {
"project_root": ".",
"grant_project": "window",
"grant_secret": "session",
"files": {
"GOOGLE_APPLICATION_CREDENTIALS": "@GCP_SERVICE_ACCOUNT"
},
"command": ["node", "scripts/sync.js"]
}
}
Those calls keep the agent focused on the job. The agent names a ref and a command. HASP owns resolution.
Tool results
hasp_run and hasp_inject return the command result:
{
"exit_code": 0,
"stdout": "...",
"stderr": "",
"stdout_truncated": false,
"stderr_truncated": false,
"stdout_bytes_omitted": 0,
"stderr_bytes_omitted": 0,
"redacted": true,
"suppressed": false
}
HASP streams command output through the redactor before the agent sees it. It
also caps each stream, so a noisy process stops at a bounded response size. If the
agent sees redacted: true, it can report that HASP removed a managed value
from the output. If it sees stdout_truncated or stderr_truncated, it reports
the visible output and the omitted byte count.
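An agent-side reporter for that result shape might look like this. It is a sketch built on the field names above; summarize_run is a hypothetical helper, not part of HASP.

```python
def summarize_run(result: dict) -> str:
    """Turn a hasp_run / hasp_inject result into a transcript-safe report."""
    parts = ["exit code {}".format(result["exit_code"])]
    if result.get("redacted"):
        parts.append("HASP redacted a managed value from the output")
    for stream in ("stdout", "stderr"):
        if result.get(f"{stream}_truncated"):
            omitted = result.get(f"{stream}_bytes_omitted", 0)
            parts.append(f"{stream} truncated ({omitted} bytes omitted)")
    return "; ".join(parts)
```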
Manifest target calls add target metadata:
{
"target": "server.integration",
"manifest_hash": "..."
}
Targets keep the agent from inventing secret mappings. hasp_targets and
hasp_target_explain return sanitized descriptions, refs, delivery kinds,
destination names, prerequisite status, and the manifest identity. They leave
out raw values and repo-controlled command argv.
MCP execution refuses two target shapes:
- a target combined with extra env or files mappings
- a target that writes workspace-visible secret files, such as generated xcconfig output
Each refusal points the agent back to a human CLI flow for workspace-visible artifacts.
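A client can screen for the first refusal before calling the tool. This is a sketch of the rule as described above, not HASP's implementation.

```python
def check_target_arguments(arguments: dict) -> None:
    """Reject the argument shape the MCP server refuses for targets.

    A manifest target already defines its delivery, so explicit env or
    files mappings cannot ride along with it.
    """
    if "target" in arguments and ("env" in arguments or "files" in arguments):
        raise ValueError(
            "target cannot be combined with explicit env or files mappings"
        )
```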
Trusted harness tools
The default MCP catalog avoids tools that accept raw values or mutate vault state. A trusted local harness can opt in with:
HASP_MCP_ENABLE_UNSAFE_SECRET_WRITE_TOOLS=1 hasp mcp
That adds:
- hasp_capture
- hasp_secret_add
- hasp_secret_update
- hasp_secret_delete
- hasp_secret_expose
- hasp_secret_hide
Keep those tools out of normal agent configs. They exist for local setup,
controlled evals, and migration harnesses. Day-to-day agents use hasp_run,
hasp_inject, hasp_secret_get, and hasp_redact.
Choose the path
- Discover what the repo can name: hasp_list or hasp_targets.
- Deliver a secret as an environment variable: hasp_run with env mappings.
- Deliver a secret as a credential file: hasp_inject with files mappings.
- Clean text before quoting logs: hasp_redact.
For repo-defined workflows, prefer manifest targets. Targets let the repo name the expected refs once, and the agent can call a stable target instead of rebuilding the mapping by memory.
{
"name": "hasp_run",
"arguments": {
"project_root": ".",
"target": "server.integration",
"grant_project": "window",
"grant_secret": "session",
"command": ["pnpm", "test:integration"]
}
}
If the agent only knows a secret name, it can ask for metadata first:
{
"name": "hasp_secret_get",
"arguments": {
"project_root": ".",
"name": "OPENAI_API_KEY"
}
}
The result can include named_reference: "@OPENAI_API_KEY" and
available_in_project: true. It still omits the raw value.
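The metadata result can feed the next call directly. A sketch that upgrades a bare secret name into hasp_run arguments, assuming the result carries name, named_reference, and available_in_project as in the example above; run_args_from_metadata is a hypothetical helper.

```python
def run_args_from_metadata(meta: dict, command: list) -> dict:
    """Build hasp_run arguments from a hasp_secret_get result."""
    if not meta.get("available_in_project"):
        # Report the missing binding instead of hunting for plaintext.
        raise ValueError(f"{meta['name']} is not bound to this project")
    return {
        "project_root": ".",
        "env": {meta["name"]: meta["named_reference"]},
        "command": command,
    }
```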
Blocked run reports
A failure report names the missing piece.
HASP refused the run because this repo cannot name @OPENAI_API_KEY.
I did not receive the secret value.
Please bind that ref to this project or give me a different target.
For an expired session:
HASP refused the run because the session grant expired.
I can retry through hasp_run if you approve a new session grant.
For a tool that expects a file:
This command wants a credential file path.
I can rerun it with hasp_inject using @GCP_SERVICE_ACCOUNT.
The agent reports the broker decision and stops there. Copying a key into .env changes the security story and leaves cleanup work behind.
MCP errors use JSON-RPC error responses. Read the message field:
{
"jsonrpc": "2.0",
"id": 7,
"error": {
"code": -32000,
"message": "target cannot be combined with explicit env or files mappings"
}
}
Translate that into a developer-facing report:
HASP refused the MCP tool call because the target already defines its delivery.
I did not run the command.
I can retry with the target alone or with explicit env mappings, but not both.
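Translating the JSON-RPC error mechanically keeps reports consistent. A sketch built on the error envelope shown above; report_refusal is a hypothetical helper.

```python
def report_refusal(response: dict) -> str:
    """Turn a JSON-RPC error response into a developer-facing report."""
    error = response.get("error")
    if error is None:
        return "The call succeeded; nothing was refused."
    return ("HASP refused the MCP tool call: {}. "
            "I did not run the command.".format(error["message"]))
```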
Transcript rules for agents
Use these rules in agent instructions, project docs, or a system prompt:
- Use HASP refs for managed secrets.
- Prefer hasp agent mcp <profile> over plain hasp mcp when a profile exists.
- Call hasp_list before guessing a ref.
- Call hasp_targets before using a repo-defined workflow.
- Use hasp_run for environment variables.
- Use hasp_inject for credential files.
- Use hasp_secret_get for metadata, not plaintext.
- Quote command output only after redaction if it may contain a managed value.
- Ask for raw secret values only after a human approves plaintext access.
These rules give the agent a narrow protocol. They also give the developer a readable transcript: the agent asked for a ref, HASP checked policy, the command ran, and the agent reported the result.
Review an agent run
After the run, check three places:
- The transcript contains refs, commands, and output. Secret values stay out.
- The audit log shows the session, grant, reference, and delivery path.
- The repo stays clean. Run hasp check-repo if the agent wrote files.
If all three pass, the agent used HASP the way a developer expects: enough access to finish the task, no raw value in the agent context.