The Secret Registry provides a secure way to handle sensitive data in your agent’s workspace. It automatically detects secret references in bash commands, injects them as environment variables when needed, and masks secret values in command outputs to prevent accidental exposure.
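To make the masking behavior concrete, here is a small self-contained sketch of the idea. It is an illustration only, not the SDK's internal implementation, and the placeholder text used for masked values is a hypothetical choice.

```python
# Conceptual illustration of output masking -- not the SDK's internals.
# Any registered secret value that appears in a command's output is
# replaced with a placeholder before the output is surfaced.
SECRETS = {"GITHUB_TOKEN": "ghp_example_1234567890"}  # hypothetical value


def mask_output(text: str, placeholder: str = "<secret-hidden>") -> str:
    """Replace every known secret value in `text` with a placeholder."""
    for value in SECRETS.values():
        text = text.replace(value, placeholder)
    return text


raw = "HTTP 200 OK (authenticated with ghp_example_1234567890)"
print(mask_output(raw))
# -> HTTP 200 OK (authenticated with <secret-hidden>)
```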
Injecting Secrets
Use the update_secrets() method to add secrets to your conversation.
Secrets can be provided as static strings or as callable functions that dynamically retrieve values, enabling integration with external secret stores and credential management systems.
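The following is a condensed sketch in the spirit of the linked example. The openhands.sdk import path, the LLM, Agent, and Conversation constructor arguments, and the empty tool list are assumptions and may differ across SDK versions; only the update_secrets() usage described above is taken from this page.

```python
import os

from pydantic import SecretStr

# Assumed import path and constructors (modeled on the linked example);
# adjust to match your SDK version.
from openhands.sdk import LLM, Agent, Conversation


def fetch_db_password() -> str:
    """Callable secret: resolved each time the agent needs the value.

    In practice this could query Vault, AWS Secrets Manager, or any
    other credential store.
    """
    return os.environ["DB_PASSWORD"]


llm = LLM(
    model="anthropic/claude-sonnet-4-5-20250929",
    api_key=SecretStr(os.environ["LLM_API_KEY"]),
)
agent = Agent(llm=llm, tools=[])  # tool registration omitted; see the linked example
conversation = Conversation(agent=agent, workspace=os.getcwd())

# Static strings and callables can be mixed freely.
conversation.update_secrets(
    {
        "GITHUB_TOKEN": "ghp_example_static_token",  # static value
        "DB_PASSWORD": fetch_db_password,            # looked up on demand
    }
)

# When a bash command references $GITHUB_TOKEN or $DB_PASSWORD, the registry
# injects the value into that command's environment and masks it in output.
conversation.send_message("Echo $GITHUB_TOKEN to confirm it is masked.")
conversation.run()
```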
Ready-to-run Example
This example is available on GitHub: examples/01_standalone_sdk/12_custom_secrets.py
The model name should follow the LiteLLM convention:
provider/model_name (e.g., anthropic/claude-sonnet-4-5-20250929, openai/gpt-4o).
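As a quick sketch of the convention (mirroring the LLM construction above, so the constructor details remain assumptions), switching providers only means changing the model string and supplying that provider's key:

```python
# Hedged sketch: both model strings follow LiteLLM's provider/model_name
# convention, and LLM_API_KEY must hold the key for whichever provider
# you pick. Constructor details may differ by SDK version.
import os

from pydantic import SecretStr
from openhands.sdk import LLM

anthropic_llm = LLM(
    model="anthropic/claude-sonnet-4-5-20250929",
    api_key=SecretStr(os.environ["LLM_API_KEY"]),
)
# Switching providers only changes the model string (and the key's value):
# openai_llm = LLM(model="openai/gpt-4o", api_key=SecretStr(os.environ["LLM_API_KEY"]))
```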
The LLM_API_KEY should be the API key for your chosen provider.

Next Steps
- MCP Integration - Connect to MCP servers
- Security Analyzer - Add security validation

