This example is available on GitHub: examples/01_standalone_sdk/03_activate_skill.py
Skills modify agent behavior by injecting additional context and rules. This example shows both always-active skills and keyword-triggered skills:
examples/01_standalone_sdk/03_activate_skill.py
import os

from pydantic import SecretStr

from openhands.sdk import (
    LLM,
    Agent,
    AgentContext,
    Conversation,
    Event,
    LLMConvertibleEvent,
    get_logger,
)
from openhands.sdk.context import (
    KeywordTrigger,
    Skill,
)
from openhands.sdk.tool import Tool
from openhands.tools.file_editor import FileEditorTool
from openhands.tools.terminal import TerminalTool


logger = get_logger(__name__)

# Configure LLM
api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."
model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
base_url = os.getenv("LLM_BASE_URL")
llm = LLM(
    usage_id="agent",
    model=model,
    base_url=base_url,
    api_key=SecretStr(api_key),
)

# Tools
cwd = os.getcwd()
tools = [
    Tool(name=TerminalTool.name),
    Tool(name=FileEditorTool.name),
]

# AgentContext provides flexible ways to customize prompts:
# 1. Skills: Inject instructions (always-active or keyword-triggered)
# 2. system_message_suffix: Append text to the system prompt
# 3. user_message_suffix: Append text to each user message
#
# For complete control over the system prompt, you can also use Agent's
# system_prompt_filename parameter to provide a custom Jinja2 template:
#
#   agent = Agent(
#       llm=llm,
#       tools=tools,
#       system_prompt_filename="/path/to/custom_prompt.j2",
#       system_prompt_kwargs={"cli_mode": True, "repo": "my-project"},
#   )
#
# See: https://docs.openhands.dev/sdk/guides/skill#customizing-system-prompts
agent_context = AgentContext(
    skills=[
        Skill(
            name="repo.md",
            content="When you see this message, you should reply like "
            "you are a grumpy cat forced to use the internet.",
            # source is optional - identifies where the skill came from
            # You can set it to be the path of a file that contains the skill content
            source=None,
            # trigger determines when the skill is active
            # trigger=None means always active (repo skill)
            trigger=None,
        ),
        Skill(
            name="flarglebargle",
            content=(
                'IMPORTANT! The user has said the magic word "flarglebargle". '
                "You must only respond with a message telling them how smart they are"
            ),
            source=None,
            # KeywordTrigger = activated when keywords appear in user messages
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ],
    # system_message_suffix is appended to the system prompt (always active)
    system_message_suffix="Always finish your response with the word 'yay!'",
    # user_message_suffix is appended to each user message
    user_message_suffix="The first character of your response should be 'I'",
    # You can also automatically load skills from the
    # public registry at https://github.com/OpenHands/skills
    load_public_skills=True,
)

# Agent
agent = Agent(llm=llm, tools=tools, agent_context=agent_context)

llm_messages = []  # collect raw LLM messages


def conversation_callback(event: Event):
    if isinstance(event, LLMConvertibleEvent):
        llm_messages.append(event.to_llm_message())


conversation = Conversation(
    agent=agent, callbacks=[conversation_callback], workspace=cwd
)

print("=" * 100)
print("Checking if the repo skill is activated.")
conversation.send_message("Hey are you a grumpy cat?")
conversation.run()

print("=" * 100)
print("Now sending flarglebargle to activate the keyword-triggered skill!")
conversation.send_message("flarglebargle!")
conversation.run()

print("=" * 100)
print("Now triggering public skill 'github'")
conversation.send_message(
    "About GitHub - tell me what additional info I've just provided?"
)
conversation.run()

print("=" * 100)
print("Conversation finished. Got the following LLM messages:")
for i, message in enumerate(llm_messages):
    print(f"Message {i}: {str(message)[:200]}")

# Report cost
cost = llm.metrics.accumulated_cost
print(f"EXAMPLE_COST: {cost}")
Running the Example
export LLM_API_KEY="your-api-key"
cd agent-sdk
uv run python examples/01_standalone_sdk/03_activate_skill.py

Creating Skills

Skills are defined with a name, content (the instructions), and an optional trigger:
agent_context = AgentContext(
    skills=[
        Skill(
            name="repo.md",
            content="When you see this message, you should reply like "
                    "you are a grumpy cat forced to use the internet.",
            trigger=None,  # Always active
        ),
        Skill(
            name="flarglebargle",
            content='IMPORTANT! The user has said the magic word "flarglebargle". '
                    "You must only respond with a message telling them how smart they are",
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ]
)

Keyword Triggers

Use KeywordTrigger to activate skills only when specific words appear:
Skill(
    name="magic-word",
    content="Special instructions when magic word is detected",
    trigger=KeywordTrigger(keywords=["flarglebargle", "sesame"]),
)
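Conceptually, a keyword trigger scans each incoming user message for any of its keywords. A minimal sketch of that matching logic (assuming simple case-insensitive substring matching; the SDK's actual matching rules may differ):

```python
def keyword_triggered(message: str, keywords: list[str]) -> bool:
    """Return True if any trigger keyword appears in the message (case-insensitive)."""
    lowered = message.lower()
    return any(keyword.lower() in lowered for keyword in keywords)


print(keyword_triggered("Open sesame, please!", ["flarglebargle", "sesame"]))  # True
print(keyword_triggered("Hello there!", ["flarglebargle", "sesame"]))  # False
```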

Loading Public Skills

OpenHands maintains a public skills repository with community-contributed skills. You can automatically load these skills without waiting for SDK updates.

Automatic Loading via AgentContext

Enable public skills loading in your AgentContext:
agent_context = AgentContext(
    load_public_skills=True,  # Auto-load from public registry
    skills=[
        # Your custom skills here
    ]
)
When enabled, the SDK will:
  1. Clone the public skills repository to ~/.openhands/cache/skills/ on first run (and pull updates on later runs)
  2. Load all available skills from the repository
  3. Merge them with your explicitly defined skills
Skill Precedence: If names conflict, your explicitly defined skills take precedence over public skills.
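The precedence rule amounts to a merge keyed by skill name, where explicit skills override public ones. A rough sketch of that merge (illustrative only: plain dicts stand in for `Skill` objects, and this is not the SDK's actual implementation):

```python
def merge_skills(explicit: list[dict], public: list[dict]) -> list[dict]:
    """Merge skill lists; on a name conflict, the explicit skill wins."""
    merged = {skill["name"]: skill for skill in public}
    merged.update({skill["name"]: skill for skill in explicit})  # explicit overrides
    return list(merged.values())


mine = [{"name": "github", "content": "my version"}]
public = [
    {"name": "github", "content": "public version"},
    {"name": "python", "content": "public python tips"},
]
result = merge_skills(mine, public)  # "github" keeps "my version"; "python" survives
```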

Programmatic Loading

You can also load public skills manually for finer-grained control:
from openhands.sdk.context.skills import load_public_skills

# Load all public skills
public_skills = load_public_skills()

# Use with AgentContext
agent_context = AgentContext(skills=public_skills)

# Or combine with custom skills
my_skills = [
    Skill(name="custom", content="Custom instructions", trigger=None)
]
agent_context = AgentContext(skills=my_skills + public_skills)

Custom Skills Repository

You can load skills from your own repository:
from openhands.sdk.context.skills import load_public_skills

# Load from a custom repository
custom_skills = load_public_skills(
    repo_url="https://github.com/my-org/my-skills",
    branch="main"
)

How It Works

The load_public_skills() function uses git-based caching for efficiency:
  • First run: Clones the skills repository to ~/.openhands/cache/skills/public-skills/
  • Subsequent runs: Pulls the latest changes to keep skills up-to-date
  • Offline mode: Uses the cached version if network is unavailable
This approach is more efficient than fetching individual skill files via HTTP and ensures you always have access to the latest community skills.
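The clone-then-pull flow can be sketched with the standard library (a simplified illustration, not the SDK's implementation; the `run` parameter is injectable so the git calls can be stubbed out in tests):

```python
import subprocess
from pathlib import Path


def sync_skills_cache(repo_url: str, cache_dir: Path, run=subprocess.run) -> Path:
    """Clone on first use, pull afterwards; keep the cached copy if git fails offline."""
    try:
        if (cache_dir / ".git").is_dir():
            # Subsequent runs: fast-forward to the latest skills
            run(["git", "-C", str(cache_dir), "pull", "--ff-only"], check=True)
        else:
            # First run: shallow-clone the repository into the cache
            run(["git", "clone", "--depth", "1", repo_url, str(cache_dir)], check=True)
    except subprocess.CalledProcessError:
        if not cache_dir.is_dir():
            raise  # first run and no cache: nothing to fall back on
    return cache_dir
```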
Explore available public skills at github.com/OpenHands/skills. These skills cover various domains like GitHub integration, Python development, debugging, and more.

Message Suffixes for AgentContext

Append custom instructions to the system prompt or user messages via AgentContext:
agent_context = AgentContext(
    system_message_suffix="""
<REPOSITORY_INFO>
Repository: my-project
Branch: feature/new-api
</REPOSITORY_INFO>
    """.strip(),
    user_message_suffix="Remember to explain your reasoning."
)
  • system_message_suffix: Appended to system prompt (always active, combined with repo skills)
  • user_message_suffix: Appended to each user message
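Conceptually, the two suffixes are plain string appends applied at different points: once to the system prompt, and once to every user message. A rough sketch of the effect (illustrative only, not the SDK's internal code):

```python
def apply_suffixes(system_prompt, user_message, system_suffix="", user_suffix=""):
    """Show where each suffix lands: once on the system prompt, once per user message."""
    if system_suffix:
        system_prompt = f"{system_prompt}\n\n{system_suffix}"
    if user_suffix:
        user_message = f"{user_message}\n\n{user_suffix}"
    return system_prompt, user_message


sys_prompt, user_msg = apply_suffixes(
    "You are a helpful agent.",
    "Refactor this function.",
    system_suffix="<REPOSITORY_INFO>Repository: my-project</REPOSITORY_INFO>",
    user_suffix="Remember to explain your reasoning.",
)
```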

Replacing the Entire System Prompt

For complete control, provide a custom Jinja2 template via the Agent class:
from openhands.sdk import Agent

agent = Agent(
    llm=llm,
    tools=tools,
    system_prompt_filename="/path/to/custom_system_prompt.j2",  # Absolute path
    system_prompt_kwargs={"cli_mode": True, "repo_name": "my-project"}
)
Custom template example (custom_system_prompt.j2):
You are a helpful coding assistant for {{ repo_name }}.

{% if cli_mode %}
You are running in CLI mode. Keep responses concise.
{% endif %}

Follow these guidelines:
- Write clean, well-documented code
- Consider edge cases and error handling
- Suggest tests when appropriate
Key points:
  • Use relative filenames (e.g., "system_prompt.j2") to load from the agent’s prompts directory
  • Use absolute paths (e.g., "/path/to/prompt.j2") to load from any location
  • Pass variables to the template via system_prompt_kwargs
  • The system_message_suffix from AgentContext is automatically appended after your custom prompt
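Because the template is standard Jinja2, you can preview it outside the agent using the jinja2 package directly, passing the same values you would put in system_prompt_kwargs:

```python
from jinja2 import Template

template_src = """\
You are a helpful coding assistant for {{ repo_name }}.
{% if cli_mode %}
You are running in CLI mode. Keep responses concise.
{% endif %}
"""

# Render with the same values you would pass via system_prompt_kwargs
rendered = Template(template_src).render(cli_mode=True, repo_name="my-project")
print(rendered)
```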

Next Steps