services.agent_factory
Agent Factory Service.
Provides shared logic for creating agents with skills and codemode toolsets. Used by both app.py (CLI agents) and routes/agents.py (API agents).
create_skills_toolset
def create_skills_toolset(skills: list[str],
skills_path: str,
shared_sandbox: Any | None = None) -> Any | None
Create an AgentSkillsToolset with the specified skills.
Skills are loaded via three complementary mechanisms, tried in order:
Path-based (Variant 1 + 1c): walk skills_path recursively and
load every sub-directory that contains a SKILL.md file. Skill
scripts are read from the local filesystem, so the path must be
accessible at runtime. In the Datalayer SaaS Kubernetes pod the
entrypoint copies /opt/datalayer/skills/ to the shared emptyDir
volume (/mnt/shared-agent/skills/); the AGENT_RUNTIMES_SKILLS_FOLDER
env var then points here so both the agent-runtimes container (which
reads the SKILL.md files) and the Jupyter kernel container (which
executes the script code sent to it by the SandboxExecutor) can reach
the same files.
Module-based (Variant 1b): for each skill whose catalog spec has a
module field, import the Python package and locate the SKILL.md
via AgentSkill.from_module(). Works for both regular packages and
namespace packages (directories without __init__.py). Requires only
that agent-skills (or whatever provides the skills) is pip-installed
— no separate on-disk copy is needed. Script code is still read from
the installed package path and sent as a string to the sandbox for
execution, so scripts run on the sandbox side regardless of which
loading mechanism was used.
Package-based (Variant 2): for catalog specs with a package +
method field, import the package and wrap a Python callable directly
(no script file needed).
Arguments:
- `skills` - List of skill name references to load (may include a version suffix, e.g. `"crawl:0.0.1"`).
- `skills_path` - Path to a local skills directory scanned for `SKILL.md` files (path-based loading). Set `AGENT_RUNTIMES_SKILLS_FOLDER=/mnt/shared-agent/skills` in the Kubernetes pod to point at the shared volume.
- `shared_sandbox` - Optional shared sandbox for state persistence.
Returns:
AgentSkillsToolset instance, or None if agent-skills is not
available.
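The path-based variant described above reduces to a recursive walk for `SKILL.md` marker files. A minimal sketch of that discovery step (the helper name `discover_skill_dirs` is hypothetical; the real toolset performs this internally):

```python
from pathlib import Path


def discover_skill_dirs(skills_path: str) -> list[Path]:
    """Recursively find every sub-directory containing a SKILL.md file.

    Mirrors the path-based loading described above: any directory under
    skills_path that holds a SKILL.md is treated as a skill. Returns an
    empty list when the path does not exist or is not a directory.
    """
    root = Path(skills_path)
    if not root.is_dir():
        return []
    # rglob finds SKILL.md at any depth; the skill directory is its parent.
    return sorted(marker.parent for marker in root.rglob("SKILL.md"))
```

This is why the path must be reachable at runtime: discovery reads the filesystem directly, which is what the shared `emptyDir` volume arrangement in the Kubernetes pod guarantees.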
create_codemode_toolset
def create_codemode_toolset(mcp_servers: list[Any],
workspace_path: str,
generated_path: str,
skills_path: str,
allow_direct_tool_calls: bool = False,
shared_sandbox: Any | None = None,
mcp_proxy_url: str | None = None,
enable_discovery_tools: bool = True,
status_change_callback: Any | None = None,
sandbox_variant: str | None = None) -> Any | None
Create a CodemodeToolset with the specified MCP servers.
Arguments:
- `mcp_servers` - List of MCP server objects to register.
- `workspace_path` - Path to the workspace directory.
- `generated_path` - Path to the generated code directory.
- `skills_path` - Path to the skills directory.
- `allow_direct_tool_calls` - Whether to allow direct tool calls.
- `shared_sandbox` - Optional shared sandbox for state persistence.
- `mcp_proxy_url` - Optional MCP proxy URL for Jupyter/remote execution.
- `enable_discovery_tools` - Whether to enable discovery tools (default: True).
- `status_change_callback` - Optional callback invoked when the sandbox status changes.
- `sandbox_variant` - Sandbox variant (`'local-eval'`, `'local-jupyter'`, `'jupyter'`). If None, reads from the CodeSandboxManager's current config.
Returns:
CodemodeToolset instance or None if codemode not available
initialize_codemode_toolset
async def initialize_codemode_toolset(codemode_toolset: Any) -> None
Initialize a codemode toolset (start and discover tools).
Arguments:
- `codemode_toolset` - The `CodemodeToolset` instance to initialize.
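Initialization is async because tool discovery may involve network round-trips, so callers await it once after construction. A sketch of the "start and discover" lifecycle, assuming the toolset exposes async `start()` and `discover_tools()` methods (both method names are assumptions, shown here on a stub):

```python
import asyncio


class StubCodemodeToolset:
    """Stand-in with the start/discover lifecycle described above."""

    def __init__(self) -> None:
        self.started = False
        self.tools: list[str] = []

    async def start(self) -> None:
        self.started = True

    async def discover_tools(self) -> None:
        # In the real toolset this queries the registered MCP servers.
        self.tools = ["execute_code", "call_tool"]


async def initialize_codemode_toolset(codemode_toolset) -> None:
    # Start the toolset first, then discover the tools it exposes.
    await codemode_toolset.start()
    await codemode_toolset.discover_tools()
```

Note the ordering requirement documented below for `wire_skills_into_codemode`: wiring must happen after this coroutine completes, once the executor and codegen are ready.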
create_shared_sandbox
def create_shared_sandbox(
jupyter_sandbox_url: str | None = None) -> Any | None
Create a shared managed sandbox proxy.
The proxy always delegates to the manager's current sandbox, so when the manager is reconfigured (e.g. local-eval → local-jupyter), all consumers automatically use the new sandbox.
Arguments:
- `jupyter_sandbox_url` - Optional Jupyter server URL (with token).
Returns:
ManagedSandbox proxy or None if code_sandboxes not available
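The always-delegate behaviour can be sketched as a thin proxy whose attribute lookups resolve against the manager's current sandbox at call time (class and attribute names here are assumptions, not the library's actual API):

```python
class SandboxManager:
    """Holds the currently configured sandbox; may be reconfigured at runtime."""

    def __init__(self, sandbox) -> None:
        self.current_sandbox = sandbox


class ManagedSandboxProxy:
    """Delegates every attribute lookup to the manager's *current* sandbox,
    so consumers transparently follow reconfiguration."""

    def __init__(self, manager: SandboxManager) -> None:
        self._manager = manager

    def __getattr__(self, name):
        # Resolved on every access, not captured at construction time,
        # which is what makes local-eval -> local-jupyter swaps transparent.
        return getattr(self._manager.current_sandbox, name)
```

Because resolution happens per access rather than per construction, every consumer holding the proxy picks up a reconfigured sandbox with no re-wiring.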
generate_skills_prompt_section
def generate_skills_prompt_section(
skills_metadata: list[dict[str, Any]]) -> str
Generate a system prompt section describing available skills.
Produces a Markdown section that gives the LLM visibility into the
installed skills, their scripts, parameters, return values, and
usage examples so it can call run_skill() correctly without
needing to call list_skills() first for discovery.
Arguments:
- `skills_metadata` - List of skill metadata dicts as built by `wire_skills_into_codemode`.
Returns:
A Markdown string suitable for appending to the system prompt. Returns an empty string if no skills are available.
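A minimal sketch of this kind of prompt-section builder, assuming metadata dicts with `name`, `description`, and `scripts` keys (the exact schema is an assumption; the real keys are whatever `wire_skills_into_codemode` produces):

```python
from typing import Any


def skills_prompt_section(skills_metadata: list[dict[str, Any]]) -> str:
    """Render a Markdown section so the LLM can call run_skill() without
    a list_skills() discovery round-trip. Returns "" when no skills exist."""
    if not skills_metadata:
        return ""
    lines = ["## Available Skills", ""]
    for skill in skills_metadata:
        lines.append(f"### {skill['name']}")
        if skill.get("description"):
            lines.append(skill["description"])
        for script in skill.get("scripts", []):
            params = ", ".join(script.get("parameters", []))
            lines.append(
                f"- `run_skill({skill['name']!r}, {script['name']!r}, ...)`"
                f" (parameters: {params or 'none'})"
            )
        lines.append("")
    return "\n".join(lines)
```

The empty-string return for an empty catalog matches the contract above, so callers can unconditionally append the result to the system prompt.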
wire_skills_into_codemode
def wire_skills_into_codemode(codemode_toolset: Any,
skills_toolset: Any) -> str
Wire skill bindings and routing into a codemode toolset.
This performs three things:
- Generates skill bindings under `generated/skills/` so that `execute_code` can `from generated.skills import run_skill`.
- Sets a skill tool caller on the codemode executor so that `call_tool("skills__<name>", args)` is routed to the skills toolset instead of the MCP registry.
- Returns a system prompt section describing the installed skills, their scripts, parameters, and usage so the LLM has full visibility into the skill catalog.
Must be called after initialize_codemode_toolset so the
executor and codegen are ready.
Arguments:
- `codemode_toolset` - An initialised `CodemodeToolset` instance.
- `skills_toolset` - An initialised `AgentSkillsToolset` instance.
Returns:
A Markdown string for appending to the system prompt, or ""
if skills could not be wired.
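The `skills__<name>` routing convention from the second wiring step can be sketched as a caller that splits on the prefix (the function and stub names are illustrative, not the library's API):

```python
SKILL_TOOL_PREFIX = "skills__"


def route_tool_call(tool_name: str, args: dict, skills_toolset, mcp_registry):
    """Route skills__-prefixed calls to the skills toolset; everything
    else falls through to the MCP registry."""
    if tool_name.startswith(SKILL_TOOL_PREFIX):
        skill_name = tool_name[len(SKILL_TOOL_PREFIX):]
        return skills_toolset.run_skill(skill_name, **args)
    return mcp_registry.call_tool(tool_name, args)
```

The prefix keeps skill invocations inside the single `call_tool` surface that generated code already uses, while still dispatching them to the skills toolset rather than an MCP server.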