FAQ

11 answers

Common questions.

Short answers about the current product state. No séance. No vibes-only architecture diagram. Just the machine, the agent, the tools, and the parts that are still landing.

01

What is Agent Machines?

Agent Machines is a per-account runtime for persistent agents. Each signed-in user can keep machines with durable Linux filesystems under /home/machine, so chats, working files, artifacts, learned skills, and cron schedules survive sleep and wake cycles.

02

How is this different from a regular chatbot?

A regular chatbot usually stores memory in browser state or a vendor-owned memory layer. Agent Machines persists operational state to a real machine filesystem: chat records, artifacts, USER.md, MEMORY.md, Hermes sessions, cron schedules, skills, and the runtime venv.

03

Which agents can I run?

Hermes and OpenClaw are the two agent runtimes represented in the app. Hermes is the default runtime, with native memory, cron, sessions, and MCP support. OpenClaw is the computer-use runtime with browser, screenshot, click, shell, and file operations. Both sit behind the same machine/gateway concept.

04

Which providers can host the machine?

Dedalus Machines is wired end-to-end today. The MachineProvider abstraction, setup UI, and user config schema also include Vercel Sandbox and Fly Machines, but those provisioners currently return explicit not-supported responses until their provider implementations land.

05

How do I get my own machine today?

Sign in with Clerk, add provider credentials in /dashboard/setup, pick the agent, provider, spec, and model, then provision the machine record. The browser flow creates the provider machine and stores it in your fleet; the reliable agent bootstrap path is still the matching root CLI deploy command until browser-driven bootstrap lands.

06

What tools and skills come pre-installed?

The public loadout tracks 23 built-in tools, 17 service routes, and 95 SKILL.md files. The surface includes terminal, filesystem, browser automation, web search, vision, image generation, code execution, cron, memory, sessions, Vercel, Stripe, Supabase, Linear, GitHub, Slack, PostHog, Sentry, Clerk, Firebase, Figma, Shopify, ClickHouse, Datadog, AWS, Cloudflare, and model providers.

07

Is Cursor required?

No. Cursor is optional delegation for code edits through cursor-bridge and @cursor/sdk. Without CURSOR_API_KEY, the rest of the machine still runs: chat, files, browser automation, skills, cron, memory, dashboard polling, artifacts, and provider lifecycle controls.
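Since delegation is gated on a single environment variable, you can check locally whether Cursor edits will be active. A minimal sketch: CURSOR_API_KEY is the variable named in the answer above; the function name and messages are illustrative only.

```shell
# cursor_mode reports whether Cursor delegation would be active in this
# environment. Only CURSOR_API_KEY is checked; everything else on the
# machine (chat, files, skills, cron, memory) runs either way.
cursor_mode() {
  if [ -n "${CURSOR_API_KEY:-}" ]; then
    echo "enabled"
  else
    echo "disabled"
  fi
}

cursor_mode
```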

08

What are ~/.agent-machines, ~/.hermes, and /home/machine/hermes-machines?

~/.agent-machines is Agent Machines product data such as chats and artifacts. ~/.hermes is Hermes runtime state such as skills, crons, sessions, logs, MEMORY.md, USER.md, and config. /home/machine/hermes-machines is the git checkout used by reload-from-git.sh. They are separate on purpose.
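The split can be sketched as a directory tree. This builds an illustrative copy in a temp directory; the three roots come from the answer above, but the subdirectory names (chats, artifacts, skills, crons, sessions) are assumptions based on the contents the answer lists, not the exact on-disk layout.

```shell
# Sketch of the three separate state roots (illustrative subdirectory names).
root=$(mktemp -d)

# Agent Machines product data: chats and artifacts.
mkdir -p "$root/.agent-machines/chats" "$root/.agent-machines/artifacts"

# Hermes runtime state: skills, crons, sessions, logs, memory files.
mkdir -p "$root/.hermes/skills" "$root/.hermes/crons" "$root/.hermes/sessions" "$root/.hermes/logs"
touch "$root/.hermes/MEMORY.md" "$root/.hermes/USER.md"

# Git checkout used by reload-from-git.sh.
mkdir -p "$root/hermes-machines"

find "$root" | sort
```

Keeping product data, runtime state, and the git checkout in separate roots means a reload-from-git never touches chats, memory, or cron state.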

09

What inference providers are supported?

Dedalus is the default OpenAI-compatible inference endpoint. Hermes reads model.base_url and model.default from its config, and the CLI can point DEDALUS_CHAT_BASE_URL at any other compatible /v1 endpoint when needed. The dashboard stores a model slug per machine.
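Retargeting inference is an environment-variable change. DEDALUS_CHAT_BASE_URL is the variable named above; the URL here is a placeholder for whatever OpenAI-compatible /v1 server you run.

```shell
# Point the CLI at a different OpenAI-compatible inference endpoint.
# The URL is a placeholder; any server exposing a compatible /v1 API works.
export DEDALUS_CHAT_BASE_URL="https://example.com/v1"

echo "chat base url: $DEDALUS_CHAT_BASE_URL"
```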

10

What happens when a machine sleeps?

On supported providers, sleep pauses compute while preserving the persistent volume. The next wake resumes from disk: app artifacts, agent runtime state, skills, cron schedules, sessions, and the venv remain available.

11

Where does my data live?

Provider credentials and gateway bearers live in Clerk private metadata. Machine state lives on the provider machine under /home/machine, with Agent Machines product data under ~/.agent-machines and Hermes runtime data under ~/.hermes. The public client only sees redacted provider and machine status.