Reverse Engineering Claude Desktop's Cowork Mode: A Deep Dive into VM Isolation and Linux Possibilities
Analysis based on Claude Desktop version 1.1.799, extracted January 2026
I maintain claude-desktop-debian, a build script that repackages the Windows version of Claude Desktop for Linux. It works pretty well for the core chat functionality, MCP support, and the quick-entry popup. But there's one major feature that doesn't work at all on Linux: Cowork mode.
@chukfinley recently submitted a PR attempting to stub out the native macOS addon that powers Cowork. That got me curious: what exactly are we trying to replicate? What is Cowork? How does it work under the hood? And what would it take to get it working properly on Linux?
I pointed Claude at the minified JavaScript and asked it to reverse engineer the architecture. It spent some time extracting code, refactoring symbol names from things like tXe and Og to VMProcess and translateVMPathToHost, and documenting everything. Here's what it found.
Want the full technical reference? See the Complete Architecture Documentation for IPC APIs, symbol mappings, telemetry events, and VM image analysis.
Table of Contents
- What is Cowork?
- The Architecture: It's a VM, Not What You'd Expect
- Security: It's Not Just the VM
- End-to-End Flow: From App Launch to Response
- Digging into the Code
- More Than I Expected: Advanced Features
- The Anthropic Zoo: Internal Codenames
- Controlling Cowork: Settings and Configuration
- Paths Forward for Linux
- Plot Twist: x64 VM Images Exist
- What's Next
- References
What is Cowork?
Cowork (internally called "Local Agent Mode") is Claude Desktop's agentic mode. Instead of just chatting, Claude can actually do things on your computer - organize files, create documents, run commands, synthesize research across multiple sources.
You describe the outcome you want, grant access to specific folders, and Claude works autonomously until the task is complete. It's essentially Claude Code but with a GUI and without needing to touch the terminal.
The catch? It currently only works on macOS with Apple Silicon.
The Architecture: It's a VM, Not What You'd Expect
Here's the thing that surprised me: Cowork doesn't run directly on your Mac.
When you start a Cowork session, Claude Desktop boots up a lightweight Linux virtual machine using Apple's Virtualization Framework. Claude Code CLI runs inside that VM, completely isolated from your host system.
Simon Willison discovered this by having Claude Code reverse engineer the Claude Desktop app itself (very meta). The VM downloads a custom Linux rootfs and kernel, then communicates with the Electron app through stdio-based IPC.
┌─────────────────────────────────────────────────────────────────┐
│ Claude Desktop (Electron) │
│ ┌─────────────┐ ┌──────────────────┐ ┌──────────────┐ │
│ │ Renderer │────▶│ Main Process │────▶│@ant/claude- │ │
│ │ (Web UI) │ IPC │ (Node.js) │ │swift (Swift) │ │
│ └─────────────┘ └──────────────────┘ └──────┬───────┘ │
└───────────────────────────────────────────────────────┼──────────┘
│
┌─────────────▼─────────────┐
│ VZVirtualMachine │
│ ┌───────────────────┐ │
│ │ Linux VM │ │
│ │ ┌─────────────┐ │ │
│ │ │ Claude Code │ │ │
│ │ │ CLI │ │ │
│ │ └─────────────┘ │ │
│ └───────────────────┘ │
└───────────────────────────┘
Why a VM? Security. Claude can only access the specific folders you've mounted into the VM. It can't touch your system files, can't read your SSH keys, can't access anything you haven't explicitly granted. Even if you tell Claude to rm -rf /, it's blowing away the VM's filesystem, not yours.
This is the same isolation approach Docker Desktop uses on macOS - Apple Silicon's unified memory makes lightweight VMs surprisingly fast with minimal overhead.
Security: It's Not Just the VM
The security model goes deeper than just "run it in a VM." Looking at the error handling code, I found references to bubblewrap and seccomp - they're using additional sandboxing inside the Linux VM:
// Error categorization from the app (refactored for readability)
function categorizeError(errorText) {
const output = extractOutput(errorText);
if (errorText.includes("Killed") && errorText.includes("apply-seccomp"))
return { category: "seccomp_killed", rawOutput: output };
if (errorText.includes("Sandbox dependencies are not available") ||
errorText.includes("ripgrep") ||
errorText.includes("bubblewrap") ||
errorText.includes("socat"))
return { category: "sandbox_deps_missing", rawOutput: output };
if (errorText.includes("was not found") && errorText.includes("/sessions/"))
return { category: "mount_not_found", rawOutput: output };
if (errorText.includes("failed to unmount") ||
errorText.includes("device or resource busy"))
return { category: "mount_busy", rawOutput: output };
// ... more categories: vm_disconnected, filesystem_error, network_error, etc.
}
So the security stack looks like:
- VZVirtualMachine - Hardware-level hypervisor isolation
- Custom Linux rootfs - Minimal attack surface
- Bubblewrap - Namespace isolation inside the VM
- Seccomp filters - Syscall allowlisting
- Path validation - Block traversal and dangerous file types
- OAuth MITM proxy - Token approval for API access
- Network egress allowlist - Controlled outbound connections
Blocked File Extensions
The app prevents access to executable file types:
const blockedBinaryExtensions = [
".exe", ".com", ".msi", ".bin",
".app", ".dmg", ".pkg", ".jar"
];
Path Traversal Detection
Every file access goes through validation:
// Minified: Wst → Claude's refactored name: validateVMPathAccess
function validateVMPathAccess(sessionId, vmPath) {
// Must be a local session
if (!sessionId.startsWith("local_"))
throw new Error("Invalid session");
// Extract and validate session name from path
const sessionName = extractSessionName(vmPath);
const expectedName = getVMProcessName(sessionId);
if (!sessionName || sessionName !== expectedName)
throw new Error("Session mismatch");
// Normalize and check for traversal
const normalized = path.posix.normalize(vmPath);
const expectedPrefix = `/sessions/${sessionName}/`;
if (!normalized.startsWith(expectedPrefix))
throw new Error("Path traversal detected");
// Check file extension
const ext = path.extname(vmPath).toLowerCase();
if (blockedBinaryExtensions.includes(ext))
throw new Error("Blocked file type");
return { vmProcessName: sessionName, normalizedPath: normalized };
}
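To make the traversal check concrete, here's a hypothetical call against that validator. The session ID and paths are invented, and I'm assuming extractSessionName() pulls the second path segment and that getVMProcessName() maps this session to "zealous-ramanujan":

// Hypothetical usage of the validator above - names and paths invented
validateVMPathAccess(
  "local_abc123",
  "/sessions/zealous-ramanujan/mnt/ProjectX/report.pdf"
); // => { vmProcessName: "zealous-ramanujan", normalizedPath: "/sessions/..." }
validateVMPathAccess(
  "local_abc123",
  "/sessions/zealous-ramanujan/mnt/../../other/secret"
); // throws "Path traversal detected" - normalize() resolves the ".." escape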
This is the kind of defense-in-depth that makes me feel okay about Cowork having access to my files. The VM can't escape to the host, and even inside the VM, it's further sandboxed.
End-to-End Flow: From App Launch to Response
Here's the complete journey of a Cowork request, from before you even click "Start" to seeing Claude's response:
Phase 1: Background Preparation (Before You Start)
┌─────────────────────────────────────────────────────────────────┐
│ App detects Cowork capability │
│ ↓ │
│ Warm bundle download starts in background │
│ ↓ │
│ Download rootfs.img.zst (~500MB compressed) │
│ ↓ │
│ Decompress with zstd → rootfs.img (~2GB) │
│ ↓ │
│ Verify SHA256 checksum │
│ ↓ │
│ Store in ~/Library/Application Support/Claude/vm_bundles/ │
└─────────────────────────────────────────────────────────────────┘
The app pre-downloads the VM image while you're chatting normally. When you actually start Cowork, it "promotes" the warm bundle instead of making you wait. (If you've ever noticed a claudevm.bundle directory taking up a few gigabytes in your Claude data folder, now you know what it is.)
Phase 2: VM Boot Sequence (When You Click "Start Cowork")
┌─────────────────────────────────────────────────────────────────┐
│ 1. Check for warm bundle │
│ ↓ (found) ↓ (not found) │
│ Promote to active Download now (blocking) │
│ ↓ ↓ │
│ ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←← │
│ ↓ │
│ 2. Load @ant/claude-swift native module │
│ ↓ │
│ 3. Prepare SDK (Claude Code CLI binary) │
│ ↓ │
│ 4. vmInterface.startVM(bundlePath, 8GB RAM) │
│ ↓ │
│ 5. VZVirtualMachine boots Linux kernel + rootfs │
│ ↓ │
│ 6. sdk-daemon starts inside VM (systemd service) │
│ ↓ │
│ 7. Poll isGuestConnected() every 100ms (up to 120s timeout) │
│ ↓ │
│ 8. Guest agent responds via vsock │
│ ↓ │
│ 9. vmInterface.installSdk() - install Claude Code CLI into VM │
│ ↓ │
│ 10. Start heartbeat monitoring │
│ ↓ │
│ VM State: READY │
└─────────────────────────────────────────────────────────────────┘
Steps 1-3 happen in parallel where possible. The whole boot takes a few seconds.
Note: The "SDK" referenced in step 9 is Claude Code CLI itself - it's not pre-installed in the VM image but copied in dynamically at startup.
Phase 3: Folder Mounting (When You Grant Access)
┌─────────────────────────────────────────────────────────────────┐
│ User selects folder: ~/Documents/ProjectX │
│ ↓ │
│ Renderer → IPC → Main Process │
│ ↓ │
│ Resolve symlinks: fs.realpath() │
│ ↓ │
│ Calculate relative path from $HOME │
│ ↓ │
│ vmInterface.mountPath(sessionId, "Documents/ProjectX", │
│ "ProjectX", "rw") │
│ ↓ │
│ VirtioFS mount created in VM │
│ ↓ │
│ VM path: /sessions/zealous-ramanujan/mnt/ProjectX │
│ Host path: /Users/you/Documents/ProjectX │
│ ↓ │
│ Path translation context updated │
└─────────────────────────────────────────────────────────────────┘
Mount modes: rw (read-write), rwd (read-write-delete, requires extra permission).
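In host-side code, the mount prep from the diagram boils down to something like this. A sketch: the helper name is mine, and I'm assuming mountPath takes the $HOME-relative path exactly as shown above:

const fs = require("node:fs/promises");
const path = require("node:path");
const os = require("node:os");

// Sketch of Phase 3 prep - helper name and error handling are mine
async function prepareMount(vmInterface, sessionId, selectedFolder) {
  const realPath = await fs.realpath(selectedFolder);           // resolve symlinks
  const relativeToHome = path.relative(os.homedir(), realPath); // "Documents/ProjectX"
  const mountName = path.basename(realPath);                    // "ProjectX"
  await vmInterface.mountPath(sessionId, relativeToHome, mountName, "rw");
  return mountName; // VirtioFS mount appears at /sessions/<name>/mnt/ProjectX
}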
Phase 4: Message Handling (When You Send a Prompt)
┌─────────────────────────────────────────────────────────────────┐
│ User types: "Create a summary of all PDFs in ProjectX" │
│ ↓ │
│ Renderer: LocalAgentModeSessions.sendMessage(sessionId, msg) │
│ ↓ │
│ IPC to Main Process │
│ ↓ │
│ Build spawn configuration: │
│ - processName: "zealous-ramanujan" │
│ - additionalMounts: { ProjectX: "/Users/you/.../ProjectX" } │
│ - allowedDomains: ["api.anthropic.com", ...] │
│ - oauthToken: "sk-ant-..." │
│ ↓ │
│ Register OAuth token with MITM proxy (for API auth) │
│ ↓ │
│ vmInterface.spawn(processId, config) │
│ ↓ │
│ Create VMProcess instance (stdin buffer, stdout stream) │
│ ↓ │
│ Wait for spawn confirmation from VM │
│ ↓ │
│ Flush any buffered stdin │
└─────────────────────────────────────────────────────────────────┘
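Flattened into code, the spawn configuration is roughly this shape. Field names are reconstructed from the flow above, so treat them as approximate rather than the app's exact schema:

// Approximate shape of the Phase 4 spawn config - reconstructed, not extracted
const config = {
  processName: "zealous-ramanujan",                // VM session name
  additionalMounts: { ProjectX: "/Users/you/Documents/ProjectX" },
  allowedDomains: ["api.anthropic.com"],           // enforced by the SOCKS5 proxy
  oauthToken: "sk-ant-...",                        // pre-registered with the MITM proxy
};
await vmInterface.spawn(processId, config);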
Phase 5: Inside the VM (Where the Magic Happens)
┌─────────────────────────────────────────────────────────────────┐
│ sdk-daemon receives spawn RPC via vsock │
│ ↓ │
│ Launch Claude Code CLI with bubblewrap sandbox: │
│ bwrap --unshare-all --share-net \ │
│ --ro-bind /usr /usr \ │
│ --bind /sessions/zealous-ramanujan/mnt/ProjectX \ │
│ /sessions/zealous-ramanujan/mnt/ProjectX \ │
│ --seccomp 3 3</path/to/filter.bpf \ │
│ -- claude ... │
│ ↓ │
│ Claude Code CLI starts │
│ ↓ │
│ Receives prompt via stdin (JSON-RPC style) │
│ ↓ │
│ Makes API call to api.anthropic.com │
│ → Request passes through SOCKS5 proxy (domain filtering) │
│ → OAuth token validated by MITM proxy │
│ ↓ │
│ Claude API returns response with tool calls │
│ ↓ │
│ Claude Code executes tools: │
│ - Read files from /sessions/.../mnt/ProjectX/ │
│ - Search with ripgrep │
│ - Write outputs │
│ ↓ │
│ Stream results to stdout │
└─────────────────────────────────────────────────────────────────┘
The seccomp filter blocks dangerous syscalls. The SOCKS5 proxy enforces the domain allowlist. The MITM proxy ensures only pre-approved OAuth tokens can authenticate.
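I haven't pulled apart the proxy binary itself, but the domain filtering is conceptually a simple allowlist check. A sketch, with the domain list and the subdomain handling assumed:

// Conceptual sketch of the egress filter - not the actual proxy code
const allowedDomains = new Set(["api.anthropic.com"]); // from the spawn config

function isEgressAllowed(hostname) {
  for (const domain of allowedDomains) {
    // Exact match, or (assumed) subdomain of an allowlisted domain
    if (hostname === domain || hostname.endsWith("." + domain)) return true;
  }
  return false;
}

isEgressAllowed("api.anthropic.com"); // true - request proxied through
isEgressAllowed("evil.example.com");  // false - connection refused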
Phase 6: Response Flow (Back to You)
┌─────────────────────────────────────────────────────────────────┐
│ Claude Code writes to stdout │
│ ↓ │
│ sdk-daemon captures output │
│ ↓ │
│ Sends via vsock to host │
│ ↓ │
│ @ant/claude-swift receives data │
│ ↓ │
│ Calls registered stdout callback │
│ ↓ │
│ VMProcess.pushStdout(data) → PassThrough stream │
│ ↓ │
│ Main Process parses response │
│ ↓ │
│ IPC: emit("onEvent", { type: "message", ... }) │
│ ↓ │
│ Renderer receives event │
│ ↓ │
│ UI updates with Claude's response │
│ ↓ │
│ If file was created: │
│ FileSystemWatcher detects change │
│ → emit("fsEvent", { type: "fs_file_created", ... }) │
│ → UI shows "New file: summary.md" │
└─────────────────────────────────────────────────────────────────┘
Phase 7: Session End (Cleanup)
┌─────────────────────────────────────────────────────────────────┐
│ User clicks "End Session" (or closes app) │
│ ↓ │
│ Kill all active VMProcess instances │
│ ↓ │
│ Stop heartbeat monitoring │
│ ↓ │
│ vmInterface.stopVM() │
│ ↓ │
│ VZVirtualMachine shuts down │
│ ↓ │
│ Clear VM instance ID │
│ ↓ │
│ Mounted folders are unmounted (host files untouched) │
│ ↓ │
│ Session data saved for history │
└─────────────────────────────────────────────────────────────────┘
The Key Bottleneck for Linux
Looking at this flow, the critical path runs through @ant/claude-swift - the native macOS addon. It handles:
- VM lifecycle (start/stop via VZVirtualMachine)
- vsock communication with the guest
- stdout/stderr event callbacks
- Folder mounting via VirtioFS
For Linux, we have two options:
- Stub it out - skip the VM, run Claude Code directly on the host
- Replace it - implement equivalent functionality using QEMU/KVM or Firecracker
Each approach has trade-offs. Stubbing is simpler but loses VM isolation. A full replacement gives security parity but requires significant engineering. More on the options below.
Digging into the Code
The minified JavaScript is... an experience. I extracted it from the Windows installer and ran it through prettier to make it readable. The core Cowork logic lives in .vite/build/index.js, weighing in at around 170,000 lines after beautification.
Here's what I pieced together:
The Swift Addon
The native module @ant/claude-swift is a Swift addon that interfaces with Apple's Virtualization Framework. It exposes a vm object with methods like:
- startVM(bundlePath, ramGB) - Boot the Linux VM
- stopVM() - Shut it down
- spawn(sessionId, processName, command, args, ...) - Run a process in the VM
- writeStdin(processId, data) - Send input to a running process
- mountPath(sessionId, hostPath, name, mode) - Mount a host folder into the VM
- setEventCallbacks(stdout, stderr, exit, error) - Register callbacks for process output
That last one is key. The Swift addon pushes stdout/stderr data to registered callbacks rather than using Node's standard stream events.
VM Startup Sequence
When you start a Cowork session, here's what happens:
// Simplified from the actual minified code
async function startVM(options) {
// Step 1: Download VM bundle + SDK if needed
const [isFreshDownload, sdkResult] = await Promise.all([
downloadVMBundle(),
sdkManager.prepareForVM()
]);
// Step 2: Load the Swift addon
const vmInterface = await getVMInterface();
// Step 3: Start the VM (boots Linux)
await vmInterface.startVM(bundlePath, ramGB);
// Step 4: Wait for guest to connect
while (!await isGuestConnected()) {
await sleep(100); // Poll every 100ms
}
// Step 5: Install SDK in guest
await vmInterface.installSdk(sdkSubpath, sdkVersion);
}
The startup takes a few seconds while the VM boots and the guest agent connects. Once connected, processes can be spawned inside the VM.
Path Translation
Here's where it gets interesting. When Claude references a file path inside the VM, it looks like:
/sessions/zealous-bold-ramanujan/mnt/Documents/report.pdf
The app translates these VM paths back to host paths:
| VM Path | Host Path |
|---|---|
| /sessions/<name>/mnt/<folder> | User-selected folder on host |
| /sessions/<name>/mnt/outputs | Session output directory |
| /sessions/<name>/mnt/uploads | Uploaded files |
| /sessions/<name>/mnt/.claude | Claude config directory |
This translation happens in a function I've named translateVMPathToHost() (it was originally Og() in the minified code):
function translateVMPathToHost(vmPath, context) {
const sessionPrefix = `/sessions/${context.vmProcessName}/`;
if (!vmPath.startsWith(sessionPrefix)) return null;
const relativePath = vmPath.slice(sessionPrefix.length);
if (relativePath.startsWith("mnt/")) {
const mountName = relativePath.split("/")[1];
// Check user-selected folders
for (const folder of context.userSelectedFolders) {
if (path.basename(folder) === mountName) {
// Re-root the rest of the VM path under the host folder
return path.join(folder, ...relativePath.split("/").slice(2));
}
}
}
return null;
}
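A quick hypothetical example of the translation (folder and session name invented):

// Hypothetical inputs - maps a VM path back to its host equivalent
translateVMPathToHost("/sessions/zealous-ramanujan/mnt/ProjectX/report.pdf", {
  vmProcessName: "zealous-ramanujan",
  userSelectedFolders: ["/Users/you/Documents/ProjectX"],
});
// => "/Users/you/Documents/ProjectX/report.pdf"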
Process Management
Each process running in the VM gets wrapped in a VMProcess class that handles stdin buffering, stdout forwarding, and cleanup:
class VMProcess extends EventEmitter {
constructor(processId, processName) {
  super();
  this.id = processId; // Used when forwarding stdin to the VM
  this._spawnConfirmed = false; // Flipped once the VM acknowledges the spawn
  this._stdinBuffer = []; // Buffer stdin until spawn confirmed
  this._stdin = new PassThrough();
  this._stdout = new PassThrough();
  // ...
}
// Called by VM event callback when output arrives
pushStdout(data) {
this._stdout.push(data);
}
// Forward stdin to VM
setupStdinForwarding() {
this._stdin.on("data", (chunk) => {
if (!this._spawnConfirmed) {
this._stdinBuffer.push(chunk); // Buffer until ready
return;
}
vmInterface.writeStdin(this.id, chunk);
});
}
}
The important detail: stdout doesn't come from Node's child_process. It comes from the Swift addon calling a registered callback, which then pushes data into the PassThrough stream.
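The wiring between the two, as far as I can tell, is a registry keyed by process ID. A sketch, with the callback signatures assumed and pushStderr assumed to mirror pushStdout:

// Sketch of the callback-to-stream routing (callback signatures assumed)
const processes = new Map(); // processId -> VMProcess

vmInterface.setEventCallbacks(
  (processId, data) => processes.get(processId)?.pushStdout(data), // stdout
  (processId, data) => processes.get(processId)?.pushStderr(data), // stderr
  (processId, code) => processes.get(processId)?.emit("exit", code),
  (processId, err) => processes.get(processId)?.emit("error", err)
);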
More Than I Expected: Advanced Features
Digging deeper, I found several features that surprised me. This isn't just a simple VM wrapper - there's real engineering here.
Warm Bundle System
The VM bundle (rootfs + kernel) is ~500MB compressed. Nobody wants to wait for that on first launch. So Anthropic built a warm bundle system that pre-downloads in the background:
// Telemetry events reveal the flow:
// lam_vm_warm_download_started - Background download begins
// lam_vm_warm_download_completed - Download finished (while you're chatting)
// lam_vm_warm_promote_completed - Next Cowork start uses pre-downloaded bundle
// On startup, it tries promotion first:
if (await promoteWarmBundle(bundleDir)) {
logger.info("[downloadVM] Warm bundle promoted successfully");
return false; // Skip download - already have it
}
// Only falls back to blocking download if no warm bundle exists
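promoteWarmBundle() itself isn't shown in the extracted code; mechanically, promotion is most likely just an atomic directory rename. A sketch, with the warm directory name and layout assumed:

const fs = require("node:fs/promises");
const path = require("node:path");

// Sketch only - the "warm_bundle" directory name and layout are assumptions
async function promoteWarmBundle(bundleDir) {
  const warmDir = path.join(path.dirname(bundleDir), "warm_bundle");
  try {
    await fs.access(path.join(warmDir, "rootfs.img")); // warm download complete?
    await fs.rm(bundleDir, { recursive: true, force: true });
    await fs.rename(warmDir, bundleDir); // atomic on the same filesystem
    return true;
  } catch {
    return false; // no warm bundle - caller falls back to blocking download
  }
}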
The download itself uses zstd compression and SHA256 verification:
const { sha256, bytesWritten } = await downloadWithTransform({
url: `${baseUrl}/rootfs.img.zst`,
outputPath: path.join(bundleDir, "rootfs.img"),
computeHash: true,
transform: zstd.createZstdDecompress(),
});
if (sha256 !== expectedChecksum) {
throw new Error("Checksum mismatch for VM bundle");
}
Smart. The first Cowork session might be slow, but subsequent ones start nearly instantly.
Heartbeat Monitoring & Auto-Recovery
What happens if the VM crashes? The app monitors it with a heartbeat protocol:
// Bidirectional heartbeat
const HeartbeatMessageType = {
heartbeat_request: "heartbeat_request",
heartbeat_response: "heartbeat_response",
};
// 30-second timeout before showing network error
const NETWORK_TIMEOUT_MS = 30000;
// On heartbeat failure: automatic restart
startHeartbeat({
onRestart: async () => {
logger.info("[VM:heartbeat] Heartbeat failure detected, restarting VM...");
await vmInterface.stopVM();
await startVM(options);
logger.info("[VM:heartbeat] VM restart completed successfully");
},
});
The VM restarts itself without user intervention. You might notice a brief pause, but you won't lose your session.
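The request/response loop isn't in the extracted snippet; a minimal version under the same 30-second timeout might look like this. The 5-second interval and the onMessage/sendMessage plumbing are my inventions, not extracted from the app:

// Minimal heartbeat loop sketch - message plumbing invented for illustration
function startHeartbeat({ onRestart }) {
  let lastResponse = Date.now();
  vmInterface.onMessage((msg) => {
    if (msg.type === HeartbeatMessageType.heartbeat_response) lastResponse = Date.now();
  });
  setInterval(async () => {
    vmInterface.sendMessage({ type: HeartbeatMessageType.heartbeat_request });
    if (Date.now() - lastResponse > NETWORK_TIMEOUT_MS) await onRestart();
  }, 5000);
}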
Knowledge Base Integration
Cowork supports mounting multiple knowledge bases into a session - collections of reference documents that Claude can search:
// Mount paths for knowledge bases
const kbMountPath = `/sessions/${sessionName}/mnt/.knowledge/${kbName}/`;
// Session tracks all mounted KBs
session.mountedKnowledgeBases = ["kb-123", "kb-456"];
session.knowledgeBaseMountPaths = new Map([
["project-docs", "/path/to/docs"],
["api-reference", "/path/to/api"],
]);
This explains the "Add context" feature in the Cowork UI - it's mounting directories as read-only knowledge bases that Claude can reference.
File System Watching
The app watches mounted directories for changes:
class FileSystemWatcher extends EventEmitter {
  startWatching(sessionId, dirPath) {
    const watcher = fs.watch(dirPath, { recursive: false }, (event, filename) => {
      if (!filename || filename.startsWith(".")) return; // Skip hidden files
      const fullPath = path.join(dirPath, filename);
      // A "rename" event for a path that now exists means a file was created
      if (event === "rename" && fs.existsSync(fullPath)) {
        this.emit("fsEvent", {
          type: "fs_file_created",
          sessionId,
          hostPath: fullPath,
          fileName: filename,
        });
      }
    });
  }
}
This could enable features like "watch this folder and process new files as they appear" - though I haven't confirmed if that's exposed in the UI yet.
Two IPC Surfaces
I initially documented LocalAgentModeSessions for session control, but there's a second IPC interface: ClaudeVM for direct VM management:
const ClaudeVM = {
// Bundle management
download(), // Manually trigger download
getDownloadStatus(), // { status: "idle"|"downloading"|"ready", progress }
deleteAndReinstall(), // Nuke and re-download
// VM control
startVM(options), // Start with memory config
getRunningStatus(), // { running: boolean, instanceId }
// Events
onDownloadProgress(cb), // Progress updates during download
onStartupError(cb), // VM boot failures
// Mystery config
setYukonSilverConfig(cfg), // Undocumented internal config
};
The setYukonSilverConfig method references "YukonSilver" - one of several animal-themed internal codenames I found throughout the codebase.
The Anthropic Zoo: Internal Codenames
Digging through the code revealed a menagerie of internal feature codenames. Here's what I could decode:
yukonSilver - The VM/Cowork platform gate. The check function makes its purpose crystal clear:
function wj() {
return process.platform !== "darwin"
? { status: "unsupported", reason: "Darwin only" }
: process.arch !== "arm64"
? { status: "unsupported", reason: "arm64 only" }
: gj().major < 14
? { status: "unsupported", reason: "minimum macOS version not met" }
: { status: "supported" };
}
Requirements: macOS (darwin) + Apple Silicon (arm64) + macOS 14+ (Sonoma). There's also an enterprise override (secureVmFeaturesEnabled) that can disable it.
yukonSilverGems - Despite the name similarity to Google Gemini's "Gems" feature (custom AI assistants), this is just a dependent flag that checks if yukonSilver is supported. No Gemini integration here.
chillingSloth - Git worktrees! The chillingSlothLocation setting controls where worktrees are stored:
getWorktreeBasePath() {
const setting = getSetting("chillingSlothLocation");
if (typeof setting == "object" && "customPath" in setting)
return setting.customPath;
return path.join(app.getPath("home"), ".claude-worktrees");
}
Has enterprise/feature/local variants (chillingSlothEnterprise, chillingSlothFeat, chillingSlothLocal).
sparkleHedgehog - A UI prototype with appearance and scale settings. The dev menu describes it as "Enables sparkleHedgehog prototype" - possibly an experimental visual element or animation.
plushRaccoon - Has three configurable keyboard shortcuts (plushRaccoonOption1/2/3) with accelerator bindings. Development-only feature (returns "unavailable" in production builds).
quietPenguin / louderPenguin - macOS-only features, development builds only. The naming suggests they might relate to notification sounds or audio feedback, but that's pure speculation.
midnightOwl - Another prototype feature. Found in the Swift addon integration: Ht.midnightOwl.setEnabled(false). Unknown purpose.
desktopTopBar - Currently hardcoded to disabled: { status: "unsupported", reason: "feature_flag_disabled" }. A top bar UI element waiting in the wings.
dxt - Browser extensions system. The dev menu says "Allows loading browser extensions in the app" with a separate isDxtDirectoryEnabled for "the extensions directory feature."
The pattern is clear: Anthropic uses whimsical animal codenames for internal features, with status checks returning supported, unsupported (with reason), or unavailable. Some are gated by platform, some by build type (dev vs production), some by enterprise policy.
Controlling Cowork: Settings and Configuration
If you want to disable Cowork or reclaim the disk space from the VM bundle, here's what we found:
Disabling Cowork Entirely
Edit ~/.config/Claude/claude_desktop_config.json (Linux) or ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
"preferences": {
"secureVmFeaturesEnabled": false
}
}
Restart the app for changes to take effect. Enterprise admins can also enforce this via plist (macOS) or registry (Windows).
Preventing Warm Bundle Download
The autoDownloadInBackground setting controls whether the VM bundle pre-downloads. Good news: the default is false, so it won't download until you actually use Cowork.
If you've used Cowork and want to check/change this, open DevTools (Developer menu) and run:
// Check current config
await window.claude.ClaudeVM.getYukonSilverConfig()
// Disable background downloading
await window.claude.ClaudeVM.setYukonSilverConfig({
autoDownloadInBackground: false,
autoStartOnUserIntent: true,
memoryGB: 4
})
Reclaiming Disk Space
Already have a ~2GB VM bundle you want to delete?
Developer > Troubleshooting > Delete VM Bundle and Restart
This removes the downloaded VM files and restarts the app.
Debug Logging
For troubleshooting Cowork issues:
| Variable | Purpose |
|---|---|
| COWORK_VM_DEBUG=1 | Verbose VM debugging output |
| CLAUDE_ENABLE_LOGGING=1 | Comprehensive logging, including the VM subsystem |
COWORK_VM_DEBUG=1 claude-desktop 2>&1 | tee cowork-debug.log
Config File Locations
| Platform | Config File |
|---|---|
| Linux | ~/.config/Claude/claude_desktop_config.json |
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
Note: There's no regular UI toggle for these settings - they're only accessible via the Developer menu or manual config editing.
Paths Forward for Linux
Based on this research, here are the viable approaches for Linux support, in order of increasing complexity:
Option 1: Stub the Swift Addon
The simplest approach: stub out @ant/claude-swift and spawn Claude Code CLI directly on the host. @chukfinley's PR takes this approach.
The PR uses node-pty instead of child_process.spawn() to avoid known Electron issues with stdout events:
const pty = require("node-pty");
const proc = pty.spawn(actualCommand, spawnArgs, {
name: "xterm-256color",
cols: 120,
rows: 40,
cwd: workDir,
env: spawnEnv,
});
proc.onData((data) => {
if (this.stdoutCallback) this.stdoutCallback(sessionId, data);
});
proc.onExit(({ exitCode, signal }) => {
if (this.exitCallback) this.exitCallback(sessionId, exitCode, signal);
});
The stub also handles:
- VM path translation (converting /sessions/<name>/mnt/<folder> to real paths)
- Filtering SDK-type MCP servers that require VM communication
- Terminal resize events
- OAuth token stubs (passthrough since there's no MITM proxy)
Trade-offs:
- ✅ Simplest to implement
- ✅ No VM overhead
- ❌ No isolation - Claude runs directly on your host
- ❌ No sandboxing - can access any file your user can
Option 2: Add Sandboxing with Bubblewrap
Running Claude CLI directly on the host works functionally, but it bypasses all the security isolation that makes Cowork safe. If we're going to ship this, we should at least add some containment.
Bubblewrap is a lightweight sandboxing tool used by Flatpak. It uses Linux namespaces (the same tech Docker uses) to isolate processes:
const bwrapArgs = [
"--unshare-all", // Unshare all namespaces
"--share-net", // Keep network access
"--die-with-parent", // Kill sandbox if parent dies
// Read-only root filesystem
"--ro-bind", "/usr", "/usr",
"--ro-bind", "/lib", "/lib",
"--ro-bind", "/bin", "/bin",
// Writable temp space
"--tmpfs", "/tmp",
"--proc", "/proc",
"--dev", "/dev",
];
// Mount user-selected folders
for (const [name, hostPath] of Object.entries(mounts)) {
bwrapArgs.push("--bind", hostPath, `/sessions/${sessionId}/mnt/${name}`);
}
bwrapArgs.push("--", "claude", ...args);
spawn("bwrap", bwrapArgs, { stdio: ["pipe", "pipe", "pipe"] });
This gives us namespace isolation without the overhead of a full VM. Claude can only see the folders you've explicitly mounted. It's not as bulletproof as a hypervisor boundary, but it's way better than nothing.
Option 3: Full VM with Firecracker
For true parity with macOS, we'd need actual VM isolation. Firecracker is a lightweight VMM built by AWS for Lambda and Fargate. It boots in ~125ms with less than 5MB memory overhead per microVM.
This would require:
- Bundling or downloading a Linux kernel and rootfs
- Managing VM lifecycle (boot, mount, spawn, shutdown)
- Setting up VirtioFS or 9p for folder sharing
- Probably a native addon to interface with KVM
It's the most work, but would give us the same security model as macOS. Might be overkill for an unofficial port, but it's the "right" way to do it.
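For a sense of what the lifecycle work involves: Firecracker can boot from a single JSON config file. A sketch, assuming we reuse Anthropic's kernel and rootfs (all file paths illustrative):

{
  "boot-source": {
    "kernel_image_path": "./vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [{
    "drive_id": "rootfs",
    "path_on_host": "./rootfs.img",
    "is_root_device": true,
    "is_read_only": false
  }],
  "machine-config": { "vcpu_count": 2, "mem_size_mib": 4096 },
  "vsock": { "guest_cid": 3, "uds_path": "./fc-vsock.sock" }
}

Launched with firecracker --api-sock ./fc.sock --config-file vm-config.json. One caveat: Firecracker deliberately omits a shared-filesystem device, so folder mounting would need a different transport than VirtioFS - which might tip the balance toward QEMU/KVM for this use case.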
Plot Twist: x64 VM Images Exist
@chukfinley discovered that Anthropic hosts VM images for both architectures:
ARM64: https://downloads.claude.ai/vms/linux/arm64/{commit_hash}/rootfs.img.zst
x64: https://downloads.claude.ai/vms/linux/x64/{commit_hash}/rootfs.img.zst
Wait, x64? That means a proper VM-based implementation on Linux x86_64 is actually viable - we wouldn't need to build our own rootfs.
I downloaded and mounted the x64 image (2.3GB compressed, 10GB decompressed) to see what's inside.
What's in the Box
Base system: Ubuntu 22.04.5 LTS with a 10GB ext4 partition.
The SDK Daemon: A Go binary (/usr/local/bin/sdk-daemon) that bridges the VM and host using vsock (virtio socket). This is the RPC layer that handles spawn, kill, mount, stdin/stdout forwarding. It uses goproxy internally - that's the MITM proxy for OAuth token validation we saw in the code.
[Unit]
Description=Claude SDK Daemon - vsock RPC bridge for process management
After=network.target
ExecStart=/usr/local/bin/sdk-daemon
The Sandbox Runtime: Here's the best part - Anthropic's sandboxing tool is open source:
npm install -g @anthropic-ai/sandbox-runtime
@anthropic-ai/sandbox-runtime (v0.0.28) provides the srt CLI that wraps processes with bubblewrap (Linux) or sandbox-exec (macOS). It includes pre-compiled seccomp filters for x64 and arm64.
Pre-installed tools: Everything Claude needs to work:
| Category | Tools |
|---|---|
| Sandbox | bubblewrap, socat, seccomp filters |
| Search | ripgrep |
| Runtime | Node.js 18+, Python 3.10 |
| Documents | camelot (PDF tables), OpenCV, lxml, python-docx |
What This Means
The barrier to a proper Linux VM implementation just got much lower:
- The rootfs exists - We don't need to build it
- vsock is the protocol - QEMU/KVM and Firecracker both support it
- The sandbox tooling is open source - We can use srt directly for bubblewrap sandboxing even without a VM
- Claude Code isn't pre-installed - It's installed via installSdk() at runtime, which we'd need to replicate
For the stub approach, we could potentially use srt to add sandboxing without the full VM overhead. For the full VM approach, we'd need to implement a vsock-based launcher that speaks the same RPC protocol as the SDK daemon.
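One encouraging detail on that front: Firecracker exposes host-initiated vsock connections as a plain Unix socket, so a Node prototype needs no native addon. A sketch - the guest port and the daemon's actual framing are unknowns we'd have to reverse engineer:

const net = require("node:net");

// Firecracker's host-side vsock multiplexer: connect to the Unix socket,
// send "CONNECT <port>\n", and wait for "OK ...\n" before talking RPC.
function connectToGuestPort(udsPath, guestPort) {
  return new Promise((resolve, reject) => {
    const sock = net.createConnection(udsPath, () => {
      sock.write(`CONNECT ${guestPort}\n`);
    });
    sock.once("data", (chunk) => {
      chunk.toString().startsWith("OK")
        ? resolve(sock) // handshake accepted - sdk-daemon protocol goes here
        : reject(new Error("vsock CONNECT refused"));
    });
    sock.once("error", reject);
  });
}

// Hypothetical usage - port 1024 is a placeholder, not the daemon's real port
// const sock = await connectToGuestPort("./fc-vsock.sock", 1024);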
What's Next
@chukfinley's PR is the first attempt at Linux Cowork support - it takes the stub approach (Option 1). Whether we iterate on that or pursue a more complete solution remains to be seen.
Claude documented all its findings with refactored symbol names and explanations. The minified variable names like ff, tXe, and Og are mapped to meaningful names like swiftModuleCache, VMProcess, and translateVMPathToHost. Having an AI that can chew through 170,000 lines of minified JavaScript and produce coherent documentation is... useful.
A note on symbols: The minified variable names referenced throughout this post are from version 1.1.799. These symbols change with each release as the code is re-minified, so if you're exploring a different version, you'll need to re-trace the patterns rather than searching for the exact symbol names.
If you want to help, the repo is at github.com/aaddrick/claude-desktop-debian. My third kid is now 3 months old, so community help is very welcome.
References
- Simon Willison: First impressions of Claude Cowork
- Claude Cowork Architecture Deep Dive
- Getting Started with Cowork - Claude Help Center
- Apple Virtualization Framework
- Electron child_process stdout issue - why the stub uses node-pty
- node-pty - PTY library used by the stub for reliable I/O in Electron
- Bubblewrap sandboxing
- Firecracker microVMs
- @anthropic-ai/sandbox-runtime - Anthropic's open-source sandboxing tool
- Full Architecture Reference - Complete IPC API, telemetry events, VM image analysis, and symbol mappings
Written by aaddrick with research assistance from Claude Opus 4.5 via Claude Code