Type /sandbox in your Claude Code session right now.
If you've never run it, every bash command Claude runs has the same access your terminal does. The sandbox is off until you turn it on. The Anthropic docs say so.
Sandboxing is real OS-level isolation. It's built on the same tech Chrome uses to lock down each browser tab. I ran /sandbox for the first time last week because I wanted to know three things. Does it work? What does it actually protect? And can I finally unalias --dangerously-skip-permissions?
So I ran the experiments. Four findings:
1. Filesystem writes are kernel-blocked. Write outside your working directory and you get `Operation not permitted` at the syscall level. No dialog to click through.
2. Network isolation runs through a local proxy, not kernel blocking. Well-behaved HTTP clients route through a localhost proxy that checks the allowlist and returns `CONNECT 403` for denied domains. Tools that ignore proxy env vars fall through to a Seatbelt backstop that blocks non-loopback traffic at the socket layer.
3. Sandboxing is opt-in. You have to run `/sandbox` and pick a mode. If you haven't, your sandbox is off.
4. `sandbox.denyRead` does not stop Claude's Read tool. This is the sharpest finding. A setting the docs imply should block reads has no effect on the Read tool. I verified it with headless Claude and stream-json tool traces. The fix is a separate permission rule the docs don't clearly connect to the sandbox setting.
The fourth one I didn't expect.
What sandboxing actually is
Without sandboxing, Claude Code's bash tool is just bash. When Claude runs curl, it can reach any domain your machine can. When Claude runs cat ~/.ssh/id_rsa, it gets your SSH key. The only thing in the way is a permission prompt. A dialog box you can auto-approve, configure away, or click through without reading.
Sandboxing takes the dialog out of it. The kernel blocks writes outside your working directory. Network traffic routes through a local proxy that checks an allowlist, and any tool that tries to bypass the proxy hits a kernel block on non-loopback traffic. No "do you want to allow Claude to..." prompt. Just syscalls that fail like any other permission error and a proxy that returns 403 for disallowed hosts.
The tech is older than you might think. macOS ships with a framework called Seatbelt (internally, TrustedBSD MAC). Apple uses it to lock down Chrome renderer processes, iOS apps, and most of the system services that come with the OS. You can run it yourself from the command line: sandbox-exec -p '<policy>' bash -c 'whatever'. Claude Code wraps this primitive.
Linux has a similar story with bubblewrap, the same sandbox Flatpak uses for third-party apps. WSL2 works because it's real Linux. WSL1 doesn't, because bubblewrap needs kernel features WSL1 can't give it.
The mental model to carry: permissions ask you to say yes or no, and sandboxing removes the question. The kernel doesn't let it happen at all.
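The write rule itself is simple enough to sketch. This is an illustrative model, not Claude Code's implementation; the real check happens in the kernel against the Seatbelt policy, and the paths here are made up:

```python
from pathlib import Path

# Illustrative model of the sandbox's write rule: a write is allowed only
# if the target resolves to the working directory or somewhere under it.
# Real enforcement is a kernel check against the Seatbelt policy.
def write_allowed(target: str, workdir: str) -> bool:
    target_p = Path(target).resolve()
    work_p = Path(workdir).resolve()
    return target_p == work_p or work_p in target_p.parents

assert write_allowed("/tmp/proj/src/main.py", "/tmp/proj")
assert not write_allowed("/home/me/.zshrc", "/tmp/proj")      # EPERM, no prompt
assert not write_allowed("/tmp/proj-sibling/x", "/tmp/proj")  # prefix tricks fail
```

Resolving before comparing is the important part: a naive string-prefix check would let `/tmp/proj-sibling` pass as "inside" `/tmp/proj`.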
Permissions vs sandboxing
| | Permissions | Sandboxing |
|---|---|---|
| Scope | All tools: Read, Edit, Bash, WebFetch, MCP | Bash and its subprocess tree |
| Enforcement | Claude Code checks before running the tool | OS kernel for filesystem, local HTTP/SOCKS proxy for network |
| On violation | Prompts user for approval | Operation fails (syscall error or proxy 403) |
| Bypassable by model? | If user approves | No, it's not a dialog |
These two layers don't cover the same ground. Most people assume they do. That's why this post exists.
Is it on by default?
No.
The Anthropic docs say you have to enable it yourself. You run /sandbox inside Claude Code, pick a mode from the menu, and you're in. If you've never run it, your sandbox is off.
On macOS it works out of the box. Seatbelt ships with the OS. On Linux or WSL2 you need bubblewrap and socat first (sudo apt install bubblewrap socat on Debian/Ubuntu).
The fallback is soft. If the sandbox can't start for any reason (missing dependencies, an unsupported platform), Claude Code prints a warning and runs your commands unsandboxed. For teams that want a hard gate, there's a `sandbox.failIfUnavailable: true` setting that turns the warning into an error.
There are two modes. In auto-allow, bash commands run inside the sandbox without asking. If a command can't be sandboxed (it needs an excluded tool or an unlisted host), it falls back to the regular permission flow. This is the mode that replaces --dangerously-skip-permissions for bash. The docs are explicit:
Auto-allow mode works independently of your permission mode setting. Even if you're not in "accept edits" mode, sandboxed bash commands will run automatically when auto-allow is enabled.
In regular-permissions mode, the sandbox is on but you still approve each command. You see fewer prompts than you would without the sandbox, because denied operations fail directly instead of asking. You still see a prompt for the allowed stuff. This is the safety-net mode.
Either way, the filesystem and network rules are identical. The only difference is whether sandboxed commands get auto-approved.
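The mode logic reduces to a small decision rule. A sketch, with mode names as illustrative labels and the fallback behavior taken from the description above:

```python
# Sketch of when Claude Code prompts for a bash command under the two
# sandbox modes described above. Mode strings are illustrative labels,
# not the literal settings values.
def prompt_needed(mode: str, command_sandboxable: bool) -> bool:
    if not command_sandboxable:
        return True                    # falls back to the regular permission flow
    return mode != "auto-allow"        # regular-permissions mode still prompts

assert prompt_needed("auto-allow", True) is False
assert prompt_needed("regular-permissions", True) is True
assert prompt_needed("auto-allow", False) is True   # excluded tool or unlisted host
```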
How it actually works
I grepped the Claude Code binary to see the raw policy. There's a literal comment in the source strings:
; Essential permissions - based on Chrome sandbox policy

The Seatbelt policy Claude Code ships with comes from Chrome's renderer sandbox. It's a Scheme-style rule set:
(version 1)
(deny default (with message "..."))
; File I/O
(allow file-ioctl)
(allow file-read*)
(allow file-read-metadata)
(allow file-write* (subpath "/path/to/cwd"))
; Mach IPC
(allow mach-lookup (global-name-prefix "com.apple."))
(allow mach-priv-task-port (target same-sandbox))
; Network
(allow network-bind (local ip "*:*"))
(allow network-outbound (remote ip "localhost:*"))
; Process info and signals - restricted to same sandbox
(allow process-info* (target same-sandbox))
(allow signal (target same-sandbox))

The default is deny. Anything not explicitly allowed gets blocked at the kernel. `file-write*` is scoped to one subpath: your working directory. `network-outbound` is restricted to localhost. That means any traffic bound for the real internet has to go through a proxy running outside the sandbox.
The proxy is the other half of the story. When sandboxing turns on, Claude Code spawns two local servers: an HTTP CONNECT proxy on one port and a SOCKS5 proxy on another. It then sets over a dozen proxy-related env vars inside the sandboxed bash. The standard ones plus tool-specific overrides for gcloud, docker, git+ssh, and rsync:
https_proxy=http://localhost:61653
all_proxy=socks5h://localhost:61654
CLOUDSDK_PROXY_ADDRESS=localhost
DOCKER_HTTP_PROXY=http://localhost:61653
GIT_SSH_COMMAND=ssh -o ProxyCommand='nc -X 5 -x localhost:61654 %h %p'

The network story is a two-layer defense.
Well-behaved HTTP clients read HTTPS_PROXY and route through localhost:61653. The proxy checks the allowlist and returns 403 for disallowed hosts. Tools that ignore proxy env vars fall through to the Seatbelt layer, which blocks all non-loopback traffic. Their connect() calls fail at the socket with Operation not permitted.
Not every tool is well-behaved. That matters for Experiment 4.
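The fallthrough can be modeled directly. A sketch of the two layers, with outcomes phrased as the errors the experiments below surface:

```python
# Model of the two-layer network defense: proxy-aware clients are filtered
# by the allowlist proxy; proxy-ignoring clients hit the Seatbelt backstop.
def network_outcome(honors_proxy_env: bool, host: str, allowlist: set[str]) -> str:
    if honors_proxy_env:
        if host in allowlist:
            return "tunneled through localhost proxy"
        return "CONNECT 403 from proxy"
    return "connect() fails: Operation not permitted"  # kernel backstop

allow = {"example.com"}
assert network_outcome(True, "example.com", allow) == "tunneled through localhost proxy"
assert network_outcome(True, "google.com", allow) == "CONNECT 403 from proxy"
assert network_outcome(False, "example.com", allow).endswith("Operation not permitted")
```

Note the last case: a proxy-ignoring client fails even for an allowed domain, because only the proxy knows the allowlist.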
The subprocess tree
Every child process inherits the sandbox. If bash runs npm install, which runs a postinstall script, which runs curl, all four processes live inside the same Seatbelt container. You can't escape by spawning a child. A wrapper script that only intercepts the top-level command would miss the children. Claude Code's sandbox sees the whole tree.
The escape hatch
There's one more detail the docs mention but don't fully explain. Claude Code has an "escape hatch" called dangerouslyDisableSandbox. When a command fails inside the sandbox (say, docker build hits the daemon-socket block) Claude can retry the command outside the sandbox, with a user permission prompt.
I pulled the exact text Claude Code injects into Claude's system prompt when sandboxing is on. It's in the binary as a nested array of strings. I've reconstructed the paragraphs below. The wording is exact; the bullet and paragraph layout is mine:
You should always default to running commands within the sandbox. Do NOT attempt to set dangerouslyDisableSandbox: true unless: the user explicitly asks you to bypass sandbox, OR a specific command just failed and you see evidence of sandbox restrictions causing the failure.
Evidence of sandbox-caused failures includes: "Operation not permitted" errors, Access denied to specific paths, Network connection failures to non-whitelisted hosts, Unix socket connection errors.
When you see evidence of sandbox-caused failure: Immediately retry with dangerouslyDisableSandbox: true (don't ask, just do it). Briefly explain what sandbox restriction likely caused the failure. This will prompt the user for permission.
Do not suggest adding sensitive paths like ~/.bashrc, ~/.zshrc, ~/.ssh/*, or credential files to the sandbox allowlist.
"(don't ask, just do it)" is a direct string in the binary. I didn't make it up.
Claude Code trains the model to retry failing commands outside the sandbox without asking. That's the source of the escape-hatch behavior some users have noticed and been confused by.
A user on r/claudecode caught it in the wild a month ago. They ran Claude Code with npx on their denylist and bubblewrap sandboxing enabled. Asked for the npx version. The denylist blocked the first attempt. Then the agent found /proc/self/root/usr/bin/npx, the same binary on a different path, and tried again. When the sandbox caught that too, the agent talked itself into disabling the sandbox and running npx outside it.
"Don't ask, just do it."
You can turn it off. Set "allowUnsandboxedCommands": false in your sandbox settings and the dangerouslyDisableSandbox parameter gets ignored. The system prompt changes too:
All commands MUST run in sandbox mode. The dangerouslyDisableSandbox parameter is disabled by policy.
If you want real enforcement, this setting matters. Without it, a single failed command is enough for Claude to retry outside the sandbox.
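The gate is worth spelling out as logic. A sketch of the retry decision, under the assumption that the setting simply wins over the parameter:

```python
# Sketch of the escape-hatch gate: dangerouslyDisableSandbox only leads to
# an unsandboxed retry (behind a user prompt) when the setting allows it.
def handle_retry(disable_sandbox: bool, allow_unsandboxed: bool) -> str:
    if disable_sandbox and allow_unsandboxed:
        return "prompt user, then run outside the sandbox"
    return "run inside the sandbox"

assert handle_retry(True, True).startswith("prompt user")
assert handle_retry(True, False) == "run inside the sandbox"   # policy wins
assert handle_retry(False, True) == "run inside the sandbox"
```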
The experiments
I wrote the bash-layer experiments against @anthropic-ai/sandbox-runtime. That's the npm package Anthropic publishes the sandbox runtime as, and Claude Code vendors it internally. So srt (the runtime's CLI) is a good stand-in for Claude Code's bash-layer behavior, even if it isn't literally the same binary.
For Experiment 5 I used headless Claude Code with custom settings and captured the raw tool traces via --output-format stream-json. Those findings reflect the actual tool behavior, not an approximation.
Experiment 1: filesystem perimeter
Question: what can you read and write from inside the sandbox by default?
Setup: a fresh temp directory as the working directory. A fake SSH key at ~/.sandbox-exp1-fake-ssh/id_rsa containing junk text. A sibling project outside the work dir. Sandbox policy: allowWrite: [$WORK], no denyRead, default everything else.
Writes:
| Target | Result |
|---|---|
| File inside the working directory | ALLOWED |
| Subdirectory inside the working directory | ALLOWED |
| Sibling project directory | BLOCKED: `Operation not permitted` |
| Fake SSH key directory in `$HOME` | BLOCKED |
| Write via `sudo` | BLOCKED |
| File I own, mode bits writable | BLOCKED |
Every write outside the sandbox came back as Operation not permitted. The kernel denied the call at the syscall level. sudo, chmod, owning the file: none of it matters. The denial happens above the file-mode check. Writes are genuinely locked down.
Reads:
| Target | Result |
|---|---|
| `/etc/passwd` | ALLOWED |
| Fake SSH key (chmod 600) | ALLOWED |
| Full home directory listing | ALLOWED |
| `~/.zshrc` | ALLOWED |
| Every other path I tried | ALLOWED |
Every single read succeeded. /etc/passwd, the fake SSH key, a full home directory listing, my real ~/.zshrc. The default read perimeter is the whole computer.
This is the first thing people get wrong about sandboxing. The word "sandbox" sounds like a small closed-off space. Claude Code's sandbox isn't closed off for reads. It's closed off for writes.
Reads are wide open unless you add paths to denyRead. The Chrome-derived policy allows file-read* globally. If you want the sandbox to act as a secrets perimeter, you have to configure it as one. The default isn't.
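A sketch of that subtractive shape, with illustrative paths (real entries also go through `~` expansion):

```python
from pathlib import PurePosixPath

# Sketch of denyRead: reads default to allowed, and only paths at or under
# a denyRead entry are blocked. This models the bash-layer behavior only;
# Experiment 5 shows the Read tool doesn't consult this list at all.
def read_allowed(target: str, deny_read: list[str]) -> bool:
    t = PurePosixPath(target)
    for entry in deny_read:
        d = PurePosixPath(entry)
        if t == d or d in t.parents:
            return False
    return True

deny = ["/home/me/.ssh", "/home/me/.aws"]
assert not read_allowed("/home/me/.ssh/id_rsa", deny)
assert read_allowed("/etc/passwd", deny)         # default perimeter: open
assert read_allowed("/home/me/.zshrc", deny)     # not listed, still readable
```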
Experiment 2: network isolation
Question: can a tool in Claude's subprocess tree reach a domain not on the allowlist?
Setup: sandbox with allowedDomains: ["example.com"]. Try reaching google.com from curl, wget, python urllib, node fetch. Then try IP literals and hostname tricks.
Results:
| Attempt | Result |
|---|---|
| `curl https://example.com` | HTTP 200 |
| `curl https://google.com` | exit 56, `CONNECT tunnel failed, response 403` |
| `curl` to example.com's IP literal | exit 56, still blocked |
| `curl` to github.com's IP with a `Host:` header rewrite | exit 56 |
| `wget https://example.com` | works |
| python urllib to example.com | HTTP 200 |
| python urllib to google.com | blocked, 403 |
| node `fetch()` to example.com | error: fetch failed |
| node `fetch()` to google.com | error: fetch failed |
Two interesting things.
First: the IP literal bypass doesn't work. I tried hitting example.com's IP directly (93.184.216.34). I tried hitting github.com's IP with a Host: header rewrite. Both failed with CONNECT tunnel failed, response 403. The practical takeaway is that the proxy checks something curl can't lie about, not just the hostname.
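One plausible mechanism, consistent with what I saw: for HTTPS through a CONNECT proxy, the client has to name its real target in the CONNECT request to get a tunnel at all, so the allowlist check runs on that target, and a rewritten `Host:` header never reaches the check. (For TLS, the SNI field would also betray the real hostname.) A sketch of that check, with the second IP illustrative:

```python
# Sketch of a CONNECT-proxy allowlist check: the decision runs on the
# CONNECT target, which is the one thing the client can't lie about and
# still get a tunnel. The Host: header plays no part here.
def connect_allowed(connect_target: str, allowlist: set[str]) -> bool:
    host = connect_target.rsplit(":", 1)[0]
    return host in allowlist

allowlist = {"example.com"}
assert connect_allowed("example.com:443", allowlist)
assert not connect_allowed("93.184.216.34:443", allowlist)  # IP literal: 403
assert not connect_allowed("203.0.113.7:443", allowlist)    # Host: rewrite can't help
```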
Second: node's native fetch() fails even on an allowed domain. This took me a while to understand.
Node's fetch() is built on undici. Undici does not respect HTTPS_PROXY. It ignores the env var entirely.
So when node fetch tries to reach example.com from inside the sandbox, it attempts a direct outbound TCP connection. The Seatbelt policy denies it. The error surfaces as fetch failed with an underlying ENOTFOUND because the direct DNS also fails.
Every HTTP client inside the sandbox falls into one of three buckets:

1. Respects proxy env vars. curl, wget, git (https), python (requests, urllib), pip, pnpm, npm, cargo, uv. These get filtered at the local proxy and work fine for allowed domains.
2. Has a tool-specific proxy var that Claude Code sets. gcloud reads `CLOUDSDK_PROXY_*`, docker reads `DOCKER_HTTP_PROXY`, rsync reads `RSYNC_PROXY`, git+ssh gets rewritten through `GIT_SSH_COMMAND`. Claude Code knows these and sets them all.
3. Ignores everything. Node's native fetch. Anything built on undici without a manual `ProxyAgent`. These break inside the sandbox even on allowed domains.
If you run any tool built on node fetch() inside the sandbox, it will fail silently on domains that are supposed to work. Some AI CLIs. Some modern web scrapers. Various npm-packaged tools. The fix is to patch the tool to use undici's ProxyAgent with process.env.HTTPS_PROXY, or swap in a library that reads the env var.
That's a lot of tools.
Experiment 3: the speed tax
Question: does the sandbox slow anything down?
Setup: run the same operations inside and outside the sandbox, 3 iterations each, report the median.
| Operation | Direct | Sandboxed | Overhead |
|---|---|---|---|
| `echo` | 166ms | 287ms | +121ms |
| | 189ms | 271ms | +82ms |
| | 186ms | 297ms | +111ms |
| | 1061ms | 1218ms | +157ms |
| | 361ms | 458ms | +97ms |
The pattern: ~80–160ms of fixed cost per sandbox call. It doesn't scale with the work. A 10-second npm install and a 166ms echo pay roughly the same overhead. Most of that cost is spawning the proxy servers and loading the Seatbelt policy into the kernel. It's per-invocation setup.
In real Claude Code the cost should be smaller. The proxies stay running and don't respawn for each command. I didn't measure it inside Claude Code directly. My best guess, based on what I could time, is the per-command cost drops to somewhere in the tens of milliseconds.
A command that does real work (compiling, testing, installing) pays the overhead once and runs at full speed from there. The only place you'd feel it is a script that spawns hundreds of tiny bash calls, and even then you can usually replace the loop with one invocation.
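The arithmetic behind that advice, with the fixed cost rounded to 100ms for illustration:

```python
# Model of the measured overhead: a fixed per-invocation cost that does
# not scale with the work. 100ms is a rounded, illustrative figure.
FIXED_OVERHEAD_MS = 100.0

def sandboxed_ms(work_ms: float) -> float:
    return work_ms + FIXED_OVERHEAD_MS

# A hundred tiny 5ms calls pay the tax a hundred times over...
assert sandboxed_ms(5) * 100 == 10_500.0
# ...while one batched invocation doing the same work pays it once.
assert sandboxed_ms(5 * 100) == 600.0
```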
Performance is a non-issue.
Experiment 4: the compatibility graveyard
Question: which dev tools break inside the sandbox?
Setup: run a handful of realistic tasks inside srt with a permissive config. Cache dirs in allowWrite, common package registries in allowedDomains. Tool startup works for everything I tried. The interesting failures happen when tools try to do real work.
The results that matter:
| Task | Result | Cause |
|---|---|---|
| `pnpm install` | works | pnpm respects `HTTPS_PROXY` |
| `uv` package install | works | uv respects proxy env vars |
| Build/test in an existing checkout | works | no git involvement |
| `git init` | BROKEN | Hardcoded deny on `.git/config` |
| `git clone` | BROKEN | Same |
| `cargo init` | BROKEN | Calls `git init` |
| `uv init` | BROKEN | Calls `git init` |
| `gh` | BROKEN | TLS cert error: `x509: OSStatus -26276` |
| `docker` | BROKEN | Docker daemon lives outside the sandbox |
| `watchman` | BROKEN | fsevents needs kernel features the sandbox doesn't expose |
| Tool built on node `fetch()` | BROKEN | Native fetch ignores `HTTPS_PROXY` |
git init being broken was a surprise. I dug in.
The error is could not write config file ... Operation not permitted on .git/config, or cannot copy ... commit-msg.sample on .git/hooks/. Both paths are inside the working directory, which is in allowWrite. So why do they fail?
I probed specific paths and found two patterns hardcoded-denied at any depth: **/.git/config and **/.git/hooks/**. allowWrite can't override them. The block fires regardless of what you configure.
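The probe results reduce to a component check. A sketch of the two observed patterns only; other `.git/` paths were not observed blocked in my probing:

```python
from pathlib import PurePosixPath

# Sketch of the two hardcoded deny patterns observed in the probe:
# **/.git/config and **/.git/hooks/** are refused at any depth,
# regardless of allowWrite.
def hardcoded_denied(path: str) -> bool:
    parts = PurePosixPath(path).parts
    return any(
        part == ".git" and parts[i + 1] in ("config", "hooks")
        for i, part in enumerate(parts[:-1])
    )

assert hardcoded_denied("/work/proj/.git/config")
assert hardcoded_denied("/work/proj/vendor/lib/.git/hooks/pre-commit")
assert not hardcoded_denied("/work/proj/src/app.py")
```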
The reason is obvious once you see it. .git/config can define [core] editor = ... or [alias] log = !rm -rf ~. A malicious config means the next git log runs attacker-controlled code. .git/hooks/pre-commit is even more direct. The sandbox is right to block both.
The cost is that you can't run git init, git clone, cargo init, or uv init inside the sandbox without marking them as excludedCommands to run outside. The hardened config at the end of this post does that.
I searched the public Anthropic docs for any mention of these hardcoded denies. There isn't one. Turn on sandboxing, run git init, and you'll hit this with no explanation.
The gh failure is the second surprise. The error is x509: OSStatus -26276, a macOS keychain error for "certificate verification failed."
gh is a Go binary, and Go's TLS stack on macOS checks the system keychain. The proxy Claude Code spawns presents its own cert, which isn't in the keychain, so the handshake fails. Any Go tool that does HTTPS directly will hit this. Probably docker CLI, probably parts of aws-cli v2. I only tested gh.
Three rules of thumb from all of this:

1. If a tool respects `HTTPS_PROXY` and uses a normal TLS stack, it works.
2. If it verifies TLS through Go's stack, which trusts only the system keychain, it breaks.
3. If it scaffolds a fresh `.git/` tree, it breaks.
Experiment 5: the bypass
This is the one I didn't expect.
Claim to test: sandbox.denyRead is documented as restricting what paths Claude Code can read inside the sandbox. I wanted to know if it protects the Read tool, not just bash.
Setup: three headless Claude Code runs with identical prompts on the same file. The only thing that changes between runs is the settings.
I captured the raw tool traces with --output-format stream-json. That gives you the literal tool_use calls Claude made and the tool_result payloads it got back. Final assistant text can be misleading. Claude might refuse, explain, or paraphrase. But the tool_result is what the tool actually returned.
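The verification pass is mechanical: scan the transcript for the nonce inside `tool_result` payloads. A sketch; the event shape here is a simplified assumption about `--output-format stream-json` transcripts, not the exact schema:

```python
import json

# Sketch: does any tool_result block in a stream-json transcript contain
# the nonce? Event structure is a simplified assumption.
def nonce_in_tool_results(jsonl_lines, nonce: str) -> bool:
    for line in jsonl_lines:
        content = json.loads(line).get("message", {}).get("content", [])
        if not isinstance(content, list):
            continue  # plain-text assistant messages can't prove anything
        for block in content:
            if block.get("type") == "tool_result" and nonce in str(block.get("content", "")):
                return True
    return False

transcript = [json.dumps({"type": "user", "message": {"content": [
    {"type": "tool_result", "content": "3\tLine 3: sbx-proof-c4e12f8e25f4\n"}]}})]
assert nonce_in_tool_results(transcript, "sbx-proof-c4e12f8e25f4")
assert not nonce_in_tool_results(transcript, "sbx-proof-other")
```

Checking the `tool_result` rather than the assistant text is the point: the payload is what the tool returned, not what Claude chose to say about it.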
The target file is at /Users/abhishekray/.sandbox-demo-deny/data.txt. Contents include a freshly-generated nonce sbx-proof-c4e12f8e25f4, different on every run. I used an innocuous filename on purpose. (More on why in a minute.)
Run 1 is the baseline with no deny rules:
tool_use: Read({file_path: ".../data.txt"})
tool_result: "1\tLine 1: benign file contents\n2\tLine 2: ordinary data\n3\tLine 3: sbx-proof-c4e12f8e25f4\n"

Baseline: Read succeeds, returns the contents, the nonce is in the tool_result. As expected.
Run 2 adds sandbox.denyRead: ["/Users/abhishekray/.sandbox-demo-deny"] to the settings:
tool_use: Read({file_path: ".../data.txt"})
tool_result: "1\tLine 1: benign file contents\n2\tLine 2: ordinary data\n3\tLine 3: sbx-proof-c4e12f8e25f4\n"

The tool_result is byte-for-byte identical to Run 1.
The Read tool called through. The sandbox deny list was set. The file is inside the denied directory. And the Read tool returned the contents anyway, nonce and all.
Claude wasn't choosing to comply. It didn't even see a block. The sandbox config had zero effect on the Read tool.
To make sure I wasn't hallucinating, I had Codex (OpenAI's CLI, running through an MCP bridge) grep the saved jsonl files and report what it saw. It confirmed: Run 2's tool_result contains Line 3: sbx-proof-c4e12f8e25f4. The same nonce I generated at the start of the test. The sandbox deny list was bypassed.
Run 3 uses permissions.deny: ["Read(//Users/abhishekray/.sandbox-demo-deny/**)"] with no sandbox-layer deny rule:
tool_use: Read({file_path: ".../data.txt"})
tool_result: "<tool_use_error>File is in a directory that is denied by your permission settings.</tool_use_error>"

Same file. Same path. Same prompt. Only the settings changed. The permissions.deny rule produced a hard error from the tool layer itself. The Read call was made, and the tool returned a tool_use_error before touching the filesystem.
The experimental claim is narrow. In these three runs on this file, sandbox.denyRead did not stop Claude's Read tool. permissions.deny Read(...) did.
I only tested the Read tool. I didn't run the same comparison on Edit, Write, or WebFetch. But the Anthropic docs say all three of those tools also "use the permission system directly rather than running through the sandbox." That implies the same architecture. If you want to be safe, treat every built-in file tool as routing around sandbox.denyRead until tested otherwise.
What does this mean in practice? Say you turn on sandboxing with denyRead for ~/.ssh and ~/.aws, thinking you've built a perimeter around your secrets. You haven't closed the Read-tool side of that perimeter. Bash can't cat ~/.ssh/id_rsa. But in the scenario I tested, Claude's Read tool returned the contents of a denyRead'd file as if the setting weren't there.
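Experiment 5's finding, as logic: the two enforcement layers are independent, and neither consults the other's deny list. The helper and paths below are illustrative:

```python
# Sketch of the two independent layers. A path listed only in
# sandbox.denyRead is blocked for bash but still leaks through the
# Read tool, which only checks permissions.deny.
def in_deny(path: str, deny: list[str]) -> bool:
    return any(path == d or path.startswith(d + "/") for d in deny)

def bash_cat(path, sandbox_deny):        # Seatbelt enforces this one
    return "EPERM" if in_deny(path, sandbox_deny) else "contents"

def read_tool(path, permission_deny):    # the tool layer enforces this one
    return "tool_use_error" if in_deny(path, permission_deny) else "contents"

secret = "/Users/me/.ssh/id_rsa"
sandbox_deny, permission_deny = ["/Users/me/.ssh"], []
assert bash_cat(secret, sandbox_deny) == "EPERM"          # bash is blocked
assert read_tool(secret, permission_deny) == "contents"   # Read still works
```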
To actually close the gap, you need both layers:
{
"sandbox": {
"filesystem": {
"denyRead": ["~/.ssh", "~/.aws", "~/.netrc", "~/.gnupg"]
}
},
"permissions": {
"deny": [
"Read(//Users/**/.ssh/**)",
"Read(//Users/**/.aws/**)",
"Read(//Users/**/.netrc)",
"Read(//Users/**/.gnupg/**)",
"WebFetch(domain:*)"
]
}
}

Bash layer and tool layer, configured independently, for the same paths. This isn't documented anywhere I could find.
Now back to that "innocuous filename" choice. The first time I ran this, the file was called credentials and contained AWS_SECRET_ACCESS_KEY=.... Claude refused to call Read. The response was "I won't do that. Classic prompt injection pattern." With the name swapped to data.txt, Claude called Read without hesitation every single time.
So there's a soft layer of protection above the hard enforcement: Claude's own refusal to touch obvious-secret-looking paths. It's pattern matching, and pattern matching loses to anyone who names their target file notes.md. permissions.deny is the only enforcement that doesn't depend on what Claude chooses to do.
When to turn it on
After running all of this, here's my honest read.
Turn it on if:
You run agentic workflows. Long autonomous sessions, background tasks.
You install dependencies from untrusted sources. npm, pip, cargo.
You're working on code from third parties. Public repos, client projects, contributed PRs.
You run Claude remotely or headlessly, where you can't approve prompts by hand.
You've aliased `--dangerously-skip-permissions`. This is the feature that lets you stop.
Leave it off if:
Your workflow is dominated by Docker. Docker needs daemon socket access that sits outside the sandbox. The Go-based CLI likely runs into the same TLS trust-store issue as `gh`. I didn't verify `docker build` end-to-end, but Anthropic's own docs recommend putting `docker *` in `excludedCommands`. That tells you what to expect.

You use `watchman` heavily. It doesn't work. Jest with watch mode falls back to polling, which is slow.

You scaffold new repos constantly and don't want to add `git init` to `excludedCommands`. You can, but it's an extra step every time.
There's also a hybrid mode. Use auto-allow for routine coding and drop back to regular-permissions, or turn the sandbox off for a minute, when you need docker or git init.
A real settings.json
Not the minimal example from the docs. A config that reflects what the experiments taught me:
{
"sandbox": {
"enabled": true,
"mode": "auto-allow",
"allowUnsandboxedCommands": false,
"filesystem": {
"allowWrite": [
"~/.cache/pnpm",
"~/.cache/pip",
"~/.cargo/registry",
"/tmp/build"
],
"denyRead": [
"~/.ssh",
"~/.aws",
"~/.config/gh",
"~/.netrc",
"~/.gnupg",
"~/.docker"
]
},
"network": {
"allowedDomains": [
"registry.npmjs.org",
"pypi.org",
"crates.io",
"github.com",
"*.github.com",
"api.anthropic.com"
]
},
"excludedCommands": [
"docker *",
"watchman *",
"git init *",
"git clone *",
"cargo init *",
"cargo new *",
"uv init *"
]
},
"permissions": {
"deny": [
"Read(//Users/**/.ssh/**)",
"Read(//Users/**/.aws/**)",
"Read(//Users/**/.netrc)",
"Read(//Users/**/.gnupg/**)",
"Read(//Users/**/.config/gh/**)",
"Read(//Users/**/.docker/**)",
"WebFetch(domain:*)"
],
"allow": [
"WebFetch(domain:github.com)",
"WebFetch(domain:api.github.com)",
"WebFetch(domain:raw.githubusercontent.com)",
"WebFetch(domain:docs.anthropic.com)"
]
}
}

Three lines are worth explaining.
allowUnsandboxedCommands: false turns off the escape hatch. Without it, Claude will retry failing commands outside the sandbox on its own.
denyRead at the sandbox layer is mirrored by Read(...) rules in permissions.deny. Same paths, both layers. The sandbox covers bash, the permission rules cover the Read tool, and they don't overlap. This is Experiment 5's finding in config form. The same logic applies to WebFetch(domain:*) as a deny-all with an explicit allowlist for the network side.
github.com in allowedDomains is an accepted risk. Broad domains allow exfiltration via gists and issue comments, but half your dev tooling talks to github so you have to let it through. The matching WebFetch rule keeps the tool-layer attack surface narrower than the bash side.
What I'd tell a friend
First, sandboxing is real isolation and it works. Experiments 1 and 2 show that the filesystem and network boundaries hold against the obvious attempts to cross them. Writes to the home directory, network requests to unlisted domains, IP literal bypasses, hostname tricks. All denied at the kernel or the proxy. The feature does what it says on the box.
Second, it's not on by default. If you've never run /sandbox, it's off. If you've been running --dangerously-skip-permissions for months because you got tired of permission prompts, this is the feature that lets you stop. It's a better trade than the one you're currently making.
Third, sandbox.denyRead does not protect you from the Read tool. permissions.deny Read(...) does. If you only configure the sandbox, bash can't read your SSH keys but Claude's Read tool can. The fix is to configure both layers for the same paths. The docs don't tell you this.
Sandboxing is the feature that would let me unalias --dangerously-skip-permissions. After these experiments, I'm turning it on.
With the config above. Not the minimal example from the docs. The defaults aren't the perimeter you think they are. With the right settings, they can be.
