Two days after its creator announced he's joining OpenAI, OpenClaw ships v2026.2.17 — and the headline feature reads like a parting gift from the Anthropic side of the fence.
The Big One: 1M Context
Agents running Claude Opus 4.6 or a Sonnet model can now opt into Anthropic's 1-million-token context window by setting params.context1m: true in their model config. Under the hood, OpenClaw maps this to the anthropic-beta: context-1m-2025-08-07 header. No API key changes, no new endpoints — just a boolean and a dramatically longer memory.
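For orientation, here's a minimal sketch of what the opt-in and the resulting header mapping might look like. Only params.context1m and the anthropic-beta value come from the release notes; the config shape and the Opus model id are illustrative.

```typescript
// Sketch only: field names beyond params.context1m are illustrative.
interface ModelConfig {
  model: string;
  params?: { context1m?: boolean };
}

const config: ModelConfig = {
  model: "anthropic/claude-opus-4-6", // hypothetical id, following the Sonnet naming
  params: { context1m: true },        // opt into the 1M-token beta window
};

// When context1m is set, OpenClaw adds the beta header to Anthropic calls:
function anthropicHeaders(cfg: ModelConfig): Record<string, string> {
  const headers: Record<string, string> = { "content-type": "application/json" };
  if (cfg.params?.context1m) {
    // Beta flag named in the release notes; Anthropic may change it.
    headers["anthropic-beta"] = "context-1m-2025-08-07";
  }
  return headers;
}
```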
For agents that process large codebases, ingest entire document sets, or simply refuse to forget what happened three hours ago, this is the feature that matters. The caveat: it's opt-in and beta-flagged, which means Anthropic reserves the right to change pricing or behavior. Use it. Just know the fine print exists.
The same release also adds first-class support for Anthropic Sonnet 4.6 (anthropic/claude-sonnet-4-6), with a forward-compatibility fallback for providers whose catalogs haven't caught up yet. If your upstream still only knows Sonnet 4.5, OpenClaw handles the mapping silently.
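The fallback presumably amounts to a catalog check with a downgrade map. A sketch, with illustrative function and variable names throughout:

```typescript
// Forward-compatibility fallback: if the upstream catalog doesn't know
// Sonnet 4.6 yet, quietly map the request to the closest model it does know.
const SONNET_FALLBACKS: Record<string, string> = {
  "anthropic/claude-sonnet-4-6": "anthropic/claude-sonnet-4-5",
};

function resolveModel(requested: string, catalog: Set<string>): string {
  if (catalog.has(requested)) return requested;
  const fallback = SONNET_FALLBACKS[requested];
  if (fallback && catalog.has(fallback)) return fallback;
  throw new Error(`Model not available upstream: ${requested}`);
}
```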
iOS Gets Serious
Three Talk Mode improvements from contributor @zeulewan make the iOS app feel less like a demo and more like a daily driver:
- Background Listening keeps Talk Mode active when the app is backgrounded. Off by default — your battery will thank you for the opt-in.
- Voice Directive Hints can now be toggled off, saving tokens when you're not using ElevenLabs voice-switching.
- Barge-in hardening disables interrupt-on-speech when output routes through the built-in speaker, fixing the maddening loop where TTS audio triggers its own interruption.
The biggest iOS addition is a Share Extension (#19424) from @mbelinky. Hit share on any URL, text, or image in iOS, pick OpenClaw, and it goes straight to your agent. No app-switching, no copy-paste choreography.
Messaging: Streaming, Reactions, Buttons
Slack gets native text streaming via chat.startStream (#9972) — messages now appear token-by-token instead of arriving as a complete wall of text. Enabled by default with graceful fallback.
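In Web API terms, the flow likely looks something like the sketch below. Only chat.startStream is named in the changelog; the append and stop method names and their argument shapes are assumptions about Slack's streaming endpoints.

```typescript
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

async function streamReply(
  channel: string,
  threadTs: string,
  tokens: AsyncIterable<string>,
): Promise<void> {
  // Open a streaming message; Slack hands back a ts handle for it.
  const start = await slack.apiCall("chat.startStream", {
    channel,
    thread_ts: threadTs,
  });
  const ts = start.ts as string;

  for await (const token of tokens) {
    // Append each model token as it arrives (assumed method name).
    await slack.apiCall("chat.appendStream", { channel, ts, markdown_text: token });
  }

  // Finalize so the message stops rendering as in-progress (assumed method name).
  await slack.apiCall("chat.stopStream", { channel, ts });
}
```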
Telegram picks up inline button styles (primary, success, danger) from @obviyus and — finally — surfaces user reactions as system events (#10075). Your agent can now know when someone heart-reacted to its message. Whether it should care is a philosophical question left to the prompt engineer.
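A sketch of what surfacing a reaction as a system event could look like. The update shape follows Telegram's Bot API message_reaction update; the system-event string format is an assumption about OpenClaw's internals.

```typescript
// Minimal slice of Telegram's MessageReactionUpdated payload.
interface ReactionUpdate {
  chat: { id: number };
  message_id: number;
  user?: { id: number; username?: string };
  new_reaction: Array<{ type: string; emoji?: string }>;
}

function toSystemEvent(update: ReactionUpdate): string {
  const emojis = update.new_reaction.map((r) => r.emoji ?? r.type).join(" ");
  const who = update.user?.username ?? "someone";
  // The agent sees this as an out-of-band system message, not user chat.
  return `[system] ${who} reacted ${emojis} to message ${update.message_id}`;
}
```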
Discord gets reusable interactive components, per-button user allowlists, and native /exec command options with autocomplete. All three from @thewilloftheshadow, who appears to be on a personal mission to make Discord's bot API less painful.
iMessage adds reply-to targeting. Mattermost gets emoji reactions. The messaging surface coverage continues to widen.
Subagents and Cron
The /subagents spawn command (#18218) lets you deterministically launch subagents from chat instead of relying on the agent to decide when spawning is appropriate. Spawned subagents now carry source context in their task messages, which means they stop asking "who sent this?" when the answer was in the message that created them.
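A sketch of a task message with source context attached; the payload shape and field names are illustrative, not OpenClaw's actual schema:

```typescript
// Hypothetical task payload: the point is that provenance rides along.
interface SubagentTask {
  task: string;
  source: {
    channel: string;   // where the spawn request came from
    requester: string; // who asked for it
    messageId: string; // the message that triggered the spawn
  };
}

const task: SubagentTask = {
  task: "Summarize the open PRs in the repo",
  source: {
    channel: "slack:#eng",
    requester: "@maria",
    messageId: "1739822400.000100",
  },
};
// With source context attached, the subagent can answer "who sent this?"
// without a round trip back to the parent agent.
```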
The cron system gets webhook delivery as a first-class mode, deterministic stagger for top-of-hour schedules (no more 47 agents hitting an API at exactly :00), and per-run usage telemetry (#18172) so you can finally audit which cron job is burning your token budget.
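Deterministic stagger is a simple idea: hash the job id into a stable offset, so the same job always fires at the same second, just not at :00. A sketch, with an illustrative window size and hashing scheme:

```typescript
import { createHash } from "node:crypto";

// Hash the job id to a stable offset within a spread window.
// The 300-second window and SHA-256 choice are illustrative,
// not OpenClaw's actual parameters.
function staggerSeconds(jobId: string, windowSec = 300): number {
  const digest = createHash("sha256").update(jobId).digest();
  // Same job id always maps to the same offset within the window.
  return digest.readUInt32BE(0) % windowSec;
}

// A job scheduled for 09:00 actually runs at 09:00 + staggerSeconds(id),
// so dozens of hourly crons no longer stampede an API at :00 sharp.
console.log(staggerSeconds("daily-digest"));
```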
Everything Else
The changelog runs deep. Memory search gets an FTS fallback with query expansion. The browser tool accepts custom Chrome launch args. Skill file paths in system prompts get compacted with ~ prefixes to save tokens. Docker images can pre-install Chromium at build time. The web tools gain URL allowlists. And llms.txt discovery is now enabled by default.
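The ~-compaction is the easiest of these to picture: swap the home-directory prefix for a tilde before paths land in the system prompt. A sketch, with an illustrative function name and example path:

```typescript
import { homedir } from "node:os";

// Skill file paths in the system prompt drop the home-directory
// prefix to save tokens.
function compactPath(p: string): string {
  const home = homedir();
  return p.startsWith(home) ? "~" + p.slice(home.length) : p;
}

// "/Users/dev/.openclaw/skills/search.md" -> "~/.openclaw/skills/search.md"
```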
On the fix side, reply threading gets a comprehensive overhaul — replies now stay attached to the correct message across streamed chunks, split deliveries, and every messaging surface from iMessage to Matrix. Subagent context management gets smarter truncation to prevent context-window overflow crashes, and the read tool scales its output budget to the model's context window.
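Scaling the read budget to the context window presumably reduces to a ratio with a floor; the numbers here are illustrative, not OpenClaw's:

```typescript
// Give the read tool a token budget proportional to the model's window.
function readBudgetTokens(contextWindow: number): number {
  const floor = 4_000; // never drop below a usable minimum
  return Math.max(floor, Math.floor(contextWindow * 0.25));
}

// A 1M-context model gets a far bigger read budget than a 200k one:
console.log(readBudgetTokens(1_000_000)); // 250000
console.log(readBudgetTokens(200_000));   // 50000
```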
72 contributors. One release. The project shows no signs of slowing down, OpenAI acquisition or not.
To update: run openclaw gateway update, or npm update -g openclaw.