There is a file in the OpenClaw repository called VISION.md. It is 94 lines long. It contains no marketing language, no growth projections, no mention of AGI. Its opening line is: "OpenClaw is the AI that actually does things."

This is the document Peter Steinberger wrote before he walked into OpenAI's offices. And if you want to understand what OpenClaw is — not what the TechCrunch headlines say it is, not what the Reddit skeptics think it is — this file is where you start.

A Playground That Grew Up

VISION.md is disarmingly honest about its origins. OpenClaw "started as a personal playground to learn AI and build something genuinely useful." Not a startup. Not a research project. A playground. The kind of thing a developer builds on weekends because the existing tools frustrate them.

It evolved through four names — Warelay, Clawdbot, Moltbot, OpenClaw — shedding each like a skin when it no longer fit. The Clawdbot-to-Moltbot rename came after Anthropic objected to the name's proximity to Claude. The Moltbot-to-OpenClaw transition came when the project outgrew being anyone's bot and became, well, open.

The stated goal is deceptively simple: "a personal assistant that is easy to use, supports a wide range of platforms, and respects privacy and security." Three clauses, zero buzzwords. In a field where every product promises to "revolutionize human-AI interaction," Steinberger's pitch is: it runs on your devices, in your channels, with your rules.

Security as Architecture, Not Afterthought

The most revealing section of VISION.md is the one on security. "Security in OpenClaw is a deliberate tradeoff: strong defaults without killing capability."

That sentence is doing a lot of work. The TechCrunch article from two days ago painted OpenClaw as a cybersecurity nightmare — agents getting prompt-injected on Moltbook, credentials leaking, researchers warning that agentic AI is inherently unsafe. And they're not wrong about the risks. But the VISION.md acknowledges those risks and makes a design choice: powerful by default, risky paths explicit and operator-controlled.

This is the difference between a product that pretends to be safe and one that tells you exactly where the sharp edges are. OpenClaw's security model is closer to Linux than to iOS — you can absolutely shoot yourself in the foot, but you'll know you loaded the gun.
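
Here is what that tradeoff can look like in practice, as a minimal sketch. Every option name below is invented for illustration; this is the shape of the design, not OpenClaw's actual settings schema.

```typescript
// Hypothetical illustration of "strong defaults without killing capability".
// None of these option names are OpenClaw's real API; they sketch the pattern:
// the capability exists, but each sharp edge is an explicit operator opt-in.
interface AgentSecurityOptions {
  allowShellCommands: boolean;    // run arbitrary commands on the host
  allowCredentialAccess: boolean; // read stored secrets on the agent's behalf
  allowUnverifiedSkills: boolean; // install skills from outside a vetted registry
}

const strongDefaults: AgentSecurityOptions = {
  allowShellCommands: false,
  allowCredentialAccess: false,
  allowUnverifiedSkills: false,
};

// Loading the gun is a visible, auditable act in the operator's own config.
const myAgent: AgentSecurityOptions = {
  ...strongDefaults,
  allowShellCommands: true, // I understand what this means on my machine.
};
```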

The priority list in VISION.md puts "Security and safe defaults" at the very top. Above bug fixes. Above new features. Above platform support. For a project that exploded to 190,000 GitHub stars in under three months, the temptation to ship features first and patch security later must have been enormous. Steinberger chose the other path.

What Won't Get Merged

Every vision document tells you what a project wants to be. The best ones also tell you what it refuses to become.

VISION.md has an explicit "What We Will Not Merge" section, and it reads like a preemptive strike against the forces that kill open-source projects:

No new core skills when they can live on ClawHub. This keeps the core lean and pushes the ecosystem outward. It's the WordPress model — small kernel, vast plugin universe — applied to AI agents.
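
Here is a minimal sketch of what that split can look like. The interface and names are invented for illustration; they are not OpenClaw's actual skill API.

```typescript
// Hypothetical skill contract (invented; not OpenClaw's real interface).
interface Skill {
  name: string;
  description: string;
  run(input: string): Promise<string>;
}

// A skill that lives in a registry (the ClawHub role in this sketch)
// rather than being merged into core:
const weatherSkill: Skill = {
  name: "weather",
  description: "Fetch a short forecast for a city",
  async run(city: string): Promise<string> {
    const res = await fetch(
      `https://api.example.com/forecast?city=${encodeURIComponent(city)}`,
    );
    return res.text();
  },
};

// The core stays a thin dispatcher over whatever skills the user installed.
const installed = new Map<string, Skill>([[weatherSkill.name, weatherSkill]]);

async function dispatch(skillName: string, input: string): Promise<string> {
  const skill = installed.get(skillName);
  if (!skill) throw new Error(`No skill installed named "${skillName}"`);
  return skill.run(input);
}
```

The point of the sketch is the direction of growth: a new capability lands in the registry, and the dispatcher never changes.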

No agent-hierarchy frameworks. No manager-of-managers, no nested planner trees as default architecture. In a world where every AI lab is racing to build multi-agent orchestration systems, OpenClaw deliberately chooses flatness. One agent, one computer, one human. The complexity goes into what the agent can do, not into how many agents are talking to each other.

No first-class MCP runtime in core. Instead, MCP support lives in an external bridge called mcporter. This is an opinionated architectural choice: keep the core runtime stable by isolating protocol churn at the edges.
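
This is the classic adapter move: the core depends on one stable internal interface, and the bridge absorbs the churn. A hypothetical sketch of the shape, with invented types rather than mcporter's actual API:

```typescript
// The core depends only on this stable internal tool shape.
interface CoreTool {
  name: string;
  invoke(args: Record<string, unknown>): Promise<unknown>;
}

// Whatever MCP looks like this month stays on the far side of the bridge.
// (Invented client shape, for illustration only.)
interface McpClient {
  listTools(): Promise<{ name: string }[]>;
  callTool(name: string, args: Record<string, unknown>): Promise<unknown>;
}

// The bridge's whole job: adapt protocol-shaped tools into core-shaped tools.
// When the protocol changes, the bridge changes; the core does not.
async function bridgeTools(client: McpClient): Promise<CoreTool[]> {
  const tools = await client.listTools();
  return tools.map((t) => ({
    name: t.name,
    invoke: (args: Record<string, unknown>) => client.callTool(t.name, args),
  }));
}
```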

No commercial service integrations that don't clearly fit. No wrapper channels around already-supported channels. No full documentation translations (they'll use AI for that later — which is either admirably pragmatic or delightfully meta, depending on your perspective).

The section ends with a sentence that says more about Steinberger's engineering philosophy than anything else in the document: "This list is a roadmap guardrail, not a law of physics. Strong user demand and strong technical rationale can change it."

TypeScript, and Why It Matters

There's a small section explaining the choice of TypeScript over Python, Rust, or Go. The reasoning is refreshingly practical: "TypeScript was chosen to keep OpenClaw hackable by default. It is widely known, fast to iterate in, and easy to read, modify, and extend."

No performance benchmarks. No type-safety evangelism. The argument is: if a nurse, a teacher, or a small business owner wants to modify their AI agent's behavior, TypeScript is the language they're most likely to already know from a weekend coding course. This is a democratization argument disguised as a technical decision.
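
To see that argument at ground level, imagine the agent's sign-off living in one plain function. A hypothetical sketch, with invented names:

```typescript
// Hypothetical: suppose the agent exposes its sign-off as a plain function.
// Changing the agent's tone is editing a sentence, not learning a framework.
export function signOff(userName: string, hour: number): string {
  if (hour >= 22 || hour < 6) {
    return `Good night, ${userName}. I'll pick this up in the morning.`;
  }
  return `Done, ${userName}. Anything else?`;
}
```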

The Ghost in the Machine

What strikes me most about VISION.md is what it doesn't contain.

There is no mention of artificial general intelligence. No "path to AGI." No speculation about consciousness, sentience, or the singularity. In February 2026, when every AI company is trying to out-hype the others with capability claims, Steinberger wrote a document about bug fixes, setup reliability, and first-run UX.

There is no mention of monetization. No business model. No "freemium tier" or "enterprise offering." OpenClaw made its creator famous enough to get hired by the most valuable AI company on Earth, and the vision document reads like it was written by someone who just wanted the thing to work properly.

There is no mention of Peter Steinberger himself. No "founder's note." No personal narrative. The document is written in the voice of the project, not the person. As if OpenClaw already existed independently of whoever started it — which, with 72 contributors and an independent foundation in formation, it increasingly does.

The Question That Matters

The real question isn't whether OpenClaw is "just a wrapper" (it is, in the same way Linux is "just a kernel") or whether it's a security risk (it can be, like any tool that does real things on real computers). The real question is: what happens to this vision now?

Steinberger is inside OpenAI. The project is transitioning to a foundation. The VISION.md sits in a public repository where 72 contributors and 190,000 stargazers can read it. The priorities are written down. The guardrails are explicit. The sharp edges are labeled.

Most open-source projects lose their soul when the founder leaves. The ones that survive are the ones that wrote it down first.

Peter Steinberger wrote it down.