OpenClaw 2026.2.21 is the newest release of the open‑source agent framework that has been gaining a lot of attention in the AI community. The update brings two powerful language models—Gemini 3.1 and GLM‑5—into the core of the system, along with a host of improvements to token counting, memory management, and nested sub‑agent handling. In this article we’ll walk through what makes this release special, how it can help developers build smarter agents, and why it matters for the future of AI‑driven automation.

What is OpenClaw?

OpenClaw is a Rust‑based orchestration layer that lets developers create complex AI agents that can call tools, reason, and act. Think of it as a lightweight “brain” that can manage multiple smaller agents, each with its own purpose. The framework is designed to be fast, flexible, and easy to extend, making it a popular choice for building everything from chatbots to automated data pipelines.

Key Features Before 2026.2.21

  • Nested Sub‑Agents: Agents can spawn other agents, creating a hierarchy that mirrors real‑world workflows.
  • Tool Integration: Built‑in support for calling external APIs, running shell commands, or interacting with databases.
  • Memory Management: A three‑tier memory system that keeps short‑term context, daily logs, and a searchable archive.
  • Token Counting: Accurate tracking of tokens used in each request, helping developers stay within model limits.

The 2026.2.21 Update

OpenClaw 2026.2.21 introduces several new features and fixes that make the framework even more powerful. Below we’ll highlight the most important changes.

1. Gemini 3.1 and GLM‑5 Integration

The biggest headline is the integration of two new models:

  • Gemini 3.1 – Google’s latest flash‑preview model that offers fast, high‑quality responses for a wide range of tasks.
  • GLM‑5 – An open‑weight model from Zhipu AI that excels at long‑horizon reasoning and agentic tasks.

Both models are now available as first‑class options in the OpenClaw configuration. Developers can switch between them with a single line of code, allowing them to choose the best model for each sub‑agent.
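
For example, a configuration along these lines could assign a different model to each sub‑agent. The exact key names below are illustrative, not taken from the project's documentation, so check the real schema before copying them:

```toml
# Illustrative only -- key names may differ in the real schema.
[agent]
model = "gemini-3.1"    # default: quick, concise replies

[subagents.researcher]
model = "glm5"          # deep, multi-step reasoning for this sub-agent
```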

Why this matters
Having multiple models in the same framework lets teams experiment without rewriting code. If a task needs quick, concise answers, Gemini 3.1 is a good fit. If the task requires deep reasoning over many steps, GLM‑5 shines.

2. Token Counting Fixes

The 2026.2.21 release fixes a long‑standing issue where token counts were inflated. The new logic tracks the actual context size of the last API call instead of accumulating counts across all tool loops. This change means:

  • Accurate billing – You only pay for the tokens you actually use.
  • Better error handling – The framework now correctly reports “prompt too long” errors when the real token count exceeds the model’s limit.
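
The difference between the old and new accounting can be sketched in Python. This is purely illustrative (the actual implementation is in Rust, and these names are invented), but it shows why summing over a tool loop inflates the count:

```python
# Illustrative sketch of the token-accounting fix; all names are invented.

def tokens_old(calls):
    # Old (buggy) logic: sum the context size of every call in the
    # tool loop. Because each call's context includes the previous
    # turns, the same tokens are counted repeatedly.
    return sum(call["context_tokens"] for call in calls)

def tokens_new(calls):
    # New logic: report the context size of the last API call only,
    # which is what the model actually saw.
    return calls[-1]["context_tokens"] if calls else 0

calls = [
    {"context_tokens": 1200},  # first model call
    {"context_tokens": 1450},  # after a tool result was appended
    {"context_tokens": 1600},  # final call in the loop
]

print(tokens_old(calls))  # inflated: 4250
print(tokens_new(calls))  # actual context: 1600
```

Because the new count reflects what was actually sent, a "prompt too long" error now fires only when the real context exceeds the model's limit.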

3. Memory Compaction and Search

OpenClaw’s memory system now compacts logs at 70% of the context window, giving agents more headroom before they need to prune. The compaction summary is displayed in the chat, so developers can see exactly what was summarized. The memory search tool now uses QMD for semantic search, making it easier to retrieve past conversations.
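
The 70% trigger is simple to reason about. A minimal sketch of the threshold check (invented names, illustrative numbers):

```python
# Sketch of the 70% compaction trigger; names are invented for illustration.

COMPACT_AT = 0.70  # fraction of the context window that triggers compaction

def should_compact(used_tokens: int, context_window: int) -> bool:
    """Return True once the log occupies 70% of the context window."""
    return used_tokens >= COMPACT_AT * context_window

print(should_compact(90_000, 128_000))  # True: past the 70% mark (89,600 tokens)
print(should_compact(60_000, 128_000))  # False: still plenty of headroom
```

Compacting before the window is full leaves the remaining ~30% available for the summary itself plus the next few turns of conversation.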

4. Config Management Tool

A new config_manager agent tool lets agents read and write configuration files at runtime. This feature is handy for dynamic workflows where settings may change on the fly.

5. Live Config Reload

The framework can now reload its configuration without restarting. When a config file changes, the system emits a ConfigReloaded event, and the new settings take effect immediately.

6. Improved UI and UX

  • Settings TUI Screen – Press S to view current provider, model, and policy.
  • Approval Policy Persistence – The /approve command now remembers the chosen policy across sessions.
  • Command Management – Commands are now stored in commands.toml, making them easier to edit.

How to Get Started

Below is a quick guide to setting up OpenClaw 2026.2.21 and using the new models.

1. Install the Latest Release

cargo install openclaw --git https://github.com/openclaw/openclaw --rev 2026-02-16

2. Configure Your Models

Create a config.toml file:

[agent]
model = "gemini-3.1"
max_concurrent = 4
approval_policy = "auto-session"


[providers]
gemini = { api_key = "YOUR_GEMINI_KEY" }
glm5 = { api_key = "YOUR_GLM5_KEY" }

3. Launch OpenClaw

openclaw run

You’ll see a prompt where you can type commands. Try:

/run subagent "Explain quantum computing"

The sub‑agent will use Gemini 3.1 to answer quickly.

4. Switch Models on the Fly

/agent set_model glm5

Now the same sub‑agent will use GLM‑5 for deeper reasoning.

Real‑World Use Cases

Customer Support Automation

A support team can create a sub‑agent that pulls ticket data, calls a knowledge base API, and generates a response. With OpenClaw 2026.2.21, the agent can switch to Gemini 3.1 for quick replies and GLM‑5 for complex troubleshooting.
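
A routing rule for that scenario could be as simple as the sketch below. The heuristic and all names are hypothetical, chosen only to show the quick-reply vs. deep-reasoning split described above:

```python
# Hypothetical routing heuristic: quick replies go to Gemini 3.1,
# escalated or multi-step tickets go to GLM-5. Names are illustrative.

def pick_model(ticket: dict) -> str:
    # Escalated tickets, or troubleshooting with many steps, benefit
    # from a model that is stronger at long-horizon reasoning.
    if ticket.get("escalated") or len(ticket.get("steps", [])) > 3:
        return "glm5"
    return "gemini-3.1"

print(pick_model({"subject": "password reset"}))                # gemini-3.1
print(pick_model({"subject": "data loss", "escalated": True}))  # glm5
```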

Data Pipeline Orchestration

Data engineers can build a chain of agents that extract, transform, and load data. The memory system keeps track of daily logs, and the config manager lets the pipeline adjust parameters without downtime.

Research Assistant

Researchers can set up an agent that reads papers, summarizes findings, and proposes experiments. The semantic search feature helps the agent recall related work from past logs.

Comparison with Other Agent Frameworks

| Feature | OpenClaw 2026.2.21 | OpenAI’s Agentic API | Anthropic’s Claude Agent |
|---|---|---|---|
| Model flexibility | Gemini 3.1, GLM‑5, others | One model per endpoint | One model per endpoint |
| Token accuracy | Fixed counting | Approximate | Approximate |
| Memory management | 3‑tier system | None | None |
| Config reload | Live reload | No | No |
| Tool integration | Built‑in | Limited | Limited |

OpenClaw’s strengths lie in its flexibility and accurate token tracking, making it a solid choice for developers who need fine‑grained control.

Integration with Neura AI

If you’re already using Neura AI’s ecosystem, OpenClaw 2026.2.21 can fit in smoothly. For example:

  • Neura Router – Route requests to OpenClaw agents for specialized processing.
  • Neurao – Use the chat interface to interact with OpenClaw agents.
  • Neura Keyguard – Scan your OpenClaw code for API key leaks.

Check out the Neura AI product page at https://meetneura.ai/products for more details.

Future Roadmap

The OpenClaw team plans to add:

  • GPU Acceleration – Faster inference for large models.
  • Multi‑Modal Support – Image and video processing.
  • Marketplace – Share custom agents with the community.

Stay tuned for more updates on the official GitHub repository.

Conclusion

OpenClaw 2026.2.21 is a significant step forward for open‑source agent frameworks. By adding Gemini 3.1 and GLM‑5, fixing token counting, and improving memory and configuration management, the release gives developers a powerful toolkit for building intelligent, multi‑agent systems. Whether you’re building a chatbot, automating data pipelines, or creating a research assistant, OpenClaw 2026.2.21 offers the flexibility and precision you need.