GLM-5.1 is z.ai's Claude Code and OpenClaw wedge

z.ai did not just ship another benchmark chart. It put GLM-5.1 in front of every Coding Plan tier and documented the manual path into Claude Code, OpenClaw, and Cline.

Talia Reed · March 27, 2026 · 6 min read
Editorial illustration of a z.ai wedge cutting into Claude Code territory and routing through OpenClaw and Cline-style configuration surfaces, with quota clocks and manual switch points made visibly part of the path.
GLM-5.1 matters less as a new model name than as a cheap, documented way for z.ai to squat inside someone else's coding workflow.

GLM-5.1 would be an ordinary model-launch story if z.ai had simply tossed out a benchmark card and waited for the quote tweets. Instead, it did something more strategically annoying for the incumbents: it made GLM-5.1 available across Lite, Pro, and Max Coding Plan users, then published same-day instructions for getting it to run inside Claude Code, OpenClaw, and Cline-style setups.

The official Using GLM-5.1 in Coding Agent page is the tell. It reads less like a frontier-intelligence manifesto and more like a field manual for slipping z.ai deeper into tools developers already have open. That matters more than another round of chest-thumping about who is almost as good as Opus this week.

The distribution move is the point

The GLM Coding Plan overview says all plans now support GLM-5.1, while Max and Pro additionally support GLM-5. That distinction matters because GLM-5, GLM-5-Turbo, and GLM-5.1 are related, but they are not interchangeable.

What z.ai is really selling is distribution through familiar shells. The Claude Code guide explicitly pitches the Coding Plan as a cheaper, heavier-usage way to use Claude Code. The interface can still show Claude-flavored model labels while z.ai handles the backend mapping. That is not a default takeover of the Claude Code experience, and it is important not to overstate it. By default, Claude Code still maps its internal Opus and Sonnet slots to GLM-4.7, with Haiku mapped to GLM-4.5-Air. If you want GLM-5.1, z.ai tells you to edit ~/.claude/settings.json yourself.
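As a rough sketch of what that manual edit looks like: z.ai's existing Claude Code guide works by overriding Anthropic environment variables in `~/.claude/settings.json`, so a GLM-5.1 remap would plausibly follow the same pattern. The exact variable names and base URL below are assumptions drawn from z.ai's published Claude Code setup, not a verbatim copy of the GLM-5.1 instructions; check the guide before pasting.

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "your-zai-api-key",
    "ANTHROPIC_MODEL": "glm-5.1",
    "ANTHROPIC_SMALL_FAST_MODEL": "glm-4.5-air"
  }
}
```

The point is that Claude Code never needs to know it is talking to z.ai; the shell stays, the engine swaps.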

That is the wedge. z.ai does not need developers to abandon Anthropic's terminal shell. It just needs them comfortable enough to reroute the engine.

Editorial concept image of a z.ai routing wedge entering Claude Code territory and branching into OpenClaw and Cline-style configuration surfaces through visible manual switch points.
Figure / 01 The pitch is not just "new model." It is "new route into the tools you already use."

We have already seen adjacent versions of this move in the broader AI action-not-answers battlefront. Whoever owns the tool surface does not always own the workload.

OpenClaw support exists, but bring a wrench

The OpenClaw angle is real, but it is not frictionless. z.ai's OpenClaw integration page says the Coding Plan supports OpenClaw with secondary scheduling and best-effort delivery, while coding-agent tasks get preemption priority under load. That is useful honesty. It is also a reminder that "supported" and "first-class" are cousins, not twins.

The manual setup path is even more revealing. The OpenClaw guide's provider setup section still lists a supported-model set that does not include GLM-5.1 in the selectable flow. The separate Using GLM-5.1 in Coding Agent page then tells existing OpenClaw users to manually add glm-5.1 to ~/.openclaw/openclaw.json, switch agents.defaults.model.primary to zai/glm-5.1, add a matching entry under agents.defaults.models, and restart the gateway.

That is usable. It is not exactly "click once and achieve enlightenment."

Still, the support matters because it plugs into a larger pattern around OpenClaw's OpenAI-compatible gateway and the scramble to become the model layer behind agent frameworks. z.ai is saying: here is how to wire our model into the agent loop you already care about.

Cline is the clearest version of that pitch. z.ai's docs tell users to choose an OpenAI-compatible provider, point the base URL at https://api.z.ai/api/coding/paas/v4, enter a custom model like glm-5.1, and tune the context window manually.
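Because the endpoint is OpenAI-compatible, any client that speaks that protocol should work with the same settings. A hedged sketch of the request body such a client would POST, assuming the standard `/chat/completions` suffix on z.ai's base URL (the suffix is the OpenAI-compatible convention, not something the docs quoted here confirm):

```json
{
  "model": "glm-5.1",
  "messages": [
    { "role": "user", "content": "Summarize why this test is failing." }
  ]
}
```

That protocol compatibility is the whole pitch: one base URL swap and a custom model name, and an existing tool becomes a GLM-5.1 front end.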

The economics are what make the pitch dangerous

The money part is what makes this sharper.

The Coding Plan overview starts Lite at $10 per month and Pro at $30, with 5-hour and weekly usage caps that vary by tier. z.ai also says the monthly available quota works out to roughly 15x to 30x the monthly subscription fee once weekly caps are factored in, though actual usage still depends on project complexity, repo size, and whether auto-accept is enabled.

The catch is the multiplier. z.ai says GLM-5.1, GLM-5, and GLM-5-Turbo are premium models designed to rival Claude Opus-level capability, so they deduct at 3x during peak hours and 2x during off-peak hours. Then comes the promotional sugar hit: through the end of April, GLM-5.1 and GLM-5-Turbo only consume 1x quota during off-peak hours.

Editorial concept image of a premium z.ai route pushing through Claude Code territory while quota clocks, pricing pressure, and switching lanes fan toward OpenClaw and Cline-style panels.
Figure / 02 GLM-5.1 looks most dangerous when you see it as a budgeted workflow choice, not a leaderboard flex.

That is why this is also an economics story. z.ai is effectively telling price-sensitive developers: route serious coding tasks through our top tier when it matters, keep lighter work on GLM-4.7 when it does not, and treat model choice like a budget control rather than a religious identity. It lands right in the territory we covered in open-weight inference economics, where model adoption is increasingly about operating cost, not just leaderboard vanity.

If you want to test that pitch yourself, this is an affiliate offer, not neutral reporting: sign up for z.ai here, and when you sign up and log in you get an instant 10% off your first order. It does not make the manual setup any less manual, but it does make the experiment slightly cheaper.

The Opus talk is real, and so is the benchmark fog

z.ai is clearly leaning on comparison language. The Coding Plan overview says GLM-5.1, GLM-5, and GLM-5-Turbo are designed to rival Claude Opus models. The older GLM-5 model page goes even further, saying GLM-5 approaches Claude Opus 4.5 in real programming scenarios and posts benchmark charts to make the case.

What z.ai has not done, at least in the materials tied to this launch, is provide a clean standalone GLM-5.1 benchmark explainer that settles the matter. That gap showed up immediately in discussion. One of the early HN replies asked, bluntly, what benchmark "Coding Evaluation" even referred to. Reddit users were no more serene. Some wanted real-repo tests before believing anything. Others treated the whole thing like a countdown to a future open-weights drop.

That skepticism is healthy. Benchmarks still have a weird habit of arriving dressed like certainty when they are really closer to campaign literature with better typography. See our benchmark trust recession.

So the careful version is this: z.ai is marketing GLM-5.1 as near-Opus for hard coding work, and the surrounding GLM family claims are plainly meant to support that story. That is not the same as proving GLM-5.1 beat Opus, or that the real-world gap is settled. It means the vendor wants you to try the swap.

Open source? Not cleanly, not today

Community chatter quickly blurred together "available in Coding Plan," "open source," and "open weights soon." Those are three different statements. Today, the first one is documented. The other two are not cleanly locked.

The best clue is z.ai's own docs index. It lists dedicated model pages for GLM-5 and GLM-5-Turbo, but GLM-5.1 shows up in the index as a Coding Agent usage page, not as a standalone model guide or a fresh open-weights release page. That does not prove an open-weight release will not happen. It does mean this launch should be described honestly: GLM-5.1 is a newly exposed Coding Plan option with official tool-routing docs, not a tidy open-source release you can already summarize with a victory stamp.

That nuance matters because the most interesting thing here is not licensing theater anyway. It is workflow capture. z.ai wants to become the pragmatic model behind Claude Code's growing agent loop, OpenClaw sessions, and custom OpenAI-compatible coding tools. If the model holds up on real repositories, and if the manual setup pain eases, that wedge gets sharper fast.

If not, GLM-5.1 risks becoming another benchmark chart in a trench coat.

Source file

Public source trail

These links anchor the package to the underlying reporting trail. They are not a substitute for judgment, but they do show where the reporting starts.

Primary source · docs.z.ai · z.ai Docs
Using GLM-5.1 in Coding Agent

Core evidence that GLM-5.1 is now available to all Coding Plan users and that z.ai published manual switching instructions for Claude Code, OpenClaw, and Cline-style tools.

Primary source · docs.z.ai · z.ai Docs
GLM Coding Plan Overview

Defines supported plans, supported tools, quota rules, and the limited-time off-peak 1x promotion through the end of April.

Primary source · docs.z.ai · z.ai Docs
Claude Code integration guide

Shows that the default Claude Code mapping still points to GLM-4.7 and GLM-4.5-Air, with GLM-5.1 requiring manual custom mapping.

Primary source · docs.z.ai · z.ai Docs
OpenClaw integration guide

Confirms OpenClaw support, fair-use caveats, and the extra manual JSON edit plus gateway restart needed to use GLM-5.1.

Primary source · docs.z.ai · z.ai Docs
z.ai docs index (llms.txt)

Useful for distinguishing GLM-5.1 from GLM-5 and GLM-5-Turbo and for checking whether z.ai had published a standalone GLM-5.1 model guide or open-weight page at launch.

Primary source · x.com · X / z.ai
z.ai official launch post on X

Same-day official post that helped kick off the compatibility and pricing conversation around GLM-5.1.

Supporting reporting · hn.algolia.com · Hacker News / Algolia
Hacker News discussion: GLM-5.1 Is Available

Captures immediate skepticism around undefined benchmark framing and what the launch claim actually means in practice.

Supporting reporting · reddit.com · Reddit / r/LocalLLaMA
Reddit discussion: Glm 5.1 is out

Shows the early community reaction focusing on open-weights timing, day-one availability, and service reliability doubts.

About the author

Talia Reed

Staff Writer

Talia reports on product surfaces, developer tools, platform shifts, category shifts, and the distribution choices that determine whether AI features become durable workflows. She looks for the moment where a launch stops being a demo and becomes an ecosystem move.

Published stories: 18
Latest story: Mar 27, 2026
Base: New York

Reporting lens: Distribution is usually the story hiding inside the launch. Signature: A feature matters when it changes someone else's roadmap.
