Claude Code signal, May 9: less friction, clearer control

Adam Olofsson Hammare

When a coding agent becomes useful in daily work, the news is rarely one spectacular demo. It is that login keeps working, connectors do not disappear, plan mode really blocks writes, and someone on the team can understand what happened. The Claude Code updates on May 8–9 point in that direction: less magic, more of an operable working environment.

What actually changed in Claude Code

Anthropic published Claude Code 2.1.137 on May 9 with a targeted fix for the VS Code extension failing to activate on Windows. The day before, 2.1.136 arrived as a larger stability release with fixes for MCP servers, OAuth token refresh, plan mode, terminal rendering, plugin hooks and resumed sessions.

A coding agent is an AI tool that can read code, suggest changes and sometimes run tools inside a development environment. An agentic workflow means the AI does not only answer in chat; it takes several tool-using steps toward a goal, with real files and real controls involved.

Source: Anthropic Claude Code releases on GitHub

Source: Claude Code CHANGELOG.md

Why the MCP and OAuth fixes matter

MCP, the Model Context Protocol, is a way for AI tools to connect to external systems and data sources through standardized servers. Claude Code 2.1.136 includes several practical MCP fixes: servers from the .mcp.json configuration file, plugins, and Claude.ai connectors should no longer disappear after /clear, multiple remote servers should not lose OAuth refresh tokens at the same time, and MCP tool results should become visible when servers return content blocks.
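If you want to see which connectors a project declares, a small check can make that visible to non-developers too. The sketch below is an assumption-laden illustration, not part of Claude Code: it assumes the common .mcp.json layout with a top-level mcpServers object, so adjust it to whatever your file actually contains.

# Minimal sketch: list the MCP servers a project declares in .mcp.json,
# assuming a top-level "mcpServers" object (verify against your own file).
import json
from pathlib import Path

def list_declared_servers(project_dir: str = ".") -> list[str]:
    config_path = Path(project_dir) / ".mcp.json"
    if not config_path.exists():
        return []
    config = json.loads(config_path.read_text(encoding="utf-8"))
    # Each key under "mcpServers" is one connector the agent can reach.
    return sorted(config.get("mcpServers", {}).keys())

if __name__ == "__main__":
    servers = list_declared_servers()
    print("Declared MCP servers:", servers or "none found")

Even a simple listing like this gives a team a concrete starting point for the permissions conversation: these are the systems the agent can touch from this project.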

For a small business, that means less time spent restarting and logging in again. For a school or admin-heavy operation, it is also a reminder: connections to student data, documents, finance tools or customer records need clear permissions, not just technical access.

Source: Claude Code 2.1.136 release notes

The control signal: plan mode and hard denies

The most important signal for non-technical teams is not that Claude Code can do more. It is that the control layer is becoming clearer. Version 2.1.136 fixes a case where plan mode did not block file writes when a matching Edit(...) rule existed, and adds settings.autoMode.hard_deny for rules that should always block an action regardless of user intent.

Plan mode is a safety step where the agent first describes what it intends to do before changes are executed. A sandbox is a bounded environment where tools can work without full access to the rest of the system. Both matter if small teams are going to let AI help with websites, document workflows or internal automation scripts.

This is a good Mindset Forge moment: do not start with the tool. Start by defining which actions AI must never perform without human review, such as changing prices, sending customer emails, deleting data or publishing material.
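As a concept sketch only, not Claude Code's actual settings format, that kind of never-without-review list can start as something this small; the action names are illustrative placeholders:

# Concept sketch: a team-agreed list of actions that always require a human,
# regardless of what the agent or the user asks for. Names are illustrative.
NEVER_WITHOUT_REVIEW = {
    "change_prices",
    "send_customer_email",
    "delete_data",
    "publish_material",
}

def requires_human_review(action: str) -> bool:
    """Return True if a person must approve this action before it runs."""
    return action in NEVER_WITHOUT_REVIEW

# Example: any automation wrapper can check the list before acting.
for action in ("update_contact_form", "send_customer_email"):
    print(action, "-> human review required:", requires_human_review(action))

The point is not the code; it is that the list exists, is written down, and is agreed on before any tool is configured.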

Source: Claude Code changelog, 2.1.136

Who this matters for

This is relevant even if you never open Claude Code yourself:

  • Small-business owners: you get a model for how AI work should be governed: plan first, change second, review before publication.
  • School leaders and educators: students and staff will meet tools that can act in files and systems, not only write text. Policy needs to arrive before usage scales.
  • Administrative teams: MCP-like connections can become the path from chat to workflow, but only when permissions and logging are clear.
  • Solo operators: steadier sessions and resume behavior reduce friction, but you still need a simple checklist for what AI may change.

What to test today

If you use Claude Code, update it and test in a harmless repository first. If you do not use Claude Code, use the signal as an exercise for how you would introduce an AI assistant into any workflow.

  • Check that the team knows which systems the AI tool may read from and write to.
  • Ask the agent to describe its plan before it changes files or processes.
  • Document three actions that always require human approval.
  • Test resume, context clearing and login before using the tool in live work.

For Hammer readers, this naturally maps to Tool Forge: build or choose tools only after control points, permissions, and ownership are clear.

Source: @anthropic-ai/claude-code in the npm registry

Try this prompt this week

Use this prompt in Claude Code inside a test project, or in a regular AI chat if you are not running the tool yourself. Do not run it directly against production, customer data or school data.

You are my review partner for a safe agentic coding workflow.

Start from this task: [describe the change, for example "update the contact form"].

Before you suggest or make changes, create a short risk plan with:
1. Which files, systems or data you need to read.
2. Which changes you want to make and why.
3. Which actions must be blocked or require human approval.
4. How we can test the change without sending real forms, emails or customer data.
5. A rollback plan if something goes wrong.

Stop after the plan and wait for approval.

A good answer should:

  • Name concrete files, systems, or forms instead of generic phrases.
  • Separate reading, changing, testing and publishing.
  • Suggest safe tests without fake leads or live sends.
  • Make it clear where a human should approve the next step.

What we will watch next

The next signal to watch is whether Claude Code keeps moving control from “trust the agent” toward policy, hooks, plugin rules and reviewable workflows. That is where small Nordic organizations get real value: not when AI looks the smartest, but when it can be adopted without losing accountability, privacy or calm operations. At the time of writing, the npm registry also listed a 2.1.138 version while the GitHub releases page and the latest dist-tag still pointed to 2.1.137; treat that as a watch item until clear release notes are available.
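If you want to check the published version yourself, the public npm registry exposes dist-tags as JSON. A minimal sketch, using only the standard registry endpoint and the Python standard library:

# Minimal sketch: read the "latest" dist-tag for @anthropic-ai/claude-code
# from the public npm registry JSON endpoint.
import json
import urllib.parse
import urllib.request

PACKAGE = "@anthropic-ai/claude-code"
url = "https://registry.npmjs.org/" + urllib.parse.quote(PACKAGE, safe="@")

with urllib.request.urlopen(url) as response:
    data = json.load(response)

print("latest dist-tag:", data.get("dist-tags", {}).get("latest"))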