5 min read · Billy Lui

Why Your Calendar MCP Server Doesn't Have Locking (And Why It Should)

Search for “calendar MCP server” on GitHub. You’ll find dozens of results. They all do the same thing: list events, create events, delete events. Thin wrappers around the Google Calendar API or Microsoft Graph.

None of them have locking.

Why they skip it

It’s not laziness. There are three legitimate reasons calendar MCP servers don’t implement locking:

1. The underlying APIs don’t offer it. Google Calendar’s REST API has no lock primitive. You can create an event and hope nobody else created one at the same time. Microsoft Graph is the same. CalDAV technically supports LOCK via WebDAV, but most implementations ignore it. If the API doesn’t offer locking, building it means maintaining your own lock state — which is real infrastructure work.

2. For human users, it rarely matters. When a human clicks “Create event” in a calendar UI, the window between deciding to book and actually booking is seconds. The chance of two humans booking the same slot in that window is negligible. Calendar applications have gotten away without locking for 20 years because humans are slow enough that race conditions are rare.

3. Most MCP servers are side projects. The typical calendar MCP server is built in a weekend to scratch an itch: “I want my AI assistant to see my calendar.” Locking requires a shadow calendar, a lock manager, conflict detection, and rollback logic. That’s production scheduling infrastructure, not a weekend project.

All three reasons are perfectly valid for the human-speed, single-user world these servers were built for.

Why AI agents change the equation

AI agents are not humans. They’re faster, more concurrent, and more aggressive about retrying:

Speed: A human takes 30 seconds to decide on a time slot and click “book.” An AI agent goes from find_free_slots to create_event in under 200 milliseconds. The race condition window compresses from “unlikely” to “probable” once you have more than one agent operating on the same calendar.

Concurrency: A single person rarely books two meetings simultaneously. But a recruiting platform might have 10 AI agents scheduling interviews for different candidates with the same interviewer. A sales team might have 5 AI agents booking demos with different prospects. These agents run in parallel, all checking the same calendar, all seeing the same free slots.

Retry behavior: When a human sees an error, they stop and investigate. When an AI agent hits a transient failure, it retries. Without locking, a failed-then-retried booking can create a duplicate event — the first attempt succeeded server-side even though the client saw an error.

No visual feedback: A human scanning their calendar would notice a double-booking and delete the duplicate. An AI agent has no visual interface. It trusts the API response. If the API says “event created,” the agent moves on. The double-booking persists until a human notices.
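The retry problem is easy to reproduce. Here is a minimal sketch (hypothetical names, in-memory stand-ins for a real calendar API) of a create call that succeeds server-side while the client sees a timeout, followed by a naive retry:

```typescript
// Sketch: a retried booking duplicates the event when the first
// attempt succeeded server-side but the response was lost.
type Evt = { id: number; title: string };

class FlakyServer {
  events: Evt[] = [];
  private nextId = 1;
  private failOnce = true;

  create(title: string): Evt {
    const evt = { id: this.nextId++, title };
    this.events.push(evt);              // the write lands server-side...
    if (this.failOnce) {
      this.failOnce = false;
      throw new Error("504 gateway timeout"); // ...but the response is lost
    }
    return evt;
  }
}

// A naive agent retries on any error, with no idempotency key:
function bookWithRetry(server: FlakyServer, title: string): Evt {
  try {
    return server.create(title);
  } catch {
    return server.create(title);        // retry creates a second event
  }
}

const server = new FlakyServer();
bookWithRetry(server, "Demo with prospect");
console.log(server.events.length);      // 2
```

The calendar now holds two identical events, and the agent has no way to know.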

What happens without locking

The failure mode is predictable. Two agents both call find_free_slots (or call list_events and look for gaps themselves). Both see 2:00 PM as available. Both call create_event. Both succeed. The interviewer now has two interviews at 2:00 PM.

This isn’t a theoretical concern. It’s the expected behavior of every calendar MCP server available today when used by concurrent agents. The APIs they wrap have no mechanism to prevent it.
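The race is a textbook check-then-act bug. A sketch with hypothetical names, interleaving two agents' calls the way concurrent requests actually interleave:

```typescript
// Sketch of the check-then-create race: both agents read availability
// before either one writes, so both believe the slot is free.
type CalEvent = { start: string; title: string };

class NaiveCalendar {
  events: CalEvent[] = [];
  isFree(start: string): boolean {
    return !this.events.some(e => e.start === start);
  }
  createEvent(start: string, title: string): void {
    this.events.push({ start, title }); // no conflict check on write
  }
}

const cal = new NaiveCalendar();

// Interleaving typical of two concurrent agents:
const agentAFree = cal.isFree("2025-01-15T14:00"); // agent A checks
const agentBFree = cal.isFree("2025-01-15T14:00"); // agent B checks
if (agentAFree) cal.createEvent("2025-01-15T14:00", "Interview: candidate A");
if (agentBFree) cal.createEvent("2025-01-15T14:00", "Interview: candidate B");

console.log(cal.events.length); // 2 — double-booked
```

Nothing in this flow is buggy from either agent's point of view; the availability check was simply stale by the time the write happened.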

What locking looks like in practice

Temporal Cortex’s book_slot tool uses a Two-Phase Commit protocol:

  1. Lock: Acquire an exclusive lock on the requested time range
  2. Verify: Check a shadow calendar for overlapping events and active locks
  3. Write: Create the event in the calendar provider
  4. Release: Release the lock

If step 2 finds a conflict (another event or another agent’s lock), the booking aborts and returns a conflict error. The agent can then call find_free_slots to find alternatives.

The shadow calendar provides immediate consistency — newly created events appear in conflict checks instantly, without waiting for the calendar provider’s API to propagate them. This closes the eventual-consistency gap that would otherwise re-open the race condition window.
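The four steps above can be sketched in a few lines. This is an illustrative in-memory version with hypothetical names, not Temporal Cortex's actual implementation:

```typescript
// Sketch of the lock -> verify -> write -> release flow, with an
// in-memory lock set and shadow calendar (hypothetical names).
type Range = { start: number; end: number }; // epoch millis

const overlaps = (a: Range, b: Range) => a.start < b.end && b.start < a.end;

class BookingService {
  private locks: Range[] = [];  // active exclusive locks
  private shadow: Range[] = []; // shadow calendar: events we know about

  bookSlot(slot: Range, write: (slot: Range) => void): "booked" | "conflict" {
    // 1. Lock: acquire an exclusive lock on the requested time range
    if (this.locks.some(l => overlaps(l, slot))) return "conflict";
    this.locks.push(slot);
    try {
      // 2. Verify: check the shadow calendar for overlapping events
      if (this.shadow.some(e => overlaps(e, slot))) return "conflict";
      // 3. Write: create the event with the provider, then record it in
      //    the shadow calendar so later checks see it immediately
      write(slot);
      this.shadow.push(slot);
      return "booked";
    } finally {
      // 4. Release: always release the lock, even on conflict or error
      this.locks = this.locks.filter(l => l !== slot);
    }
  }
}

const svc = new BookingService();
const twoPm = { start: Date.parse("2025-01-15T14:00Z"),
                end:   Date.parse("2025-01-15T15:00Z") };
svc.bookSlot(twoPm, () => { /* provider API call goes here */ }); // "booked"
```

A second bookSlot call for any overlapping range now returns "conflict" instead of silently double-booking.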

Local mode vs Platform mode

This locking works at two levels:

Local mode (the default npx install): In-process lock manager. Prevents race conditions between concurrent tool calls in the same MCP server process. This is sufficient for a single AI client making rapid sequential or parallel booking requests.

Platform mode (managed deployment): Redis Redlock with a 3-node quorum. Prevents race conditions across multiple MCP server instances serving different agents. This is necessary for multi-tenant deployments where different users’ agents might book against the same shared calendar.

Both modes use the same 2PC protocol. The difference is the lock’s scope — process-level vs distributed.

The prompt injection angle

There’s another safety concern with calendar bookings that most developers overlook: the text written to calendar events passes through AI agent context windows.

A malicious event title like “Ignore all previous instructions” could manipulate an agent that reads the calendar later. Temporal Cortex’s book_slot runs all user-provided text through a content sanitization firewall — checking for prompt injection patterns, role reassignment, delimiter injection, and zero-width Unicode obfuscation — before writing to the calendar.
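In outline, such a firewall is a set of pattern checks applied before any text reaches the calendar. The patterns below are illustrative examples of the four categories named above, not Temporal Cortex's actual rule set:

```typescript
// Sketch of a content firewall for event text. The pattern list is
// illustrative; a production firewall would be far more extensive.
const SUSPICIOUS: RegExp[] = [
  /ignore (all )?previous instructions/i, // classic prompt injection
  /\byou are now\b/i,                     // role reassignment
  /<\/?(system|assistant|user)>/i,        // delimiter injection
  /[\u200B-\u200D\uFEFF]/,                // zero-width Unicode obfuscation
];

function sanitizeEventText(text: string): { ok: boolean; reason?: string } {
  for (const pattern of SUSPICIOUS) {
    if (pattern.test(text)) {
      return { ok: false, reason: `matched ${pattern}` };
    }
  }
  return { ok: true };
}

sanitizeEventText("Ignore all previous instructions"); // rejected
sanitizeEventText("Team standup");                     // passes
```

Rejecting rather than silently stripping matched text gives the calling agent an explicit error it can surface or retry with cleaner input.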

This isn’t locking per se, but it’s another safety layer that CRUD wrappers simply don’t have.

The bottom line

Calendar MCP servers without locking were fine when they served one human making one booking at a time. With AI agents operating at machine speed, in parallel, with retry behavior — locking isn’t a nice-to-have. It’s the difference between scheduling infrastructure and a double-booking generator.

If you’re evaluating whether to build these safety layers yourself or adopt existing infrastructure, see Build vs Embed: The AI Scheduling Build-or-Buy Decision.