6 min read · Billy Lui

Why We Open-Sourced Our Time Engine but Not Our Safety Layer

When we started Temporal Cortex, we had a choice every infrastructure company faces: what do you open-source, and what do you keep commercial?

The default answer in developer tools is “open-source the core, monetize the cloud.” But that framing is too vague. “The core” can mean anything. We needed a sharper principle, and we found one: open the computation, commercialize the coordination.

The trust argument for open computation

The Truth Engine determines when your meetings happen. It expands recurring event rules, converts timezones, calculates durations across DST boundaries, and resolves natural language time expressions into precise timestamps. If it’s wrong, your meetings are wrong.

You shouldn’t have to trust us on this. You should be able to read the code, run the tests, and verify the output yourself.

This is the same argument that drives open-source cryptography. Nobody trusts a proprietary encryption algorithm — the security community established decades ago that algorithms must be public and auditable. The secret is the key, not the method.

For temporal computation, the “key” is your calendar data. The “method” is the RRULE expansion algorithm, the timezone database lookup, the DST transition logic. The method should be public. The 9,000+ property-based tests should be runnable by anyone. The edge cases we handle (and the ones we don’t yet) should be visible to every developer who depends on this code.
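To see why DST logic deserves public scrutiny, consider a duration that crosses a spring-forward boundary. A minimal Python sketch using only the standard library (not the Truth Engine itself):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# On 2025-03-09, clocks in New York jump from 02:00 EST to 03:00 EDT.
start = datetime(2025, 3, 9, 1, 0, tzinfo=ny)
end = datetime(2025, 3, 9, 3, 0, tzinfo=ny)

# When both datetimes share a tzinfo, Python subtracts the naive values:
wall = end - start  # 2 hours of wall-clock time

# Converting to UTC first yields the real elapsed duration:
absolute = end.astimezone(timezone.utc) - start.astimezone(timezone.utc)  # 1 hour
```

The two answers differ by exactly the skipped hour. Which one a scheduling system should return depends on context, and that is precisely the kind of decision a developer should be able to audit in the open source rather than take on faith.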

What’s open

Truth Engine — The Rust library that powers all temporal computation. RRULE expansion, timezone conversion, duration calculation, DST handling. Available on crates.io, npm (via WASM), and PyPI (via PyO3). MIT/Apache-2.0 dual licensed.

TOON — The token-efficient data format for AI agent communication. Encoder, decoder, CLI tools, and specification. Rust core with JavaScript and Python bindings.

MCP documentation and configuration — The public MCP repository contains the full tool reference, directory configurations, Docker setup, and issue tracker. Anyone can see exactly what the MCP server does and how to configure it.

Agent Skills — The procedural knowledge that teaches AI agents the correct scheduling workflow. Four skills covering temporal context, calendar operations, and scheduling. Published on ClawHub, compatible with 26+ agent platforms.

All of this lives in three public repositories under the temporal-cortex GitHub organization: Core, MCP (docs), and Skills.

What’s commercial

Safety Layer — Content filtering and prompt injection detection for calendar events. When an AI agent writes to a calendar, the event title and description become part of future agents’ context. A malicious event title can be a prompt injection vector. The Safety Layer catches these before they’re written. This includes pattern detection for system instruction overrides, role reassignment, delimiter injection, and Unicode obfuscation.
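The real detection rules are part of the commercial product, but the general shape of such a filter can be sketched with a few illustrative (and deliberately simplistic) patterns:

```python
import re
import unicodedata

# Illustrative patterns only; the Safety Layer's actual rules are proprietary
# and far more extensive than this sketch.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior|above) instructions", re.I),  # instruction override
    re.compile(r"\byou are now\b", re.I),                                    # role reassignment
    re.compile(r"<\|im_start\|>|\[/?INST\]"),                                # delimiter injection
]

def is_suspicious(event_text: str) -> bool:
    """Flag an event title/description before it enters an agent's context."""
    # NFKC normalization folds many Unicode look-alikes (fullwidth letters,
    # compatibility characters) back to ASCII before pattern matching.
    text = unicodedata.normalize("NFKC", event_text)
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

A static pattern list like this is easy to evade, which is part of the argument for a commercially maintained layer: the patterns have to evolve as fast as the attacks do.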

Metering — Usage tracking, rate limiting, and billing integration for multi-tenant deployments. Per-tool call counting, tenant isolation, and quota enforcement.
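As a rough sketch of what per-tenant quota enforcement involves (a toy token bucket, not the Platform's implementation):

```python
import time

class TokenBucket:
    """Toy rate limiter: refills `rate` tokens/second up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Tenant isolation in miniature: one bucket per tenant id.
buckets: dict[str, TokenBucket] = {}

def check_quota(tenant_id: str) -> bool:
    bucket = buckets.setdefault(tenant_id, TokenBucket(rate=5.0, capacity=10))
    return bucket.allow()
```

A production version also has to persist counters for billing, survive restarts, and coordinate across server instances, which is where the ongoing operational cost comes from.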

API server — The managed API at api.temporal-cortex.com. Authentication (Clerk JWT + API keys), tenant provisioning, provider OAuth management, and the scheduling endpoints that power public booking pages.

Portal — The web dashboard where teams manage their calendars, providers, API keys, and scheduling rules.

All of this lives in a private repository: Platform.

Why this boundary

The boundary isn’t arbitrary. It follows a principle: things that need trust should be open; things that need sustainability should be commercial.

Temporal computation needs trust. If a developer can’t verify that FREQ=WEEKLY;INTERVAL=2;BYDAY=TU,TH expands correctly, they can’t build on it. The Truth Engine being open-source isn’t generosity — it’s a prerequisite for adoption.
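For intuition about what that verification involves, here is a toy stdlib expansion of just this one rule shape. This is not the Truth Engine's implementation, which handles the full RFC 5545 grammar; it only shows why the week-anchoring and interval logic are worth auditing:

```python
from datetime import date, timedelta

def expand_biweekly(dtstart: date, byday: list[int], count: int) -> list[date]:
    """Toy expansion of FREQ=WEEKLY;INTERVAL=2;BYDAY=... (illustration only)."""
    # Anchor on the Monday of dtstart's week (RFC 5545 default WKST=MO).
    week_start = dtstart - timedelta(days=dtstart.weekday())
    occurrences: list[date] = []
    week = 0
    while len(occurrences) < count:
        base = week_start + timedelta(weeks=2 * week)  # INTERVAL=2: every other week
        for weekday in sorted(byday):
            d = base + timedelta(days=weekday)
            if d >= dtstart and len(occurrences) < count:
                occurrences.append(d)
        week += 1
    return occurrences

# BYDAY=TU,TH with Monday=0 weekday numbering; 2025-01-07 is a Tuesday.
occ = expand_biweekly(date(2025, 1, 7), byday=[1, 3], count=5)
# → Jan 7, Jan 9, Jan 21, Jan 23, Feb 4
```

Even this simplified version embeds decisions (week start day, how the interval anchors to dtstart) that differ across calendar implementations, which is exactly why the open test suite matters.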

The Safety Layer, metering, and multi-tenant infrastructure need sustainability. These are operationally expensive to maintain: security patterns evolve, rate limiting requires continuous tuning, and managed infrastructure has real hosting costs. Making these commercial funds their ongoing development.

This is the same boundary that PostHog, GitLab, and Supabase drew. PostHog open-sources the analytics engine and commercializes the managed platform. GitLab open-sources the CI/CD core and commercializes security scanning and compliance. Supabase open-sources the Postgres tooling and commercializes the hosted platform with auth and storage.

In each case, the open layer creates trust and adoption. The commercial layer creates sustainability. Neither works without the other. A closed computation engine that nobody can audit won't get adopted; an all-open project with no revenue model won't get maintained.

Where contributions are welcome

The open repositories accept contributions, and there are specific areas where community input is particularly valuable.

Truth Engine edge cases — RRULE expansion has a long tail of unusual combinations. If you find an RRULE that the Truth Engine handles incorrectly, that’s a high-value bug report. Even better: a failing test case that demonstrates the expected behavior.

TOON format optimizations — The format specification is stable, but there are always opportunities to reduce token counts further for specific data shapes. Benchmarks comparing TOON vs JSON token counts across different calendar response types are useful.

Agent Skills for new platforms — The Skills follow the open Agent Skills specification. If you’re using an agent platform that isn’t covered by the existing configurations, a new skill preset is a welcome contribution.

MCP directory configurations — The MCP docs repo contains configurations for Smithery, Glama, and other MCP directories. New directory support or updated configurations help with discoverability.

Bug reports with reproduction steps — For any of the open repositories. Especially timezone edge cases — there are always more timezone edge cases.

The practical effect

For a developer building a scheduling agent, this boundary means:

  • Zero-cost start: Install the MCP server locally, connect a calendar, build your agent. No account, no API key, no cost. The computation layer is free and open.
  • Inspect everything: If resolve_datetime returns an unexpected result for “third Thursday of next month,” you can read the Rust source, run the test suite, and file a bug with a reproduction case. You’re not debugging a black box.
  • Pay when you scale: When you need multi-tenant safety, rate limiting, managed infrastructure, and a hosted API, the Platform is there. The commercial layer adds coordination on top of the open computation layer — it doesn’t replace it.
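To make the inspection point concrete, here is the kind of computation behind a query like "third Thursday of next month," as a stdlib sketch (a hypothetical helper, not the Truth Engine's resolve_datetime):

```python
from datetime import date, timedelta

def third_thursday_of_next_month(today: date) -> date:
    """Illustrative resolution of 'third Thursday of next month'."""
    # Roll over to January when the current month is December.
    year, month = (today.year + 1, 1) if today.month == 12 else (today.year, today.month + 1)
    first = date(year, month, 1)
    # Days from the 1st to the first Thursday (Thursday = weekday 3, Monday = 0).
    offset = (3 - first.weekday()) % 7
    # First Thursday plus two more weeks is the third Thursday.
    return first + timedelta(days=offset, weeks=2)
```

When a result looks wrong, every step of logic like this is readable in the open Rust source, so a bug report can point at the exact expansion step that disagrees with your expectation.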

The build-vs-embed decision maps directly to this boundary. If you’re at Tier 2 (temporal computation), everything you need is open-source. If you’re at Tier 3 (autonomous booking with safety and multi-tenancy), the commercial Platform adds the infrastructure layer.

Open-core is a commitment, not a marketing term

Open-core done wrong is “source-available with a permissive license on the parts nobody cares about.” Open-core done right is a genuine split: the part that needs community trust is truly open, and the part that funds development is truly commercial.

We chose to put the hardest technical work — 9,000+ property-based tests, RFC 5545 compliance, cross-platform Rust/WASM/Python bindings — on the open side. That’s where scrutiny creates the most value. A community-verified temporal computation engine is more trustworthy than a proprietary one, full stop.

The commercial side funds the people who write those tests, fix those edge cases, and maintain those bindings. Sustainability and openness aren’t in tension. They’re symbiotic. The open code earns the trust. The commercial platform earns the revenue. Both keep the project alive.