
Python SDK Quickstart

The kairos-sdk is the official Python client for the Kairos Recursive Causality Simulator. It is designed specifically for AI Safety researchers, ML engineers, and governance practitioners who need to construct structural risk assessments directly from their Python workflows and Jupyter notebooks.

The SDK and its official AI Safety domain package are available on PyPI.

Install them using your preferred package manager (we recommend uv or pip):

# Install the core SDK and the AI Safety domain vocabulary
pip install kairos-sdk kairos-ai-safety

For Jupyter notebook users, the SDK ships optional extras for plotting and dataframe exports:

pip install "kairos-sdk[plotting,dataframe]"
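Even without the extras, a per-tick series can be written out by hand for later loading into a dataframe. A stdlib-only sketch (the `stability` values below are made up; how you actually extract per-tick values from a trace is covered in the Trace Analysis docs):

```python
import csv

# Made-up per-tick stability values standing in for a real trace export.
stability = [0.91, 0.88, 0.74, 0.41, 0.34]

with open("stability.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["tick", "stability"])
    writer.writerows(enumerate(stability))  # rows of (tick, value)
```

The resulting CSV can then be opened with `pandas.read_csv` or any plotting tool.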

All computation in Kairos happens server-side via the Kairos Nexus. To connect, you need your organization’s API Key.

The easiest way to authenticate is via environment variables. Create a .env file or export these in your terminal:

export KAIROS_URL="https://api.anankelabs.ai"
export KAIROS_API_KEY="krs_your_api_key_here"
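The client reads these variables at construction time. A minimal sketch of the usual resolution order, where explicit arguments win over the environment (`resolve_config` is a hypothetical helper for illustration, not part of the SDK):

```python
import os

def resolve_config(url=None, api_key=None):
    # Hypothetical helper: explicit arguments take precedence over env vars.
    url = url or os.environ.get("KAIROS_URL")
    api_key = api_key or os.environ.get("KAIROS_API_KEY")
    if not api_key:
        raise RuntimeError("Set KAIROS_API_KEY or pass api_key= explicitly")
    return url, api_key

os.environ["KAIROS_URL"] = "https://api.anankelabs.ai"
os.environ["KAIROS_API_KEY"] = "krs_your_api_key_here"
print(resolve_config())  # falls back to the environment
```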

Then, initializing the client requires zero configuration:

from kairos import KairosClient
# Automatically reads KAIROS_URL and KAIROS_API_KEY
client = KairosClient()

In this example, we will run a canonical “Capability-Alignment Divergence” scenario. We will define a frontier model whose capabilities are increasing rapidly while its alignment buffer remains stagnant.

from kairos import KairosClient
from kairos.domains.ai_safety import AISafetyScenario, AISafetyEventType

client = KairosClient()

# 1. Construct the Scenario
scenario = (
    AISafetyScenario("Capability Divergence", seed=42)
    .add_model(
        name="frontier_model",
        capability_index=200,   # High base capability
        alignment_score=75,     # Moderate alignment buffer
        guardrail_coverage=60,
    )
    .add_oversight_body(
        name="safety_board",
        guardrail_strength=70,
        response_latency=30,
    )
    # At tick 100: a sudden, emergent jump in reasoning capabilities
    .add_event(100, AISafetyEventType.CAPABILITY_JUMP, target="frontier_model", magnitude=0.4)
)

# 2. Run the simulation on the Kairos Engine
trace = client.run(scenario, ticks=500)

# 3. Analyze the deterministic trace
print(f"Final Stability: {trace.stability_at(-1):.4f}")
print(f"Phase Transitions: {len(trace.phase_transitions())}")
print(f"Alignment Failures (Basin Losses): {len(trace.basin_losses())}")

# Identify exactly when the system broke down
if trace.basin_losses():
    first_loss = trace.basin_losses()[0]
    print(f"Safety failure occurred at tick {first_loss.tick} with magnitude {first_loss.magnitude}")

Expected output (exact values depend on the engine version):

Final Stability: 0.3421
Phase Transitions: 2
Alignment Failures (Basin Losses): 1
Safety failure occurred at tick 247 with magnitude 0.6133

The scenario started with a frontier model whose capability (capability_index=200) was already outpacing its alignment buffer (alignment_score=75). The oversight board (guardrail_strength=70) provided additional structural resistance, keeping the system stable for the first 100 ticks.

At tick 100, the CAPABILITY_JUMP event with magnitude=0.4 sharply increased the destabilizing pressure on the model. The oversight board’s response_latency=30 meant it couldn’t compensate fast enough — the system transitioned from a resonant (stable) phase to a volatile phase, and eventually suffered an irreversible basin loss where the alignment constraints collapsed entirely.

This is the canonical “capability-alignment divergence” pattern: a model’s capabilities grow faster than the safety infrastructure can adapt.
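The shape of the divergence is easy to reproduce in a toy calculation. This is illustrative arithmetic only, not the Kairos engine's dynamics; the growth rates and the failure threshold below are invented:

```python
# Toy model: capability compounds faster than alignment, and a one-off
# jump at tick 100 mirrors the CAPABILITY_JUMP event above.
capability, alignment = 200.0, 75.0
CAP_GROWTH, ALIGN_GROWTH = 1.004, 1.001  # invented per-tick rates
FAILURE_RATIO = 6.0                      # invented breakdown threshold

failure_tick = None
for tick in range(1, 501):
    capability *= CAP_GROWTH
    alignment *= ALIGN_GROWTH
    if tick == 100:                      # emergent jump, magnitude 0.4
        capability *= 1 + 0.4
    if failure_tick is None and capability / alignment > FAILURE_RATIO:
        failure_tick = tick

print(failure_tick)
```

Before the jump the ratio drifts upward slowly; the jump pushes it close to the threshold, and the compounding gap does the rest. That ordering (slow drift, shock, delayed collapse) is what the trace's phase transitions and basin loss record.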

  • Read the Client Reference to learn about version pinning and asynchronous streaming.
  • Deep dive into the AI Safety Formulation to construct more complex governance scenarios.
  • Explore Trace Analysis to plot and extract data for your papers.
  • Browse the Use Cases & Cookbook for copy-pasteable examples covering multi-model governance, parameter sweeps, and more.