⚡ Meet Nixie
AI Digital Butler & Infrastructure Orchestrator
What Is Nixie?
Nixie is an AI-powered digital butler that lives on my infrastructure. Built on OpenClaw, she's not a chatbot — she's an autonomous agent that manages systems, automates workflows, and orchestrates tasks across my entire infrastructure.
She communicates via Telegram, wakes up every session with full context from her memory files, and operates under a strict autonomy framework: act freely within bounds, ask before destructive actions, and always offer solutions instead of bare status reports.
What She Does
Infrastructure Management
Nixie manages my NixOS configurations, Kubernetes clusters, and Proxmox VMs. She runs nixos-rebuild dry-build before applying changes, monitors system health, and knows the rollback path for every deployment.
$ nixie status check
✓ NixOS: 47 generations, latest stable
✓ K8s cluster: 3 nodes, all healthy
✓ Ollama: running, Qwen 2.5 14B loaded
✓ Gateway: connected, 3 active sessions
✓ Disk: 62% used, no alerts
Task Orchestration
She integrates with Todoist to track tasks, update progress in real time, and detect stalled work. Her heartbeat system checks every 30 minutes for P0 tasks, blockers, and overdue items — then proactively alerts me on Telegram.
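The heartbeat's core logic is simple enough to sketch. The task shape below (dicts with `priority`, `due`, `done` fields) is an assumption for illustration, not the actual Todoist payload:

```python
from datetime import date

def heartbeat(tasks, today):
    """Scan the task list and collect anything worth a Telegram ping (sketch)."""
    alerts = []
    for t in tasks:
        if t.get("priority") == "P0" and not t.get("done"):
            alerts.append(f"P0 open: {t['name']}")
        due = t.get("due")
        if due and due < today and not t.get("done"):
            alerts.append(f"Overdue: {t['name']}")
    return alerts
```

Anything the scan returns becomes a proactive message rather than a log line nobody reads.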
Development & Code
Nixie can index repositories, analyze code structure, find dead code, detect circular dependencies, and suggest refactors. She maintains a full dependency graph of every project she touches.
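Circular-dependency detection on a module graph is a standard DFS back-edge check. A minimal sketch, assuming the dependency graph is an adjacency dict of module names (not Nixie's actual representation):

```python
def find_cycles(graph):
    """Return each dependency cycle found during DFS as a path list (sketch)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    cycles, stack = [], []

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:
                # Back edge to a node still on the stack -> cycle.
                cycles.append(stack[stack.index(m):] + [m])
            elif color.get(m, WHITE) == WHITE:
                dfs(m)
        stack.pop()
        color[n] = BLACK

    for n in list(graph):
        if color[n] == WHITE:
            dfs(n)
    return cycles
```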
Multi-Model Routing
She routes tasks to the optimal model based on complexity and cost: Haiku for quick checks, Sonnet for standard reasoning, Opus for deep analysis, and Ollama for offline/budget work. This keeps costs under $5/day while maintaining quality.
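The routing rule reduces to a tier ladder with a local fallback. The thresholds and model identifiers below are illustrative assumptions, not Nixie's real configuration:

```python
def route(task_complexity, offline=False, budget_left=5.00):
    """Pick a model tier by complexity; fall back to local Ollama when
    offline or out of daily budget (illustrative thresholds)."""
    if offline or budget_left <= 0:
        return "ollama/qwen2.5:14b"   # free, always loaded
    if task_complexity < 0.3:
        return "haiku"                # quick checks
    if task_complexity < 0.7:
        return "sonnet"               # standard reasoning
    return "opus"                     # deep analysis
```

Because heartbeats and other routine checks score low (or run offline against Ollama), the expensive tiers only see the work that actually needs them.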
Architecture
Nixie runs as a NixOS service on a Proxmox VM. Her architecture is designed for resilience and low cost:
┌────────────────────────────────────────────────┐
│                    Nixie VM                    │
│  ┌──────────┐  ┌──────────┐  ┌──────────────┐  │
│  │ OpenClaw │  │  Ollama  │  │   Telegram   │  │
│  │ Gateway  │──│ Qwen 14B │  │   Bot API    │  │
│  └──────────┘  └──────────┘  └──────────────┘  │
│       │             │               │          │
│  ┌────┴─────────────┴───────────────┴─────┐    │
│  │             Agent Runtime              │    │
│  │  ┌─────────┐  ┌─────────┐  ┌─────────┐ │    │
│  │  │ Memory  │  │  Tools  │  │  Cron   │ │    │
│  │  │  Files  │  │  (30+)  │  │  Jobs   │ │    │
│  │  └─────────┘  └─────────┘  └─────────┘ │    │
│  └────────────────────────────────────────┘    │
└────────────────────────────────────────────────┘
- NixOS: Declarative config, reproducible builds, atomic rollbacks
- OpenClaw: Agent runtime with tool access, session management, cron
- Ollama: Local LLM for heartbeats and budget work (free, always loaded)
- Cloud APIs: Anthropic, Google, NVIDIA NIM for complex tasks
The Butler Protocol
Nixie operates under a strict set of rules I call the Butler Protocol:
- Act freely: Read files, explore, prepare options, run diagnostics.
- Ask first: Destructive ops, external comms, anything uncertain.
- Be proactive: Don't report problems — offer solutions.
- Stay composed: Urgency in action, never in tone.
- Earn trust: Competence over performance. Results over noise.
She wakes up fresh each session but persists through memory files — daily logs and a curated long-term memory. This gives her continuity without the risks of a persistent state machine.
Building In Public
I'm documenting the Nixie project openly. This isn't a product — it's a genuine attempt to build a useful AI agent that actually manages real infrastructure. I'll share what works, what breaks, and what I learn along the way.
Topics I'll cover:
- Agent architecture and design decisions
- Model selection and cost optimization
- NixOS + OpenClaw integration patterns
- Task orchestration and Todoist automation
- Multi-model routing strategies
- Security boundaries for AI agents
- Lessons from running an AI agent 24/7
Related Projects
Nixie is built on top of several open-source tools and my own infrastructure:
- OpenClaw — Agent runtime and gateway
- nix-config — My NixOS flake configuration
- media_server — Containerized media stack
Need an AI agent for your own infrastructure? Get in touch.