DeckBot: AI presentation assistant

November 30, 2025

Introduction: AI Doesn’t Understand Systems—It Understands Structures

Modern language models do not "understand" a business process, a data center, a workflow platform, a CRM system, or a Kubernetes cluster. They only understand structured representations of those systems.

LLMs manipulate text—especially structured, rule-governed text—with superhuman fluency. This single fact reshapes the entire integration landscape:

If you can express a system as code (or code-like declarative artifacts), then AI can operate on that system with remarkable precision.

This is the core insight behind the Everything as Code paradigm. Today, the biggest engineering wins come from treating code not as a product, but as the universal interface for AI-driven orchestration.

Why Code? The Real Cognitive Interface

Language models are statistical machines that operate on token streams. When we reduce a system to a textual specification, we achieve three multiplier effects:

  1. Machine-operable structure (the model can parse, transform, and generate).
  2. Human reviewability (engineers can diff and reason about changes).
  3. Automatic validation (DSLs, schemas, and test harnesses act as guardrails).

LLMs are terrible at raw, unstructured interaction with complex systems. LLMs are excellent at iteratively editing declarative artifacts, with validation after each step.

This is the same reason Infrastructure-as-Code overtook imperative shell scripting. Declarative representations give both humans and machines a stable substrate for controlled, incremental improvement.
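A minimal sketch of that loop, assuming hypothetical llm and validate callables (your model client and guardrails would stand in for them):

    # Hypothetical edit-validate loop: the model only ever revises a
    # declarative artifact, and each revision must pass validation.
    def refine(artifact: str, goal: str, llm, validate, max_rounds: int = 5) -> str:
        for _ in range(max_rounds):
            proposed = llm(
                f"Goal: {goal}\nCurrent artifact:\n{artifact}\n"
                "Return the full revised artifact."
            )
            errors = validate(proposed)   # schema, parser, or test harness
            if not errors:
                return proposed           # accepted: a validated artifact
            # Feed the errors back so the next attempt can repair them.
            goal = f"{goal}\nFix these validation errors: {errors}"
        raise RuntimeError("no valid revision within the round budget")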

At Anthus OS we treat this substrate as the interface layer itself.

Everything as Code as an Integration Strategy

Our core engineering pattern for integrating AI with legacy systems is:

  1. Extract the system’s observable state into an editable representation
  2. Normalize it into a declarative form
  3. Expose that representation to the language model
  4. Allow the model to propose mutations
  5. Validate those mutations using schemas, interpreters, and tests
  6. Use a virtualization layer to apply the approved changes to the real system

The virtualization layer is the heart of the pattern. It acts as the shell around the kernel—your real system remains untouched until a validated code diff has been approved.

This pattern is simple but profound. It gives us a way to integrate AI with practically anything:

  • Cloud infrastructure
  • Workflow engines
  • CRM and ERP systems
  • Internal tools
  • ML pipelines
  • Knowledge bases
  • API gateways
  • IoT systems
  • Business processes
  • Human-in-the-loop systems

Anything can be turned into a symbolic world the model can live inside.
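In code, the six steps collapse into a thin pipeline. The sketch below is deliberately generic; extract, normalize, validate, and apply are hypothetical stand-ins for the system-specific adapters discussed in the rest of this post:

    # Generic shape of the six-step pattern; every callable is a stand-in
    # for a system-specific adapter.
    def integrate(system, llm, extract, normalize, validate, apply):
        state = extract(system)            # 1. observable state
        artifact = normalize(state)        # 2. declarative form (DSL/YAML/JSON)
        proposal = llm(artifact)           # 3./4. expose it; collect mutations
        errors = validate(proposal)        # 5. schemas, interpreters, tests
        if errors:
            raise ValueError(f"rejected proposal: {errors}")
        return apply(system, proposal)     # 6. virtualization layer applies it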

[Diagram of the Everything-as-Code Architecture: System → Virtualization Layer → DSL/YAML/JSON → LLM Editor → Validation Layer → Real System]

The Token Economy: Why DSLs Matter Even More for AI

Every token in a prompt costs money. Every token in a prompt adds entropy. Every token in a prompt competes against the context window.

A domain-specific language (DSL) is a way of compressing meaning without losing precision.

Instead of feeding the model 4,000 tokens of AWS metadata, we feed it 200 tokens of distilled state expressed in a grammar we control.

Why DSLs amplify LLM performance:

  • They reduce long-range ambiguity
  • They eliminate conversational drift
  • They encode invariants as syntax
  • They force the model into structured edits
  • They enable deterministic validation
  • They radically reduce token footprint
  • They make whole classes of hallucination unrepresentable, and the rest rejectable, by construction

Even YAML and JSON behave as lightweight DSLs. But custom grammars—implemented with tools like ANTLR, YACC, or Lark—give you sharper semantics and greater compression.
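As an illustration, here is a toy operations DSL built with Lark. The grammar and the sample program are invented for this post; the point is how little text a validated edit script needs:

    # A toy operations DSL parsed with Lark (pip install lark).
    from lark import Lark

    GRAMMAR = r"""
        start: stmt+
        stmt: "scale" NAME "to" INT      -> scale
            | "route" NAME "to" NAME     -> route
        NAME: /[a-z][a-z0-9_-]*/
        INT: /[0-9]+/
        %import common.WS
        %ignore WS
    """

    parser = Lark(GRAMMAR)

    # A few dozen tokens of distilled intent instead of a raw metadata dump:
    program = """
    scale checkout-workers to 12
    route payments to eu-west-1
    """

    tree = parser.parse(program)   # raises on anything outside the grammar
    print(tree.pretty())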

[Before and after comparison: Raw system metadata vs. compressed DSL representation]

Context Engineering as an Engineering Discipline

At Anthus we think of context engineering with LLMs the way other companies think about compiler optimization. It is not merely a matter of feeding the model "some context." Context is the harness the model lives inside.

A well-designed harness does things like:

  • Build a view of the world on demand
  • Enforce invariants declaratively
  • Provide goals, constraints, and schemas
  • Replace free-form editing with structured transformations
  • Construct observational context from the real system at call-time

We often generate context dynamically using templates. The model is therefore never flying blind—it is placed into the cockpit with the exact gauges we choose to expose.
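A minimal sketch of that, using the standard library's string.Template; read_world_state is a hypothetical adapter onto the virtualization layer described in the next section:

    # Rebuild the prompt from live state on every call: no stale gauges.
    from string import Template

    PROMPT = Template("""\
    You are editing the deployment spec below. Obey the schema.

    GOALS:
    $goals

    INVARIANTS (must hold after your edit):
    $invariants

    CURRENT STATE (generated at call time):
    $state
    """)

    def build_context(goals, invariants, read_world_state) -> str:
        return PROMPT.substitute(
            goals="\n".join(f"- {g}" for g in goals),
            invariants="\n".join(f"- {i}" for i in invariants),
            state=read_world_state(),   # fresh snapshot of the real system
        )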

You can think of a prompt as a runtime environment for the model.

Everything-as-Code is the mechanism that makes that runtime stable, deterministic, and safe.

The Virtualization Layer: The Hidden Engine

To integrate AI with an existing system, you need a layer that:

  • Reads the real system’s state
  • Encodes it into your DSL (or YAML/JSON)
  • Validates that encoding
  • Exposes it to the model
  • Accepts proposed edits
  • Validates those edits via schema or interpreter
  • Applies the valid diff to the system

This layer is analogous to /proc in Linux: a virtual filesystem that exposes kernel and process state as plain text files. Our work at Anthus OS often involves building this translation layer before we build any agents or workflows.

Once the virtualization layer exists, a model can operate the system like a shell.
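A skeletal version of that layer, with every system-facing method left abstract; none of this is a real Anthus API, just the shape of the pattern:

    # The virtualization layer is the only path from model to system.
    from abc import ABC, abstractmethod

    class VirtualizationLayer(ABC):
        @abstractmethod
        def read_state(self) -> dict: ...            # pull from the real system

        @abstractmethod
        def encode(self, state: dict) -> str: ...    # render the DSL/YAML view

        @abstractmethod
        def validate(self, artifact: str) -> list[str]: ...  # [] means valid

        @abstractmethod
        def apply(self, artifact: str) -> None: ...  # push the approved diff

        def run_edit(self, llm) -> None:
            view = self.encode(self.read_state())    # textual world
            proposal = llm(view)                     # model edits the view
            errors = self.validate(proposal)         # guardrails first
            if errors:
                raise ValueError(f"invalid edit: {errors}")
            self.apply(proposal)                     # only now touch reality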

[Diagram comparing the Linux /proc filesystem to an Anthus-style virtualization layer representing AWS/GCP/CRM process states]

Case Study: DeckBot

We recently built DeckBot, a specialized AI coding assistant that automates the presentation workflow. It combines Marp for declarative slide definitions with Nano Banana Pro 2 for custom visuals.

It is a perfect example of the "Everything as Code" philosophy:

  1. Everything as Code (EaC): Presentations are stored as Markdown files, making them version-control friendly and easy for LLMs to read and edit. Treating slide decks as software projects gives us fine-grained control, meaningful diffs, and a natural way to collaborate with AI coding assistants.

  2. Give an Agent a Tool: Instead of trying to pre-program every possible behavior, we give an AI agent the right tools (file editing, image generation, compilation) and let it figure out how to use them to solve the problem at hand.
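A sketch of what "give an agent a tool" means mechanically. The registry, tool names, and signatures below are illustrative; they are not DeckBot's actual API:

    # Hypothetical tool registry: the agent gets capabilities, not scripts.
    TOOLS = {}

    def tool(fn):
        """Register a function so the agent can discover and call it."""
        TOOLS[fn.__name__] = fn
        return fn

    @tool
    def edit_file(path: str, content: str) -> str:
        with open(path, "w", encoding="utf-8") as f:
            f.write(content)
        return f"wrote {path}"

    @tool
    def generate_image(prompt: str, n: int = 4) -> list[str]:
        """Request n candidate images; returns their file paths."""
        raise NotImplementedError("call your image-generation backend here")

    # The agent loop advertises TOOLS to the model and executes whichever
    # calls it proposes, rather than hard-coding every behavior.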

[DeckBot logo]

While tools like NotebookLM are pushing boundaries with instant deck generation, they often act as black boxes: the result might look great, but you can't easily edit the details later. DeckBot is about ownership and control. Because your presentation is treated as a software project (Markdown source code), you have complete freedom. You can open the files and tweak every detail manually, or ask the assistant to refactor entire sections. You own the source, so you control the output.

Features

  • Everything as Code: Presentations are stored as Markdown files, making them version-control friendly and easy for LLMs to read and edit.
  • AI Coding Assistant: A built-in REPL powered by Google's Gemini models that acts as your pair programmer for slides. It can write content, organize structure, and manage files.
  • Nano Banana Image Generation: Integrated "Nano Banana" (Google Imagen) support. Just ask for an image, and the agent will generate multiple candidates for you to choose from.
  • Interactive Workflow:
    • Chat: Discuss your high-level ideas and let the agent draft the slides.
    • Visualize: Generate custom images on the fly.
    • Preview: Live preview your deck using the Marp CLI.
    • Export: Compile your finished deck to HTML or PDF.
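Because the deck is plain Markdown, the Preview and Export steps reduce to ordinary Marp CLI invocations. A sketch, assuming marp-cli is installed and on PATH:

    # Thin wrappers over the Marp CLI for preview and export.
    import subprocess

    def preview(directory: str = ".") -> None:
        # Serve a live-reloading preview of the decks in the directory.
        subprocess.run(["marp", "--server", directory], check=True)

    def export(deck: str = "slides.md", out: str = "deck.pdf") -> None:
        # Marp infers the output format (HTML, PDF, ...) from the extension.
        subprocess.run(["marp", deck, "-o", out], check=True)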

Behavior Driven Development (BDD)

This project uses Behavior Driven Development (via behave) as a primary tool for collaboration between humans and AI.

  • Human-Readable Specs: BDD feature files (.feature) serve as a clear, unambiguous contract between the user and the AI. They describe what the software should do in plain English.
  • Stability & Reusability: By codifying behavior into tests, we ensure that new features don't break existing ones. This allows us to build reliable, reusable software with the same effort often spent on "throwaway" scripts.
  • AI Alignment: For AI-assisted coding, BDD provides a perfect feedback loop. If the behavior specs pass, the implementation is correct, regardless of the underlying code structure. This allows us to focus on desired outcomes and feature stability rather than getting bogged down in implementation details.
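A sketch of that feedback loop with behave; the scenario and steps below are illustrative rather than DeckBot's real suite:

    # features/steps/deck_steps.py -- illustrative behave step definitions.
    #
    # Matching scenario (features/deck.feature):
    #   Scenario: Adding a slide keeps the deck well-formed
    #     Given a deck with 3 slides
    #     When the assistant appends a slide titled "Roadmap"
    #     Then the deck has 4 slides
    from behave import given, when, then

    @given("a deck with {n:d} slides")
    def step_given_deck(context, n):
        context.deck = ["---"] * n          # minimal stand-in for Marp slides

    @when('the assistant appends a slide titled "{title}"')
    def step_when_append(context, title):
        context.deck.append(f"---\n# {title}")

    @then("the deck has {n:d} slides")
    def step_then_count(context, n):
        assert len(context.deck) == n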

Agents as Code, Workflows as Code, Evaluations as Code

Our internal infrastructure (Anthus OS) follows this philosophy everywhere:

  • Agent definitions → YAML/DSL
  • Scorecard configurations → declarative JSON
  • Dataset definitions → structured metadata files
  • Evaluation harnesses → typed config objects
  • Human feedback loops → symbolic state machines
  • Operational playbooks → structured procedural descriptions

The benefit is not aesthetics. The benefit is that LLMs treat these artifacts as living documents that can be revised, optimized, and extended.

This turns the AI system into something akin to a self-adjusting compiler pipeline. Human engineers set the constraints; the models iterate inside those constraints.
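Concretely, "agent definitions → YAML/DSL" means the YAML is the artifact everyone (including the model) edits, and code merely interprets it. A sketch, assuming PyYAML and invented field names:

    # Hypothetical agent-as-code loader; the YAML is the living document.
    from dataclasses import dataclass
    import yaml  # PyYAML

    AGENT_YAML = """
    name: ticket-triager
    model: gemini-1.5-pro        # illustrative model id
    max_steps: 8
    tools: [read_ticket, label_ticket, escalate]
    """

    @dataclass(frozen=True)
    class AgentSpec:
        name: str
        model: str
        max_steps: int
        tools: list[str]

    def load_agent(text: str) -> AgentSpec:
        spec = AgentSpec(**yaml.safe_load(text))   # TypeError on unknown keys
        assert spec.max_steps > 0, "max_steps must be positive"
        return spec

    print(load_agent(AGENT_YAML))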

Why This Matters for Real AI Engineering

Everything-as-Code transforms AI integration from "ad-hoc model calls" into a software discipline:

  • Repeatability: A given DSL artifact produces a deterministic action.
  • Traceability: Every agent decision is an auditable diff.
  • Safety: Nothing reaches production without validation.
  • Modularity: Representation and execution are decoupled.
  • Scalability: Any subsystem can be exposed to AI by giving it a representation.
  • Performance: Token-efficient grammars dramatically reduce cost.
  • Generalization: Once the pattern exists, you can integrate AI with anything—regardless of legacy constraints.

Failure Modes (And How to Avoid Them)

Everything-as-Code is powerful, but easy to misuse.

1. Representation Drift. If your DSL view drifts from the underlying system, the model optimizes fictional states. Solution: make the virtualization layer authoritative and refresh it on demand.

2. Overpowered DSLs. If your DSL is too expressive, the model becomes dangerous. Solution: restrict the grammar, allow only safe operations, and require tests.

3. Insufficient Validation. LLMs will confidently emit invalid structures. Solution: run schemas, interpreters, and test harnesses before applying changes.

4. Human Unreadability. A DSL that humans can't read will fail culturally. Solution: keep the semantics small, declarative, and diff-friendly.
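For failure mode 3, a schema check is usually the cheapest guardrail. A sketch with the jsonschema package; the schema itself is illustrative:

    # Reject structurally invalid LLM output before it reaches the system.
    from jsonschema import Draft202012Validator

    EDIT_SCHEMA = {
        "type": "object",
        "required": ["op", "target"],
        "additionalProperties": False,
        "properties": {
            "op": {"enum": ["scale", "route"]},   # only safe operations
            "target": {"type": "string"},
            "value": {"type": "integer", "minimum": 0, "maximum": 64},
        },
    }

    validator = Draft202012Validator(EDIT_SCHEMA)

    proposal = {"op": "delete_everything", "target": "prod"}  # confident nonsense
    errors = [e.message for e in validator.iter_errors(proposal)]
    print(errors)   # non-empty, so the edit never touches the real system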

Why This Is Not a Trend—It’s an Operating Principle

The movement toward Everything-as-Code is not a fad. It’s a structural response to the nature of modern AI. It’s the only reliably scalable interface between:

  • the symbolic/textual world
  • the real operational world
  • the model’s reasoning capabilities

By turning systems into representations, we turn them into environments that models can inhabit safely. Once they inhabit those environments, they can reason about them, explore alternatives, optimize, and help us orchestrate change.

This is not "configuration." This is cognitive infrastructure.

Conclusion: Engineering in a Representational World

At Anthus OS AI Solutions, we begin every integration with a single question:

How do we represent this system as code?

From that representation flows:

  • safe mutation
  • orchestration
  • automation
  • observability
  • verification
  • agentic operation
  • higher-order reasoning
  • business process optimization

Everything as Code is not just a technique. It is the operating principle for building AI-driven systems in the real world.

And it’s how we unify the messy, stateful, legacy universe with the elegant, symbolic, iterative world that language models inhabit.