
The Logic Runtime for AI Agents

Move beyond stochastic prompting. Compile your agent specifications into deterministic, sandboxed logic programs that run exactly as defined.

quick-start.sh
$ npm install -g deepclause-sdk
$ deepclause init
# Write your spec in markdown...
$ cat > specs/research.md << 'EOF'
> # Research Assistant
> Explain concepts using verified sources.
> EOF
$ deepclause compile specs/research.md
✓ Compiled to specs/research.dml (Logic Program)
$ deepclause run specs/research.dml "Neurosymbolic AI"

Spec-Driven Development

Most AI tools act as "wrappers" around prompts. When they fail, you tweak the prompt. DeepClause treats specifications as source code. We compile your intent into DML (DeepClause Meta Language), providing guarantees that pure LLMs cannot.

Fragile Prompting

  • ✗ "Hoping" the model follows instructions
  • ✗ Linear execution (or messy loops)
  • ✗ Difficult to debug or inspect state
  • ✗ No isolation between sub-tasks

Compiled Logic

  • ✓ Guaranteed control flow via Prolog
  • ✓ Backtracking & auto-retries
  • ✓ Inspectable execution trace
  • ✓ Sandboxed tools & context isolation

Core Capabilities

🔒

Sandboxed by Default

Tools run in AgentVM, a lightweight WASM-based Linux environment. Zero native code execution risk.

🔄

Backtracking Logic

Define multiple approaches. If one fails, the runtime intelligently backtracks and tries the next branch. No if/else spaghetti.

🧠

Neurosymbolic

Combine the semantic power of LLMs with the rigorous correctness of symbolic logic. Use @("...") for neural predicates.

📦

Tool Scoping

Strictly limit which tools are available to specific sub-tasks. Prevent an agent from reading files when it should only use simple-search.
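The @("...") form from the Neurosymbolic card embeds an LLM judgment directly in a logic rule: the goal succeeds or fails based on the model's evaluation, so a failed check triggers ordinary Prolog backtracking. A minimal sketch, assuming @("...") behaves this way; the is_factual rule name is hypothetical, while agent_main and answer come from DeepClause's own examples:

neural.dml
% A neural predicate as a logical guard: the rule succeeds
% only if the LLM judges the claim to be factual.
is_factual(Claim) :-
    @("Is the following claim factually accurate? {Claim}").

agent_main(Claim) :-
    is_factual(Claim),
    answer("Verified: {Claim}").

% Fallback clause if the neural check fails.
agent_main(Claim) :-
    answer("Could not verify: {Claim}").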

Understanding DML

DeepClause Meta Language (DML) is a Prolog-based dialect designed for AI workflows. It allows precise control over agent behavior.

1. Automatic Retry with Backtracking

In DML, you define what should happen, not just a linear script. If the first method fails (e.g., validate_answer rejects the output), the runtime automatically resets state and tries the next defined clause.

Note how validate_answer acts as a gatekeeper. Pure LLMs often ignore such constraints; logic enforces them.

logic.dml
% Attempt 1: Fast & Concise
agent_main(Q) :-
    system("Answer concisely."),
    task("Answer: {Q}"),
    validate_answer, % Fails if answer is poor
    answer("Done").

% Attempt 2: Thorough (Fallback)
agent_main(Q) :-
    system("Be detailed."),
    task("Deep research: {Q}"),
    answer("Done").
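The gatekeeper itself can be defined in DML. One possible shape, assuming validate_answer is an ordinary predicate and that @("...") neural predicates fail when the model's judgment is negative (this sketch is an illustration, not DeepClause's actual definition):

validate.dml
% Sketch of the gatekeeper: succeeds only if a neural
% predicate accepts the answer; otherwise it fails and
% the runtime backtracks to the next agent_main clause.
validate_answer :-
    @("Is the answer above accurate, well-sourced, and complete?").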

2. Isolated Contexts

Standard agents pollute their context window with every step. DML allows you to spawn isolated sub-tasks using prompt/N that don't share history.

Use this for unbiased critiques or parallel exploration where one thought path shouldn't influence another.

isolation.dml
agent_main(Topic) :-
    system("You are a researcher."),
    task("Research {Topic}", Findings),

    % Fresh context - no bias from main chat
    prompt("Critique this: {Findings}", Critique),

    % Back to main context to resolve
    task("Fix issues: {Critique}").