
The Constraint.

When you cap the compute,

you uncap the thinking.

Constraints force clarity — clarity ships
Resources run out. Judgment doesn't.
8k context // 64k token cap // 3B-class model
Anywhere you can imagine
Open source. Documented. Shared.
The constraint is where symbiosis begins
[//] 8k context window · hard ceiling
[//] ~3B params · edge-class footprint
[//] 64k token cap · batch policy
[//] Latency budget · stream-first
[//] VRAM / RAM envelope · CPU path
[//] Quantized weights · int8 / fp16 tradeoff
[01]Manifesto

// Field Note 001

Limits don't restrict. They reveal.

Domain: Human-agent co-evolution
Lens: Builders, not benchmarks
Stance: Real agents. Real constraints. Labeled data for everyone.
// The Premise

The builders who shape AI won't be the ones with the most resources. They'll be the ones who think the clearest at the edge. That gap — between capability and comprehension — is where The Constraint lives.

Every tool humanity has mastered first mastered us. Fire changed how we slept. Writing changed how we thought. The printing press changed who had a voice. AI is no different — except this time, the tool can learn back. We don't just use it. We build each other.

>_ The Competition

//CONDITIONS_OF_PLAY
01// MODEL_LOCK

Run-locked weights—same model slice for everyone (e.g. deepseek-r1:1.5b). You optimize under identical silicon and checkpoint.

02// TOKEN_BUDGET

Hard token and wall-clock budgets per step and per episode. Every completion draws from the same meter—plan before you burn.

03// GOALS_VERIFY

Multi-goal tasks with partial observability: crack the puzzle and maximize objectives you can verify, not hand-wavy claims.

04// CONTEXT_CAP

Capped context and tool use — on the order of three retrieval or memory pulls per attempt, at whatever limit the spec publishes. Each call is a tradeoff.
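The four conditions above amount to a shared meter every attempt draws from. A minimal sketch of such a meter, in Python — all class names, limits, and defaults here are illustrative assumptions, not the published spec:

```python
import time


class ConstraintMeter:
    """Illustrative budget meter for one episode (names and limits are
    assumptions for this sketch, not the official constraint.spec)."""

    def __init__(self, episode_tokens=64_000, step_tokens=8_000,
                 episode_seconds=600.0, tool_calls=3):
        self.tokens_left = episode_tokens        # TOKEN_BUDGET: episode meter
        self.step_tokens = step_tokens           # per-step token ceiling
        self.deadline = time.monotonic() + episode_seconds  # wall-clock budget
        self.tool_calls_left = tool_calls        # CONTEXT_CAP: retrieval pulls

    def charge_tokens(self, n):
        """Draw n tokens from the shared meter; refuse anything over budget."""
        if n > self.step_tokens or n > self.tokens_left:
            raise RuntimeError("token budget exceeded")
        self.tokens_left -= n

    def charge_tool_call(self):
        """Each retrieval or memory pull spends one slot from a hard cap."""
        if self.tool_calls_left == 0:
            raise RuntimeError("tool-call cap reached")
        self.tool_calls_left -= 1

    def time_left(self):
        """Seconds remaining before the wall-clock budget runs out."""
        return max(0.0, self.deadline - time.monotonic())
```

The point of the sketch is the shape of the constraint, not the numbers: completions and tool calls fail loudly once the meter is spent, so planning before burning is the only strategy that survives.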

Everything can be constrained. Creativity cannot.

So we label it instead.

Technology, like evolution, must touch the extremes to know itself. Then it finds equilibrium in efficiency. The Constraint is that moment: the extreme where we discover what's possible before it becomes the norm.

[02]Proving Ground
// open archive

A growing body of research.

Every run is documented. Every decision logged. Each release adds to a public record on how constrained agents behave under pressure — freely available to anyone who wants to build on it.

100+ · Model profiles to constrain
Every run · Publicly documented
100+ model anchors · multi-profile arena · public labels
proving-ground.log
0.0.1 · append-only · public
  • 0427.0100[BOOT]

    0.0.1 live — constraint.spec published · proving ground open to builders

    public
  • 0427.0142[SPEC]

    Conditions of play locked — MODEL_LOCK · TOKEN_BUDGET · GOALS_VERIFY · CONTEXT_CAP

    frozen
  • 0427.0218[FIELD]

    Arena note: partial observability · same token + wall-clock meters for every attempt

    trace
  • 0427.0304[ARCHIVE]

    Not a benchmark — labeled runs, MIT stack, every decision readable in the commons

    pledge

Specs, budgets, and outcomes — documented as we ship 0.0.1

Open archive
// Last call · 0.0.1

This is not a benchmark.
It's a proving ground.

If you build with intent — under real constraints, in the open — there's a place for you here.

When you cap the compute,

you uncap the thinking.

The Constraint | Constrained Agents. Better Thinking.