09 · Collaborate

Bring a hard problem.

The lab accepts three external collaborations per semester — industry, government, or academic. Submissions are reviewed against fit, feasibility, and public benefit. We say no often, and we publish why.

  • 3 external slots per semester
  • 14-day review SLA
  • Current cycle open

INTAKE · 2026 Spring (Q2) · closes 2026-05-31 · slots 2 of 3

What we mean by “hard problem”

A hard problem is one where the right answer is not yet known to the lab and where the work would be defensible to a reviewer in five years. Three things make a problem hard for our purposes:

  • The data are real. Synthetic-only problems are interesting but rarely shippable; we want the friction of real measurement.
  • The adversary is credible. A held-out test set is not an adversary. Distributional shift, attackers, biology, or physics are.
  • Public benefit is plausible. The lab does not take work whose only beneficiary is the proposer's competitive moat.

A “no” is information, not a rejection.

When we decline a proposal, we publish a redacted reasoning note. Other groups have used these to refine their own proposals or to pick up problems we passed on.

What a collaboration buys you

Accepting a collaboration is not a soft handshake — it is a commitment of compute, reviewer time, and publication runway. A successful proposal receives:

  • Compute allocation: 90 days on aurora-n7 with a dedicated quota. Continuation is reviewed at the 60-day mark.
  • Reviewer rotation: two reviewers from the relevant working group are attached to the project for its duration. Reviews are content-addressed (a sketch of what that can mean in practice follows this list).
  • Replication harness: your work runs through the same chain of custody as internal projects. Every figure ships with a replicate.sh.
  • Publication runway: if the work converges, the lab co-authors and walks the submission through peer review with the same standards as internal output.
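
"Content-addressed" presumably means the identifier of a review is derived from its content, so a note cannot be quietly edited after the fact without every reference to it breaking. The minimal Python sketch below illustrates the idea; the hash choice, field names, and workflow are assumptions for illustration, not a description of the lab's actual tooling.

    import hashlib
    import json

    def content_address(review: dict) -> str:
        """Derive a stable identifier from the review's content alone.

        Illustrative sketch: serialise the note deterministically, then
        hash it, so the identifier changes if and only if the content does.
        """
        canonical = json.dumps(review, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    # Invented example note; field names are hypothetical.
    note = {
        "project": "example-collaboration",
        "reviewer": "working-group-member",
        "verdict": "revise",
        "body": "Success criterion is under-specified; tighten before kickoff.",
    }

    print(content_address(note)[:16])  # a short prefix can serve as a citable review id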

The intake pipeline

The four-stage intake is intentionally short. We do not run a long evaluation — we run a sharp one.

Stage · Duration · Output
01 Inquiry · 1–2 days · A reviewer is assigned; a one-page response confirms whether the fit is plausible.
02 Scoping · 5–7 days · A scoping brief describing the adversary, the data, and the success criterion.
03 Review · ≤ 14 days · The working group runs a fit-and-feasibility review against the scoping brief.
04 Kickoff · 1 week · Compute provisioned, reviewers attached, replication harness opened.
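
Read as calendar time, the published durations bound the whole intake at roughly a month if the stages run back to back. The small Python sketch below is only that arithmetic made explicit, with "1 week" read as 7 days; it is not lab code and assumes no queueing between stages.

    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        max_days: int  # upper bound taken from the intake table

    # Durations copied from the table above; "1 week" read as 7 calendar days.
    PIPELINE = [
        Stage("01 Inquiry", 2),
        Stage("02 Scoping", 7),
        Stage("03 Review", 14),
        Stage("04 Kickoff", 7),
    ]

    worst_case = sum(stage.max_days for stage in PIPELINE)
    print(f"Worst-case intake, stages back to back: {worst_case} days")  # 30 days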

What we will not take

To save proposers time, the lab is explicit about the work it does not accept:

  • Closed-source extensions to closed-source systems. If neither side of the work can be open, the collaboration cannot exit through the chain.
  • Pure benchmark-chasing. A proposal whose only deliverable is a SOTA number on a leaderboard is not a research surface.
  • Predictive policing, mass surveillance, autonomous weapons. The lab does not contribute to systems that surveil populations, automate carceral decisions, or close the loop on use of force.
  • Compliance theatre. Work whose primary purpose is to launder a deployed system through a research signature.

How to open a proposal

The intake door is a single email with a one-page brief. Brevity is rewarded: the working group reviews dozens of these per cycle, and a tight statement of the adversary, the data, and the success criterion will land faster than a thirty-page deck.

Send to scott.weeden@gmail.com with subject line “Research collaboration”. Include the following (a checklist sketch follows this list):

  • The proposing organisation and the named PI.
  • One paragraph on the adversary — why is this problem hard?
  • One paragraph on the data — provenance, scale, access.
  • One paragraph on the success criterion — what would convince you that the work has converged?
  • Public benefit statement — how would this work, if it succeeds, leave the world better than it found it?
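
For proposers who want a pre-send check, the minimal Python sketch below renders the brief as a data structure with one entry per required element; the field names are invented for illustration, and the actual email is still ordinary prose.

    # Hypothetical field names mirroring the list above; not a lab-provided form.
    brief = {
        "organisation": "Example Org",
        "pi": "Dr. Example",
        "adversary": "One paragraph: why is this problem hard?",
        "data": "One paragraph: provenance, scale, access.",
        "success_criterion": "One paragraph: what would convince you the work has converged?",
        "public_benefit": "How would this, if it succeeds, leave the world better than it found it?",
    }

    # Sanity check before sending: every section present and non-empty.
    missing = [section for section, text in brief.items() if not text.strip()]
    assert not missing, f"Brief is missing: {missing}"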

You will hear from a reviewer within two working days, regardless of outcome.

Continue exploring the lab

Open the rest of the research surface.

Each tile of the bento grid is a primitive of the workflow — instrumentation, modelling, peer review, publication. Together they form a transparent chain of custody from raw signal to citable claim.