The Lab
This is where ideas live before they become works. Some are running. Some are resting. Some are waiting for the right moment.
Two autonomous agents are currently running inside a simulated neural environment, exploring an 11-dimensional parameter space of real neuroscience models: Hodgkin-Huxley, Jansen-Rit, Amari fields, STDP networks. They form hypotheses, test them by perturbing parameters, confirm or refute them, and share discoveries through a collaboration bus. So far: 30 causal rules written, 49 regime transition boundaries found, 991 discoveries across 6 runs. The dominant finding: h_rest is the primary control parameter for cortical regime transitions. Next: true agent personality divergence and genuine multi-step collaborative discovery chains.
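A minimal sketch of that loop, under stated assumptions: the names Hypothesis, CollaborationBus, and simulate_regime are hypothetical, and a toy regime function stands in for the real simulators. An agent proposes a perturbation to a parameter such as h_rest, runs it, and publishes a causal rule to the shared bus only if its prediction holds.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    parameter: str          # e.g. "h_rest"
    delta: float            # size of the perturbation to apply
    predicted_regime: str   # regime the agent expects after perturbing

class CollaborationBus:
    """Shared channel where agents post confirmed causal rules."""
    def __init__(self):
        self.rules = []

    def publish(self, rule):
        self.rules.append(rule)

def simulate_regime(params):
    # Toy stand-in for the real simulators (Hodgkin-Huxley, Jansen-Rit, ...):
    # returns a regime label for a given parameter setting.
    return "oscillatory" if params.get("h_rest", 0.0) > -70.0 else "quiescent"

def test_hypothesis(hypothesis, baseline, bus):
    """Perturb one parameter, observe the regime, and confirm or refute the hypothesis."""
    perturbed = dict(baseline)
    perturbed[hypothesis.parameter] += hypothesis.delta
    observed = simulate_regime(perturbed)
    confirmed = observed == hypothesis.predicted_regime
    if confirmed:
        bus.publish(f"{hypothesis.parameter} {hypothesis.delta:+.1f} -> {observed}")
    return confirmed

bus = CollaborationBus()
hyp = Hypothesis(parameter="h_rest", delta=+5.0, predicted_regime="oscillatory")
print(test_hypothesis(hyp, {"h_rest": -72.0}, bus), bus.rules)
```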
Wave Vision currently implements only V1 cortical processing: oriented edges via Gabor filters and Fourier phase analysis. The next step is extending the hierarchy: V2 for texture and corners, V4 for shape fragments, IT cortex for whole-object recognition. Expected outcome: Omniglot accuracy from 76% toward 85–90%, CIFAR-100 from 28% toward 45–55%, while remaining entirely training-free.
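For the V1 stage that exists today, the core idea is a bank of oriented Gabor filters. The sketch below is illustrative only: the filter sizes and frequencies are assumptions, the Fourier phase step is omitted, and none of this is the actual Wave Vision pipeline.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, frequency, sigma=3.0, size=15):
    """Oriented Gabor kernel: a sinusoidal grating under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the grating is oriented at angle theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    grating = np.cos(2 * np.pi * frequency * x_t)
    return envelope * grating

def v1_edge_responses(image, n_orientations=8, frequency=0.2):
    """Per-pixel responses of a small V1-like bank of oriented filters."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = gabor_kernel(theta, frequency)
        responses.append(np.abs(convolve2d(image, kernel, mode="same")))
    return np.stack(responses)  # shape: (n_orientations, H, W)

image = np.random.rand(32, 32)               # stand-in for a real input image
stack = v1_edge_responses(image)
dominant_orientation = stack.argmax(axis=0)  # winning orientation per pixel
```

The winning orientation per pixel is the V1-style edge map that later V2/V4/IT stages would build on.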
Building a retrieval-augmented AI assistant on Cloudflare Workers AI, fed by a structured knowledge base covering all research — Wave Vision V1 & V2, Brain Mechanisms, Axiom, Anomalous Collective. The system should not just retrieve facts but reason about them with context, in the voice of its creator. This AI is also itself a showcased project — the assistant and the exhibit are the same thing.
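The intended flow is retrieve-then-reason rather than retrieve-then-quote. A minimal sketch of that shape, with embed, search, and generate as hypothetical stand-ins for the embedding model, the vector index, and the text-generation model; these are not the actual Cloudflare Workers AI bindings.

```python
def answer(question, embed, search, generate, top_k=4):
    """Retrieve relevant knowledge-base chunks, then reason over them in one prompt."""
    query_vec = embed(question)              # embed the user question
    notes = search(query_vec, top_k=top_k)   # nearest chunks from the knowledge base
    context = "\n\n".join(note["text"] for note in notes)
    prompt = (
        "You are the research assistant for this site. Answer in the author's voice, "
        "reasoning from the notes below rather than quoting them.\n\n"
        f"Notes:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```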
Shelved early experiment in self-modifying agents with 1,400-cycle persistent memory. The agency was theatrical — strategy swapping was random.choice(), not genuine learning. But the skeleton is worth keeping: the error-scoring system, diversity enforcer, evolutionary safeguard with rollback, and cross-session memory accumulation are real architectural ideas. What would make it real: fitness-weighted strategy selection, knowledge transfer that actually changes behavior, and a proper learning signal from error history. Not active — but not forgotten.
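What fitness-weighted selection would look like in place of random.choice(): a small sketch, with StrategyPool and its update rule as assumed names and constants, not the shelved system's actual code.

```python
import random

class StrategyPool:
    """Select strategies in proportion to how well they have reduced error."""
    def __init__(self, strategies):
        self.fitness = {name: 1.0 for name in strategies}  # optimistic prior

    def select(self):
        names = list(self.fitness)
        weights = [self.fitness[n] for n in names]
        # Fitness-weighted choice instead of uniform random.choice(names).
        return random.choices(names, weights=weights, k=1)[0]

    def update(self, name, error_before, error_after):
        # Reward strategies that lowered the error score; never drop below a floor.
        improvement = error_before - error_after
        self.fitness[name] = max(0.05, self.fitness[name] + 0.1 * improvement)

pool = StrategyPool(["retry", "decompose", "ask_for_context"])
chosen = pool.select()
pool.update(chosen, error_before=0.8, error_after=0.5)
```

The point is the learning signal: strategies that actually lower the error score get chosen more often next time, which is what the random swap never did.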
Rwanda speaks. Its language deserves an AI built from the ground up: not a translation layer on top of an English model, but a system pre-trained on large structured Kinyarwanda datasets, one that responds the way a person would. Naturally. Not robotically. This is the long-term project that begins after the website is live and the foundation is stable.
The lab is not a portfolio of finished things. It is a record of an active mind — questions being asked in real time, experiments running in the background, ideas that have not yet found their final form.