Research Papers
22 papers — methodology, data, and conclusions you can verify
Ground: Evidence-Based Claims for AI Code Analysis
Computation-Constrained Verification Prevents False Positives in Agentic Development
A tool that blocks AI agents from claiming code is dead, duplicated, or orphaned without first computing the evidence. Now with AI-native features: batch analysis, incremental diff mode, structured fix output, and fix verification. Rated 10/10 in agent testing across two production codebases.
Recursive Language Models: Context as Environment Variable
Implementing MIT CSAIL's RLM pattern for processing arbitrarily large codebases through programmatic context navigation
This paper documents the implementation and empirical validation of Recursive Language Models (RLMs) based on MIT CSAIL research. We identified critical bugs, validated the pattern against the original repository, and demonstrated practical application for codebase analysis—processing 157K characters to find 165+ DRY violations.
Animation Spec Architecture: One Source, Two Renderers
Shared Specifications for Svelte and Remotion
A methodology for maintaining visual consistency between web animations (Svelte) and video exports (Remotion) through shared animation specifications that define what happens, while each renderer decides how.
Teaching Modalities: Finding the Right Medium for CREATE SOMETHING
Comparing Spritz, Motion Graphics, and Interactive Learning
An experiment exploring three modalities for teaching the CREATE SOMETHING philosophy: RSVP speed reading (Spritz), Vox-style motion graphics (Remotion), and interactive structured learning paths.
Agent SDK Gemini Tools Integration
Grounding AI in Codebase Reality
Technical analysis of integrating bash and file_read tools within the Agent SDK Gemini provider, focusing on implementation, safety, agentic loop patterns, and impact on research paper quality.
Beads Cross-Session Memory Patterns
Agent-Native Issue Tracking for Work Persistence
Analysis of Beads as an agent-native issue tracking system designed to ensure work persistence across AI agent sessions through Git-committed state, dependency tracking, and workflow molecules.
Webflow Dashboard Refactor: From Next.js to SvelteKit
How Autonomous AI Workflows Completed the Missing 40% of Features in 83 Minutes
A complete refactor from Next.js/Vercel to SvelteKit/Cloudflare, achieving 100% feature parity while migrating infrastructure. A case study in autonomous AI workflows and systematic feature implementation.
Intellectual Genealogy: The Three Lineages
Philosophy, Writing, and Systems Thinking Foundations
Documents the complete intellectual genealogy of CREATE SOMETHING across three lineages: philosophy (Heidegger → Gadamer → Rams), writing (Orwell → Zinsser → Fenton/Lee), and systems thinking (Wiener → Meadows → Senge).
Spec-Driven Development: A Meta-Experiment in Agent Orchestration
When the Specification Becomes the Session: Building NBA Live Analytics as Methodology Validation
A meta-experiment testing whether structured specifications can effectively guide agent-based development, producing both working software and methodology documentation as equally important artifacts.
The Norvig Partnership: When Empiricism Validates Phenomenology
How Peter Norvig's Advent of Code 2025 Experiments Confirm Heideggerian Predictions About AI-Human Collaboration
Peter Norvig's empirical findings—"maybe 20 times faster" with LLM assistance—mark the Zuhandenheit moment when a tool recedes so completely that it becomes inseparable from practice itself.
The Cumulative State Anti-Pattern
When "Current" Masquerades as "Ever"
How ambiguous field semantics in database design create invisible bugs that punish users for legitimate actions. A case study from Webflow template validation.
The Subtractive Studio: Philosophy as Infrastructure
Most Agencies Add. CREATE SOMETHING Removes What Obscures.
A positioning paper establishing CREATE SOMETHING's differentiation: philosophy as infrastructure, not marketing. The Subtractive Triad applied to agency practice.