CREATE SOMETHING .io

Research for teams building automation they can defend.

CREATE SOMETHING .io turns experiments, papers, and field notes into a usable research layer for operators. The goal is not content volume. It is evidence you can carry into the next build, review, or production decision.

Patterns, benchmarks, and operator notes tied back to real builds.

21 published experiments + papers
1 research category
6 featured artifacts to inspect first
3 database / automation / judgment layers

What the research is for

The research layer should make the next operating decision easier.

This is where CREATE SOMETHING documents what held up in practice, what failed under pressure, and what deserves to be carried forward into the product, policy, or delivery layer.

Tooling and runtime comparisons

Measure cost, speed, and maintenance drag across AI-native stacks instead of repeating the same intuition every quarter.

  • Cloudflare-native execution and orchestration notes
  • Model and framework tradeoffs grounded in implementation work
  • Comparisons optimized for operators, not abstract leaderboard chatter

Judgment encoded as operating documents

The research output is not just prose. It is policy packs, release checks, contracts, and runbooks that can move into delivery.

  • Database / Automation / Judgment is treated as an operating frame
  • Evidence rolls forward into specs and policy artifacts
  • What gets published should be usable by the next build

Field notes for people who answer for the outcome

This property is tuned for the person who has to explain why a workflow exists, where it breaks, and what should happen next.

  • Research is written for implementation and review, not content farming
  • Failure modes matter as much as feature lists
  • The goal is operational clarity, not thought-leadership theater

Featured Work

Experiments, field notes, and patterns worth inspecting first.

Apr 25, 2026

Webflow Analyzer Lineage: From Detection to Governed Review

A git-history-backed experiment tracing how Webflow analysis expanded from plagiarism detection into browser-backed MCP review, policy-grounded operations, and creator-facing submission assistance.

12 min · intermediate
Webflow · Analyzer · Git History · MCP
Open artifact
Jan 31, 2026

AI-Native Filtering: Natural Language Product Discovery

Experiment demonstrating AI-native frontend filtering where users describe what they want in natural language, and an agent applies the appropriate filters.

10 min · intermediate
AI-Native · Workers AI · D1 · Filtering
Open artifact
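
As a rough orientation before opening the artifact, here is a minimal TypeScript sketch of the contract that pattern implies. The type names, sanitize step, and sample catalog are illustrative assumptions rather than code from the experiment: the model only proposes a structured filter spec, and deterministic frontend code validates and applies it.

```typescript
// Minimal sketch of an AI-native filtering contract (hypothetical names).
// The agent's only job is to emit a FilterSpec; the frontend stays deterministic.

type FilterSpec = {
  maxPrice?: number;
  categories?: string[];
  inStockOnly?: boolean;
};

type Product = { name: string; price: number; category: string; inStock: boolean };

// Validate whatever the model returns before trusting it.
function sanitize(raw: unknown): FilterSpec {
  const spec = (raw ?? {}) as Record<string, unknown>;
  return {
    maxPrice:
      typeof spec.maxPrice === "number" && spec.maxPrice > 0 ? spec.maxPrice : undefined,
    categories: Array.isArray(spec.categories)
      ? spec.categories.filter((c): c is string => typeof c === "string")
      : undefined,
    inStockOnly: typeof spec.inStockOnly === "boolean" ? spec.inStockOnly : undefined,
  };
}

// Deterministic filtering: the model proposes, this code decides.
function applyFilters(products: Product[], spec: FilterSpec): Product[] {
  return products.filter(
    (p) =>
      (spec.maxPrice === undefined || p.price <= spec.maxPrice) &&
      (spec.categories === undefined || spec.categories.includes(p.category)) &&
      (spec.inStockOnly !== true || p.inStock)
  );
}

// "cheap hiking boots that are in stock" -> a model call (e.g. via Workers AI)
// would return something like this object; here it is hard-coded.
const agentOutput = { maxPrice: 120, categories: ["boots"], inStockOnly: true };

const catalog: Product[] = [
  { name: "Trail Boot", price: 95, category: "boots", inStock: true },
  { name: "Alpine Boot", price: 240, category: "boots", inStock: true },
  { name: "Camp Stove", price: 60, category: "gear", inStock: false },
];

console.log(applyFilters(catalog, sanitize(agentOutput))); // -> [Trail Boot]
```

Keeping the agent's output behind a sanitize step is what makes this kind of automation defensible: a malformed or hallucinated filter degrades to "no filter" rather than breaking the page.
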
Jan 21, 2026

Shape-Aware ASCII Renderer: 6D Character Matching

High-quality ASCII rendering using 6D shape vectors and contrast enhancement. Characters are matched by shape, not just brightness—resulting in sharp edges and crisp contours.

8 min · intermediate
ASCII · Rendering · Canvas · 3D
Open artifact
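
To make "matched by shape, not just brightness" concrete, here is a small TypeScript sketch. The six dimensions used below (top, bottom, left, right, center, overall coverage) and the glyph vectors are assumptions for illustration; the renderer's actual feature layout may differ, and it also layers contrast enhancement on top of this matching step.

```typescript
// Shape-aware character matching: pick the glyph whose 6-D shape vector is
// closest to the cell's vector, instead of mapping a single brightness value
// onto a density ramp.

type Vec6 = [number, number, number, number, number, number];

function distance(a: Vec6, b: Vec6): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Hypothetical precomputed shape vectors for a few glyphs; a real renderer
// would derive these by rasterizing each character once at startup.
const glyphShapes: Record<string, Vec6> = {
  " ": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
  "_": [0.0, 0.9, 0.3, 0.3, 0.1, 0.3],
  "|": [0.5, 0.5, 0.1, 0.1, 0.9, 0.5],
  "#": [0.8, 0.8, 0.8, 0.8, 0.8, 0.8],
};

function matchGlyph(cell: Vec6): string {
  let best = " ";
  let bestDist = Infinity;
  for (const [glyph, shape] of Object.entries(glyphShapes)) {
    const d = distance(cell, shape);
    if (d < bestDist) {
      bestDist = d;
      best = glyph;
    }
  }
  return best;
}

// A cell that is dark only along its bottom edge resolves to "_", even though
// its overall brightness matches several other glyphs.
console.log(matchGlyph([0.05, 0.85, 0.3, 0.3, 0.1, 0.3])); // -> "_"
```
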
Jan 20, 2026

Living Arena GPU: WebGPU Crowd Simulation

WebGPU-accelerated crowd simulation with 8,000+ agents showing emergent behaviors—bottleneck formation, wave propagation, and panic spreading through social force models.

8 min · advanced
WebGPU · Compute Shaders · Crowd Simulation · Social Force Model
Open artifact
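
For orientation, here is a CPU-side TypeScript sketch of the kind of per-agent social-force update the compute shaders parallelize. The two-term model (goal-seeking plus exponential repulsion from neighbors) and the constants are illustrative assumptions, not the experiment's actual parameters.

```typescript
// One simulation step of a simplified social force model.
// Each agent is pushed toward its goal and repelled by nearby agents.

type Agent = { x: number; y: number; vx: number; vy: number; goalX: number; goalY: number };

const DESIRED_SPEED = 1.4;   // m/s, a typical walking speed
const RELAXATION = 0.5;      // s, how quickly agents correct toward desired velocity
const REPULSION_STRENGTH = 2.0;
const REPULSION_RANGE = 0.5; // m

function step(agents: Agent[], dt: number): void {
  // First pass: compute forces for every agent (the per-thread work on the GPU).
  const forces = agents.map((a, i) => {
    // Driving force: steer the current velocity toward the goal direction.
    const gx = a.goalX - a.x, gy = a.goalY - a.y;
    const gLen = Math.hypot(gx, gy) || 1;
    let fx = (DESIRED_SPEED * (gx / gLen) - a.vx) / RELAXATION;
    let fy = (DESIRED_SPEED * (gy / gLen) - a.vy) / RELAXATION;

    // Repulsion from every other agent, decaying exponentially with distance.
    for (let j = 0; j < agents.length; j++) {
      if (j === i) continue;
      const dx = a.x - agents[j].x, dy = a.y - agents[j].y;
      const dist = Math.hypot(dx, dy) || 1e-6;
      const push = REPULSION_STRENGTH * Math.exp(-dist / REPULSION_RANGE);
      fx += (dx / dist) * push;
      fy += (dy / dist) * push;
    }
    return { fx, fy };
  });

  // Second pass: integrate only after all forces are known, mirroring the
  // GPU's parallel dispatch over a snapshot of agent state.
  agents.forEach((a, i) => {
    a.vx += forces[i].fx * dt;
    a.vy += forces[i].fy * dt;
    a.x += a.vx * dt;
    a.y += a.vy * dt;
  });
}
```

At 8,000+ agents the pairwise repulsion loop dominates the cost, which is why it is the natural piece to move into a compute shader.
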
Jan 20, 2026

Webflow Plagiarism Detection: Agent-Native Algorithms

A multi-layer plagiarism detection system combining classic CS algorithms (MinHash, LSH, PageRank, Bayesian) with AI tiers, exposed as MCP tools for team AI agents.

15 min · advanced
Plagiarism · MinHash · LSH · PageRank
Open artifact
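
For context on the first layer of that stack, a self-contained MinHash sketch in TypeScript. The shingle size, signature length, and hash function below are illustrative choices rather than the production pipeline's; LSH banding, PageRank weighting, and the Bayesian and AI tiers sit on top of signatures like these.

```typescript
// MinHash: estimate Jaccard similarity between two documents from the
// fraction of matching minima across K salted hash functions.

const K = 64; // signature length / number of hash functions

// Cheap salted string hash; a production system would use stronger hashing,
// but the estimator works the same way.
function hash(s: string, seed: number): number {
  let h = seed;
  for (let i = 0; i < s.length; i++) {
    h = Math.imul(h ^ s.charCodeAt(i), 2654435761);
  }
  return h >>> 0;
}

// Overlapping 3-word shingles, so local rewording only perturbs a few set members.
function shingles(text: string, size = 3): Set<string> {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const out = new Set<string>();
  for (let i = 0; i + size <= words.length; i++) {
    out.add(words.slice(i, i + size).join(" "));
  }
  return out;
}

// Signature: for each hash function, keep the minimum hash over the shingle set.
function signature(items: Set<string>): number[] {
  const sig: number[] = new Array(K).fill(Number.MAX_SAFE_INTEGER);
  for (const item of items) {
    for (let k = 0; k < K; k++) {
      const h = hash(item, k + 1);
      if (h < sig[k]) sig[k] = h;
    }
  }
  return sig;
}

// Fraction of matching signature positions approximates Jaccard similarity.
function estimatedJaccard(a: number[], b: number[]): number {
  let matches = 0;
  for (let k = 0; k < K; k++) if (a[k] === b[k]) matches++;
  return matches / K;
}

const docA = "the quick brown fox jumps over the lazy dog near the river";
const docB = "the quick brown fox jumps over a sleepy dog near the river";
console.log(estimatedJaccard(signature(shingles(docA)), signature(shingles(docB))));
```

The estimator works because, for a random hash function, the probability that two sets share the same minimum equals their Jaccard similarity; averaging over K independent hashes turns that probability into a usable score.
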
Jan 16, 2026

Living Arena: AI-Native Automation at Scale

What if your building could help people without them having to ask? A visualization of arena systems—security, lighting, HVAC, scheduling—all breathing as one, with humans always in control.

12 min · intermediate
AI-Native · Automation · Arena · Human-in-the-Loop
Open artifact

Research stack

Start with the methodology, then inspect the work.

If you want the operating frame behind the papers, start with the methodology and then move into the experiment and paper archive.