Research for teams building automation they can defend.
CREATE SOMETHING .io turns experiments, papers, and field notes into a usable research layer for operators. The goal is not content volume. It is evidence you can carry into the next build, review, or production decision.
Patterns, benchmarks, and operator notes tied back to real builds.
The research layer should make the next operating decision easier.
This is where CREATE SOMETHING documents what held up in practice, what failed under pressure, and what deserves to be carried forward into the product, policy, or delivery layer.
Workflow evidence before opinion
Patterns start with operator pain, implementation evidence, and runtime behavior before they become a positioning claim.
- Experiments stay tied to the workflow that produced them
- Claims are easier to defend when the artifact trail exists
- Reusable patterns get published only after they survive contact with production
Tooling and runtime comparisons
Comparisons measure cost, speed, and maintenance drag across AI-native stacks instead of repeating the same intuition every quarter.
- Cloudflare-native execution and orchestration notes
- Model and framework tradeoffs grounded in implementation work
- Comparisons optimized for operators, not abstract leaderboard chatter
Judgment encoded as operating documents
The research output is not just prose. It is policy packs, release checks, contracts, and runbooks that can move into delivery.
- Database / Automation / Judgment is treated as an operating frame
- Evidence rolls forward into specs and policy artifacts
- What gets published should be usable by the next build
Field notes for people who answer for the outcome
This property is tuned for the person who has to explain why a workflow exists, where it breaks, and what should happen next.
- Research is written for implementation and review, not content farming
- Failure modes matter as much as feature lists
- The goal is operational clarity, not thought-leadership theater
Featured Work
Experiments, field notes, and patterns worth inspecting first.
Webflow Analyzer Lineage: From Detection to Governed Review
A git-history-backed experiment tracing how Webflow analysis expanded from plagiarism detection into browser-backed MCP review, policy-grounded operations, and creator-facing submission assistance.
AI-Native Filtering: Natural Language Product Discovery
An experiment demonstrating AI-native frontend filtering: users describe what they want in natural language, and an agent applies the appropriate filters.
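The interesting contract here is between the agent and the UI: the model chooses filter values, and ordinary code enforces them. A minimal TypeScript sketch of that split, with hypothetical field names; the experiment's actual schema and agent wiring are not shown:

```typescript
// Hypothetical product and filter shapes; the experiment's real schema may differ.
interface Product {
  name: string;
  price: number;
  tags: string[];
}

// The structured spec an agent would emit from a request like
// "affordable options for travel" -> { maxPrice: 50, requiredTags: ["travel"] }.
interface FilterSpec {
  maxPrice?: number;
  requiredTags?: string[];
}

// Deterministic application of the agent-produced spec: the model picks the
// filters, plain code applies them, so results stay auditable.
function applyFilters(products: Product[], spec: FilterSpec): Product[] {
  return products.filter((p) => {
    if (spec.maxPrice !== undefined && p.price > spec.maxPrice) return false;
    if (spec.requiredTags?.some((t) => !p.tags.includes(t))) return false;
    return true;
  });
}
```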
Shape-Aware ASCII Renderer: 6D Character Matching
High-quality ASCII rendering using 6D shape vectors and contrast enhancement. Characters are matched by shape, not just brightness—resulting in sharp edges and crisp contours.
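A minimal sketch of the matching step, assuming nearest-neighbor search over per-glyph shape vectors; which six features the renderer actually encodes (edge orientation, quadrant density, and so on) is an assumption here:

```typescript
// Six shape features per glyph; the actual feature set is an assumption.
type ShapeVector = [number, number, number, number, number, number];

interface Glyph {
  char: string;
  shape: ShapeVector;
}

// Squared Euclidean distance between shape vectors.
function dist2(a: ShapeVector, b: ShapeVector): number {
  return a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);
}

// Pick the glyph whose shape vector is nearest to the image cell's vector,
// rather than the glyph whose average brightness is closest.
function matchGlyph(cell: ShapeVector, glyphs: Glyph[]): string {
  let best = glyphs[0];
  for (const g of glyphs) {
    if (dist2(cell, g.shape) < dist2(cell, best.shape)) best = g;
  }
  return best.char;
}
```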
Living Arena GPU: WebGPU Crowd Simulation
WebGPU-accelerated crowd simulation with 8,000+ agents showing emergent behaviors—bottleneck formation, wave propagation, and panic spreading through social force models.
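A CPU-side sketch of one social-force update, with illustrative constants; the experiment runs the equivalent math in WebGPU compute shaders, and a real implementation would cull neighbors with spatial hashing rather than the O(n²) loop shown here:

```typescript
interface Agent {
  x: number; y: number;   // position (m)
  vx: number; vy: number; // velocity (m/s)
  gx: number; gy: number; // goal position (m)
}

const DESIRED_SPEED = 1.4; // typical walking speed, m/s
const RELAX_TIME = 0.5;    // s, how quickly agents steer toward desired velocity
const REPULSE_A = 2.0;     // repulsion strength (illustrative)
const REPULSE_B = 0.3;     // repulsion range, m (illustrative)

function step(agents: Agent[], dt: number): void {
  for (const a of agents) {
    // Driving force: relax toward the desired velocity pointing at the goal.
    const dx = a.gx - a.x, dy = a.gy - a.y;
    const d = Math.hypot(dx, dy) || 1;
    let fx = (DESIRED_SPEED * (dx / d) - a.vx) / RELAX_TIME;
    let fy = (DESIRED_SPEED * (dy / d) - a.vy) / RELAX_TIME;

    // Social force: exponential repulsion from nearby agents. This term is
    // what produces bottlenecks and pressure waves once thousands interact.
    for (const b of agents) {
      if (b === a) continue;
      const rx = a.x - b.x, ry = a.y - b.y;
      const r = Math.hypot(rx, ry);
      if (r === 0 || r > 3) continue; // ignore distant neighbors
      const f = REPULSE_A * Math.exp(-r / REPULSE_B);
      fx += f * (rx / r);
      fy += f * (ry / r);
    }

    // Integrate.
    a.vx += fx * dt;
    a.vy += fy * dt;
    a.x += a.vx * dt;
    a.y += a.vy * dt;
  }
}
```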
Webflow Plagiarism Detection: Agent-Native Algorithms
A multi-layer plagiarism detection system combining classic CS algorithms (MinHash, LSH, PageRank, Bayesian) with AI tiers, exposed as MCP tools for team AI agents.
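A minimal sketch of the MinHash layer alone, using whitespace tokens as shingles for brevity; the LSH candidate lookup, PageRank, and Bayesian tiers are not shown:

```typescript
const NUM_HASHES = 64;

// Salted string hash; illustrative, not cryptographic.
function hash(s: string, seed: number): number {
  let h = seed;
  for (let i = 0; i < s.length; i++) {
    h = Math.imul(h ^ s.charCodeAt(i), 2654435761);
  }
  return h >>> 0;
}

// Signature: for each hash function, keep the minimum hash over the
// document's tokens. Similar documents share many minimum values.
function signature(doc: string): number[] {
  const tokens = doc.toLowerCase().split(/\s+/).filter(Boolean);
  return Array.from({ length: NUM_HASHES }, (_, seed) =>
    Math.min(...tokens.map((t) => hash(t, seed + 1)))
  );
}

// The fraction of matching signature slots estimates Jaccard similarity,
// which is what makes cheap candidate screening possible at scale.
function similarity(a: number[], b: number[]): number {
  let same = 0;
  for (let i = 0; i < NUM_HASHES; i++) if (a[i] === b[i]) same++;
  return same / NUM_HASHES;
}
```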
Living Arena: AI-Native Automation at Scale
What if your building could help people without them having to ask? A visualization of arena systems—security, lighting, HVAC, scheduling—all breathing as one, with humans always in control.
.io does the reading so the rest of CREATE SOMETHING can move faster.
Research only matters if it transfers cleanly into practice, delivery, or philosophy. That handoff is the point of the network.
Start with the methodology, then inspect the work.
If you want the operating frame behind the papers, read the methodology first, then move into the experiment and paper archive.