Agentic Visualization Experiment
Research experiment demonstrating autonomous UI components that embody Edward Tufte's principles for displaying quantitative information. Components make intelligent decisions about data presentation, transforming visualization from manual craft into intelligent revelation.
Abstract
This paper presents agentic visualization: autonomous UI components that embody expert knowledge and make intelligent decisions about data presentation. We demonstrate how Edward Tufte's principles for displaying quantitative information can be encoded into self-governing components that automatically scale data, choose appropriate visual encodings, and compose into higher-order insights. Through hermeneutic analysis, we reveal that traditional charting libraries present users with choices, while agentic components make decisions—transforming visualization from manual craft into intelligent revelation. Our implementation, the @create-something/tufte package, demonstrates this approach through eight components tested with both controlled samples and live analytics, achieving what we term "visualization as unconcealment" (Heideggerian aletheia).
1. Introduction
Data visualization occupies a curious position in software engineering: universally acknowledged as critical, yet implemented through libraries that demand extensive configuration and offer limitless ways to violate established principles. D3, Chart.js, and similar tools provide mechanisms but not judgment—developers must manually enforce Tufte's data-ink ratio, choose appropriate scales, and compose multiple views into coherent dashboards.
This research asks: What if components could embody expertise?
We propose "agentic visualization"—components that don't just render data but understand how to reveal it. These components automatically:
- Scale multiple data series to shared ranges for valid comparison
- Calculate proportions and format percentages without manual intervention
- Choose visual hierarchies (color, opacity, stroke weight) based on data relationships
- Hide labels when space is insufficient, preventing visual clutter
- Compose into higher-order patterns (small multiples, integrated dashboards)
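From the consuming side, these behaviors reduce the developer's job to supplying raw data. A minimal usage sketch follows (the component name matches the package, but the props shown are illustrative rather than its documented API):
<!-- The caller passes unscaled series only; shared scaling, colors, and labels are decided inside -->
<ComparativeSparklines
  series={[
    { label: '.io', data: ioDaily },
    { label: '.agency', data: agencyDaily }
  ]}
/>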
The intellectual foundation emerged through applying Heidegger's hermeneutic circle to the question "what visualizations can we now create?" This interpretive approach revealed that we don't create visualizations—we reveal them through components that understand both data and design principles.
Contributions: (1) A theoretical framework for agentic visualization components, (2) Eight implemented components embodying Tufte's principles, (3) Demonstration of data source flexibility through sample/live toggle, (4) Evidence that design expertise can be encoded into autonomous software.
2. Methodology: Hermeneutic Component Design
Traditional software development follows specification → implementation → testing. This research instead employed Heidegger's hermeneutic circle—an interpretive method where understanding emerges through iterative engagement with the whole and its parts.
2.1 Hermeneutic Process
Beginning with four base components (Sparkline, MetricCard, HighDensityTable, DailyGrid), we asked: "What visualizations can we now create?" This question revealed missing compositional possibilities:
- ComparativeSparklines: Multiple series needed shared scaling for valid comparison
- DistributionBar: Proportional relationships were invisible in tables
- TrendIndicator: Direction and magnitude of change required semantic encoding
- HourlyHeatmap: Temporal patterns demanded small multiples grid
Each new component revealed further compositional possibilities—the hermeneutic circle in action. This process demonstrated that visualization needs emerge from data relationships, not predefined chart types.
2.2 Encoding Tufte's Principles
Edward Tufte's "The Visual Display of Quantitative Information" (1983) establishes six principles:
- Maximize data-ink ratio (remove all decoration)
- Show data variation, not design variation
- Use small multiples for comparison across dimensions
- Integrate text and graphics (no separate legends)
- Display high-density information (maximize insights per unit area)
- Remove chartjunk (no 3D, gradients, heavy borders)
Rather than documenting these as guidelines, we implemented them as component constraints. For example, ComparativeSparklines cannot render without automatic shared scaling—violating Tufte's "valid comparison" principle is structurally impossible.
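A minimal sketch of what "structurally impossible" means at the type level (interface and prop names are illustrative, not the package's published definitions): the props expose only raw series, so there is no configuration surface through which a caller could supply per-series scales.
// Illustrative props contract: no scale, min, or max props exist to override shared scaling
interface SparklineSeries {
  label: string;
  data: { date: string; count: number }[];
}
interface ComparativeSparklinesProps {
  series: SparklineSeries[]; // the shared range is always derived from every series together
}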
2.3 Agentic Decision-Making
Components make autonomous decisions through reactive calculations:
- Automatic scaling: $: range = max - min || 1
- Proportion calculation: $: percentage = (count / total) * 100
- Contextual formatting: Labels only appear when segment width >= 8%
- Semantic colors: Green/red/gray based on direction, not manual assignment
This transforms developers from "chart configurers" to "data providers"—the component handles visualization intelligence.
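A sketch of the last two decisions as they might appear in component code (the 8% threshold comes from the description above; the Tailwind class names and variable names are assumptions for illustration):
// Contextual formatting: inside the segment loop, a label renders only when there is room for it
// {#if segment.width >= 8} ... {/if}
// Semantic color: derived from the direction of change, never set by the caller
$: direction = change > 0 ? 'up' : change < 0 ? 'down' : 'flat';
$: colorClass = direction === 'up' ? 'text-green-600'
  : direction === 'down' ? 'text-red-600'
  : 'text-gray-500';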
3. Implementation
3.1 Technical Architecture
The @create-something/tufte package is implemented in SvelteKit 5 with TypeScript, distributed as an npm package. Components use:
- Svelte reactive statements ($:) for autonomous calculations
- SVG path generation for resolution-independent graphics
- Tailwind CSS with CREATE SOMETHING design tokens
- TypeScript interfaces for type-safe data contracts
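As an illustration of the first two items, a sparkline path can be derived entirely from reactive statements that map data into a fixed 0-100 coordinate space, so the resulting SVG scales to any container. The viewBox convention and variable names below are assumptions for this sketch, with min and range coming from the shared-scaling calculation shown in the next section:
// Map each point into a 0-100 viewBox; y is inverted because SVG's y axis grows downward
$: points = data.map((d, i) => {
  const x = (i / Math.max(data.length - 1, 1)) * 100;
  const y = 100 - ((d.count - min) / range) * 100;
  return `${x},${y}`;
});
$: path = `M ${points.join(' L ')}`; // e.g. "M 0,80 L 25,60 L 50,55 ..."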
3.2 Component Examples
// ComparativeSparklines: Shared scaling example
$: allValues = series.flatMap(s => s.data.map(d => d.count));
$: max = Math.max(...allValues, 1); // floor the maximum at 1 so an all-zero series still renders
$: min = Math.min(...allValues, 0); // anchor the minimum at 0 so every series shares a baseline
$: range = max - min || 1;          // defensive guard so later divisions by range never hit zero
// DistributionBar: Automatic proportions
$: total = segments.reduce((sum, s) => sum + s.count, 0) || 1; // guard against an empty or all-zero segment list
$: segmentsWithPercentages = segments.map(s => ({
...s,
percentage: (s.count / total) * 100,
width: (s.count / total) * 100
}));
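Because the proportions above are derived inside the component, a caller never passes a percentage. A usage sketch with invented counts, chosen to roughly match the distribution shown in Section 4.2:
<!-- Raw counts in, proportions out: widths, percentages, and label visibility are all derived -->
<DistributionBar
  segments={[
    { label: '.io', count: 4182 },
    { label: '.agency', count: 2135 },
    { label: '.space', count: 1512 },
    { label: '.ltd', count: 978 }
  ]}
/>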
3.3 Data Source Flexibility
To demonstrate component flexibility, this page implements a Sample/Live toggle. Components receive data through props—they don't know or care whether it's controlled samples or live analytics from Cloudflare D1. This proves the components are truly autonomous: they work with any data source that matches the interface.
// Reactive data switching (samplePropertyData stands in for the controlled sample dataset)
$: propertyData = dataSource === 'live' && liveAnalytics
  ? getLivePropertyData()
  : samplePropertyData;
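One way the live adapter might look (the row shape and field names below are assumptions, not the actual Cloudflare D1 schema): live analytics rows are mapped onto the same series interface the sample data already satisfies, so the components never see the difference.
// Hypothetical adapter: reshape live analytics rows into the components' series interface
// (SparklineSeries as sketched in Section 2.2)
function getLivePropertyData(): SparklineSeries[] {
  return liveAnalytics.properties.map(p => ({
    label: p.hostname,
    data: p.daily.map(d => ({ date: d.date, count: d.pageviews }))
  }));
}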
4. Results: Component Demonstrations
The following sections demonstrate each component with controlled sample data representing CREATE SOMETHING property analytics. Each component automatically handles scaling, formatting, and visual encoding without manual configuration.
4.1 Comparative Trends
ComparativeSparklines component - Multiple data series overlaid for direct comparison
Reveals: .io and .agency are growing, .space is declining, .ltd is emerging. Relative performance visible at a glance.
4.2 Proportional Distribution
DistributionBar component - Visual representation of how total is divided
Reveals: .io dominates (47%), followed by .agency (24%), .space (17%), and .ltd (11%). Distribution imbalance immediately obvious.
4.3 Change Direction & Magnitude
TrendIndicator component - Shows whether metrics are up, down, or flat
Reveals: Directional changes at a glance - growing (green ↑), declining (red ↓), or stable (gray →).
4.4 Integrated Comparison Dashboard
Composition - MetricCard + TrendIndicator + comparative context
Reveals: Complete picture - absolute values, trends, and directional changes in a unified view.
5. Discussion
5.1 What the Hermeneutic Circle Revealed
Compositional Power
- Components combine to reveal multi-dimensional insights
- Same data, different lenses: trends, proportions, changes
- Patterns invisible in tables become obvious visually
- No manual calculation - components interpret automatically
Agentic Behavior
- Automatic scaling (ComparativeSparklines uses shared range)
- Intelligent formatting (percentages, colors, labels)
- Contextual decisions (show labels only when space permits)
- Self-documenting (tooltips, legends generated automatically)
5.2 Implications for Software Design
This research demonstrates that design expertise—previously considered the domain of human judgment—can be encoded into autonomous software components. The implications extend beyond visualization:
- UI Components: Form validation, accessibility, responsive layouts could make expert decisions
- API Design: Clients that automatically handle retries, caching, rate limiting
- Database Queries: ORMs that understand indexing strategies and query optimization
- Testing Frameworks: Tests that know which assertions matter for given code patterns
The pattern is consistent: identify expert knowledge → encode as component constraints → eliminate manual configuration.
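As a hypothetical sketch of that pattern outside visualization (not part of @create-something/tufte), an API client could bake a retry policy in as a constraint rather than expose it as configuration:
// The caller supplies only the request; backoff and retry rules are the client's own judgment
async function fetchWithPolicy(url: string, attempts = 3): Promise<Response> {
  let res = await fetch(url);
  for (let i = 1; i < attempts && !res.ok && res.status >= 500; i++) {
    await new Promise(r => setTimeout(r, 2 ** i * 250)); // exponential backoff, invisible to the caller
    res = await fetch(url); // retry only transient server failures
  }
  return res;
}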
5.3 Visualization as Unconcealment
Heidegger's concept of aletheia (unconcealment) describes truth not as correctness but as revealing what was hidden. Traditional charts require humans to manually uncover patterns. Agentic components reveal patterns automatically—the data's truth emerges through intelligent encoding rather than human effort.
This shifts the developer's role from "chart configuration" to "data provision." The component decides how to reveal; the developer provides what to reveal.
5.4 Limitations and Future Work
Current components handle quantitative time-series data. Future work could extend to:
- Hierarchical data (treemaps, sunburst charts with automatic layout)
- Network graphs (force-directed layouts with automatic clustering)
- Geographic data (map projections chosen based on data distribution)
- Multivariate analysis (automatic dimensionality reduction visualization)
The limiting factor is not technical implementation but identification of expert heuristics to encode. Each domain requires hermeneutic analysis to reveal what decisions components should make autonomously.
6. References
Tufte, E. R. (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
All visualizations are powered by @create-something/tufte, agentic components embodying Tufte's principles.