Experiment

Living Arena

What if your building could help people without them having to ask? The lights guide you to your seat. The air feels right before you notice. And through it all, safety comes first—always.

Second Quarter · NBA Western Conference Finals · Attendance 18,847 / 19,500

[Live arena map, scenario 1 of 7: "Getting crowded at Gate A" (pre-game). Crowds flowing from parking lots A, B, and C toward the north entrance.]
Security: monitoring
  • 16 sensors active
  • 4 cameras online
Lighting: event mode
  • Court intensity 100%
  • 12 zones active
HVAC: nominal
  • Main Floor 72°F
  • Upper Bowl 74°F
  • Concourse 71°F
  • VIP Suites 70°F
Activity Stream
  • Security: perimeter scan complete (2s ago)
  • HVAC: Zone 2 adjusting +2° (15s ago)
  • Lighting: court lights at 100% (30s ago)

When Systems Talk to Each Other

Something interesting happens when lighting, security, and climate work together. The building starts to feel like it's paying attention.

Trigger event: Getting crowded at Gate A
  • Security: opens another screening lane
  • Lighting: brightens the path to Gate B
  • HVAC: cools the area people are heading to
  • Signs: gently suggest the shorter line at Gate B
  • Security supervisor gets a heads-up and can change the plan at any time
People move faster. Lines stay safe. Nobody had to radio anyone.
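
Under the hood, this kind of coordination is one event fanning out to a handful of subsystem actions plus a single human notification. A minimal sketch in Python, with hypothetical client names (systems.security, supervisor.notify) standing in for real vendor integrations:

```python
from dataclasses import dataclass

@dataclass
class CrowdEvent:
    location: str      # e.g. "gate_a"
    density: float     # 0.0-1.0 occupancy of the screening area

def handle_crowding(event: CrowdEvent, systems, supervisor) -> None:
    """One trigger fans out to several subsystems plus one human heads-up."""
    if event.density < 0.85:   # illustrative threshold
        return
    plan = [
        systems.security.open_screening_lane(event.location),
        systems.lighting.highlight_path(to="gate_b"),
        systems.hvac.precool(zone="gate_b_queue"),
        systems.signage.suggest(route="gate_b"),
    ]
    # The supervisor sees the whole plan and can cancel or amend any step.
    supervisor.notify(plan, revocable=True)
```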

Safety First. Everything Else Second.

The building can make your experience better—but not if it means making you less safe. Every suggestion the system makes gets checked against one question: does this keep people secure? If there's ever a conflict, safety wins. No exceptions.
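
One way to enforce that rule in code is a gate every proposed action must pass before anything executes. A minimal sketch, assuming hypothetical constraint objects with a violated_by check:

```python
def filter_for_safety(proposed_actions, safety_constraints):
    """Drop any proposed action that conflicts with a safety constraint."""
    approved = []
    for action in proposed_actions:
        violated = [c for c in safety_constraints if c.violated_by(action)]
        if violated:
            # Comfort and efficiency never outrank safety: the action is
            # dropped and the conflict is logged for human review.
            log_conflict(action, violated)
            continue
        approved.append(action)
    return approved

def log_conflict(action, constraints) -> None:
    names = [c.name for c in constraints]
    print(f"blocked {action!r}: conflicts with {names}")
```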

The System Shows Its Thinking

This isn't a black box. When the AI makes a decision, you can see why. Every time.

Situation: Upper Bowl getting warm
What the system noticed:
  • Temperature rising in Zone 2 (currently 76°F, target 72°F)
  • Crowd density: 94% occupied
  • Game phase: 3rd quarter (high energy)
  • Weather outside: 85°F, sunny
  • Similar past events: cooling takes ~8 minutes to feel
Decision: Start cooling now; by the time people feel it, it will be right
Confidence: 87%
Alternative considered: wait, but past patterns show complaints start at 77°F

Why this matters: You can question it. You can override it. You can understand why it did what it did. That's the difference between automation you trust and automation you tolerate.
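
A decision like the one above can travel as a structured record rather than an opaque command, so the factors, confidence, and rejected alternative are always inspectable. A sketch, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """An AI decision that carries its own explanation."""
    situation: str
    factors: list[str]     # what the system noticed
    action: str            # what it decided to do
    confidence: float      # 0.0-1.0, always shown to operators
    alternative: str       # what it considered instead, and why not

zone2_cooling = Decision(
    situation="Upper Bowl getting warm",
    factors=[
        "Zone 2 at 76°F, target 72°F",
        "Crowd density: 94% occupied",
        "Game phase: 3rd quarter (high energy)",
        "Outside: 85°F, sunny",
        "Similar past events: cooling takes ~8 minutes to feel",
    ],
    action="Start cooling now",
    confidence=0.87,
    alternative="Could wait, but complaints historically start at 77°F",
)
```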

One Change, Everything Adapts

When something changes, the whole building thinks it through—not just the part that noticed.

18:45:00 Weather forecast changed: thunderstorm in 2 hours
The system thinks through everything at once:
Scheduling
Was: Roof open for sunset view
Now: Roof closing at 19:30 (30 min before storm)
Because: Guest comfort + equipment protection
HVAC
Was: Natural ventilation mode
Now: Switch to AC at 19:15
Because: Humidity will spike when roof closes
Lighting
Was: Sunset ambiance scheduled
Now: Indoor event lighting ready
Because: Natural light will decrease faster than planned
Parking
Was: Standard exit flow
Now: Alert: covered parking fills first post-game
Because: People will want shelter
Concessions
Was: Normal staffing
Now: Extra staff for indoor rush
Because: Fewer people will want to leave during storm
Communications
Was: No alerts needed
Now: Gentle announcement at 19:00 about roof closing
Because: No surprises—people appreciate knowing

Ops manager reviewed the full plan in 45 seconds and approved it with one change: an earlier announcement.

12 seconds to generate the plan, 45 seconds for human review.

The old way: Six different people get six different alerts. They each make changes. Things get missed. Things conflict. It takes an hour of coordination.

AI-native: One coherent plan, generated in seconds, reviewed by one person, executed across everything. The systems already know how to work together.
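
That was/now/because structure maps naturally onto a plan made of per-subsystem diffs, generated together and approved as one unit. A sketch, with hypothetical types and only the first two changes spelled out:

```python
from dataclasses import dataclass, field

@dataclass
class SubsystemChange:
    subsystem: str
    was: str
    now: str
    because: str

@dataclass
class Plan:
    trigger: str
    changes: list[SubsystemChange] = field(default_factory=list)
    approved: bool = False

storm_plan = Plan(
    trigger="Weather forecast changed: thunderstorm in 2 hours",
    changes=[
        SubsystemChange("scheduling", "roof open for sunset view",
                        "roof closes at 19:30",
                        "guest comfort + equipment protection"),
        SubsystemChange("hvac", "natural ventilation",
                        "switch to AC at 19:15",
                        "humidity spikes when roof closes"),
        # ...lighting, parking, concessions, communications take the same shape
    ],
)

# One reviewer approves (or edits) the plan as a unit, so subsystems never
# receive conflicting instructions from separate alert chains.
storm_plan.approved = True
```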

Here's What Actually Happens

Things go wrong. Sensors break. The AI gets confused. That's okay—what matters is what happens next.

Live Incident Log
19:42:15 HVAC Human involved
Zone 3 sensor reported -40°F (impossible reading)
Resolution: Auto-flagged as sensor malfunction • Maintenance dispatched
System learned: Added plausibility bounds to sensor readings (see the sketch after this log)
19:38:22 Lighting Human involved
System suggested dimming for "intimate moment" during timeout
Resolution: Security supervisor overrode: visibility required for crowd monitoring
System learned: Security constraints now override ambiance suggestions
19:31:07 Security Human involved
Unusual movement pattern detected near VIP entrance
Resolution: Alert sent to nearest officer • Verified as lost child reunited with parent
System learned: Pattern logged for future training (not a threat)
19:24:51 Wayfinding
Digital sign #47 unresponsive to redirect command
Resolution: Fallback: Adjacent signs compensated • Hardware ticket created
System learned: Added redundancy check before committing to single-sign strategies
19:15:33 Security Human involved
AI confidence below threshold for crowd behavior classification
Resolution: Escalated to human operator who identified flash mob (harmless)
System learned: New pattern category added: coordinated harmless gatherings
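
The fix from the first entry, plausibility bounds on raw sensor readings, is simple to sketch. The bounds here are illustrative; real limits would come from each sensor's rated range:

```python
# Illustrative plausibility bounds for an indoor temperature sensor.
PLAUSIBLE_TEMP_F = (-20.0, 130.0)

def validate_temp(zone: str, reading_f: float):
    """Reject physically implausible readings instead of acting on them."""
    low, high = PLAUSIBLE_TEMP_F
    if not (low <= reading_f <= high):
        # Impossible reading: flag the sensor and fall back to neighbors
        # rather than heating or cooling against bad data.
        flag_sensor_malfunction(zone, reading_f)
        return None
    return reading_f

def flag_sensor_malfunction(zone: str, reading_f: float) -> None:
    print(f"{zone}: {reading_f}°F is outside plausible range; maintenance dispatched")

validate_temp("zone_3", -40.0)   # the 19:42:15 incident would be caught here
```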

Human in the Loop — Always

The AI proposes. Humans dispose. Every critical decision requires human confirmation. Every failure is logged. Every override teaches the system. Every escalation is an admission: "I don't know enough—help me."

  • Critical actions require explicit human approval
  • Confidence scores shown on all AI decisions
  • Escalation timers prevent the AI from waiting too long
  • Every failure logged and reviewed by humans
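
Those guarantees boil down to a small amount of control logic: a confidence threshold that routes uncertain calls to a human, and a timer that escalates rather than letting the AI stall. A sketch, with illustrative thresholds and a hypothetical operator console:

```python
import time

CONFIDENCE_THRESHOLD = 0.80   # illustrative; tuned per decision type
ESCALATION_TIMEOUT_S = 120    # an unsure AI never just sits and waits

def run_with_human_in_loop(decision, operator):
    """Gate critical or low-confidence decisions behind human approval."""
    if decision.confidence >= CONFIDENCE_THRESHOLD and not decision.critical:
        return execute(decision)           # still logged and reviewable

    deadline = time.monotonic() + ESCALATION_TIMEOUT_S
    while time.monotonic() < deadline:
        verdict = operator.poll()          # hypothetical operator console
        if verdict == "approve":
            return execute(decision)
        if verdict == "override":
            record_override(decision)      # overrides teach the system
            return None
        time.sleep(1)

    # Timer expired: escalate up the chain rather than acting alone.
    return operator.escalate(decision)

def execute(decision):
    print(f"executing: {decision}")

def record_override(decision) -> None:
    print(f"override logged for training: {decision}")
```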

This System Is Not Perfect

Sensors fail. Predictions are wrong. Edge cases exist. The value of AI-native automation isn't perfection—it's speed of detection, transparency of failure, rapid human escalation, and continuous learning. The building gets smarter every day, but humans remain in control.

What We're Trying to Show

Buildings can be helpful without being creepy. They can learn without pretending to be smarter than they are. They can make things easier while keeping people safe—and keeping people in charge. When something goes wrong, you'll know. When the system isn't sure, it asks. When you override it, it listens. That's the kind of automation we believe in: honest, humble, and always getting better.

AI-Native Patterns
explainable-reasoning · holistic-updates · human-in-the-loop · confidence-transparency · cross-system-thinking · graceful-escalation · continuous-learning · honest-failures