AXON
An experimental look at how AI signals might help reviewers move beyond scrubbing footage and toward understanding behavior, context, and escalation.
Background
From July to November 2025, the DEMS design team experimented with bringing AI tools into our everyday workflow. What started as a weekly sharing session around tools like Cursor, Figma Make, and v0 gradually turned into a three-month experiment focused on using AI to turn product ideas into something tangible and valuable.
I had a clear sense of what I wanted to build (or at least try). It started from a simple frustration I kept coming back to: Evidence.com already has AI signals like Transcripts and People detections, but they live in different places. Reviewing footage still means a lot of scrubbing, guessing, and mentally stitching things together.
So I started exploring concepts with v0. What if AI did more than detect events? What if it helped reviewers make sense of behavior, escalation, and context inside the Evidence player? The whole exercise was not about shipping a feature. It was about exploring whether that idea even held up.
Workflow pain points
Let's take a look at the current pain points from the perspective of an Investigator working through a case in Evidence.com.
The investigator spends hours manually scrubbing footage to connect events and behaviors across long timelines.
There is no unified view or layering on video evidence to surface linked patterns across multiple pieces of evidence.
Critical context is often missed because AI insights are not surfaced in real time.
Evidence reviewers need flexibility to tailor their workspace, yet the current Evidence Player does not allow selecting or hiding tracks to suit specific workflows, limiting efficiency and customization.
Signals exist, but they are spread across transcripts, markers, and detections, leaving users to do the real analysis themselves.
So the problem I was trying to solve was not missing data, but missing understanding.
AI signals exist today, but they are scattered across the Evidence Detail page, leaving users to manually connect the dots.
What is missing is a centralized way to interpret those signals into meaningful insight about behavior and escalation.
The inspirations
I was drawn to the idea of Precrime in Minority Report and the original work by Philip K. Dick, especially how signals can be interpreted, mishandled, or used to tell a convenient story. What stuck with me was not the prediction itself, but the ethical tension when technology starts drawing conclusions without enough context or challenge. That concern quietly shaped this experiment and how careful I tried to be when exploring what it means to turn signals into insight.
Another concern was how I would eventually turn this into a slide deck for a future presentation 🤦🏻♂️ but that is a problem for another time.
The appeal was never the futuristic gestures; it was the promise of seeing the full story come into focus.
(Minority Report, 2002. © 20th Century Fox)
How I explored the idea
Early inputs and rough thinking laid out side by side. I used AI to analyze reference materials, sketch assumptions, and pressure-test ideas.
I treated this as a lightweight exploration anchored by a hypothetical goal from the PRD. Instead of designing toward a deliverable, I used v0 to quickly explore layout ideas, interaction patterns, and structural directions.
In terms of the business proposition, another angle that helped make the case stronger was the ecosystem drive. The detection and analysis experience only really works when the input comes from Axon-owned devices, like body-worn cameras and fleet cameras. Making that dependency explicit clarified the value story: better insights come from tighter device integration, which encourages agencies to stay within the Axon ecosystem.
The project's north star, from left to right:
smart analysis that adds real context
a review experience that adapts to the case
and deeper insights made possible by Axon's first-party devices.
When it came to the design work, v0 acted as a vibe design accelerator. It helped me materialize ideas fast, but the value came from reacting to the output, editing it down, and deciding what actually made sense. It helped speed up exploration, not decision making.
What emerged
A few patterns surfaced consistently as I explored the space.
Centralizing signals reduced mental overhead almost immediately, but AI insights only accelerated workflow when constraints and intent were clearly defined.
What broke down
A few ideas fell apart once I slowed down and really looked at them. Some AI analysis sounded confident while quietly hiding uncertainty (something anyone who has used ChatGPT more than a few times a day will recognize, and possibly rage at). The more aggressively insights were surfaced, the more questions it raised around trust and ethics, especially when it came to monitoring officer biometrics or using existing case evidence for deeper analysis.
Learnings
This did not end with a feature to ship, but it gave me clarity. I got a better feel for where AI genuinely helps and where it starts to fall apart once you look closer.
v0 helped me move fast and make ideas visible before getting attached. I also learned, very quickly, how expensive curiosity can be. Watching token usage climb was strangely educational and a good reminder to be intentional about when speed is worth the cost.
In the end, v0 worked best as a thinking aid. The real value came from shaping and editing ideas, not generating endlessly.
Product demo walkthrough
Closing
Long before AI, this story was already asking who gets to decide what the data means.
This experiment shifted how I think about AI in design. I stopped seeing it as something you add at the end and started seeing it as something that changes how you explore ideas from the start.
I kept coming back to the concerns raised in the original work by Philip K. Dick. Signals can feel authoritative long before they are actually understood. How they are framed and interpreted matters as much as the signal itself. That idea stayed with me throughout this work.
As AI gets more capable, the role of design judgment does not shrink. It becomes even more important.
Credits
Anh Quan Huynh (Project Lead)
Team: DEMS / Playback Experience