Over the last quarter, I’ve introduced Sora to a few clients as part of early-stage product development work. The goal wasn’t to chase novelty—it was to compress timelines, clarify decisions, and move from guesswork to grounded feedback faster.
Here’s how we’re using it: generate high-quality images, build short 5- to 10-second video clips, and drop them into relevant contexts. Think: prototype assets living inside a real-world user scenario. Not just a Figma board or a pitch deck, but something that feels alive.
The results have been significant.
When teams can see early concepts play out in situ, they stop debating hypotheticals. Stakeholders get aligned earlier. Design teams make sharper calls. Feedback from target segments gets clearer. The entire product conversation levels up because the artifact itself is stronger.
This isn’t a gimmick. It’s a wedge to reduce drag between concept and validation.
That said, we’re still working around some of the usual limitations, mainly object persistence: most generative platforms, including Sora, still struggle to keep objects and details consistent from frame to frame when generating detailed sequences. If Veo 3 (or whatever’s next) solves that, it’ll take generative video from a fast sketch tool to something you can actually prototype end-to-end experiences with, across marketing, training, onboarding, or even commerce.
Until then, we’re focused on using the current state of the tools to do what matters: speed up alignment, sharpen execution, and make better decisions faster.
No polish. No waste. Just acceleration where it counts.