Get the AI ‘OK’

If testing a user journey is as easy as writing a prompt, there’s no excuse for designing without evidence in UX, even at the very beginning. Simulations are no replacement for human feedback; they’re a useful stand-in when you would otherwise have nothing, and a bar-raiser for the all-too-rare occasions when you do speak with users. AI simulations remove the usual barriers to testing: access to users, scheduling, recruitment costs, and time commitment.

With simulations covering the due diligence, the time you finally spend with human users goes further than it ever could before. You’re reserving human intelligence for what it does best: providing nuanced, qualitative feedback that speaks to motivations, contexts, and aspirations.

AI prompts, human scripts

A product manager, together with their team, should define design outcomes; otherwise, how would designers know their designs are measurably successful? We believe design outcomes are well represented by user test scripts: clear directions for human testing also serve as excellent prompts for AI.

An example design outcome might be: ‘Users can sign into the LoremDipsum App using email john@doe.com and password secret.’ A designer then tests their Figma design with the Velocity plugin using the prompt ‘Sign into the LoremDipsum App using email john@doe.com and password secret’. Running one to ten simulations, even on a super lo-fi prototype, is enough to start building confidence. If the success rate looks promising, it’s time to trial the same task with a small group of human users.

Before walking into the design review, we recommend a base of at least 100 simulations, so your due diligence arrives in a form that’s easy for anyone to understand. Proportional success can be hard to picture, but a round base of 100 runs makes interpreting user journeys very simple: twenty per cent acted one way, and eighty per cent acted another.
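As a rough sketch of that arithmetic, here is how tallying a round base of 100 runs turns raw outcomes into a review-ready pass rate. This is not Velocity’s actual API; runSimulation and SimulationResult are hypothetical stand-ins for whatever your simulation tool returns.

```typescript
// Hypothetical sketch: turning simulated journey outcomes into percentages.
// `runSimulation` stands in for a real call to a simulation tool.

type SimulationResult = "completed" | "dropped";

// Stand-in for one simulated run of the test-script prompt.
// Here we fake an 80% completion rate for illustration.
function runSimulation(prompt: string): SimulationResult {
  return Math.random() < 0.8 ? "completed" : "dropped";
}

function simulateJourney(prompt: string, runs = 100) {
  let completed = 0;
  for (let i = 0; i < runs; i++) {
    if (runSimulation(prompt) === "completed") completed++;
  }
  // With a round base of 100 runs, the count reads directly as a percentage.
  return { runs, completed, passRate: `${(completed / runs) * 100}%` };
}

console.log(
  simulateJourney(
    "Sign into the LoremDipsum App using email john@doe.com and password secret"
  )
);
```

With 100 runs, a result like ‘completed: 83’ needs no conversion: it simply reads as an 83% pass rate in the design review.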

Here’s what it looks like running the prompt “Open the ‘Feed Minerva’ event in my calendar app” from the Apple Calendar demo. The simulation passed, but one of the two human users dropped out and didn’t complete the task. That isn’t a comment on Apple’s calendar interface; humans are just unpredictable. Wasn’t it reassuring to get the OK first?

Get the AI OK on your prototypes using Velocity.