Usability Testing in the Metaverse
Tags: Agency Work, Technology, User Research
By Brian Salts-Halcomb, Research Team
This spring, a long-term client approached EchoUser with a project straight out of a science fiction movie, or rather, straight out of the mid-pandemic 2022 sci-fi experience that is real life. Our researchers have decades of experience tweaking research methods to meet client goals and constraints, but this one pushed us significantly.
The client had been working stealthily for years on augmented and virtual reality (AR/VR), envisioning the future of virtual workplace collaboration. With the rapid shift to remote work, their prototypes suddenly had new urgency, and they were coming out of stealth mode. Their customers were eager for new ways to make remote interaction feel authentic and to use 3D in presentations and product development. Was “…infinity and beyond” suddenly here and now?
The research method was fairly standard: a usability test, whereby participants are asked to complete different tasks and the product is assessed for its efficiency, effectiveness, and user satisfaction. However, the research protocol was highly customized. As is often the case, new technology immediately stretched common practices for user testing, and brought up key questions.
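To make those three measures concrete, here is a minimal sketch of how they are often tabulated in a task-based usability test. The session data, task names, and 1-7 rating scale below are entirely hypothetical illustrations, not the client’s actual protocol:

```python
from statistics import mean

# Hypothetical session records from a task-based usability test.
# Each record: (participant, task, completed?, seconds on task,
# post-task satisfaction rating on a 1-7 scale).
sessions = [
    ("P1", "join_meeting", True,  95,  6),
    ("P1", "share_model",  False, 240, 3),
    ("P2", "join_meeting", True,  70,  7),
    ("P2", "share_model",  True,  180, 5),
]

# Effectiveness: share of task attempts completed successfully.
effectiveness = mean(1 if done else 0 for _, _, done, _, _ in sessions)

# Efficiency: average time on task, counting successful attempts only.
efficiency = mean(t for _, _, done, t, _ in sessions if done)

# Satisfaction: average post-task rating across all attempts.
satisfaction = mean(r for *_, r in sessions)

print(f"Effectiveness: {effectiveness:.0%} of task attempts succeeded")
print(f"Efficiency:    {efficiency:.0f}s average time on successful tasks")
print(f"Satisfaction:  {satisfaction:.1f} / 7 average rating")
```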
The learnings came rapidly, aided by vital piloting sessions where we tweaked the approach. Ultimately, we arrived at some key principles to take forward into similar future projects:
We solved the logistics of getting headsets to participants by inviting them into a physical office. In hindsight, that control over the space and technology proved important in countless ways; while so much usability work today can be done remotely with common tools like webcams and screen sharing, here we needed that in-person support.
On the other side of the coin, we had a remote researcher moderate the test, donning their own headset. This let the moderator join and observe the participant inside the AR environment, and it was one tactic for fostering the collaborative experience central to the product. This mix of in-person and virtual is something we’ll continue in the future.
Initially, we had a tech-heavy protocol with multiple computers and headsets, screen-casting, and web conferencing. Yet much of that technology was peripheral to the user experience we were testing: with so much to juggle, one misstep and we would have been testing our research setup more than the client’s product. We learned to be ruthless about establishing contingencies and to be ready to ditch any system that was not vital to what we wanted to learn.
For example, after seeing that the process we were using to record the participant’s view could affect the headsets’ performance, we designed a contingency plan to run the evaluation on audio recordings and the researchers’ observations alone. While we didn’t end up needing it, preparing the contingency was a vital step in thinking through what was essential and what we could strip away.
We learned we’re at a stage where most people still have an initial “wow” reaction to VR/AR products. It truly is the movies come to life. Yet we needed to work hard to get past that first impression to the real product experience, and we leaned on a few important tactics to do so.
Many of these tactics are good practice for any research project, but they became far more vital here. AR/VR merges our physical and digital lives, and it demanded a research approach that thoughtfully merged the two.
So, EchoUser has officially completed its first mission to the metaverse. (Side note: if you’re still wondering what that word even means, check out McKinsey’s recent podcast series).
Our report back is that while much was the same, the research approach needed a fresh set of eyes and constraints. What did we miss? What else have you learned in applying UX practice to new realities?
Published on July 19, 2022