Here on Red Hat’s User Experience Design (UXD) team, we believe in open conversations. As a team of designers working on the developer's perspective of the Red Hat OpenShift 4 web console, we relied on open conversation to gather our users’ information and feedback to help us provide innovative solutions to the OpenShift and developer community.
We ended up creating a loosely structured process for improving our outreach to customers, community members, and evangelists for user research and usability testing. We later turned this sketch into a periodic ritual for our design team to deliver value back to our users in return. Here are the steps in this process:
Talk to your users

Record the feedback

Organize the data

Analyze and disperse
Let’s take a closer look at these steps so that you too can better engage with your users and create innovative solutions to the product experiences you’re building.
When you have access to a pool of users willing to interact with you and provide feedback, any time is a good time to loop them into your design process. Whether you’re sending a survey, conducting user research, or simply chatting with someone at a product event, pay close attention to their feedback and ideas.
Not sure how to start talking to people? Try attending some events, even virtually. Our presence at open source events, our product evangelists' strong ties with customers, and our users' enthusiastic approach to getting involved all contribute to a rich feedback system and data repository for OpenShift.
Once you get people talking, you need to think about how best to gather feedback for future use. Whenever possible, request consent from your participants and record the interactions. While the data from these interactions might tell you a lot about what users prefer to do with the product, along with how and when they do it, it says very little about the why. This calls for thorough documentation of their opinions, reflections, and reactions. If these are not recorded in connection with their context, there is a high risk of getting the analysis wrong.
Also, we often tend to make on-the-spot judgments about the validity of the information, which could close many doors as to how that information could be useful in the future. To avoid this, keep the documentation thorough and record pieces of information objectively.
Once you’re done recording, transfer the relevant pieces of recorded information into a shared repository, which will serve as a single source of truth for all of the team's future research-related queries. A big challenge in maintaining a repository is organizing the data so that it doesn't lose its validity in terms of context and time.
This is the part where you bring order to the chaos. Organizations frequently come up with new requirements for reports. Conducting independent research for each one is not only financially unsustainable, it could also result in multiple stacks of limited data points with minimal overlap. Chasing the disconnected, short-sighted micro-accomplishment of turning in transient reports means the bigger picture goes down the drain.
During his research stint at WeWork, Tomer Sharon proposed a new structure for research data organization that scraps the idea of looking at reports as the atomic unit of research. He suggested that if we instead consider "observations" as the atomic unit of research, we could increase the longevity, relevance, and scope of the data in many ways. As a result, we put together a documentation structure that allows us to conveniently generate a report for a range of requirements in just a few clicks.
When creating a shared repository for members across the team, you need to address the inconsistency of individual tones of voice. Establishing a standard vocabulary for common subjects, creating editable dropdown menus for some of the fields, and defining a basic syntax for entering logs and metrics will all help keep the data organized.
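To illustrate the idea of a standardized entry with a controlled vocabulary, here is a minimal sketch in Python. The field names and values are hypothetical examples, not our team's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

# A controlled vocabulary (like a dropdown menu) keeps entries consistent
# across note-takers. These values are illustrative, not our actual taxonomy.
class Sentiment(Enum):
    POSITIVE = "positive"
    NEUTRAL = "neutral"
    NEGATIVE = "negative"

@dataclass
class Observation:
    """A single recorded observation, kept together with its context."""
    participant_id: str   # anonymized reference to the participant
    source: str           # e.g. "usability test", "survey", "event chat"
    verbatim: str         # the participant's own words
    sentiment: Sentiment
    tags: list = field(default_factory=list)  # standard vocabulary terms

# Example entry:
obs = Observation(
    participant_id="P-042",
    source="usability test",
    verbatim="I couldn't find where to add a new deployment.",
    sentiment=Sentiment.NEGATIVE,
    tags=["navigation", "deployments"],
)
print(obs.tags)
```

Because every entry carries the same fields and draws its tags and sentiment from shared vocabularies, the repository stays queryable no matter who recorded the observation.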
Analyze and disperse
To retain the reusability of the recorded evidence, translate it into a "nugget" (based on Tomer Sharon's definition): a single observation from the research. Keeping a thorough record of both the evidence and the nugget can come in handy. The nugget can later serve as an abstraction of the user's mental model in support of a proposed hypothesis for a given audience and ask. This approach helps trim the redundant details from a report so that you're left with the most important information.
The same pieces of evidence could also be used to highlight or quote the pain points on which the hypothesis would be based.
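As a sketch of why observations make a better atomic unit than reports, the snippet below shows how a report for a new question becomes a simple filter over existing nuggets rather than fresh research. The data shape and topic tags here are hypothetical:

```python
# Hypothetical nuggets: each is a verbatim observation plus vocabulary tags.
nuggets = [
    {"verbatim": "I couldn't find the topology view.", "tags": ["navigation"]},
    {"verbatim": "Deploying from Git was straightforward.", "tags": ["deployments", "git"]},
    {"verbatim": "The left menu labels confused me.", "tags": ["navigation", "labels"]},
]

def report(nuggets, topic):
    """Return every verbatim quote whose tags include the given topic."""
    return [n["verbatim"] for n in nuggets if topic in n["tags"]]

# All navigation-related evidence, regardless of which study produced it:
print(report(nuggets, "navigation"))
```

Because each nugget keeps its own context, the same evidence can back a navigation report today and a labeling report tomorrow without rerunning any interviews.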
The initial phase was especially disorderly, as it called for a lot of back and forth. The depth of the exercise changed with every advancement and required us to take a step back and fix very fundamental practices, such as note-taking, nudging, demonstrating, and even framing the right questions for our interactions.
We might still be very far away from cracking the perfect format for recording the research data for our product, and each attempt may still feel like a long shot, but it’s getting us one step closer to understanding our users better. And by sharing our story, we hope that we can help you connect better with your users. Even if your product is internal, it's still important to do this research to help with design so users can be as productive as possible.
For our next milestone, we plan to settle on a near-final format for recording and documenting information and data. Once proven successful, the effort could be bumped up a notch to cover a wider range of related products, which would call for another round of trial and error.
About the author
Veethika Mishra is an Interaction Designer by practice and a tabletop gaming enthusiast. She believes in the power of play and storytelling in crafting extraordinary experiences. Her background in Game Design provides her with a fresh perspective for envisioning solutions for problems faced by developers.