Choosing the right evaluation method depends on your specific needs and constraints. Heuristic evaluations offer quick, cost-effective insights, while usability studies provide in-depth user feedback. Task analysis enhances both methods by focusing on user tasks and uncovering challenges. By integrating these approaches, you can achieve a comprehensive and user-centric evaluation of your product.
When Do You Need a Usability Study vs. a Heuristic Evaluation? Where Does Task Analysis Fit In?
Understanding when to use a usability study versus a heuristic evaluation can significantly impact the effectiveness of your product development process. Here’s a breakdown of each method, how task analysis fits in, and examples of how these methods have been applied in our day-to-day work.
HEURISTIC EVALUATION
What is it? A heuristic evaluation involves usability experts assessing a design based on established “rules of thumb.” Historically used for human-computer interaction systems, this method evaluates software interfaces for usability. Building on well-established guidelines, experts have tailored 14 Usability Heuristics specifically for medical devices. We apply these 14 heuristics to conduct thorough heuristic evaluations, ensuring usability excellence in client projects.
Why Use It? Heuristic evaluations are quick and cost-effective, making them ideal when time and resources are limited. They provide valuable insights early in the product development process, even before a fully-fledged prototype is available. Our approach includes:
Expert Reviews: Conducted by seasoned usability experts.
Tailored Heuristics: Using our custom heuristics for medical devices.
Comprehensive Analysis: Identifying potential areas for improvement and their impacts.
FORMATIVE USABILITY STUDY
What is it? At a high level, a usability study typically involves simulated use testing of a product to observe how users interact with it. Usability studies inform product design, identify use-related risks, and can help engineers and designers discover the root cause of use errors to include risk mitigations in the product’s design. This evaluation method uses representative users in a representative environment to gather user feedback on specific product components, or the product as a whole.
Why Use It? For medical devices and combination products, FDA requirements necessitate usability studies to ensure device safety and effectiveness. These studies help iterate through device design and thoroughly evaluate components. Compared to heuristic evaluations, usability studies show how end users interact with the device and reveal more opportunities for design improvements.
Our approach includes:
Simulated Use Testing: Observing real users in a controlled environment.
FDA Compliance: Ensuring all regulatory requirements are met.
Comprehensive Study Design: Simulating a representative use environment, recruiting representative users, and applying study findings to design recommendations.
TASK ANALYSIS
What is it? Task analysis goes hand in hand with both heuristic evaluations and usability studies. It involves breaking down the tasks users will perform with the device to understand their needs and challenges.
Why Use It? It provides a deeper understanding of user interactions, enhancing the effectiveness of heuristic evaluations by ensuring that the heuristics are applied in a context that reflects real user behavior. It also enriches usability studies by identifying specific tasks that need to be tested, ensuring comprehensive coverage of user interactions. Our task analysis process includes:
User-Centric Focus: Understanding user needs and challenges.
Ethnographic Methods: Observing users in their natural environment.
Detailed Task Breakdown: Analyzing each step users take with the product.
HOW THESE METHODS WORK TOGETHER
By combining heuristic evaluations, usability studies, and task analysis, you can ensure a thorough and user-centric design of your product. These evaluation methods complement each other, providing a comprehensive understanding of both potential usability issues and real-world user interactions.
For more detailed examples and insights, check out our case studies on heuristic evaluations and usability studies. These case studies illustrate how Kaleidoscope Innovation has successfully applied these methods to improve product design and usability.
For over 7 years, Kaleidoscope Innovation has been a trusted partner to industry leaders like Eli Lilly, Pfizer, and Baxter, helping bring safer, smarter medical products to market. Our integrated Human Factors expertise ensures that the right evaluation methods—whether heuristic, usability-focused, or task-based—are applied at the right time. Whether you're developing a new device or improving an existing one, we’re here to guide your team with insights that reduce risk, streamline development, and enhance user experience. Let’s talk about how we can help you choose and apply the best evaluation methods for your product.
Taylor is a Human Factors Engineer at Kaleidoscope Innovation. She brings experience from roles in Human Factors, Research and Design, and Clinical Research. Her background in Human Factors Engineering, combined with her collaborative approach, ensures that user-centered design is seamlessly integrated into every project.
Unlock Hidden Productivity with Time & Motion Studies
A Time & Motion (T&M) study can be a valuable addition to a user-centered design process. Time & Motion studies are usually conducted to identify potential bottlenecks in productivity but can also identify physiological risks associated with working in a warehouse, factory, health care environment, or lab setting. Significant enhancements in productivity have been linked to ergonomically designed workspaces, leading to better worker morale and increased revenue due to reduced cycle times and fewer repetitive stress injuries.
At Kaleidoscope, we perform several time & motion studies for our clients every year. One of the advantages of this research technique is the insight gained by observing actual users performing the workflow in context and in real time. Our process for conducting a T&M study usually follows this sequence:
Meet with stakeholders to define the target users and workflow, and to determine if user experience and motion data will be in scope. If motion data is required, collaborate with ergonomic engineers to coordinate efforts.
Schedule onsite data collection. Send enough researchers to collect observational data, operate recording equipment, and conduct contextual interviews with participants.
Extract data from video footage through frame-by-frame video manipulation. Analyze data, conduct descriptive statistical analysis and inferential analysis when appropriate. Identify insights and themes relevant to the research question(s).
Synthesize and present results to stakeholders. When possible, compare current results with historical data to view changes in time requirements that could be related to workspace/workstation design improvements. Incorporate user experience findings to give research participants a voice in future workstation design changes and continue Kaleidoscope’s commitment to human-centered design and research.
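To give a concrete picture of the descriptive-analysis step in the sequence above, the sketch below converts observed frame counts into cycle times and summarizes them. The frame counts, frame rate, and historical baseline are hypothetical values for illustration, not data from an actual client study.

```python
from statistics import mean, median, stdev

FPS = 30  # hypothetical recording frame rate of the video footage

# Hypothetical frame counts for one task, extracted frame-by-frame from video
task_frames = [412, 388, 451, 397, 425, 380, 440]

cycle_times = [frames / FPS for frames in task_frames]  # seconds per cycle

summary = {
    "n": len(cycle_times),
    "mean_s": round(mean(cycle_times), 2),
    "median_s": round(median(cycle_times), 2),
    "stdev_s": round(stdev(cycle_times), 2),
}

# Compare against a hypothetical historical baseline to flag change over time
historical_mean_s = 15.0
summary["delta_vs_baseline_s"] = round(summary["mean_s"] - historical_mean_s, 2)
print(summary)
```

In practice the same summary would be computed per task and per participant, then compared against historical data wherever it exists.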
WHAT IS NOT MEASURED IN A TIME & MOTION STUDY
Time and motion studies are ideal for identifying productivity obstructions and potentially unsafe body movements. Not all the important factors related to employees’ work experiences are measured, however. As technology and infrastructure become more robust and complex, we have responded by expanding our capabilities to provide value for our clients. We share one challenge here to illustrate the complexity of studying modern warehouse environments.
Challenge
Time studies often target one piece of a complex system.
When only one component of a facility/system is studied, external factors that influence time requirements may not be observable or included in data collection.
Changes made to the target component may create unanticipated changes elsewhere in a facility or system.
Factors out of employees’ control may be misinterpreted as inefficiency.
Potential Solutions
It may be helpful to apply systems thinking near the beginning of a time & motion project.
Consider the larger system structure within which the T&M study is taking place.
Document who may be influenced by the design or redesign of the target component.
Discuss what stakeholders prioritize that could be influenced by the redesign of the target component.
Consider whether there are opportunities to change the system structure.
At what points can we intervene?
If possible, design interventions that benefit the entire system.
Map the system, even parts outside the area of focus. Create a visualization that allows stakeholders to envision how the system might react to changes in its structure (e.g., process map, schematic illustration, storyboard, animation).
Potential Add-Ons
To maximize the human potential embedded within workplaces, other research techniques may be added to time & motion studies for an even greater degree of comprehension.
Visual Aids: Diagrams of the facility the study is based on may help the audience better understand research findings.
Surveys: Surveys provide an inexpensive method of gathering large amounts of data quickly. Often, responses are provided in numeric format, which allows for historical comparisons.
Interviews: 1:1 or group interviews may be added to a time & motion study to gain an understanding of time requirement results. Contextual information known to participants but not researchers may be shared in an interview to provide a deeper understanding of the “why” behind observed time requirements.
THE HUMAN COST OF EFFICIENCY IMPROVEMENTS
Any improvements to efficiency should be weighed against the human cost to the workers employed in the facilities we study. If efficiency improvements create a stressor where none was present, carefully consider whether the gain is worth the cost. Constant time pressure and feeling hurried will take a toll on even the hardiest employees. Consider workarounds that value the worker and place them at the center of decision-making. The payoff in retention and increased job satisfaction will likely outweigh any efficiency improvements under consideration.
Rachael brings over 10 years of research experience to her role at Kaleidoscope Innovation. She has advanced training in clinical psychology and mixed methods research methodology. Guided by the principles of positive psychology, Rachael uses a human-centered lens for deeply understanding the user experience. Her work at Kaleidoscope focuses on human-machine interaction and identifying design changes capable of positively impacting well-being at the individual and institutional levels.
Qualitative Research: AI's Role in Analysis Advancement
BACKGROUND
Qualitative research plays a pivotal role in enriching our comprehension of individual narratives and experiences. It is a cornerstone methodology for design researchers seeking to forge a deep connection with user perspectives, particularly during the initial phases of the design process. This approach is instrumental in guiding iterative design developments, ensuring that end-user needs are comprehensively addressed. Qualitative data encompasses a diverse array of formats, including textual content, photographs, and videos. Typically, these studies involve a more focused sample size, often with 10 or fewer participants, to facilitate an intensive, detail-oriented analysis that quantitative methods may not capture.
Although qualitative research is the methodology of choice for design researchers, the approach requires a considerable time commitment. Qualitative data is known for being unwieldy at times, and words and images require more hours of analysis than numeric data. Often, our clients are eager to obtain research findings as quickly as possible to move a product or system into production. Therefore, large-scale qualitative studies are not feasible for most design research projects. With the recent surge in the availability of AI language model tools, we speculated that ChatGPT could be used to analyze extremely large sets of qualitative data more efficiently. To that end, we conducted a 6-month project testing ChatGPT as a potential tool for qualitative data analysis.
THE CURRENT PROJECT
Our aim in conducting this project was to determine if AI could produce insights from a large dataset that would otherwise be unmanageable and time prohibitive for a human researcher. We used data from 25,000 open response questions to explore the capacity and capability of ChatGPT as computer-assisted qualitative data analysis software (CAQDAS). The dataset we used was provided by the VIA Institute on Character, a local non-profit organization with which we are affiliated. We decided to experiment with ChatGPT to determine if it could reliably and accurately analyze text data. Our expectation was that if ChatGPT could analyze qualitative datasets with tens or hundreds of thousands of respondents, new pathways for qualitative researchers may develop. Using AI for data analysis could change the trajectory of a research design and lead to large-scale qualitative studies that were not possible before now.
PROCEDURE
To test the limits of ChatGPT 4.0 (the only version with the means to upload files), we tried two different approaches to determine the capability of the tool.
METHOD 1: QUICK AND EASY
We started with a vague set of user queries to place the data preparation load on the CAQDAS and to determine if it would complete the same tasks a human researcher would.
User Query: Analyze the data in column AQ, identify themes, and provide 3-5 insights based on participant responses.
Result: Not useful.
ChatGPT did not automatically clean the data without instruction, which caused an error. The output from ChatGPT indicated the data file was either too long or too complex and it was unable to proceed with analysis. The raw data included responses such as “N/A” or random strings of letters, which a researcher would have deleted or ignored before analysis.
Lacking more specific instruction, ChatGPT defaulted to a quantitative approach to data analysis, even though the data were text responses. One of the first outputs ChatGPT produced was a count of the most common phrases in the dataset.
We concluded that this approach to creating user queries was not useful. ChatGPT attempted to analyze the data but quickly became overwhelmed and either produced an error message or continued to attempt analysis, getting caught in the AI version of the Mac’s “spinning wheel of death.”
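To make concrete what cleaning means here, the sketch below shows the kind of filtering a researcher would either perform by hand or explicitly instruct the AI to perform. The responses and the junk-detection rules are hypothetical illustrations, not our actual dataset or procedure.

```python
import re

# Hypothetical raw open-text responses, including the kinds of junk
# (blank cells, "N/A" variants, random letter strings) a researcher would drop
raw = [
    "I try to listen before judging.",
    "N/A",
    "sdfghjk",
    "",
    "Kindness matters most to me.",
    "n/a",
]

def is_junk(text: str) -> bool:
    """Flag empty cells, 'N/A' variants, and keyboard-mash strings."""
    t = text.strip().lower()
    if not t or t in {"n/a", "na", "none"}:
        return True
    # A single "word" with no vowels is likely a random string of letters
    if " " not in t and not re.search(r"[aeiou]", t):
        return True
    return False

cleaned = [r for r in raw if not is_junk(r)]
print(cleaned)
```

A researcher would run this kind of filter as a matter of course; without explicit instruction, ChatGPT did not.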
METHOD 2: THE GUIDED ANALYST
We then provided ChatGPT with more specific instructions. We instructed it to clean, review, and code the data, then create insights using a theoretical framework as a guide for analysis.
User Query: I'd like to analyze some text data using Peace Psychology and Positive Psychology as theoretical frameworks. Include content from the VIA Institute on Character as an additional framework. Focus on data in the 'Open Responses_Political Differences' column.
First, ignore text that indicates a respondent did not want to answer, such as 'N/A' or random strings of letters. Leave those cells blank. Next, use descriptive codes, such as a phrase that describes the content of the targeted data.
Create a new document and filter the data from columnAD. Group the data according to the codes created in columnAD and list each data point that corresponds to the code.
Create 3-5 insights from the coded data in columnAD, using positive psychology and peace psychology as theoretical frameworks.
Create a Word document and place the insights you created in it. Make the file available for download.
Result: Success.
ChatGPT produced a list of five insights using the VIA classification of character strengths as a framework, supported by references to positive psychology and peace psychology.
Our last step was to think about how to test the accuracy and reliability of the findings. Rachael has a strong background in qualitative methodology, so she extracted a sample of 20 data points and used a traditional qualitative approach to code the data. We also queried ChatGPT using the same data set with similar user queries to determine the degree of similarity between both sets of outputs.
Accuracy
Within the first 20 rows of participant responses, we identified at least three codes related to the five insights ChatGPT produced. We were pleased with the accuracy of the codes as they related to the insights from our sample data.
Reliability
Both researchers requested insight generation from ChatGPT using similar but not identical queries. ChatGPT’s output for both sets of queries was closely aligned. Three of the five generated insights were nearly identical between the two researchers and two were unique per researcher. It is important to note that even within research teams, differences arise between researchers’ interpretations of qualitative data. Disagreements are resolved through discussion and are typical of a qualitative research project.
Finally, we tested some of the limits of ChatGPT with several ad hoc user queries. Our queries produced a great deal of information in a very short period.
AD HOC REQUESTS:
Code Participant Country of Origin: We queried the data set to determine if each participant’s country of origin represented an individualistic or collectivist nation (coded as “country designation”). We asked ChatGPT to code each participant’s country of origin using this designation. The codes were placed in a new column created by ChatGPT.
Group Data and Analyze Frequency: We asked ChatGPT to examine the most common keywords and themes by country of origin. It provided a summary for all countries represented in this data set, with the most frequently used codes according to participants’ country of origin.
Character Strength Frequency by Country Designation: We prompted ChatGPT to organize the data according to country designation and create a bar chart to illustrate the frequency of character strengths chosen as most beneficial by participants.
All the ad hoc requests were completed successfully. ChatGPT confirmed its approach with users before completing each step. For example, after requesting the data be coded according to country designation, ChatGPT responded:
Sure! In order to label countries as "collectivist" or "individualist," I would need a reference or criteria to determine which category each country falls into. There are various sources and studies that have categorized countries based on their cultural dimensions, including the work by Geert Hofstede.
Do you have a specific reference or criteria in mind that you'd like me to use? Alternatively, I can use Hofstede's cultural dimensions as a general guideline.
ChatGPT also encouraged us to use caution when interpreting the findings from the between-country analysis, because significantly more participants were from individualistic nations.
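The country-designation request can be illustrated with a small Python sketch. The lookup table below is a hypothetical stand-in for a published classification (such as Hofstede's cultural dimensions), and the participant records are invented for illustration.

```python
from collections import Counter

# Hypothetical lookup standing in for a published classification such as
# Hofstede's cultural dimensions; a real study would cite its source
COUNTRY_DESIGNATION = {
    "United States": "individualist",
    "Australia": "individualist",
    "United Kingdom": "individualist",
    "China": "collectivist",
    "Japan": "collectivist",
    "Mexico": "collectivist",
}

# Hypothetical participant records: (country, top character strength)
participants = [
    ("United States", "curiosity"),
    ("Japan", "teamwork"),
    ("China", "kindness"),
    ("Australia", "curiosity"),
    ("Mexico", "kindness"),
    ("United States", "honesty"),
]

# Code each participant, then tally strength frequency per designation
tallies = {}
for country, strength in participants:
    designation = COUNTRY_DESIGNATION.get(country, "unclassified")
    tallies.setdefault(designation, Counter())[strength] += 1

print(tallies)
```

Grouping the coded data and charting strength frequency by designation, as in the second and third ad hoc requests, follows directly from tallies like these.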
LESSONS LEARNED
We learned through this project that ChatGPT exhibits several esoteric preferences for working with Excel files. We only used Excel to upload data sets, so our suggestions are restricted to this software.
1. ChatGPT cannot analyze data if it has been tagged with a data type. The output will state that it completed the user query, but new files will not show any changes.
SOLUTION: Remove any Data Types tags before uploading Excel files to ChatGPT.
2. ChatGPT prefers references to column names instead of the letters Excel uses to identify columns.
SOLUTION: If a user query contains a letter identifier instead of a column name, remove the space between the word “Column” and the letter.
CORRECT: “Provide a mean for the data in columnAI.”
INCORRECT: “Provide a mean for the data in Column AI.”
3. Unless instructed, ChatGPT will not automatically clean uploaded data. If a user attempts to request analysis before cleaning, it will respond with an error message.
SOLUTION: Provide explicit instructions for data cleaning before analysis.
HUMAN RESEARCHER VALUE
We shared just a fraction of the user queries we submitted over a 6-month period to test ChatGPT as a qualitative analysis tool. We presented the successes and failures as linear, concise processes for readability. However, early in the project, ChatGPT was often overwhelmed with requests and our queries resulted in error messages. Queries usually required several back-and-forth inputs between researchers and the AI to clarify instructions. With little or no guidance, ChatGPT was unable to produce results. We found that the AI required specific instructions to function as computer-assisted qualitative data analysis software. Our bottom-line recommendation is that well-trained researchers test the tool using a data set for which they already possess human-produced findings. Compare those findings with ChatGPT’s output and evaluate its reliability and accuracy.
Based on our brief examination of ChatGPT’s capability, we advise only well-trained researchers with extensive qualitative research experience to use AI as a computer-assisted data analysis tool. As in any other profession, expertise and training are the best predictors of quality work. As the saying goes, garbage in, garbage out. Users with no idea how to design a rigorous research study will not provide the needed input for AI to perform adequately.
Our early work indicates the potential for AI to assist in qualitative data analysis. Like other CAQDAS products such as MAXQDA and NVivo, the software serves as a management and organizational tool. We envision ChatGPT as a marginally higher-level tool with the capacity for categorizing and summarizing qualitative data, with the proper guidance and instruction.
Grant is a Senior Design Engineer who enjoys being faced with new challenges, and recognizes that well-designed products lead to better experiences and outcomes for users. He loves the design process, and has operated primarily in medical device and industrial applications. His passion to understand, innovate, and simplify has been supported and strengthened by the Kaleidoscope team and their talented partners. At home, he always has a fun project in the works (think: wooden bicycle, handheld Theremin, one-string electric guitar)!
Unlock Hidden Productivity: A Research Guide for Industrial Designers
In today's fast-paced and competitive world, industrial designers face the exciting challenge of creating innovative and user-centric products that capture the market's attention. While their expertise lies in design aesthetics and functionality, the role of research in the design process cannot be overstated. Research is the key that unlocks valuable insights, fuels inspiration, and ensures that designs are grounded in real-world needs and preferences. However, for industrial designers and other professionals who are not trained in research methods, navigating the realm of research can feel daunting. In this article, we will define research methodology and provide suggestions for selecting the right one for your project.
RESEARCH METHODOLOGY
Once a client settles on a research question, it is up to the design researcher to select the methodology that facilitates a rigorous approach. Think of methodology as a framework for conducting a research study. The chosen methodology will guide a researcher in methods and procedures that ensure the results or findings are valid and reliable.
QUANTITATIVE: Quantitative methodology is used to determine if relationships between variables exist, to test a hypothesis, or to measure a phenomenon. Quantitative data is used to make group comparisons or identify patterns. Data are numbers and reported in a standard reporting structure. Descriptive and inferential statistics require quantitative data. The output of quantitative analysis is referred to as results.
QUALITATIVE: Qualitative methodologies are used to understand a phenomenon more deeply, to obtain a detailed description of an experience, or to understand how or why an event occurs. Qualitative data may be text or images and uses a flexible reporting structure. Interview transcripts and video recordings represent qualitative data types. The output of qualitative analyses is called findings.
MIXED METHODS: Mixed methods research includes aspects of quantitative and qualitative methodologies in the same study or series of studies. Mixed methods approaches can be used sequentially or concurrently. Often, results or findings from one phase will be used to design a subsequent phase of a project. A Time & Motion Study consisting of quantitative measurement of a motion in a workflow followed by a one-on-one interview is an example of a mixed methods study. The qualitative findings could be used to understand the results of the quantitative phase more deeply, to provide context for interpreting the results, or to triangulate the results and findings.
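To ground the quantitative half of that Time & Motion example, here is a standard-library Python sketch of the inferential step: comparing cycle times before and after a hypothetical workstation redesign. The timing data are invented, and Welch's t statistic is shown as one reasonable choice of test, not a prescribed method.

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical cycle times (seconds) for one task, before and after a
# workstation redesign: the quantitative phase of a mixed methods study
old_workflow = [14.2, 15.1, 13.8, 16.0, 14.9, 15.4, 14.7]
new_workflow = [12.9, 13.4, 12.1, 13.8, 12.6, 13.1, 12.8]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(old_workflow, new_workflow)
print(round(t, 2))  # a large positive t suggests the new workflow is faster
```

The qualitative phase, such as a follow-up interview, would then probe why the redesign changed the times, giving the number above its context.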
WHICH ONE SHOULD I USE?
Choice of research methodology should be determined using several factors:
Research Purpose: If the purpose is to understand or explore, a qualitative methodology is likely the best approach. If a client wants to know how much or to determine if a new workflow is more productive than the old one, a quantitative approach will likely be appropriate. If a client wants both, a mixed methods approach will be best.
Budget: Generally, qualitative studies are more time-intensive than quantitative studies. If a client’s budget is limited, a quantitative approach may be best.
Decisions: If a client wants to use the findings of a study to generate ideas or inform iterative design requirements, a qualitative approach may be best. If a client wants to evaluate changes to a process or product, a quantitative approach is required.
The next step in planning a research study is to decide what methods will be used to collect data. Methods specific to each methodology exist but are beyond the scope of this article. If you are interested in learning more, check out some of the popular methods from a reliable source: narrative inquiry, survey, and ethnography are a few examples of methods you may encounter in the field. By embracing research methodology as an integral part of the design process, industrial designers can confidently embark on their creative journey, armed with insights that empower them to craft extraordinary products that not only meet user needs but also set new standards of innovation in their industry.
Mastering Combination Product Development: From Immersion to Validation
THE IMMERSIVE BEGINNING
Our journey kicks off with immersion, a creative problem-solving phase. Here, we ensure that solutions are at the ready for any potential roadblocks. We dive into the waters to test the concept's feasibility and identify potential challenges. We also map out short and long-term objectives, charting the course for product development.
MASTERING THE ART OF DESIGN
With a clear vision in mind, we start breathing life into it through meticulous planning and execution. Crafting a combination product resembles assembling an intricate puzzle, where every detail carries significance. This stage revolves around rigorous testing and evaluation to pinpoint the best and most efficient design solutions.
CONSTRUCTING THE FUTURE
This phase is undeniably exhilarating. Building the product is where the concept takes tangible form. Transitioning from design to reality, prototyping takes center stage. It grants us the opportunity to scrutinize every element, ensuring the product's integrity and functionality.
THE PINNACLE TEST
Validation stands out as perhaps the most pivotal step in the entire process. During this phase, the product undergoes comprehensive reviews and testing to unveil any last-minute imperfections or errors. This thorough examination ensures the product is primed for its grand debut in the market. Validation acts as the ultimate litmus test, determining the readiness of the combination product for integration into various healthcare services.
Taylor Schmitt is currently a student at The Ohio State University, where she studies marketing. She loves exploring new opportunities and facing new challenges. While working at Kaleidoscope she has been able to work closely with the sales team to support business growth and brand visibility.
Matt has always loved interacting with clients to find solutions for their challenges. He was drawn to business development at Kaleidoscope Innovation because of the great potential he saw. After graduating from the Lindner College of Business at the University of Cincinnati, he worked with two startups, a marketing consultancy, a financial services company and the non-profit 3CDC. He believes that listening is the most important part of sales. In his free time, Matt enjoys movies, trying new foods, traveling and the great outdoors.
AI as Intelligent Design? Not Yet, But It’s Coming.
From art generators to chatbots, AI seems to be having its zeitgeist moment in popular culture. But for those of us who work in design, the near-term and future applications of AI have been lively discussion points in strategic planning meetings for quite some time. There is no doubt that AI will be an instrumental part of our world’s future. It will allow us to rapidly synthesize all the data being collected via our phones, cameras, computers, smart devices, and much more, giving us the ability to decipher and understand that data in illuminating, meaningful, and likely, world-changing ways.
What does this mean for the design industry? Though it may be a long time before AI is able to design a product from the ground up, the potential is clearly there. In fact, we believe AI is a tool that designers should be adding to their arsenal sooner rather than later.
Putting AI to Work
To put our money where our industry-informed opinions are, the Kaleidoscope Innovation team recently embarked on a studio project to design a high-end lighting fixture that could mimic lighting patterns found in nature. The project would enable our team to flex our aesthetic skills while using the full range of our design toolbox. One of those tools is Midjourney, a proprietary artificial intelligence program produced by an independent research lab by the same name. Though still in the open beta phase, Midjourney proved to be a useful partner in our mission. The collaboration between AI and the guiding hand of our expert design team delivered intriguing results.
One important distinction about the AI portion of the project: We were not setting out to produce real-world functionality, and in fact, we had no expectation or need for the AI to produce fleshed-out ideas or even design sketches. This experiment was about exploring new territories in aesthetics and applying them to materials and manufacturability considerations.
Our first step was to gather a team to collaborate on the search terms that would help visually articulate the aesthetic aspirations for our new fixture. Midjourney works by inputting text-based prompts, which the AI algorithm uses to generate new images using vast databases of existing images. The terms we fed the algorithm included chandelier, lighting, brilliant, elegant light, airy, crystalline patterns of light, dancing, photorealistic detailed plants, greenery, daytime, bright, modern, beautiful, natural colors, garden, and greenery. The team also used technical inputs alongside these qualitative descriptors to determine the aspect ratio and resolution while also guiding the algorithm to reference certain lighting styles and rendering approaches.
Digesting these descriptive words, Midjourney searched vast amounts of data across the internet to create original—albeit amalgamated—artwork. The images it produced reflected the algorithm’s interpretation of the inputs the team provided. From there, we tweaked specific inputs to alter the color, lighting, tone, and subject matter, continuing to iterate until we had collected a series of AI-generated lighting fixtures that could inspire the team.
How Did AI Do?
Based on the text inputs the team provided, Midjourney was able to identify design elements that could produce the effect of light shining through leaves. The images it produced looked organic, almost surreal in the way they were able to capture the kind of nature-made glow and transparency that is elusive in real-world lighting solutions. The various iterations of artwork then became mood boards that set up our team to brainstorm ways in which the effect could conceivably be produced.
The algorithm’s interesting use of materials, colors, lighting effects, and overall mood inspired us to apply those attributes to a holistic design. In other words, instead of our team scratching their heads visualizing how the light should transmit, AI provided ideas that let us focus on materials, manufacturability, technical requirements, and more. Rather than spending hours scouring the internet for inspirational imagery, the team crafted that imagery ourselves through AI in a fraction of the time—imagery that exactly aligned with our design vision.
Without question, Midjourney served as a highly effective springboard that sparked ideas our team would probably not have come up with starting from a blank sheet of paper and pen. In this sense, AI provides an upfront efficiency that can move a project farther down the road faster than it might otherwise have gone. Perhaps more than that, a significant strength of AI in this application is that it can cast a wide net in terms of inspiration and exploration. It’s an open mind, and designers should be willing—and eager—to go down the rabbit holes, teasing out new possibilities. Once an intriguing direction is established, the designer can take over to turn the AI-generated inspiration into an actual product.
The key to a successful AI collaboration is feeding in the right words or phrases to draw the best out of the AI. Crafting prompts, then, is more art than science. Further, with a program like Midjourney, there is an element of unpredictability: You don’t have much control over what you’re going to get out of it. There is a lot of trial and error and shooting in the dark. Therefore, if you already have a set idea in mind, using AI to design it will probably be more frustrating than productive.
The inherent aspect of exploration and discovery is a factor to consider as well. Our team felt excited about experimenting with this technology specifically because the lighting fixture was an internal project. Had we been designing for a client, we would have been more hesitant to use AI while balancing product requirements, timeline, budget, and resources.
Lastly, because this was a purely aesthetic exercise, we weren’t trying to solve any mechanical problems through AI—that skill is not in its wheelhouse at this point. This limitation poses a real barrier to the widespread adoption of AI, but as the algorithms improve over time, AI may be able to help us solve even our stickiest mechanical problems.
Beyond leveraging AI for creative exploration, Kaleidoscope has also put it to use in some of our research work. As part of our insights and user experience programs, we often do ethnography or time-and-motion studies in which we observe individuals interacting with a tool or experience. Typically, one of our team members is responsible for reviewing videos to log data, tracking everything from how often someone does something to the amount of time it takes them to do it. It’s a time-consuming process that has led us to start dabbling with programming AI to analyze video recordings for certain elements and then export the data quickly and effectively. Using AI to track the frequency and duration of actions for time-and-motion studies shows tremendous potential to save time and reduce costs while freeing our team members to focus on more creative assignments.
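The frequency-and-duration logging described above reduces to simple aggregation once a video-analysis model has emitted time-stamped action events. A minimal sketch, assuming a hypothetical event format of `(action_label, start_seconds, end_seconds)` rather than our actual pipeline:

```python
from collections import defaultdict

# Hypothetical events as a video-analysis model might export them:
# (action_label, start_seconds, end_seconds).
events = [
    ("reach", 0.0, 1.5),
    ("grasp", 1.5, 2.0),
    ("reach", 4.0, 5.2),
    ("place", 5.2, 6.0),
]

def summarize(events):
    """Aggregate per-action frequency and total duration for a time-and-motion study."""
    counts = defaultdict(int)
    durations = defaultdict(float)
    for action, start, end in events:
        counts[action] += 1
        durations[action] += end - start
    return dict(counts), dict(durations)

counts, durations = summarize(events)
print(counts)     # how often each action occurred
print(durations)  # total seconds spent on each action
```

The hard part is producing reliable events from raw video; once that exists, the reporting side is trivial to automate.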
The Verdict
The Kaleidoscope team came away with an appreciation for where AI can support our design efforts today, particularly as a powerful aid in producing aesthetic inspiration and as a tool to sort and output raw data. Both help the design process in productive ways and serve as a small window to what may someday be an AI-driven design future.
Tom Gernetzke is a senior lead industrial designer at Kaleidoscope Innovation and has spent the last 12 years creatively bringing new product ideas to life.
Caterina Rizzoni is a lead industrial designer at Kaleidoscope Innovation and is the Director-at-Large of Conferences for IDSA.
Infosys Medical Devices and Engineering Services x Kaleidoscope Innovation
The healthcare and medical devices industry is undergoing a revolutionary transformation in the way solutions and devices are formulated and developed. Medical devices are more connected than ever, and remote patient monitoring with data analytics is becoming the norm.
It is imperative for medical device companies to adopt a strategic approach to stay ahead of the innovation curve. By leveraging technology advancements in areas such as mobility, wireless, cloud, and analytics, they can drive innovation that addresses market needs while tackling long device development cycles, under-optimized development processes, and high production costs.
At Infosys, we help our clients in designing customized devices, end-to-end product development, maintenance, manufacturing support, regulatory documentation, and product compliance and certifications. We also help optimize R&D cost and improve supply chain efficiencies by leveraging new technologies and partner ecosystems. This is to bring innovative medical devices and Software as a Medical Device applications into the market with the objective of improving patient care while reducing the cost of care.
Our ISO 13485-certified processes and Quality Management System ensure high-quality product development, enabling our clients to meet their regulatory needs and objectives. With our recent acquisition of the product design and development firm Kaleidoscope Innovation, we plan to redefine patient treatment and consumer health across the globe.
Inspired by Don Norman’s classic work, The Design of Everyday Things, we’ve been thinking about mundane, everyday items that can have annoying usability flaws. While we have a particular focus on the human factors of healthcare and medical products here at Kaleidoscope, we can apply that same rigorous, analytical human factors approach to these everyday things.
So, here we have the seemingly benign 2.5 gallon jug of drinking water, a household staple used by a variety of brands across the country.
Problem 1: As water is dispensed from the jug, air must replace the dispensed water to maintain consistent flow and prevent the jug from collapsing under the pressure of the surrounding air. To let air in, a small hole must be punctured into the top with a sharp knife. Using a sharp knife poses a potential safety hazard given the orientation and motion in which the knife must be used and the force needed to puncture the jug’s slick plastic. In addition, the most obvious place to puncture this hole is the top surface facing the front of the jug, which slants slightly toward the user. The angle of the stabbing motion must be just right; if it is too shallow, the knife blade can skid across the surface of the plastic with the blade pointing in toward the user’s body.
A potential mitigation for this problem is to provide an adhesive pull tab that can be removed to reveal a pre-punctured vent hole.
Problem 2: The spigot contains a small strip of plastic that extends from the spigot base to the dispenser handle. The plastic strip is intended to prevent the dispenser handle from being pulled open until the user intentionally breaks the strip, pulls the dispenser handle, and begins dispensing the water. However, the plastic strip can be easily broken unintentionally, and the dispenser handle then opens with very little resistance. This can lead to the dispenser handle opening inadvertently when force is applied to the spigot during loading, or the spigot catches on a surface while unloading, potentially emptying water into a shopping cart or the trunk of a car.
A potential mitigation for this problem is to provide a screw cap over the spigot, similar to the caps on water bottles.
What’s an aspect of an everyday item that you would change to improve the user experience?
Design influences a product’s lifecycle performance and cost, starting from its development. Product development costs rise significantly if a defect is identified at a later stage. Virtual tools for new product introduction simulate possible scenarios upfront for comprehensive testing, getting products to market quickly and saving money for a successful launch.
Insights
Design influences a product’s lifecycle performance and cost, starting from its development.
Conceptualization and design stages determine more than 70% of a product’s lifecycle decisions and cost.
Virtual tools are an effective way to design new products that serve specific customer needs.
Virtual models of new products accelerate their evaluations to shrink the development cycle time.
Organizations should create virtual replicas of workplaces to study human-machine interactions from multiple perspectives.
Lifecycle cost is the total cost (direct and indirect) a product incurs in its life span. Conceptualization and design stages determine more than 70% of a product’s lifecycle decisions and cost.1 The earlier an issue is identified, specifically in the design stage, the easier it is to fix and avoid costly rework. Virtual replicas (or digital twins) of products, processes, and environments streamline design and new product development to reduce costs and time to market.
A common assertion is that between 80% and 90% of new products fail. However, realistic failure rates vary by industry, from 36% in healthcare to 45% in consumer goods.2 Professor Clayton Christensen, best known for his theory of disruptive innovation, believed the success mantra is to design products that serve their intended customers. Manufacturers should focus on the function that a customer who buys a product wants it to perform.3
To enable that, an effective approach is to build virtual representations of the product under development, orchestrated with humans and other entities in the ecosystem. The approach encourages innovation: Designers visualize the product’s operating conditions, create digital prototypes for trial runs, and carry out tests on a global scale. Virtual tools like 3D computer models and digital twins support informed decisions in early product design stages, mitigating the risk of releasing the wrong product or delivering a poor customer experience.
When end users receive virtual training on a complicated product’s operation (like an aircraft engine), memory retention happens in the background. Any number of such training instances can be created at negligible marginal cost for repetitive use. A central digital setup saves the cost of setting up multiple physical arrangements at different locations.
Parameters of Successful New Products
Product failures are more often commercial than technical. More than 25% of revenue and profits across industries come from new products, according to a study by McKinsey. Successful products are tied to a set of core capabilities, the most important of which follow:4
Collaboration to execute tasks as a team.
Investment to mine market insights and their inclusion in the product.
Plans for new product launches, comprising target customer segments, key messages to communicate, and objectives to achieve.
Talent development for new product launches with defined career paths and incentives.
At the same time, the primary reasons for product failures, and their mitigations, are the following:5
Gap in meeting product expectations; delay launch until product completion.
Inability to support rapid growth if a product is successful; set ramp-up plans to avoid this.
Low demand for a new product; perform due diligence on customer requirements before planning a product, and launch products in suitable markets.
Difficulty in new product usage; provide proper customer orientation and training.
Virtual tools for product design address the above reasons for failure and increase the chances of successful product launches.
Design Thinking with Virtual Tools
Design thinking is a popular, technology-agnostic approach for new systems design and problem solving. It balances the technical feasibility of products, financial viability, and desirability from a customer’s perspective (see Figure 1). It is even more impactful when implemented along with virtual product design tools.
Figure 1. Design thinking at the sweet spot of desirability, viability, and feasibility
Source: Infosys
The design thinking cycle starts with empathy, to understand a customer’s needs from their perspective, followed by defining, ideating, prototyping, and validating in iterative loops. A virtual environment encourages collaboration between new product developers and participating customers as they practice design thinking. Immersive environments using mixed reality (combinations of augmented reality, or AR, and virtual reality, or VR) create a working environment close to the real world, so issues can be identified and corrected well in advance (see Figure 2).
Figure 2. Virtual tools used across design thinking stages
Source: Infosys
Design firm IDEO, for example, wanted to perform ethnographic research to capture customer requirements for new products. However, it was difficult to identify key observations from many data points and recreate them later, even with expensive videos or photos. It addressed the challenge through a VR camera.6
Kaleidoscope Innovation, a design and development unit within Infosys, used virtual tools to design a large freezer. Such projects usually undergo several time-consuming team reviews. The team created a 3D model in a VR environment that let designers walk around the product early in the design phase, evaluate its usability from multiple perspectives, and assess proposed design changes.
This virtual model did not change the overall project plan, but accelerated evaluation and decisions around it, shrinking the product development cycle time. The team selected the best design without spending time and money on physical prototypes.
Automation in Warehouses
Humans work with machines in warehouses. Material handlers carry out order fulfillment along with pick-and-place robots. Workers’ safety in all situations is important.
A leading e-commerce player wanted to validate design decisions for robots working in its order fulfillment warehouses to gain insights into their safe working alongside humans. Kaleidoscope Innovation created a virtual environment where employees interacted with robots in different situations. The team created a digital twin to simulate several configurations of robots and their working environment. The company recorded the results and interviewed employees about pros and cons of each situation.
The VR-based solution provided a cost-effective and safe way for the e-commerce firm to test new concepts in human-robot interaction and to capture data and feedback before implementation. It helped managers zoom out and look at the big picture, rather than at one robot or piece of equipment at a time.
Training for Product Usage
Operators need training to work on machines with complex functionality and procedures, to stay safe and productive. VR-based training prepares humans before hands-on operation on a machine. For instance, Rolls-Royce has rolled out a VR-based training kit for its airline customers to manage aircraft engine maintenance and repair.
Infosys’s VR-based program provides step-by-step instructions to train employees in a hospital environment. The program uses physical gestures to simulate actual tasks involved in a job. Gamification with scores and points keeps employees engaged and motivated. Scores reflect an individual’s strengths and weaknesses. Training data is integrated with the central learning management system for records.
A multinational industrial and consumer goods manufacturer wanted to create an e-training platform for its new operators. It had a few integrated assembly lines for its finished items. The Kaleidoscope Innovation team created a virtual training module along the assembly line, one workstation at a time. The team used front-end user interface elements to guide users through equipment operations and tracked performance metrics in the backend to provide corrective feedback. Best practices from creating the virtual replica of one workstation were then reused at later stations.
Futuristic Workplaces
While collaborative, remote, and hybrid working have surged since the pandemic, the future lies in three-dimensional virtual and mixed reality workspaces. Organizations benefit from virtual 3D replicas of their workspaces, equipment, products, avatars, or personas. Employee collaboration leads to faster new product development through effective interactions. Teams share ideas, explore, and invent new concepts. Early collaboration among team members in multiple locations enables more informed decisions throughout the product development process.
The future of work in healthcare, retail, engineering, and manufacturing is where humans and human-like machines work together. Organizations should proactively create such workspaces virtually and study human-machine interaction from safety, productivity, and employee morale perspectives before any physical implementation.
Myths About New Product Failure Rates, George Castellion and Stephen K. Markham, Journal of Product Innovation Management 30 (2013), pp. 976-979.
While “User Experience Design” is often used interchangeably with “User Interface Design,” UX goes far beyond mere interface design to encompass a user’s complete experience of a product, system, or service. For Don Norman, the usability engineer and researcher who coined the term “User Experience,” all aspects of the product experience, “from initial intention to final reflections,” ought to support the user’s needs and desires. Years before Norman came onto the scene, this same concept inspired Jef Raskin, a human-computer interface expert, to define the ideal computing system. Though his vision of a computer, which was little more than a glorified word processor, was uninspired even in its own time, Raskin developed a set of UX design principles, including UI consistency and encouraging users to develop productive habits, that are still relevant today.
“The Canon Cat and the Mac that Steve Jobs Killed,” an article by Matthew Guay, describes Raskin’s desire to create a computer with a humane interface. “An interface (i.e. ‘The way that you accomplish tasks with a product’) is humane if it is responsive to human needs and considerate of human frailties,” wrote Raskin. His goal was to liberate computer users through increased productivity—getting more done in less time. Inspired by Isaac Asimov’s laws of robotics, Raskin defined his own laws of computing to achieve this goal:
“A computer shall not harm your work or, through inaction, allow your work to come to harm.
“A computer shall not waste your time or require you to do more work than is strictly necessary.”
Raskin’s second law is applicable far beyond word processing and speaks to a common struggle faced by UX and UI designers alike. PowerPoint is a notable example of a poorly designed interface that decreases productivity. Its predictive toolbar attempts to anticipate the user’s needs based on what has been selected. While this feature can be helpful when it predicts correctly, it is very inconvenient when it guesses incorrectly, adding multiple mouse clicks to the user’s workflow.
Another violation of Raskin’s second law is inconsistency between user interface elements. Consider Apple’s latest iOS update. Previously, incoming text messages appeared at the top of the lock screen. Following the 16.1.1 update, incoming text messages now appear at the bottom of the lock screen. Neither location is objectively right or wrong, except for the user’s previous experiences of seeing new messages at the top. Now users must unlearn a previous habit to relearn a new interaction. Does the new feature add sufficient value to be worth the friction it introduces into the user’s experience?
The quintessential mnemonic “righty tighty lefty loosey” illustrates the socially ingrained understanding of how to lock or unlock a rotating mechanism. This convention becomes apparent when a user encounters an experience that is counter to what they expect. Because a user intuitively expects to turn the mechanism a certain way, requiring the opposite is a source of confusion and frustration.
When designing products, consistency is one of many usability principles, known as heuristics, that act as general guidelines for creating intuitive user interactions. Usability expert Jakob Nielsen, who cofounded the Nielsen Norman Group with our good friend Don Norman, created the most well-known and widely used set of usability heuristics. These heuristics are used by product designers across the globe to design more intuitive and user-friendly products and experiences.
Another key heuristic Nielsen defined is the match between the system and the user’s understanding of the real world. Imagine a stovetop with four burners arranged in a square and knobs arranged in a line. This creates confusion and tension because the user does not know which knob controls which burner. If, however, the knobs are arranged in the same square pattern as the burners, and each knob activates its corresponding burner, users quickly understand which knob to turn to ignite the intended burner.
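The natural-mapping idea can be made concrete in code: represent the control-to-target relationship explicitly, so that each knob is bound to the burner in the same spatial position. A toy sketch, with positions and names invented purely for illustration:

```python
# Natural mapping: knobs arranged in the same 2x2 square as the burners,
# so the spatial position of each control matches the element it operates.
# All identifiers here are illustrative.
burner_positions = {
    "front-left": "burner_1", "front-right": "burner_2",
    "back-left": "burner_3", "back-right": "burner_4",
}

# Each knob shares a position key with exactly one burner -- no guessing needed.
knob_to_burner = {f"knob_{pos}": burner
                  for pos, burner in burner_positions.items()}

print(knob_to_burner["knob_front-left"])  # the burner in the same position
```

The point of the sketch is that a one-to-one, position-preserving mapping leaves nothing for the user to memorize; the linear-knob layout, by contrast, forces an arbitrary lookup.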
The ultimate goal of user-centered design is to increase productivity and create an experience that is “responsive to human needs and considerate of human frailties.” No product is experienced in a vacuum—each user encounters that product within the context of a lifetime of other experiences. Understanding the needs and frailties of the end user empowers designers to create more intuitive, efficient, and enjoyable experiences for users. While Jef Raskin’s Canon Cat was a commercial failure, in a world inundated with widgets, tools and systems—both physical and digital—his concept of a humane interface is perhaps more relevant now than ever.
About:
Headquartered in Cincinnati, Ohio, Kaleidoscope Innovation provides medical, consumer, and industrial clients with full-service insights, design, human factors, and product development. For more than 30 years we have been helping our clients grow their capabilities, gain usable knowledge, and get worthwhile results.
As a full-spectrum product design and development firm, we are an expert extension of your product vision. Our teams collaborate across disciplines, providing specialized input to produce the ideal intersection between function and form. To ensure the soundness of our work, Kaleidoscope houses a full range of test labs, and we employ an award-winning team that embraces every challenge, applying their experience, ingenuity, and passion.