
AnatomyNow

June 2019 - July 2019

A team of Computer Science students at Duy Tan University has been developing a program called AnatomyNow, which lets you examine accurate 3D models of human anatomy. Although the program is very impressive, it is unwieldy and difficult to use. A small team of Oswego students collaborated with the Duy Tan team to help design a new interface for the program.

The Problem:

A team of computer science students has been developing a new program to help study human anatomy. The program is very impressive; however, it is awkward and difficult to use. We need to help redesign the interface so it is simpler and easier to use.

The Goal:

Redesign the interface of the program so it is more user-friendly. The redesign must contain all the features of the old design, be easier to use and navigate, and be less prone to user error.

My Role:

This project was led by me and another graduate student; we were also assisted by an undergraduate student and worked under the guidance of a professor. My fellow graduate student Priyanka and I led the activities of the project, while our professor gave us direction on where to take it.

Limitations:

We had to complete this project over the course of our stay in Vietnam, which gave us roughly a two-month window to start and finish it. Since we were in a foreign country, there was also a language barrier that limited our ability to find participants to test our design.

Cognitive Walkthrough

At the beginning of this project, none of us had seen the AnatomyNow program in person. Our professor had told us about it before we flew over, but we started with only a basic understanding of what the application was. We spent our first day of the project using the program ourselves and playing around with it to get a better understanding of how it worked. We also wrote down observations about the program, taking note of its different quirks.

The AnatomyNow program consisted of two main screens: the Home screen and the 3D screen. The Home screen was for selecting which organ system you wanted to examine, and the 3D screen was where you examined 3D models of organs in great detail.


Screenshot of Home Screen 


Screenshot of 3D Screen 

After getting firsthand experience with the program and figuring out how it worked, the other students and I set up a Cognitive Walkthrough for three other foreign students not involved in the project. A Cognitive Walkthrough means letting participants use the program and asking them to complete a few tasks. These tasks were meant to mimic the core functions of the program.

The tasks we asked participants to complete were:

1. Open the skeletal system in 3D view.

2. Locate the Humerus bone and read the description.

3. Return to the main screen and view the Skull in 3D view.

4. Look through the top of the Skull.

5. Return the model to its original state.

6. Add the four front teeth into the pinned list.

7. Return to the main screen.

8. View the internal components of the brain together in full screen. (Only the internal components)

9. Return to the main screen and view the animated simulation of the heart.

10. Exit the program.

While participants were using the program, we observed them to see how they navigated through it and what they struggled with. We also recorded their walkthroughs and took notes on our observations.

After everyone completed the Cognitive Walkthroughs, we went over our notes and looked for common problems that all the participants seemed to have while using the program. These problems were compiled into a list from highest to lowest priority.

Problem List:

  1. The exit button within the 3D view mode is poorly placed and gives no warning before exiting the program. Every user initially believed the exit button would lead them back to the main screen; this, along with the poor placement and lack of warning, led to accidentally exiting the program and having to restart it.

  2. To open the 3D view you needed to double left-click the option you wanted to select. This was unintuitive and took everyone a little while to figure out. Opening a submenu only took a single left-click, which was simple and intuitive for users; the double-click is inconsistent with this and not obvious to perform.

  3. The main screen has poor labels that only appear when hovering over icons. The labels are very small, difficult to read, and only in Vietnamese. The pictures for the menu options don’t explain well enough what 3D models they contain. The menu options don’t fill up the entire page, leaving a lot of blank space, they don’t appear to be organized in any particular order, and there is no indicator for the options that contain submenus.

  4. The order and arrangement of the buttons within the 3D viewing mode was unclear and confusing. Several icons did not make their buttons’ functionality clear, and several labels did not properly explain their buttons’ functions either. The order and placement of the buttons also appeared arbitrary.

  5. The “List” button in the 3D viewing screen is poorly named and has a misleading icon. The name “List” does not make it clear what the button does, and the icon is a grid, which doesn’t explain the button’s functionality and only adds confusion about what it does.

  6. Language change button - The description text does not refresh immediately when changing the language; to see the description in the other language you need to click on the selected object again, or select a different object and then reselect the target object to refresh the text. Without a refresh it’s not clear whether the language change went through. The icon for the language change also doesn’t change depending on which language is selected, which adds to the confusion about which language is active.

  7. There are no instructions on how to interact with the 3D model. Most users did not know how to pan around the model, and some took a while to realize how to rotate it. The scroll bar zooms the model in and out, but mapping zoom-in to scrolling down appeared unintuitive.

  8. The gray back arrow on the main screen does not make it clear that it only works for submenus. The button is also small and gray against the mostly blank white background, so it is somewhat difficult to see. The placement of this button could also be revised to be in line with other applications.

  9. Multiselect - The label for this button was unclear, and people tried different methods for selecting multiple objects, such as control-clicking. The icon for this button did not express its function or purpose either.

  10. Pinned List/Pin - The icons for the pin and the pinned list were not obvious about their functions. The pinned list did not display which objects were in it. The pin button was highlighted red when selected, but this highlight was not noticeable and didn’t clearly indicate which objects were already pinned or whether the function was currently selected.

  11. Change View - The names for the change-view options were unclear as to their function; people were unable to figure out which button did what without pressing them and seeing the change.

  12. Search - The search bar didn’t work or suggest any options when typing into it; the box simply remained empty aside from whatever text the user typed in. The placement of the search bar also seemed arbitrary and could be improved.

While the Cognitive Walkthroughs were being run, a different part of the team created user personas. Personas are representations of different types of archetypal users; in other words, they are fictional characters that mimic the different demographics who use a product. The personas we created were of a medical doctor who could use the program to explain a medical problem to their patients or nurses, and of a professor who might use the program to help with their lectures.


Professor Persona 


Doctor Persona 

Redesign

After the Cognitive Walkthroughs were completed and we had a better understanding of the flaws of the program, we started working on the redesign. Every member of the team created a rough draft sketch of a redesign. Once everyone had made a sketch, we came together to bounce ideas off each other, keep the ideas we liked from each sketch, and combine the best parts into a single redesign.


Design Sketches from everyone on the team

For this redesign we kept in mind the problems of the original design: that it lacked clarity about what buttons did, that it gave users little feedback or warning, causing confusion, and that parts of the menu were unintuitive to use.

To address these problems we gave the buttons better labels so their purpose was clearer. Buttons were also grouped together based on their functions; similar functions were grouped together while different functions were kept in separate groups. Buttons that caused more impactful changes were given warnings before going through with those changes.


Redesign's Home Page


Redesign's 3D Page

With this redesign complete, we created a prototype of it that could mimic the functions of the main program. The prototype was capable of completing all the tasks from the Cognitive Walkthroughs.

User Testing

After our redesign prototype was completed, we had to put it to the test to see whether it was better than the original design. To find out whether it was an improvement, we used an A/B testing method to compare both designs. Since the development team didn't want to risk us leaking their program, we made a separate prototype that recreated the original design of the AnatomyNow software.

We conducted the A/B testing in a similar manner to the Cognitive Walkthroughs. We gave participants either the redesign prototype or the original design prototype and asked them to complete a few tasks within it. Participants were monitored while using the prototypes and only given help if they became stuck and unable to proceed. The students conducting the tests also took notes on how the participants used the prototypes and any other observations they had.


Pictures from our user testing day 

After completing the main tasks, participants were asked to complete a questionnaire based on their experience. The questionnaire consisted of questions from the System Usability Scale (a standardized survey for measuring the overall usability of a product), a rating scale of how difficult each task was, and lastly a few demographic questions.

Because we were in a foreign country with few English speakers, and on a time limit, we only had a limited pool of test subjects. Luckily, we were able to set up testing in a local restaurant and recruit a few locals and some tourists to participate. We were only able to get 13 participants to test our prototypes, but it was enough to get some meaningful results.


Observation notes from user testing; we made a template for recording notes


A picture of us and many of our generous user testers

Results

Our A/B testing allowed us to directly compare the original design to the redesign. We only had 13 participants in total due to time constraints and because it was hard to find English speakers in Vietnam. For testing, we had 6 participants use the original design and 7 participants use the redesign.

System Usability Scale:

The System Usability Scale (SUS) gives an overall rating of a system's usability as a score out of 100, with a higher score indicating better usability. All the participants' scores within each group were averaged together to create a score for each design.
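For anyone curious how the SUS numbers are produced, here is a minimal sketch of the standard SUS scoring calculation in Python, assuming the usual ten items answered on a 1-5 scale. The function names and the example responses are hypothetical illustrations, not our actual study data.

```python
# Minimal sketch of standard SUS scoring (assumes 1-5 Likert responses
# to the 10 SUS items). Example responses below are hypothetical,
# not the data from our study.

def sus_score(responses):
    """Convert one participant's 10 SUS item responses into a 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:      # odd-numbered items (1, 3, 5, ...): positively worded
            total += r - 1
        else:               # even-numbered items (2, 4, 6, ...): negatively worded
            total += 5 - r
    return total * 2.5      # scale the 0-40 sum up to 0-100


def average_sus(group_responses):
    """Average the SUS scores of every participant in one test group."""
    scores = [sus_score(r) for r in group_responses]
    return sum(scores) / len(scores)


# Hypothetical example: two participants in one group.
example_group = [
    [4, 2, 5, 1, 4, 2, 4, 2, 5, 1],
    [3, 3, 4, 2, 4, 3, 3, 2, 4, 2],
]
print(average_sus(example_group))
```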

Average SUS scores for the original design and the redesign

By comparing the scores of the two groups, we can see that the average score for the redesigned interface was 19.7 points higher than the average score for the original interface. This result shows that the redesign had significantly higher overall usability than the original interface and succeeded in improving the usability of the program.

Task Difficulty Rating:

During the testing we asked all participants to complete 5 tasks using either the original design or the redesign. After completing the tasks and the SUS questions, we asked each participant to rate the difficulty of each task. The ratings were on a 1 - 5 scale, with 1 being very easy and 5 being very hard. The ratings from each group were averaged together for each task, as sketched below.
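A small sketch of how those per-task averages could be computed, assuming each participant in a group rated all five tasks; the ratings shown are hypothetical placeholders, not our actual data.

```python
# Minimal sketch: average each task's 1-5 difficulty rating across one group.
# The example ratings are hypothetical, not the data from our study.

def average_per_task(group_ratings):
    """group_ratings: one list of 5 ratings (tasks 1-5) per participant.
    Returns a list of 5 averages, one per task."""
    num_tasks = len(group_ratings[0])
    return [
        sum(participant[task] for participant in group_ratings) / len(group_ratings)
        for task in range(num_tasks)
    ]


# Hypothetical example: three participants who tested one of the designs.
example_ratings = [
    [2, 3, 1, 4, 2],
    [1, 2, 1, 3, 2],
    [2, 2, 2, 4, 1],
]
print(average_per_task(example_ratings))  # one average per task
```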

Average task difficulty ratings for the original design and the redesign

The results show that the tasks were easier to complete on the redesign, as 4 out of the 5 tasks had a lower average difficulty rating. Tasks 1 and 2 showed the largest difference in difficulty rating. Task 4 was rated as easier on the original design, so there is still room for improvement in the redesign.

Conclusion

Our testing results showed that our redesign was more user-friendly than the original design by a significant margin: the redesign earned a higher SUS score and its tasks were rated as easier to complete. Although the redesign was successful, there is still room for improvement, as certain aspects of it weren't better than the original design. Given the limited time frame and working in a foreign country, I think our efforts were fruitful.

Although this project was completed a long time ago, the prototypes are still up if you want to play with them:

Link to Original Design: 

Link to Redesign: 
