View Manipulation

Intergraph Smart 3D Model Manipulations

Design Challenge
When reorienting a 3D model, users found the disconnect between the model's current view orientation and the direction of their manipulation gestures excessively frustrating.

Understanding user goals and challenges
During contextual inquiries with end users, we were able to see exactly how and where they struggled. The view cube (a static image) did not change its orientation to reflect changes in model orientation, and the availability of only one view cube across multiple graphic windows added to user frustration. Based on these observations, we captured opportunities for improving the navigation experience. To further assist our design process, we sketched two user stories to better understand end-user scenarios.

Sketching possible solutions
Based on data from user research, we sketched six to seven possible solutions.
View Manipulation Sketches

Prototype and test two designs
We selected two good ideas to prototype, based on the design challenge. A/B tests of the design prototypes helped us iteratively improve the design concepts. Results helped us explore how users perceived the new control's basic functionality, how users' previous experience played a role in drawing inferences about the new control, and which functions were walk-up-and-use versus which required additional help.

High-stakes project – everyone wanted a say in the design!
Need for high-fidelity prototypes vs. project deadlines

The final solution was an immersive mode that embeds the 3D model in a cube on demand, with easy-to-use hotspots on the cube's edges and corners.

We observed a 44% decrease in error rates and a 31% improvement in completion times when users reoriented the 3D model to standard perspectives with the new design.

Execute user research
Lead and brainstorm with a three-member design team
Assist in usability evaluation and formulate recommendations


Smart 3D QuickPick

Design Challenge
Plant designers working in dense volumes of a 3D model struggled to quickly select the desired objects.

Process We aggregated user issues from our internal inventory of customer support tickets and from stakeholder interviews. Contextual inquiries revealed that the current implementation of QuickPick popped up automatically and interrupted user workflows. In addition, user interviews showed that the numerical labels on QuickPick conveyed no relationship to the underlying objects or to the geometries that composed them.

QuickPick One

Convincing stakeholders of new design ideas
Need for quick turnaround with prototype testing

Solution Objects along the z-order of the 3D model were presented as a list, with the geometries composing each object shown as flyouts per list entry. Given the high frequency of QuickPick usage, swift on-demand display was triggered with either a mouse gesture or a keyboard shortcut.
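The list described above can be sketched in a few lines. This is an illustrative data-structure sketch only, not the product's implementation; the object names, depth values, and geometry labels are all hypothetical.

```python
# Hypothetical pick results: objects intersected by the cursor's pick ray,
# each carrying the geometries that compose it. Depths are assumed values.
hits = [
    {"name": "Pipe-104", "depth": 3.2, "geometries": ["cylinder", "flange"]},
    {"name": "Beam-17", "depth": 1.1, "geometries": ["web", "flange", "bolt"]},
]

# QuickPick-style list: nearest object first (z-order), with the composing
# geometries available as a flyout for each entry.
ordered = sorted(hits, key=lambda h: h["depth"])
for obj in ordered:
    print(obj["name"], "->", ", ".join(obj["geometries"]))
```

Sorting by depth keeps the on-screen list stable and predictable, which is what lets users scan it instead of cycling through pop-ups.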

Results Iterative usability tests guided us on where to make small tweaks to QuickPick layouts. Usability tests reported a 16% improvement in object selection.

Execute user research
Coordinate with stakeholders and developers
Ideate and create design solutions
Create prototypes

Navigation Control

Intergraph Smart™ 3D App Navigation

Design Challenge
Many of the features users requested for Intergraph Smart 3D already existed but were not easily discoverable.

Current Smart 3D Navigation
Process We sourced information from our internal inventory of customer support tickets, user analytics, and stakeholder interviews, followed by contextual inquiries with end users. This research captured user frustrations around information architecture, application accessibility, and the lack of context-sensitive user actions. Based on product milestones, technical constraints, and competitor research, a Ribbon interface was chosen as the commanding interface for Smart 3D.

User base averse to change
500+ commands to analyze
Graphic model screen real estate was of prime importance to end users

Solution At its core, the Ribbon is a hierarchical UI container that uses tabs and panels to group similar functionality. Several methods were employed to determine the optimal Ribbon configuration.
A two-stage card-sorting exercise was conducted. In stage one, an open sort uncovered the higher-level groups; a closed sort then evaluated how users would bucket the panels of the Ribbon. The card sort was administered via software so it could be sent to end users to gather more data.
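Open-sort data of this kind is typically analyzed by counting how often participants place the same pair of cards in one pile. A minimal sketch, assuming a hypothetical result format (the command names and pile labels below are invented for illustration):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical open-sort results: each participant groups command cards into
# named piles. Pile labels vary per participant, so only co-occurrence is used.
sorts = [
    {"Structure": ["Place Beam", "Place Column"], "Piping": ["Route Pipe", "Place Valve"]},
    {"Steel": ["Place Beam", "Place Column", "Place Valve"], "Pipes": ["Route Pipe"]},
]

# Count how often each pair of cards lands in the same pile.
cooccur = defaultdict(int)
for participant in sorts:
    for pile in participant.values():
        for a, b in combinations(sorted(pile), 2):
            cooccur[(a, b)] += 1

# Normalize to a similarity score in [0, 1]; pairs grouped together by most
# participants suggest candidate Ribbon panels for the closed sort.
similarity = {pair: n / len(sorts) for pair, n in cooccur.items()}
```

Clustering the resulting similarity matrix is one way to derive the candidate groups that the closed sort then validates.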

Frequency of command/action usage was also evaluated from user logs of the existing software, although this information was not the sole driver of the solution. Each item in the Ribbon had a number of parameters to manage and specify: whether the button was large or small, or a split button; which command it launched; what property it set; and so on.
Patterns were documented with assistance from the visual design team to share design knowledge with interaction designers and developers.

Results Iterative usability tests guided us on where to make small tweaks to individual panel layouts. Initial tests reported a 27% improvement in discoverability.

Plan and execute user research (card sorting)
Lead and guide a two-member design team


Smart™ 3D usability benchmarking

Before proceeding with design changes for Smart™ 3D, it was essential to conduct benchmark tests on the current version of Smart 3D to compare and contrast UX improvements over time.

Project goals and methods In this project we focused on understanding the strengths and weaknesses of the property pages as implemented in the current Smart 3D. Various aspects of the property pages were identified (based on task analysis) for testing, leading to the user tasks for the tests. Usability tests were conducted in our in-house labs with twelve participants.

Metrics measured A combination of qualitative and quantitative data was collected, with quantitative data providing the crux of the evidence for usability findings. Performance metrics: time to complete tasks, success rates, number of attempts, and mouse clicks. Perception metrics: ease of use, perceived time to complete tasks, satisfaction, and feelings about the task before and after use.

Documenting findings Qualitative data was reported as issues and illustrations arranged by task. Quantitative data was predominantly reported as charts for every category of data, along with the Summative Usability Metric (SUM) method for single-scoring and ranking usability.
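SUM-style single scoring combines standardized task metrics into one number. The sketch below is a simplified illustration of the idea, not the study's actual computation; the specification limits and sample statistics are hypothetical.

```python
from statistics import NormalDist

# Simplified SUM-style scoring for one task: standardize each metric against
# a specification value, convert to a percentile, then average. All numbers
# below are invented for illustration.
completion_rate = 0.83                # fraction of users completing the task
satisfaction_z = (5.6 - 4.0) / 1.2    # (mean rating - neutral spec) / std dev
time_z = (100 - 78) / 15              # (spec time - mean time) / std dev

nd = NormalDist()
scores = [completion_rate, nd.cdf(satisfaction_z), nd.cdf(time_z)]
sum_score = sum(scores) / len(scores)
print(round(sum_score, 2))
```

Averaging percentile-style scores keeps every metric on the same 0-to-1 scale, which is what makes tasks and releases directly comparable over time.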

Details on request.

Affinity Diagramming

Contextual Inquiry for Intergraph Smart™ 3D

User Research Challenge
When assigned to spearhead the design revamp of Smart™ 3D, there was a need to explore new ways to make the software easier to learn, friendlier to use, and more efficient. The major roadblock to this initiative was the lack of direct access to end users. I led the charge on creating a contextual inquiry plan and successfully convinced executive management and stakeholders of its ROI. On approval, I led and conducted contextual inquiries at multiple customer sites in North America, Europe, and South Asia.

During the site visits, we observed users following the master-apprentice model; where constraints required, we improvised and conducted interviews instead. Focus groups were conducted by user discipline to understand the core issues faced by each user group. Work models covering flow, work sequence, physical environment, artifacts used, and culture were captured. A sample physical model is shown below.

Data analysis
After the site visits, affinity-diagramming sessions were conducted to synthesize and compile findings.

This exercise led to the creation of personas, scenarios, and task flows for interaction design, culminating in an overall product UX strategy. The findings and UX strategy were presented to executive management. This spurred significant action, leading to a realignment of the product vision and approval of contextual inquiries for other products.

Details on request.


Smart™ 3D user personas

User Research Goal There was a need to facilitate better user centered conversations among internal developers and designers at Intergraph.

Methodology Personas were the first step in creating a representation of end users and evangelizing user-centered thinking. The project was kicked off by interviewing key stakeholders, including Product Owners, Support, and Quality Assurance. This was followed by online research into job descriptions matching our end-user personas to create an initial draft. These personas were then verified and validated through interviews and focus groups during contextual inquiry at customer sites.
CI_Flow Model
Results One primary persona and two secondary personas resulted from this exercise. The personas have since been used by designers and developers alike in design conversations, encouraging a focus on user-centered design.

Define research questions
Create research plan and data analysis

Visual attention patterns during program debugging with an IDE


ETRA 2012: Eye Tracking Research & Applications
Abstract: Integrated Development Environments (IDEs) generate multiple graphical and textual representations of programs. Coordinating these representations during program comprehension and debugging can be a complex task. To better understand the role and effectiveness of multiple representations, we conducted an empirical study of Java program debugging with a professional, multi-representation IDE. We analyzed gaze patterns by segmenting the debugging sessions into three-, five-, and fifteen-minute intervals and classifying gazes as short or long. Novel data mining techniques were used to detect high-frequency patterns in the eye-tracking data. Visual pattern differences were found among participants based on their programming experience, familiarity with the IDE, and debugging performance.
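The segmentation and short/long classification described in the abstract can be sketched as follows. This is an illustrative sketch only, not the study's actual pipeline; the 500 ms threshold, the area-of-interest labels, and the sample fixations are assumptions.

```python
# Illustrative sketch: bucket gaze fixations into fixed-length session
# intervals and classify each as a short or long gaze. All data and the
# threshold are hypothetical.
fixations = [  # (start time in seconds, duration in ms, area of interest)
    (12.0, 180, "code"),
    (305.4, 950, "variable_watch"),
    (610.2, 420, "output"),
]

INTERVAL = 5 * 60       # five-minute segments, in seconds
LONG_GAZE_MS = 500      # assumed threshold separating short from long gazes

labeled = [
    (int(start // INTERVAL), "long" if duration >= LONG_GAZE_MS else "short", aoi)
    for start, duration, aoi in fixations
]
print(labeled)  # [(0, 'short', 'code'), (1, 'long', 'variable_watch'), (2, 'short', 'output')]
```

Frequent-pattern mining would then run over the per-interval sequences of (gaze class, area of interest) tokens.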

View Paper

Multiple Visualizations and Debugging: How do we co-ordinate these?


ACM SIGCHI 2012: Conference on Human Factors in Computing Systems
Abstract: There are many popular Integrated Development Environments (IDEs) that provide multiple visualizations and other sophisticated functionality to facilitate program comprehension and debugging. To better understand the effectiveness and role of multiple visualizations, we conducted a preliminary study of Java program debugging with a professional, multi-representation IDE. We found that program code and dynamic representations (dynamic viewer, variable watch, and output) attracted the most attention from programmers. Static representations like Unified Modeling Language (UML) diagrams and Control Structure Diagrams (CSD) saw significantly less usage. The study also revealed interesting eye-gaze patterns among programmers.

View Paper

Debugging Heatmap

Gaze-Based Evaluation of Program Comprehension and Debugging

This thesis develops a cognitive model of how multiple representations, including visualizations, are used by programmers to comprehend and debug a program in an IDE for object-oriented programming. The model, based on a literature review and analyses of the shortcomings of existing research, is more detailed than any model of program comprehension and debugging hitherto offered in the literature. The model was evaluated empirically with two debugging studies during which participants' visual attention was tracked with an eye-tracker.

Thesis report

Studio Based Learning Portal

Studio Based Learning Research

Implemented and evaluated a new computing pedagogy, Studio Based Learning, to address the lack of focus on problem-solving and design skills in undergraduate computing education. To evaluate its effectiveness, we carried out a quasi-experimental comparison of a “studio-based” course with pedagogical code reviews/studio-based labs against an otherwise identical “traditional” course without them.

Research Questions Does Studio Based Learning lead to better learning, improved motivation, and better self-efficacy, and does it promote a sense of community?

Methodology Pre- and post-surveys measuring sense of community, learning, motivation, and self-efficacy, supplemented by interviews.

Role As a graduate research assistant on this project, I was the direct contact for 22 different institutions across North America. I collaborated with professors from the psychology department to design surveys and semi-structured interview questions, and analyzed both quantitative and qualitative data.

Quantitative Data Analysis
Qualitative Data Analysis

Longitudinal Usability Meta Study


Summer project – A meta-analysis of different longitudinal studies over the years was undertaken to aggregate and explore in depth the methods and metrics that have proven effective for collecting user data over time.

Research Questions

Methodology A literature survey of existing publications employing longitudinal methodology to evaluate the usability of a system was undertaken. More than 120 papers were surveyed for this research.

Results A breakdown of all the methodologies employed was documented and analyzed for trends in usage. Diary studies emerged as one of the most popular methodologies for longitudinal research.

Parking Rummage


Challenge Design, develop, and test an iPhone app to resolve parking chaos at Auburn University by locating the nearest empty parking lot based on real-time data.

Field observations were undertaken to capture current user behaviors and build user personas, followed by user interviews. Initial research led to scenarios and workflow diagrams. We then tested paper prototypes of our design ideas. See the report on Personas and Workflows.

The app was well received by all participants; however, three usability issues were flagged as “top priority” fixes before the app could be published.
The app was featured in the fall edition of the Auburn Engineering Alumni Magazine. See the Usability Evaluation Report.

User Research – interviews & observations
Paper prototype testing
Summative usability testing

Lead the design and research initiative on a three-member team

Recommender system

Design and evaluation of a Recommender system

Designed the interaction and interface elements to allow easy articulation; designed the underlying computation elements to collect and use user-specific data to develop recommendations.

User Task analysis
Cognitive walk-through to evaluate the prototype

Artificial Bot Counselor


Developed a web-based tool to train counseling psychology students in the basics of a counseling session.

Contextual Inquiry and Design
Heuristic Evaluation