Envisioning the future of graduate application systems for the School of Computer Science at Carnegie Mellon University
Client
School of Computer Science, Carnegie Mellon University
Context
Capstone Project, MHCI
My Role
I served as the Research Lead and Client Communicator on our project. As Research Lead, I was responsible for ideating solutions that addressed user needs and for guiding the testing process. I worked with my team to recruit participants, conduct interviews, and generate insights that would create value for departments and programs across the School of Computer Science. As Client Communicator, I was responsible for scheduling and leading meetings with clients and other stakeholders to share project progress, gather feedback, and align our final product with client expectations.
Team
Yuwen Lu
Ugochi Osita-Ogbonnaya
Anna Yuan
Emily Zou
Timeline
February - August, 2021
Our MHCI capstone team, the Super Cool Scholars, worked with Carnegie Mellon University to reimagine ApplyGrad, the School of Computer Science’s graduate admissions system. Reviewers face a growing workload and mounting pressure as the number of applications climbs each year, surpassing 10,000 for the 2020-2021 school year.
For eight months, our team developed solutions grounded in user research, including contextual inquiry, affinity mapping, think-alouds, and A/B testing. Ultimately, our goal was to reduce cognitive burden by creating designs that bring efficiency, fairness, and joy to reviewers. We imagine ApplyGrad as a one-stop system that is functional, beautiful, and able to provide actionable data and insights for reviewers.
Flagship of SCS
ApplyGrad is the flagship graduate admissions system of the School of Computer Science at Carnegie Mellon University, processing thousands of applications each year. It is a custom, internal system used by 7 departments and 47 different programs.
The Need for Redesign
From handling students’ application materials, to evaluating candidates, to reporting numbers to administration, the system is a hub for almost all application needs. However, much of its design and infrastructure has gradually become outdated. Small mismatches between the system and reviewers’ natural workflows have steadily increased the time and energy each evaluation requires.
Key Goal 1: Efficiency
Our first priority is ensuring the system helps reviewers reduce their workload. This means running consistent usability and benchmarking tests to confirm baselines are met.
Key Goal 2: Effectiveness
The next step is adding functionality in critical areas where the system does not currently support the user’s workflow. Building and testing new visualizations, calibration tools, and features to assist the review process is essential.
Key Goal 3: Experience
A delightful UI can bring joy, lifting both morale and mood. These emotional shifts can also transform behavior, promoting consistency in evaluations.
In the spring semester, my team focused on understanding the different procedures, values, and stakeholder tasks in the admissions process, as well as the role ApplyGrad plays in it. We focused particularly on the reviewer’s experience. Through research, we discovered reviewers’ needs and pain points during review, which helped us dive deeper into the problem space and identify new opportunities for ApplyGrad.
We used a variety of research methods over the course of six sprints. For each study, we recruited participants from different programs to observe differences and commonalities among the various departments and programs within the School of Computer Science.
Our stakeholders include reviewers, administrators, program directors, admissions heads, applicants, developers, and many more. The interface we design will have ripple effects that benefit both the people who review applications and the applicants who flow through the system.
The main areas we improved are documentation, data manipulation, and note-taking, which we identified as high-impact based on client needs and measures such as the number of overlapping research focuses, the time spent per reviewer, and the number of users affected.
As a team, we felt that one of the lowest-cost but most impactful changes we could make was redesigning the interface. A more intuitive, streamlined design not only improves a user’s efficiency by eliminating distractions, it improves their experience. A cleaner interface eliminates a source of additional stress in an already taxing process.
While these improvements to the ApplyGrad interface seem intuitive, the design deliberately draws on patterns and functions that admissions committees have sought out in external tools. It clearly displays the information reviewers use most in their review process and offers the quick, easy sorting and filtering of data that a spreadsheet provides. Additionally, updating the interface is a low-cost improvement that can be implemented quickly to improve the reviewer’s experience.
In our redesigned system, we saw an opportunity to keep the main user tasks in one interface, not only to reduce swiveling between windows but to help reviewers piece together a comprehensive picture of the applicant. This improvement is two-fold: the display of applicant information and documents, and the introduction of annotations into the system.
Capturing the relevant information reviewers need via annotations roots the decision-making of admissions committees in data-based evidence. Reviewers can pull key pieces of information quickly and comprehensively. In doing so, the applicant’s data plays a larger role in discussions of merit and helps reviewers ground their evaluations in facts, without adding significant cognitive load.
One of the key considerations in designing this feature was how it would fit into the reviewer’s workflow. Automatically pulling annotations into the review form helps the reviewer synthesize information about an applicant in their summary; rather than compiling all of the relevant experience in their head or in an external system, the annotations are gathered in one central location to relieve cognitive burden.
Another key component of our design allows reviewers to add labels to applications that reflect the qualities of applicants. This is another way to quickly remind reviewers of a candidate’s qualities and the potential benefits and risk factors associated with them. The labels a reviewer assigns an applicant are attached to the profile and can be viewed by other members of the admissions committee. We’ll come back to the default visibility of those labels in a minute.
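To make the annotation and label concepts concrete, here is a minimal sketch of how the underlying data might be modeled. Every name here (Annotation, ApplicantLabel, compileReviewSummary, and the individual fields) is a hypothetical illustration, not the actual ApplyGrad schema.

```typescript
// A minimal, hypothetical data model for annotations and labels.
// None of these names come from the real ApplyGrad system.

interface Annotation {
  applicantId: string;
  reviewerId: string;
  documentId: string;   // which application material the note is anchored to
  excerpt: string;      // the highlighted passage from the document
  note: string;         // the reviewer's comment on that passage
  createdAt: Date;
}

interface ApplicantLabel {
  applicantId: string;
  reviewerId: string;
  label: string;                        // e.g. "strong research fit"
  visibility: "committee" | "private";  // default visibility could be set per program
}

// Compile a reviewer's annotations into the review form summary, so the
// evidence travels with the evaluation instead of living in an external doc.
function compileReviewSummary(applicantId: string, annotations: Annotation[]): string {
  return annotations
    .filter((a) => a.applicantId === applicantId)
    .map((a) => `"${a.excerpt}": ${a.note}`)
    .join("\n");
}
```

The key design choice this sketch reflects is that annotations stay attached to both the applicant and the source document, so the compiled summary always points back to evidence in the application itself.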
Review work is already quite stressful. Not only are reviewers pressured by deadlines, but many also feel anxious about rejecting highly qualified applicants, knowing the decision will very likely change the course of those applicants’ lives. It is easy to lose the human element of reviewing when facing late nights and hundreds of applications. At the same time, it is a struggle to balance spending too much time on an applicant against spending too little. These are challenges reviewers face regardless of the system they use. It became clear through our research that reviewers are willing to prioritize fairness and consistency over pure efficiency, a testament to the dedication and passion these reviewers bring to selecting the next generation of SCS graduates.
Designing for experience is closely tied to our two previous design goals of efficiency and effectiveness. Redesigning the interface and streamlining the workflow relieves the frustration of finding assigned applicants, and the new annotation features make thorough, comprehensive evaluation easier. These optimizations in efficiency and effectiveness work in tandem to create an overall improved review experience.
ApplyGrad is mission-critical to the admissions process for graduate programs in the School of Computer Science. Any updates to the system need to take this into consideration; implementing advanced features will take time and resources. We created a multi-stage implementation plan to deliver interface improvements that boost the efficiency of the review process while the more advanced features are built and refined.
What features or improvements could we design that would be the most impactful and justify the resources required for implementation? We believe our solution provides four main value propositions to improve the review process.
1) High Customizability and Generalizability
Our design can be adapted to fit the review process of different programs. While many programs share the same general stages of review, both individual and committee-based, their review criteria and the specific qualities they look for in applicants differ. Our system offers both the customizability and the generalizability to serve the majority of graduate programs in the School of Computer Science.
2) Reduce Cognitive Burden
A constant theme throughout our research was the cognitive burden reviewers experience when reading applications. By incorporating the application materials and review form into one interface, reviewers can quickly cross-reference applicant materials and their notes. With the improved interface, all main user tasks live on one screen. This reduces the “clickiness” of using ApplyGrad, a frequent reviewer complaint, and eases the mental burden of remembering information across screens, documents, and tabs, relieving the stress reviewers face when synthesizing applicant information. Reduced stress, in turn, supports better decision-making and a better experience for reviewers.
3) Balance the Cohort
The ability to see an overview of the applicant pool alongside a summary of evaluations enables a quicker analysis of the cohort’s composition. The customized table view we designed benefits committees in ways an external spreadsheet cannot easily match. One priority for programs is ensuring a diverse pool of admitted applicants, not only demographically but across academic disciplines for interdisciplinary programs like MHCI. For other programs, labels can be customized for more specific purposes, such as flagging potential scholarship recipients or marking the tech stack each applicant brings based on their project experience.
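As a rough illustration of how a cohort table view could support this kind of balancing, here is a sketch of a discipline breakdown and label-based filtering over the admitted pool. The types and field names (ApplicantRow, disciplineBreakdown, filterByLabel) are assumptions made for illustration, not the real system’s API.

```typescript
// Hypothetical cohort-view helpers; all types and field names are illustrative.

interface ApplicantRow {
  id: string;
  name: string;
  discipline: string;   // e.g. "Design", "Engineering", "Research"
  labels: string[];     // labels assigned by the committee
  averageScore: number; // mean of committee evaluations
}

// Summarize the pool by discipline to spot imbalances at a glance.
function disciplineBreakdown(pool: ApplicantRow[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const row of pool) {
    counts.set(row.discipline, (counts.get(row.discipline) ?? 0) + 1);
  }
  return counts;
}

// Filter the table to applicants carrying a given label,
// e.g. "potential scholarship recipient".
function filterByLabel(pool: ApplicantRow[], label: string): ApplicantRow[] {
  return pool.filter((row) => row.labels.includes(label));
}
```

Because the breakdown and filters operate on the same labels reviewers assign during individual review, the committee view stays in sync with the evaluation data rather than requiring a separate, manually maintained spreadsheet.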
4) Improve Retention
Improving the tools reviewers use can also aid retention and make serving on admissions committees more appealing. In an elite institution like the School of Computer Science, the opportunity to hand-pick the next cohort of students should be an exciting prospect rather than an additional burden. A customized, unique graduate application system not only supports a more natural workflow; the in-house redesign of the ecosystem also reflects the pride of being the leading computer science department in the nation.
This project was an amazing experience and pushed me to grow as a designer and researcher. Here are a few of the biggest lessons I learned over the course of the eight-month project.
1) Creating cohesive, driven research plans was a crucial piece of this project. As Research Lead on a long-term project, I defined directions based on our previous findings and insights to keep the project on track and moving forward. There were times, however, that called for quick pivots and adjustments. Working in two-week sprints demanded that we not let those changes derail us. It pushed me to balance being flexible enough to change course when necessary against maintaining our quality of work and the overall project timeline.
2) After getting feedback from our clients at the end of the spring semester, my team made a conscious, deliberate effort to find the best "bang for the buck": in other words, to narrow our project scope to the solution with the biggest impact that could be implemented soonest. One of the challenges ApplyGrad faced as a system was a lack of resources. We took this context into account as we designed and tested solutions, and we consulted experts about the feasibility of possible features. As a result, we could clearly articulate the rationale behind our design and pitch its value to SCS stakeholders.
3) My team was made up of amazing researchers, designers, and computer scientists. Whether it was writing research plans with Ugochi, deploying prototypes with Anna, making project plans with Emily, or conducting ML and prototyping interviews with Yuwen, I really enjoyed and valued getting to dabble in all parts of the project. We also took time each sprint to do a retrospective and give each other feedback, as well as share our thoughts on the progress of the project. Being able to share knowledge and build rapport was a key component of what made our team successful.
4) Storytelling is a key component of UX research. It was clear as we conducted our research that the current ApplyGrad system was adding to the already heavy burden of reviewing graduate admissions. But how do you convey how challenging it really is to an audience with no experience as a reviewer? Why is updating ApplyGrad a problem worth solving? While we struggled with this narrative at the beginning of the project, I led our team to create an engaging presentation of our final solution that clearly articulated the short- and long-term impacts of removing obstacles and adding support in the review process. It was incredibly rewarding and helped me define why I love this work: telling the stories of users and advocating for change.