ApplyGrad

Envisioning the future of graduate application systems for the School of Computer Science at Carnegie Mellon University

ApplyGrad dashboard on laptop mockup in front of a green background

Client

School of Computer Science, Carnegie Mellon University

Context

Capstone Project, MHCI

My Role

I served as the Research Lead and Client Communicator on our project. As Research Lead, I was responsible for ideating solutions that addressed user needs and for guiding the testing process. I worked with my team to recruit participants, conduct interviews, and generate insights that would create value for departments and programs across the School of Computer Science. As Client Communicator, I was responsible for scheduling and leading meetings with clients and other stakeholders to share project progress, gather feedback, and align our final product with client expectations.

Team

Yuwen Lu
Ugochi Osita-Ogbonnaya
Anna Yuan
Emily Zou

Timeline

February - August, 2021

Summary

Our MHCI capstone team, the Super Cool Scholars, worked with Carnegie Mellon University to reimagine ApplyGrad, the School of Computer Science’s graduate admissions system. Reviewers face a growing workload and mounting pressure as the number of applications rises each year, surpassing 10,000 for the 2020-2021 school year.

For eight months, our team worked to develop solutions based on user research including contextual inquiry, affinity mapping, think-alouds, and A/B testing. Ultimately, our goal was to reduce reviewers’ cognitive burden by creating designs that improve efficiency, fairness, and joy in the system. We imagine ApplyGrad as a one-stop system that is functional, beautiful, and able to provide actionable data and insights for reviewers.

Problem

Flagship of SCS

ApplyGrad is the flagship graduate admissions system of the School of Computer Science at Carnegie Mellon University that processes thousands of applications each year. It is a custom, internal system that is used by 7 departments and 47 different programs.

The Need for Redesign

From handling students’ application materials, to evaluating candidates, to reporting numbers to administration, the system is a hub for almost all application needs. However, much of its design and infrastructure has gradually become outdated. Small mismatches between the system and a user’s natural workflow have steadily increased the time and energy each evaluation requires.

How might we...
... Facilitate reviewers’ natural workflow rather than hinder it?
... Quickly find & contextualize information in a delightful way?
... Prioritize fairness & consistency?

Solution

We believe that our design can bridge the disconnect between the stress reviewers feel facing towering stacks of applications and the excitement of selecting the next generation of academic and industry leaders. Our design combines interface and navigational improvements, a new annotation system, and collaborative displays that together affect every stage of the review process. A system that supports better decision-making while improving the overall reviewer experience will have a lasting impact on the School of Computer Science, one that reflects the identity and pride of this leading institution.

Key Goal 1: Efficiency
We want to guarantee that the system first helps reviewers reduce their workload. This means running consistent usability and benchmarking tests to ensure baselines are met.

Key Goal 2: Effectiveness
The next step is to add functionality in critical areas that do not currently map onto the user workflow. Building and testing new visuals, calibration, and features to assist the review process is essential.

Key Goal 3: Experience
A delightful UI can bring joy and lift both morale and mood. These emotional shifts can also shape behavior, promoting consistency in evaluations.

Redesigned applicant data table

Redesigned review screen with annotation and embedded review form

Research Overview

In the spring semester, my team focused on understanding the different procedures, values, and stakeholder tasks in the admissions process, as well as the role that ApplyGrad plays in it. We focused particularly on the reviewer’s experience. Through research, we discovered reviewers’ needs and pain points during review; these findings helped us dive deeper into the problem space and identify new opportunities for ApplyGrad.

We used a variety of research methods over the course of six sprints. For each research study, we recruited participants from different programs to observe differences and commonalities amongst the various departments and programs under the School of Computer Science.

Who We Are Designing For

Our stakeholders include reviewers, administrators, program directors, admissions heads, applicants, developers, and many more. The interface we design will have rippling effects that benefit both the people who review and the applicants who flow through the system.

Three Main Design Directions

The main areas we improved are documentation, data manipulation, and note-taking, which we identified as high-impact based on client needs and factors such as the number of overlapping research focuses, time spent per reviewer, and number of users affected.

Key Goal 1:
Designing for Efficiency

As a team, we felt that one of the lowest-cost but most impactful improvements we could make was to the design of the interface. Not only can a more intuitive and streamlined design improve a user’s efficiency by eliminating distractions, but it can also improve their experience. A cleaner interface eliminates a source of additional stress in an already taxing process.

Current model of user journey in ApplyGrad.
This model is based on how reviewers with over 15 years of experience in the ApplyGrad system do their work. In general, it takes at least 4 clicks simply to reach the main task of getting to the next review. All of this could be reduced to a single click or button press.

In our research and testing, we found that both experienced and new ApplyGrad users struggled to find their list of assigned applicants. Misidentifying assignments not only means that some qualified candidates can be overlooked, but also that reviewers waste time reviewing extra applications.

Our Solution

While these improvements to the ApplyGrad interface may seem intuitive, the design deliberately draws on existing patterns and functions that admissions committees have sought out in external tools. It clearly displays the information reviewers use most in their review process and allows the quick, easy sorting and filtering of data that a spreadsheet provides. Additionally, updating the interface is a low-cost improvement that can be implemented quickly to improve the reviewer’s experience.

The progress bar allows reviewers to quickly gauge their progress. Additionally, we provided two main buttons to help with quick navigation into the review process depending on the reviewer’s preferences.

If the reviewer enters the applicant table, the default display is the list of their assigned applicants. Since this was the most difficult task in our benchmark testing of the original ApplyGrad interface, we wanted to reduce the time reviewers spent looking for the applicants they are supposed to review. The default columns displayed in the table are those most commonly used by reviewers.
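To make the default behavior concrete, here is a minimal sketch in TypeScript, with hypothetical names (Applicant, TableConfig) rather than ApplyGrad’s actual data model, of a table that opens filtered to the signed-in reviewer’s assignments with the most-used columns visible:

```typescript
// Hypothetical types and defaults for the redesigned applicant table.
interface Applicant {
  id: string;
  name: string;
  program: string;
  assignedReviewerIds: string[];
  reviewStatus: "not_started" | "in_progress" | "complete";
}

interface TableConfig {
  visibleColumns: (keyof Applicant)[];
  filter: (a: Applicant) => boolean;
}

// Default view: only the signed-in reviewer's assignments, showing the
// columns reviewers reported using most often.
function defaultTableConfig(reviewerId: string): TableConfig {
  return {
    visibleColumns: ["name", "program", "reviewStatus"],
    filter: (a) => a.assignedReviewerIds.includes(reviewerId),
  };
}

// Usage: applicants.filter(config.filter) yields the default list shown
// when the reviewer first lands on the table.
```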

Key Goal 2:
Designing for Effectiveness

Capturing the relevant information reviewers need via annotations will root the decision-making of admissions committees in data-based evidence. Reviewers will be able to pull key pieces of information quickly and comprehensively. In doing so, the applicant’s data will play a larger role in discussions of merit and help reviewers ground their evaluations in facts, without adding significant cognitive load.

Model of the current reviewer workflow. The first stage is the individual review, where reviewers evaluate candidates and input their comments and ratings. Next are the admissions committee meetings, where multiple reviewers come together to discuss and compare their evaluations.

Our Solution

In our redesigned system, we saw an opportunity to keep the main user tasks in one interface, not only to reduce swiveling between tools but also to help reviewers piece together a comprehensive picture of the applicant. This improvement is two-fold: the display of applicant information and documents, and the introduction of annotations into the system.

One of our key considerations in designing this feature was how it would fit into the reviewer’s workflow. Automatically carrying annotations into the review form helps the reviewer synthesize information about an applicant in their summary; rather than having to compile all relevant experience in their head or in an external system, the annotations are gathered in one central location to relieve cognitive burden.

Another key component of our design lets reviewers add labels to applications that capture the qualities of applicants. This is another way to quickly remind reviewers of a candidate’s qualities and the potential benefits and risk factors associated with them. The labels a reviewer assigns an applicant are attached to the profile and can be viewed by other members of the admissions committee. But we’ll get to the default visibility of those labels in a minute.

In this interface, we combined all the application materials into one PDF and integrated a table of contents for quick, easy navigation through the application sections. Reviewers can scroll through the materials or jump to specific sections, depending on their preference. Additionally, we embedded the review form into the display, so all application materials appear on one screen and the main user tasks stay within the system. The review form itself contains descriptions and guidelines for the criteria reviewers use to complete their evaluations. These sections are provided and customized to the program’s requirements; they serve both primary and secondary users, functioning as a reminder and a consistency check.
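As a rough illustration of the navigation, a table of contents over the merged PDF can be as simple as a map from section titles to page offsets. The viewer interface and section names below are stand-ins, not a specific library’s API:

```typescript
// Sketch of table-of-contents navigation over the merged application PDF.
// PdfViewer is a placeholder for whatever viewer the front end embeds.
interface PdfViewer {
  goToPage(page: number): void;
}

// Section titles mapped to their starting page in the combined document
// (illustrative sections and offsets).
const tableOfContents: Record<string, number> = {
  "Statement of Purpose": 1,
  "Transcript": 4,
  "Recommendation Letters": 9,
  "Resume/CV": 14,
};

// Jump directly to a section instead of scrolling through the whole PDF.
function jumpToSection(viewer: PdfViewer, section: string): void {
  const page = tableOfContents[section];
  if (page !== undefined) viewer.goToPage(page);
}
```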

The new annotation feature includes the ability to highlight, comment on, and tag text in an application. Reviewers can highlight pieces of text containing important or noteworthy information. They can also add comments to the highlights, which are categorized according to the program’s specific review form. As a result, this feature works in tandem with the embedded review form on the right side of the screen: the comments a reviewer makes are automatically populated into the review form according to the tag the reviewer assigned. Clicking on a comment navigates back to the original location of the annotation within the document.
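A minimal sketch of how this population step could work, using illustrative names (Annotation, ReviewForm) rather than the shipped data model: annotations are grouped into review-form sections by their criterion tag.

```typescript
// Each annotation is anchored to a text range and tagged with one of the
// program's review-form criteria, so it can be routed to the right section.
interface Annotation {
  id: string;
  criterion: string;            // e.g. "Research Experience" (program-specific)
  highlightedText: string;
  comment: string;
  location: { page: number; offset: number }; // for jump-back navigation
}

type ReviewForm = Map<string, Annotation[]>;  // criterion -> annotations

// Auto-populate the review form by grouping annotations under their tags.
function populateReviewForm(annotations: Annotation[]): ReviewForm {
  const form: ReviewForm = new Map();
  for (const a of annotations) {
    const bucket = form.get(a.criterion) ?? [];
    bucket.push(a);
    form.set(a.criterion, bucket);
  }
  return form;
}
```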

Similar to the way their own annotations are populated and displayed, a reviewer can compare their scores, annotations, and summaries to those of another reviewer who has completed their evaluation. A visual display of multiple reviewers’ scores allows a quick gauge of any discrepancies. Clicking on another reviewer’s annotation likewise brings the reviewer to its location in the application materials PDF. We designed this feature to become available only after a reviewer has completed their initial review; this helps ensure the fairness of evaluations and was welcomed by many reviewers in our testing sessions.
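The gating rule itself is simple. Here is a hedged sketch, with hypothetical names, of the check that keeps peers’ evaluations hidden until a reviewer submits their own:

```typescript
// Peers' scores and annotations stay hidden until the viewer has submitted
// their own review of the same applicant, preserving independent judgment.
interface ReviewRecord {
  reviewerId: string;
  applicantId: string;
  submitted: boolean;
}

function canSeePeerReviews(
  viewerId: string,
  applicantId: string,
  reviews: ReviewRecord[],
): boolean {
  return reviews.some(
    (r) =>
      r.reviewerId === viewerId &&
      r.applicantId === applicantId &&
      r.submitted,
  );
}
```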

The team added a condensed view that summarizes the review entered for each application. For each category of the review form, there is a visualization of the numerical rating the reviewer assigned, and users can click on the text to see the full comment made in the review form. In the “All Reviews” mode, the data table displays the evaluations of multiple reviewers.

Key Goal 3: Designing for Experience

Review work is already quite stressful. Not only are reviewers pressured by deadlines, but many also feel anxious about rejecting highly qualified applicants, knowing the decision will very likely change the course of their lives. It is easy to start losing the human element of reviewing when facing late nights and hundreds of applications. At the same time, it is a struggle to balance spending too much and too little time on any one applicant. These are challenges reviewers face regardless of the system they use. It became clear through our research that reviewers are willing to prioritize fairness and consistency over pure efficiency, a testament to the dedication and passion these reviewers bring to selecting the next generation of SCS graduates.

Designing for experience is closely tied to our two previous design goals of efficiency and effectiveness. Redesigning the interface and streamlining the workflow relieves the frustration of finding assigned applicants, and the new annotation features make it easier to do a thorough, comprehensive evaluation. These optimizations in efficiency and effectiveness work in tandem to create an overall improved review experience.

This displays a generalized version of how reviewers feel as they complete tasks. These emotions are most amplified when reviewing hundreds of applicants. Reviewers have the most negative emotional experiences when trying to find their applicants, filling out the review form, and moving to the next applicant. Users cycle through this process with each applicant they evaluate; experiencing this roller coaster of highs and lows starts to take a toll on reviewers.

In the future state, we hope ApplyGrad not only fulfills functional needs but also creates a calm and pleasant experience for reviewers. Our goal is to elevate and level out the emotional journey to create a more even and overall positive experience. Elevating the overall emotion of the individual review process can also carry into an improved experience in the admissions committee.

Implementation Plan

ApplyGrad is mission-critical to the admissions process for graduate programs in the School of Computer Science. Any updates to the system need to take this into consideration; implementing advanced features will take time and resources. We created a multi-stage implementation plan to deliver interface design improvements that raise the efficiency of the review process now, while the more advanced features are built and refined in the system.

Introducing a design system to our work makes common elements consistent and easier to implement. Our design adopts Google’s Material Design system and utilizes established design patterns to create a cohesive look and feel. The design system works well with modern web development and has public NPM packages containing modular implementations of the interface components, which makes future implementation much easier: developers would not need to build the front-end interfaces from scratch.
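As an example of what that could look like, here is a sketch of the progress bar and navigation buttons built from stock @mui/material components, assuming a React front end (our assumption about the stack, not the team’s actual code):

```typescript
// Example (TSX): the dashboard header assembled from off-the-shelf
// Material UI components instead of hand-rolled front-end code.
import * as React from "react";
import { Box, Button, LinearProgress, Typography } from "@mui/material";

export function ReviewProgressHeader(props: {
  completed: number;
  total: number;
  onResume: () => void;      // jump straight to the next review
  onOpenTable: () => void;   // open the assigned-applicants table
}) {
  const percent = (props.completed / props.total) * 100;
  return (
    <Box sx={{ p: 2 }}>
      <Typography variant="body2">
        {props.completed} of {props.total} reviews complete
      </Typography>
      {/* Progress bar lets reviewers quickly gauge their progress. */}
      <LinearProgress variant="determinate" value={percent} />
      {/* Two entry points, matching the reviewer's preferred navigation. */}
      <Button onClick={props.onResume}>Resume reviewing</Button>
      <Button onClick={props.onOpenTable}>View assigned applicants</Button>
    </Box>
  );
}
```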

To determine which features to incorporate into each implementation phase, the team evaluated them on seven factors, including desirability, understandability, and the other factors displayed in this diagram. We slotted features that are easy to implement into the short-term horizon and harder ones into the long term. We also made sure the long-term features can be used and built upon for years to come and can be generalized to most departments within the School of Computer Science.

Value Propositions

What features or improvements could we design that would be the most impactful and justify the resources required to implement them? We believe our solution provides four main value propositions to help improve the review process.

1) High Customizability and Generalizability
Our design can be adapted to fit the review process of different programs. While many programs share the same general stages of review, both individual and committee-based, many have different review criteria and look for specific qualities in applicants. Our system balances customizability and generalizability to cater to the majority of graduate programs in the School of Computer Science.
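One way to picture that flexibility, using illustrative types only: each program supplies its own criteria and labels while the review flow stays shared.

```typescript
// Per-program configuration: criteria and labels vary, the pipeline doesn't.
interface ProgramConfig {
  programId: string;  // e.g. "MHCI" (illustrative)
  reviewCriteria: { name: string; description: string; maxScore: number }[];
  applicantLabels: string[];  // program-defined tags
}

// Hypothetical configuration for an interdisciplinary program.
const exampleConfig: ProgramConfig = {
  programId: "MHCI",
  reviewCriteria: [
    { name: "Design", description: "Portfolio and design thinking", maxScore: 5 },
    { name: "Research", description: "HCI research experience", maxScore: 5 },
  ],
  applicantLabels: ["Interdisciplinary", "Scholarship Candidate"],
};
```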

2) Reduce Cognitive Burden
A constant theme throughout our research was the cognitive burden reviewers experience when reading applications. By incorporating the application materials and review form into one interface, reviewers can quickly cross-reference applicant materials and their notes. With the improved interface, all main user tasks fit on one screen. This reduces the “clickiness” of ApplyGrad, something many reviewers complained about, and eases the mental burden of remembering information across screens, documents, or tabs, relieving the stress reviewers face in synthesizing applicant information. Reduced stress, in turn, supports better decision-making and an improved reviewer experience.

3) Balance the Cohort
The ability to see an overview of the applicant pool with a summary of evaluations can facilitate a quicker analysis of the cohort’s composition. The customized table view we designed is beneficial for committees and cannot easily be achieved with an external spreadsheet. One priority for programs is to ensure a diverse pool of admitted applicants, not only demographically but across academic disciplines for interdisciplinary programs like MHCI. For other programs, labels can be customized for more specific purposes, like flagging potential scholarship recipients or marking the tech stack each applicant brings based on their project experience.

4) Improve Retention
Improving the tools reviewers use can also aid the retention of admissions committee members and make serving more attractive. In an elite institution like the School of Computer Science, the opportunity to hand-pick the next class of students should be an exciting prospect instead of an additional burden. A customized, unique graduate application system not only facilitates a more natural workflow; its in-house redesign also reflects the pride of being the leading computer science department in the nation.

Reflection

This project was an amazing experience and pushed me to grow into a better designer and researcher. Here are a few of the biggest lessons I learned over the course of the eight-month project.

1) Creating cohesive, driven research plans was a crucial piece of this project. As Research Lead on a long-term project, I worked to define directions based on our previous findings and insights to make sure the project stayed on track and continued to move forward. There were times, however, that called for quick pivots and adjustments. Working in two-week sprints demanded that we not let those changes derail us. It pushed me to learn how to balance being flexible enough to change course when necessary while still maintaining our quality of work and the overall project timeline and deadlines.

2) After getting feedback from our clients at the end of the spring semester, my team made a conscious, deliberate effort to find the best "bang for the buck": in other words, to narrow our project scope to the solution with the biggest impact that could be implemented soonest. One of the challenges ApplyGrad faced as a system was a lack of resources. We took this context into account as we designed and tested solutions, and we consulted experts about the feasibility of possible features. As a result, we were able to clearly articulate the rationale behind our design and pitch its value to SCS stakeholders.

3) My team was made up of amazing researchers, designers, and computer scientists. Whether it was writing research plans with Ugochi, deploying prototypes with Anna, making project plans with Emily, or conducting ML and prototyping interviews with Yuwen, I really enjoyed and valued getting to dabble in all parts of the project. We also took time each sprint to do a retrospective and give each other feedback, as well as share our thoughts on the progress of the project. Being able to share knowledge and build rapport was a key component of what made our team successful.

4) Storytelling is a key component of UX research. It was clear as we conducted our research that the current ApplyGrad system was adding to the already significant burden of reviewing graduate admissions. But how do you convey how challenging it really is to an audience that has never been a reviewer? Why is updating ApplyGrad a problem worth solving? While we struggled with this narrative at the beginning of the project, I led our team to create an engaging presentation of our final solution that clearly articulated the short- and long-term impacts of removing obstacles and adding support in the review process. It was incredibly rewarding and helped me define why I love this work: telling the stories of users and advocating for change.
