Uber should use data science to fix its culture


Ever since a former employee published an account of her experiences with her boss and HR, the media has been scrutinizing Uber’s workplace. We’ve heard in the past that Uber uses data to analyze its riders in detail. We’ve also heard that it has an aggressive, hard-charging culture.

Ironically, some of the same data analysis Uber does on its riders could help it fix its culture.

On Monday, I spoke to Dr. Carissa Romero from Paradigm, a strategy firm that helps companies analyze themselves to improve inclusion and diversity. Romero has a doctorate in psychology and is an expert in fixed and growth mindsets—people’s beliefs about the nature of talents and abilities—and founded Stanford’s applied research center on the subject.

I asked Dr. Romero about the techniques and tools companies can use to find problems and what kinds of interventions are effective. She began by making a distinction between the two fundamental types of bias.

The behavior the former employee described at Uber was a case of explicit bias. Some of it even made it into written form. Finding explicit bias or harassment of that kind can be done with a simple text search.
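
As a sketch of what such a text search might look like (the flagged-term list and the messages here are hypothetical placeholders, not anything from a real company’s systems):

```python
# Minimal sketch: scanning written communications for explicitly biased or
# harassing language with a plain text search. FLAGGED_TERMS and the sample
# messages are invented placeholders.
import re

FLAGGED_TERMS = ["slur_a", "slur_b", "harassing phrase"]  # hypothetical list

def find_explicit_bias(messages):
    """Return (message_id, term) pairs for every flagged term found."""
    hits = []
    for msg_id, text in messages.items():
        for term in FLAGGED_TERMS:
            if re.search(re.escape(term), text, re.IGNORECASE):
                hits.append((msg_id, term))
    return hits

messages = {
    1: "Quarterly numbers look great.",
    2: "That was a Harassing Phrase in the meeting notes.",
}
print(find_explicit_bias(messages))  # [(2, 'harassing phrase')]
```

A real deployment would need a carefully maintained term list and human review of matches, but the mechanism itself is this simple.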

Most workplace problems in this area, however, involve implicit bias. It can be just as damaging—and the person making the mistake may not even know they’re doing it. For example, if I’m hiring a software developer and I have in my mind what that developer “is like,” I may inadvertently make judgments linked to race, gender, or culture that aren’t related to details actually important to the job.

This is also not something you can find with a simple text search, because people acting on implicit bias aren’t going to say “sex” or use a racial epithet. Many people who make these biased mistakes are not bad people and don’t have bad intent, but they have to learn to make decisions differently and become better informed by data.

Uber’s explicit problems are part of a larger cultural failure. You don’t need fancy data analysis to see that. Yet even if the company addresses that issue, it will still have a lot of work to do on internal culture and practices if it wants a more diverse workplace.

The first place to look is often a company’s applicant tracking system. According to Dr. Romero: “We pull data from Greenhouse to learn about things like the diversity of different applicant sources and pass-through rates at each stage of the hiring process.”

Companies also need to look at employees throughout their “lifecycle” at the company. Some of this information lives in their human resources information system or performance review system.

This isn’t necessarily enough. Paradigm also relies on engagement surveys and focus groups to better understand differences in how engaged employees feel and whether they think their voices are being heard. This qualitative data helps make the quantitative data more understandable.

How do you determine bias?

Bias can exist at different stages of employment, from how applicants are attracted to apply for a job to hiring, evaluation, promotion, and retention, as well as terminations. Different metrics apply to each of these stages.

According to Dr. Romero, in the recruiting phase, it pays to take a hard look at candidate sources. Often, employee referrals result in less diversity. When it comes to hiring, companies should look at the different pass-through rates: If black candidates pass through phone screening at a lower rate than white candidates, that’s an example of quantitative data the company can use to detect bias.
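A pass-through-rate comparison like the one Dr. Romero describes is mechanically simple to compute. This is a minimal sketch using synthetic data and made-up group labels:

```python
# Sketch: pass-through rates at one hiring stage, broken out by demographic
# group. Each record is (group, passed_phone_screen); the data is synthetic.
from collections import defaultdict

candidates = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def pass_through_rates(records):
    """Fraction of candidates in each group who passed the stage."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

rates = pass_through_rates(candidates)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
```

A gap this large in real data would be exactly the kind of quantitative signal worth investigating; with real sample sizes you would also want a significance test before drawing conclusions.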

Once an employee is hired, performance review scores and promotion rates become key sources. Next, when examining a company’s employee retention rates, look at terminations and longevity. If the data is stratified by demographic group (race, gender, and so on) and there are large disparities, that may be an indication of bias.
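Stratifying retention data by group is similarly straightforward. A sketch with invented tenure and termination records (group names and numbers are placeholders):

```python
# Sketch: average tenure and termination rate per demographic group.
# Each record is (group, tenure_in_months, was_terminated); data is synthetic.
from statistics import mean

employees = [
    ("group_a", 30, False), ("group_a", 44, False), ("group_a", 26, True),
    ("group_b", 12, True),  ("group_b", 15, True),  ("group_b", 20, False),
]

def retention_summary(records):
    by_group = {}
    for group, tenure, terminated in records:
        g = by_group.setdefault(group, {"tenures": [], "terms": 0, "n": 0})
        g["tenures"].append(tenure)
        g["terms"] += terminated
        g["n"] += 1
    return {
        g: {
            "avg_tenure_months": mean(d["tenures"]),
            "termination_rate": d["terms"] / d["n"],
        }
        for g, d in by_group.items()
    }

print(retention_summary(employees))
```

Large disparities in these summaries are not proof of bias on their own, but they tell you where to look.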

Other, more subtle data can also be analyzed. When looking at performance reviews, are “soft skills” mentioned more often for women or people of color compared to men? According to Dr. Romero, “Our data scientist uses a machine learning algorithm to look at whether different language is used to describe candidates from different demographic groups, but we also very often do it manually where we pull a random sample of written feedback to manually code. Then we use statistical tools to analyze the differences.” In other words, they gather data on employees and run algorithms on it in the same kinds of ways companies are using data to understand their customers.
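Paradigm’s actual models aren’t public, but the manual-coding approach Dr. Romero describes (counting how often a fixed list of “soft skill” words appears in feedback for each group) can be sketched like this, with an invented term list and synthetic reviews:

```python
# Sketch: fraction of performance reviews per group that mention any term
# from a fixed "soft skill" vocabulary. Terms and reviews are invented.
SOFT_SKILL_TERMS = {"helpful", "supportive", "friendly", "collaborative"}

def soft_skill_rate(reviews):
    """Fraction of reviews in which at least one soft-skill term appears."""
    hits = sum(
        1 for text in reviews
        if SOFT_SKILL_TERMS & set(text.lower().split())
    )
    return hits / len(reviews)

reviews_by_group = {
    "group_a": [
        "delivers strong technical results",
        "owns hard projects end to end",
    ],
    "group_b": [
        "friendly and supportive teammate",
        "helpful in meetings, collaborative",
    ],
}
for group, reviews in reviews_by_group.items():
    print(group, soft_skill_rate(reviews))
```

On real data you would code a random sample by hand first, then test whether the between-group difference is statistically significant, as Romero describes.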

Dr. Romero also focuses on qualitative data: the “why.” This emerges from interviewing people. Some questions she pointed out: Are recruiters reaching out only on LinkedIn? What are managers looking for in a candidate?

Unfortunately, statistical analysis often can’t pinpoint specific individuals within a company. If a manager has only a few reports and hasn’t had to interview many people, the sample size will be too small. Instead, Romero advises companies to focus on establishing practices that prevent bias in the first place.

How do you fix it?

According to Dr. Romero, “When you’re evaluating your employees, if you have a standard set of questions that you use to evaluate people in that role, then you’re going to make it less likely that bias influences decisions.” In contrast, “Not having a process would make it more likely that you would have more of these individual cases that people are relying on stereotypes compared to when you have processes in place.”

Process is great, but I’ve worked in organizations that only went through the motions, treating process as a box to check. Romero says such sloppiness can be avoided by creating up-front descriptions of what you’re seeking in each position and clearly establishing metrics for performance. When it comes to performance reviews, require the manager to give an example of why the rating is deserved. According to Dr. Romero:

When you know what you’re evaluating up front and use examples to support your evaluation, biases are less likely to come into play. Evaluators should decide ahead of time what to look for, and organize feedback by relevant attributes. When you’re not clear about what you’re looking for, you’re more likely to rely on an overall feeling. That feeling can be influenced by bias. For example, you may be influenced by how much you like that person personally (vs. how good of a fit for the role they are). You might just like them because they are similar to you in some irrelevant way (maybe you have the same hobbies). Or you might be influenced by a stereotype – for example, what does a typical person look like in this role?
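One way to enforce that discipline in software is to make the review structure itself reject ratings that lack evidence. A hypothetical sketch (the attribute names are invented, not any real company’s rubric):

```python
# Sketch: a structured review where attributes are fixed up front and every
# rating must carry a concrete example. ATTRIBUTES is a hypothetical rubric.
from dataclasses import dataclass

ATTRIBUTES = ["code quality", "collaboration", "delivery"]

@dataclass
class Rating:
    attribute: str
    score: int    # e.g. 1-5
    example: str  # concrete evidence supporting the score

def validate_review(ratings):
    """Reject reviews with unrated attributes or ratings lacking examples."""
    rated = {r.attribute for r in ratings}
    missing = [a for a in ATTRIBUTES if a not in rated]
    if missing:
        raise ValueError(f"unrated attributes: {missing}")
    for r in ratings:
        if not r.example.strip():
            raise ValueError(f"no example given for {r.attribute!r}")
    return True

review = [
    Rating("code quality", 4, "Refactored the billing module with tests."),
    Rating("collaboration", 3, "Pairs regularly with new hires."),
    Rating("delivery", 5, "Shipped the migration on schedule."),
]
print(validate_review(review))  # True
```

Forcing the example at submission time is what keeps the evaluation anchored to evidence rather than an overall feeling.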

Some issues are more subtle and involve company culture. “Women often feel it’s hard to get heard in a meeting because they’re often interrupted. You might have a moderator for every team meeting or put a sign in the room. Make sure individuals are aware, distribute agendas ahead of time, and ask people for their thoughts,” said Dr. Romero.

According to Dr. Romero, what isn’t effective is “diversity training” to raise awareness, nor is copying other companies’ strategies. “Coming up with strategies before you’ve taken a look at your company’s data, and analyzed your process and your culture, is a bad approach. I also think ignoring behavioral science research is a bad approach. So basically, a non-data-driven approach is bad (ignoring your own data and ignoring what behavioral science research tells us).”

Why the need is real

I asked Dr. Romero if everyone needs this stuff, even small companies and startups. “In general, yes,” she replied. “Companies use data when making business decisions; it makes sense to use data when making people-related decisions. A data science approach to understanding people in your organization is helpful.”

This is the crux of the matter: Well-managed companies use data to make decisions. Well-managed companies have processes for making repeated decisions. It only makes sense to have good processes and data for making decisions about people. Good processes and data also happen to help create far more diverse environments.

Obviously, you want to do this because it’s the right thing to do. But as Dr. Romero says, “If you want to get your best work out of employees, you want to create an environment where people from any background can be successful.” Ask McKinsey: its research has found that more diverse companies tend to perform better financially.