Discrimination in selection?

Professional statistician Nancy Carpenter and Unite activist Ian Allinson explain how to check for discrimination in situations like redundancy selection.

[Image: cutting jobs; dice (www.flickr.com/photos/dullhunk/6097248541)]

It’s common for employers to select employees for various purposes, including recruitment, redundancy, appraisals, pay rises, bonuses and promotions. It’s common for people to say such actions should be subject to “equality impact assessments” to ensure they are not tainted by discrimination.

Where people attempt an equality impact assessment, it’s all too common that union reps and/or HR gather the figures, draw some graphs, and squint at them before expressing opinions about whether there’s a problem or not. Sometimes that’s good enough, but what if people don’t agree?

In this article, we explain how to test whether a selection is independent of a particular characteristic (e.g. gender, age) in the simplest cases. These cases only deal with simple selection – a simple Yes/No (e.g. was an individual selected for redundancy) rather than having different values (e.g. a score of 1-5 in an appraisal system, or the size of pay rises).

First catch your data

Unless the employer can be convinced to carry out a proper check for discrimination themselves, and share the data and results, your first problem will be gathering the data in order to do it yourselves.

Bear in mind that it is likely to be classed as “sensitive personal data” under the Data Protection Act 1998 so you should take good care of the data. If you are employed by the organisation whose selection you are checking, and are doing so as part of your role (e.g. in redundancy consultation, or as a recognised employee representative) you may be covered by their own data protection registration just like their HR department.

The size of the population for which you gather data is extremely important. The bigger it is, the more conclusive the analysis can be. So if an employer selects from 200 workers by dividing them into various “selection pools” or by department, try to get data for the entire 200, not just certain groups.

Depending on what the selection is for and how workers are represented in the organisation, there are various routes you might be able to use to get the data, such as:

  • Where the employer recognises a union, the union may be able to request the information for collective bargaining
  • If the data relates to redundancies or a TUPE transfer, the representatives involved in collective consultation may be able to get the information
  • Asking questions under the Equality Act 2010
  • If the data relates to health and safety then any recognised union Safety Reps (or Representatives of Employee Safety if there is no union recognition) may be able to get the data under SRSC 1977 Regulation 7 or HSCE 1996 Regulation 5.
  • If the organisation has a body set up under the Information and Consultation of Employees Regulations 2004 (sometimes called a national works council), employee representatives on that may be able to get information
  • Public Authority employers may have to disclose it under the Freedom of Information Act
  • If the organisation has a European Works Council and the issue affects employees in more than one EU country, employee representatives may be able to get information
  • You could ask workers to provide the information themselves e.g. through a survey
  • Workers could request the information as part of appeals, grievances or individual legal action
  • Depending on how the data is held by the employer, it may be possible to get access to data using a Subject Access Request under the Data Protection Act

Tools

There are lots of options available for free statistics software. For simple statistics such as the ones shown here there are also several free ‘calculators’ available on the web. For this article we’ve chosen to use Microsoft Excel, as it’s likely that someone in most workplaces will have access to a copy. It is possible to do all the sums using just Excel, but it’s much easier if you install the free Real Statistics Using Excel add-in, and that’s what we’ll use for the rest of this article.

At the time of writing, you go to the “Free Download” menu on the web site and choose the “Resource Pack” option. It’s important you follow the instructions for your version of Excel in order to install the add-in correctly and in the right place – so read them before starting.

The Real Statistics resource pack adds a lot of functions and features to those built into Excel. The web site contains a lot of useful help, e.g. the page on supplemental functions.

Doing the sums

Once you’ve got your data, you need to put it into your Excel spreadsheet as a little table, like this:

        A              B            C
  1                    Category A   Category B
  2     Selected       5            10
  3     Not Selected   60           25

For example, “Category A” might be “Male” or “Aged over 50” and “Category B” might be “Female” or “Aged 50 or under”. You fill in the numbers from your data.

You then need to enter this formula in an empty cell:

=FISHERTEST(B2:C3,2)

B2:C3 is the range of cells containing your data. There’s an example spreadsheet here where this formula has been put in cell C6. The spreadsheet includes a number of other examples.

Excel evaluates the formula and displays a number, in this case:

0.00807565

This shows the probability that the result seen (or one even more extreme) could have happened by chance. You may want to change the result to display as a percentage (in this case 0.8%), or change the number of decimal places shown, using these buttons in Excel (on the “Home” tab of the ribbon in Excel 2013):

[Image: number-formatting buttons on the Excel ribbon]

In real life you will probably want to do a number of tables based on different categories. The spreadsheet includes several examples. You can see from the first two examples what a difference population size makes. Selecting the same proportions but from a population ten times bigger makes the result of the test many billions of times smaller.
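If you don’t have Excel, or want to double-check the add-in’s output, the same test is available in other free tools. As an illustration only (the steps in this article are based on Excel), here is a minimal sketch in Python using the SciPy library’s fisher_exact function on the table above:

# Cross-check of the 2-tailed Fisher's Exact Test using Python and SciPy.
# This is an alternative to the Excel FISHERTEST formula described above,
# and assumes Python with SciPy is installed.
from scipy.stats import fisher_exact

# The 2x2 table from the example:
#                Category A   Category B
# Selected            5           10
# Not Selected       60           25
table = [[5, 10], [60, 25]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"2-tailed p-value: {p_value:.8f}")   # approximately 0.00807565, matching the Excel result
print(f"As a percentage:  {p_value:.1%}")   # about 0.8%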

As a way of conveying the scale, rather than the statistical significance of any issues, you may also want to calculate the “risk ratio”. This shows how much more likely someone with a characteristic is to have been selected. Using the example in the table above:

Risk ratio = (risk of Category A person being selected) / (risk of Category B person being selected)

= ( [number of A people selected] / [total number of A people] ) /
( [number of B people selected] / [total number of B people] )

= ( 5 / 65 ) / ( 10 / 35 )

= (0.0769) / (0.2857)

= 0.2692

This means that an A person was 0.2692 times as likely to be selected as a B person. You can convert this into a risk ratio the other way round: 1/0.2692 = 3.715. So a B person is 3.7 times as likely to be selected as an A person.
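The risk ratio arithmetic is simple enough to do on a calculator or in the spreadsheet itself, but if you are already scripting the analysis it is easy to automate. A minimal sketch in Python (the function name and layout are ours, purely for illustration):

def risk_ratio(selected_a, not_selected_a, selected_b, not_selected_b):
    # How much more likely a Category A person was to be selected than a
    # Category B person (a value below 1 means A was less likely).
    risk_a = selected_a / (selected_a + not_selected_a)   # e.g. 5 / 65
    risk_b = selected_b / (selected_b + not_selected_b)   # e.g. 10 / 35
    return risk_a / risk_b

rr = risk_ratio(5, 60, 10, 25)
print(f"A vs B risk ratio: {rr:.4f}")       # about 0.2692
print(f"B vs A risk ratio: {1 / rr:.1f}")   # about 3.7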

What do the results mean?

We are testing the hypothesis that the category is unrelated to the chance of being selected. The result is the probability of randomly finding a result at least as extreme as the one actually seen if that hypothesis is true.

So in the example above, the number 0.00807565 is 0.807565%, which means there is less than a 1% chance of randomly seeing that high a proportion of women selected for redundancy if there really were no link between gender and selection.

Here is an example of how you might explain that to an employer:

We have analysed your recent selection process using the following data:

               Male   Female
Selected       5      10
Not Selected   60     25

We calculated the risk ratio and found that a woman was 3.7 times as likely to have been selected as a man.

We used Fisher’s Exact Test (2 tailed) to analyse this information and found, with a high degree of statistical confidence, that an individual’s chance of being selected is related to their gender: there is only a 0.8% probability of seeing a difference this large by chance if selection were independent of gender. For comparison, in most statistical fields a probability below 5% is typically considered an acceptable level for rejecting the hypothesis that there is no causal link.

If you’ve ended up with a result from Fisher’s Exact Test (2 tailed) of around 0.05 (5%) or less, then it is hard to be confident that the selection was free of discrimination.

A result like the one above does not necessarily mean that unlawful discrimination has taken place. As well as the possibility that the result occurred by chance, the Equality Act 2010 allows an employer to argue that the difference in outcome was objectively justified and proportionate to a legitimate business objective. Crucially, however, once there is a complaint and evidence of detrimental treatment from someone with a “protected characteristic” under the Equality Act, the onus is on the employer to justify the difference, not on the employee to prove how it happened.

How to use the results

If your results show that the probability of selection was linked to any protected characteristic under the Equality Act 2010 (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation) then you should challenge the fairness of the selection.

The results of the analysis will not (other than in exceptional circumstances) identify individuals, so are not “personal data” and the Data Protection Act does not stop you sharing them however you feel is appropriate.

The most obvious thing to do with your analysis is to present it to the employer. You can point out to them that their selection was not independent of a protected characteristic. You may want to ask them to stop a process (e.g. redundancy) pending investigation. You could point out the potential damage if they have discriminated, including to reputation and financially through litigation.

The most important thing to do with your analysis is to share it with the workers. This can help you campaign for the employer to take a fairer approach. Some workers will be angry about any potential discrimination. It’s worth pointing out to others that a selection process that discriminates against people with a protected characteristic is probably unfair in all sorts of other ways.

You may want to release the analysis to the public or the media if you’re having to campaign to get the employer to take the problems you have uncovered seriously.

You should get workers discussing how discrimination could have occurred in the selection process. Two important concepts to discuss are “unconscious bias” and “institutional discrimination”. While some people consciously and deliberately discriminate against people with a particular characteristic, it’s common for discrimination to occur unintentionally – sometimes those responsible are genuinely shocked to discover what they have done.

Unconscious bias is where assumptions are unconsciously made about people based on their characteristics, leading to them being treated differently. For example, a manager might assume that a worker was less able to learn new skills (or less keen to do so) because they were older and not offer them training. Unconscious bias is even an issue when people score themselves for a selection. For example, when workers are asked to rate their skill in a particular area, women tend to rate themselves lower than men. Many selection processes include subjective elements such as “future potential” where the unconscious bias of the scorer could lead to people being scored differently because of a characteristic.

Institutional discrimination is a concept which hit public attention when the Macpherson Report into the murder of Stephen Lawrence talked about institutional racism in the Metropolitan Police. The same concept applies to other areas of discrimination. Institutional discrimination takes place through policies, practices, processes, customs etc. Such discrimination may be built into the choice or wording of scoring or selection criteria themselves. For example, scoring on “flexibility” could indirectly discriminate against those with disabilities or caring responsibilities – who are more likely to be women. Or the discrimination may occur before selection but affect people’s ability to score well, for example if training, work or appraisal scores had been allocated unfairly.

Potential Pitfalls

The workings in this article assume that the population is divided into just two categories for each test – for example male or female, or under 50 and 50 or over. If you have to choose how to divide the population, e.g. into age bands or by grouping different ethnic origins, you need to be ready to justify your choice of division and to check that there are reasonable numbers in both groups. If possible, decide how to divide the population for analysis before seeing the data, so that the data does not influence your choice. Alternatively, you could use a more complex statistical approach to test across multiple categories, as sketched below.
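If you do want to keep more than two categories (for example several age bands), one common option is a chi-squared test of independence on the whole table. This goes beyond the Excel steps described in this article, so treat the following Python sketch as a pointer rather than a recipe; the numbers in it are invented purely for illustration:

# Hypothetical example: redundancy selections across three age bands.
from scipy.stats import chi2_contingency

table = [
    [4, 6, 12],    # Selected:     under 40, 40-49, 50+
    [46, 34, 28],  # Not selected: under 40, 40-49, 50+
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value: {p_value:.4f}")
# As with Fisher's test, a small p-value suggests selection is not
# independent of age band. The chi-squared test is an approximation and
# becomes unreliable when the expected count in any cell is small.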

It is important that you don’t “over test” the data. A significance level of 5% means there was only a 1 in 20 chance of getting that result by chance if there was no causal link. But if you run 20 tests, there is a good chance that at least one of them will come back with an apparently “statistically significant” result, as this cartoon illustrates well: https://xkcd.com/882/. The more tests you run, the higher the level of confidence (i.e. the smaller the probability) you should demand.
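To see why over-testing matters, here is a quick back-of-the-envelope calculation (ours, not part of the original analysis) of the chance of getting at least one false positive when running several independent tests at the 5% level:

# Probability of at least one apparently "significant" result purely by
# chance, assuming independent tests and a 5% significance threshold.
significance_level = 0.05

for number_of_tests in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - significance_level) ** number_of_tests
    print(f"{number_of_tests:2d} tests: {p_at_least_one:.0%} chance of a false positive")

# With 20 tests the chance is roughly 64%. One simple (conservative)
# adjustment is to divide the threshold by the number of tests run
# (the Bonferroni correction), e.g. 0.05 / 20 = 0.0025.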

In this article we describe using Fisher’s Exact Test to test the null hypothesis that selection is independent of a characteristic. Sometimes we have described that as testing for a causal link. You need to be aware that a “causal link” isn’t necessarily direct. For example, you might find that a higher proportion of women had been selected, and that a higher proportion of part-time workers had been selected. Since these two characteristics are often linked (more women work part-time) it would require further investigation to determine whether the link between gender and selection is just a reflection of the link between part-time working and selection, or whether there is an additional gender factor at play. In the jargon, the part-time working characteristic might be “confounding” gender inequalities.

The maths (if you are interested)

The analysis is based on the following model:

An urn contains N balls, of which K are white and N-K are black. You randomly select n balls from the urn, without replacing them after selection.

The hypergeometric distribution allows you to work out the probability of selecting a particular number of white and black balls.

Fisher’s Exact Test builds on the hypergeometric distribution to test a null hypothesis that the probability of selection (a ball being picked) is independent of its category (ball colour). It does this by adding up the probability of the selection that actually happened with the probabilities of all the even more unlikely selections. The test is called “2-tailed” because it adds up probabilities at both ends (lots of white balls or lots of black balls).
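To make the construction concrete, here is a short Python sketch (ours, for illustration only) that builds the 2-tailed p-value directly from the hypergeometric probabilities in the way just described:

from math import comb

def fisher_exact_two_tailed(selected_a, not_selected_a, selected_b, not_selected_b):
    # 2-tailed Fisher's Exact Test built from the hypergeometric distribution.
    # Think of the population as the urn: N balls, K of them "Category A",
    # with n of them drawn ("selected") without replacement.
    K = selected_a + not_selected_a          # total Category A people
    N = K + selected_b + not_selected_b      # whole population
    n = selected_a + selected_b              # total number selected

    def prob(k):
        # Hypergeometric probability of exactly k Category A people selected.
        return comb(K, k) * comb(N - K, n - k) / comb(N, n)

    observed = prob(selected_a)
    # Add up the probability of the selection that actually happened plus
    # every possible selection at least as unlikely, at both ends of the
    # distribution (hence "2-tailed").
    return sum(prob(k) for k in range(max(0, n - (N - K)), min(n, K) + 1)
               if prob(k) <= observed)

print(fisher_exact_two_tailed(5, 60, 10, 25))   # approximately 0.00807565

(A production implementation would typically allow a small tolerance when comparing floating-point probabilities; library routines such as those in Excel or SciPy handle this for you.)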
