The Power of the Algorithm

Algorithms and machine learning systems often apply the same biases and prejudices that humans show; however, many people fail to notice because their attention is elsewhere.

Today’s priority can become yesterday’s issue with a single new line of code. By the time people notice, the game may be over. Even if someone wanted to correct the system, its biases may be so deeply buried that they are almost invisible and hard to detect.

For example, many organisations spend time, money and energy helping people recognise unconscious bias in the workplace through webinars and workshops. This is almost yesterday’s solution when employment decisions are being made by algorithms written by predominantly young male developers. It is this group, which may not even be employed directly by the company, that can transfer its unconscious biases to the algorithm. This is today’s reality.

A sentencing algorithm created in America highlights this. It predicted which people would re-offend after an initial crime. The algorithm falsely flagged black offenders as likely to re-offend at twice the rate of white offenders. The people who created the algorithm were predominantly white males.

Developers may also be directed to create algorithms that meet specific company requirements, which may themselves be biased. Even where this practice exists, the algorithms may be considered trade secrets and need never be divulged.

In California, a person was jailed for life on the strength of software that analysed DNA traces from a crime scene. When the defence asked to see the source code of the algorithm, the request was denied because the code was deemed a trade secret.

If someone were convicted of a crime on the basis of information provided by an algorithm, wouldn’t it be their human right to know how the decision was made? Apparently not yet.

Organisations must take notice of the emerging issues with the use of algorithms. Where possible, the methodology used must be visible and transparent. What is the alternative?

Although algorithms are becoming more sophisticated, they are not the Holy Grail. Many capable people will simply not fit the algorithm, irrespective of bias. Even so, underestimating the speed at which algorithms are evolving would be unwise: it is claimed they can detect gender and race by scanning a resume with 88% accuracy.

In Australia today, it is not possible for an applicant to challenge an algorithm’s decision about the suitability of their application. The process prevents it. The applicant submits their CV online and subsequently receives an anonymous, computer-generated acknowledgement.

If unsuccessful, they receive another computer-generated response giving no real reason. The whole process is invisible to the applicant, leaving them only their imagination to work out why.

Even without intent, a process that is not managed properly may entrench a range of inequalities, as the previous examples demonstrate. This is the very thing many institutions have spent years trying to avoid.

It is easy to imagine an algorithm being coded to favour a specific group, such as applicants from a particular school or country within a certain age bracket. It is not beyond the realms of possibility, particularly if the algorithm is a trade secret and there is no right of appeal.

This scenario breeds inequality, and if history has taught us anything, inequality sows the seeds of societal discontent, often with catastrophic consequences.

As a priority, decisions made by algorithms must become more visible, for both practical and ethical reasons. Imagine if technology could tell an unsuccessful job applicant why they did not get the job, how the decision was made, and what they could do to improve their chances. The impact could be positive for the company’s brand, and its ethical approach may help attract talent to the business.

Countries like Germany have created guidelines that provide algorithm visibility. They state that “if an accident is unavoidable the self-driving car must not make any choices over who to save. No decisions should be made on age, sex, race, disabilities, and so on; all human lives matter”.

Prominent figures like cosmologist Stephen Hawking and Tesla CEO Elon Musk have endorsed a set of principles that reinforce the importance of transparency, to ensure that self-thinking machines remain safe and act in humanity’s best interests.

Not every leader has access to the knowledge that some countries or technologists do. However, today’s leaders have a responsibility to be informed, and to know enough to ask the probing ethical questions. Otherwise, they will be implementing yesterday’s solutions.

The relationship between technology and bias is only one of the complex ethical issues facing society today. It becomes even more complex when the system is invisible to the people using it or affected by it. The power cannot lie solely with the algorithm. Today’s mantra must be algorithm transparency.