information for transformational people

Beating the bias in AI


From an article by ETIQ.AI

ETIQ.AI is a social-impact start-up that helps organisations using artificial intelligence (AI) to remove biases from their machine learning models. The ETIQ Engine finds and mitigates negative biases at the earliest possible stage. ETIQ helps companies avoid reputational threats, meet regulatory requirements such as GDPR or discrimination laws and, more generally, build services appropriate for the variety of groups in their consumer base.

Why are solutions to address AI bias so important for business and society? As companies rush to implement machine learning algorithms for various use cases without being prepared to deal with inadequately trained models, it is crucial that business and society understand the implications of the bad decisions that can result from their outputs.

Take the case of Amazon, for example. It detected considerable gender bias in its recruitment algorithms, raising major questions about how reliable and ethical such algorithms are. Then there’s Apple. In November last year, a US financial regulator launched an inquiry into its new credit card over claims that it gave different credit limits to women and men. This came to light when a prominent tech entrepreneur tweeted that he had been given a credit limit 20 times higher than his wife’s, even though she had a higher credit rating.

Some US police departments use AI-based criminal risk assessment algorithms. Historical crime statistics are fed into the tool to find patterns in a defendant’s profile and produce a recidivism score, a numerical estimate of the likelihood of reoffending. Judges used the scores in sentencing decisions such as the type of rehabilitation a defendant received, whether they got bail and the length of their sentence. High scores led to severity, low scores to leniency. Far from reducing the human biases that judges might hold, this AI model perpetuated and amplified existing discrimination and inequalities found in society. It released dangerous criminals from less disadvantaged backgrounds while keeping low-risk defendants from low-income and minority backgrounds in prison for longer than they should have been.

Populations already disproportionately targeted by law enforcement had that targeting embedded into the system. This led to a vicious cycle of amplification: not only did it amplify inequality, it also failed to achieve the objective of allocating resources efficiently and reducing prison populations without a rise in crime. In fact, the score proved overwhelmingly unreliable at forecasting violent crime, with only 20 per cent of the people predicted to commit violent crimes actually going on to do so.

Another field of particular risk is recruitment, where qualified and skilled candidates can be lost in the preliminary round because of the keyword identification criteria programmed into online screening and assessment tools. Candidates who don’t fit the pattern of the historical hires analysed by the algorithm are often never considered by a human, let alone interviewed. Candidates from minority backgrounds who are screened out by AI bias miss out on opportunities whose benefits would also reach their families and communities.

Another field in which biased predictive algorithms carry a high risk of negative impact is financial services, particularly in risk assessments where a financial services company decides how large a loan a customer can access and what premiums will apply.

ETIQ believe it is possible to stop the bias, unfairness and bad business decisions currently associated with AI algorithms. They created an AI bias scanning tool to show that it is possible to solve some of the issues around algorithmic bias that have a huge impact on society and are also fundamentally bad business.

They start by identifying the groups of people in protected categories that AI algorithms get wrong over and over again. Then they change some of the data points or details associated with those people that cause bias in the algorithm. If the algorithm is not exposed to these details in the training data, it is less likely to use them when it actually makes a decision.
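As a rough illustration of that first stage, the sketch below withholds a protected attribute (and any obvious numeric proxies for it) from the training data before a model is fitted. The dataset, column names and threshold here are hypothetical examples, not ETIQ's actual engine.

```python
# Illustrative sketch only -- not ETIQ's engine. Assumes a hypothetical
# "loans.csv" with a target column "defaulted" and a protected attribute
# "gender" (both names are made up for this example).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

PROTECTED = ["gender"]     # attributes to withhold from training
PROXY_THRESHOLD = 0.8      # also drop features strongly correlated with them

df = pd.read_csv("loans.csv")
X, y = df.drop(columns=["defaulted"]), df["defaulted"]

# Flag obvious proxies: numeric columns highly correlated with the protected
# attribute (encoded as integer codes just for this check).
gender_codes = X["gender"].astype("category").cat.codes
numeric_cols = [c for c in X.select_dtypes("number").columns if c not in PROTECTED]
proxies = [c for c in numeric_cols if abs(X[c].corr(gender_codes)) > PROXY_THRESHOLD]

# Train only on the features the model is allowed to see.
X_clean = X.drop(columns=PROTECTED + proxies).select_dtypes("number")
X_train, X_test, y_train, y_test = train_test_split(
    X_clean, y, test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```

Dropping the protected column alone is rarely enough, which is why the sketch also looks for correlated proxy features before training.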

The second stage takes place once the company has built the algorithm, started testing and put it into production. ETIQ help analyse the output and audit what it is recommending. They identify any biased predictions by providing a set of metrics and statistics that guide the company towards the factors or features that need addressing.
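In the spirit of that audit step, the sketch below computes a few standard group-fairness statistics (per-group selection rates, false positive rates and a disparate impact ratio) from a model's held-out predictions. It is a generic illustration of bias metrics, not ETIQ's metric suite; all names and the toy data are made up for the example.

```python
# Generic fairness-audit sketch -- not ETIQ's actual metrics. Expects binary
# predictions, true labels and a protected attribute for a held-out set.
import numpy as np

def audit(y_true, y_pred, group):
    """Per-group selection and false positive rates, plus an overall
    disparate impact ratio, for binary predictions (1 = favourable outcome)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        mask = group == g
        negatives = mask & (y_true == 0)
        report[g] = {
            # share of this group receiving the favourable outcome
            "selection_rate": float(y_pred[mask].mean()),
            # share of this group's true negatives that were wrongly flagged
            "false_positive_rate": float(y_pred[negatives].mean()),
        }
    rates = [r["selection_rate"] for r in report.values()]
    # Four-fifths rule of thumb: a ratio below ~0.8 is a common warning sign.
    report["disparate_impact_ratio"] = min(rates) / max(rates)
    return report

# Toy usage with hypothetical data:
print(audit(
    y_true=[1, 0, 1, 0, 0, 1],
    y_pred=[1, 0, 1, 1, 0, 0],
    group=["a", "a", "a", "b", "b", "b"],
))
```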

They know from experience that AI bias negatively impacts business and the rights of individuals in our society. It is important that organisations act now to stop reinforcing the prejudice against certain groups that exists today. Ask yourself: what will the future look like if we don’t?


From an article by ETIQ.AI, 11/08/2021
