Technology leaders have become more concerned about artificial intelligence bias over the past two years, according to a new survey by analytics firm DataRobot. It’s not just about damaging the brand’s reputation, either. Among organizations that experienced negative effects of AI bias, the largest proportion, 62%, lost revenue as a result, and 61% lost customers.
No wonder tech leaders are getting anxious.
Among tech leaders surveyed, 54% said they were concerned or very concerned about bias in AI, compared to 42% who expressed that level of concern in 2019. The 2021 online survey of more than 350 IT leaders in the United States and the United Kingdom was conducted in June 2021; a similar online survey was conducted in June 2019.
The findings suggest that more organizations are looking closely at their algorithms, the datasets used to train them, and their ability to explain AI results — just how did the algorithm come to that conclusion?
In fact, in September 2021, Gartner identified responsible AI — including the transparency, fairness, and auditability of AI technologies — as one of four trends driving AI innovation in the near term. Brandon Purcell, an analyst at Forrester Research, told InformationWeek that the market for responsible AI solutions will double in 2022, giving organizations more technology to help them ensure that their AI is ethical, explainable, fair, and privacy compliant.
“It has become a priority in any highly regulated industry,” Purcell says. A number of companies, from tech giants to startups, are working on solutions.
When it comes to AI bias, what do CIOs and other IT leaders worry about in particular? The biggest concern was a loss of customer trust at 56%, followed by poor brand reputation or backlash on social media at 50%. The increase in regulatory scrutiny was next at 43%, followed by loss of employee confidence at 42%, inconsistency with personal ethics at 37%, lawsuits at 25%, and erosion of shareholder value at 22%.
These concerns are not just about murky future consequences of AI bias; organizations also reported real-world harm. A full 36% said their organization experienced a negative impact due to an incident of AI bias in one or more of their algorithms. Among them:
- 62% reported a loss of revenue;
- 61% lost customers;
- 43% lost employees;
- 35% incurred legal fees due to lawsuits or legal action; and
- 6% experienced brand reputation damage or media backlash.
The fallout is partly due to discrimination caused by biased AI. Survey respondents said their organization’s algorithms inadvertently contributed to discrimination based on gender (32%), age (32%), race (29%), sexual orientation (19%), and religion (19%).
However, many of those surveyed are already working to mitigate AI bias. More than two-thirds (69%) say their organizations conduct data quality checks to avoid AI bias. Another 51% train employees on how to identify and prevent bias in AI, 51% have hired an AI bias or ethics expert, and 50% said they measure decision-making factors in AI.
Others are deploying tools to help. For example, 47% said they monitor when data changes over time, 45% said they deploy algorithms that detect and reduce hidden biases in training data, and 35% said they provide explainable AI tools.
Only 1% of respondents said they had taken no steps at all to prevent AI bias.
Who Is in Charge?
CIOs have become less involved in AI bias initiatives over the past two years. In 2019, 49% of CIOs were involved in AI bias prevention, but that number dropped to 28% in 2021. The job title most frequently cited as being involved in AI bias prevention initiatives was data scientist, at 48%. Others included third-party AI bias experts/consultants (47%), AI ethics specialists (35%), CEOs (32%), customer experience teams (30%), business experts (28%), and organizers and marketers (24% each).