Ex-Googler’s Ethical AI Startup Models More Inclusive Approach

Issues related to ethical artificial intelligence have received more attention over the past several years. Tech giants from Facebook to Google to Microsoft have already developed and published principles to demonstrate to stakeholders – customers, employees, and investors – that they understand the importance of ethical or responsible AI.

So it was a black eye for Google last year when Timnit Gebru, co-lead of the company's Ethical AI team, was fired after a dispute with management over a scientific paper she had co-authored and was due to present at a conference.

Now Gebru has created her own startup focused on ethical artificial intelligence. The Distributed Artificial Intelligence Research Institute (DAIR) produces interdisciplinary research in the field of artificial intelligence, according to the organization. It is supported by grants from the MacArthur Foundation, the Ford Foundation, and the Open Society Foundations, along with a gift from the Kapor Center.

Ethical AI will be among the top topics on the minds of progressive CEOs in 2022. Forrester Research predicts that the market for responsible AI solutions will double in 2022.

Enterprise organizations that accelerated their digital transformations during the pandemic, including investments in artificial intelligence, may now look to improve their practices.

However, organizations that have already invested in ethical/responsible AI will be the most likely to pursue continuous improvement of their practices, according to Gartner Distinguished Vice President of Research Whit Andrews. These organizations likely have stakeholders who care about ethical AI issues, whether in pursuit of unbiased data sets or in avoiding problematic facial recognition software.

Should these companies look to tech giants for guidance or should they look to smaller institutes like Gebru’s DAIR?

DAIR's Mission

Gebru's new institute was created "to counter the pervasive influence of Big Tech in the research, development, and deployment of AI," according to the organization's announcement of its formation.

The foundations that funded DAIR point to the importance of independent voices representing the interests of people and communities, not just those of corporations.

“To shape a more just and equitable future where AI benefits all people, we must accelerate independent public interest research free of corporate constraints, focused on the expertise of people who have historically been excluded from the field of AI,” said John Palfrey, president of the MacArthur Foundation, in a prepared statement. “MacArthur is proud to support Dr. Gebru’s bold vision for the DAIR Institute to examine and mitigate the harms of AI, while expanding the possibilities of AI technologies to create a more inclusive technological future.”

DAIR has identified specific research directions of interest, including developing artificial intelligence for low-resource settings, language technology serving marginalized communities, coordinated social media activity, data-related work, and robustness testing and documentation.

“We firmly believe in a bottom-up approach to research, supporting ideas initiated by members of the DAIR community, rather than the purely top-down trend dictated by a few,” according to the institute’s statement on research philosophy.

Enterprise Approach to Ethical Artificial Intelligence

For organizations looking to build out their own ethical/responsible AI practices, Gartner's Andrews offers some recommendations for getting started. First, create an internal practice that defines what the word “ethics” or “responsibility” means in your organization.

“I guarantee that people here in western Massachusetts have a different idea of what ethics means than people in Japan, or China, or Bali, or India,” he says. “This is a sensitive topic.” UNESCO issued its own recommendations on the ethical use of artificial intelligence just last month. That is why an organization's definition of ethics must be chosen carefully before it can be implemented.

For example, Facebook could encourage people to register to vote in elections. Some people may consider that ethical behavior; others may consider it unethical.

To avoid this type of conflict, organizations must spell out what they consider ethical or unethical.

Next, Andrews recommends that organizations introduce their chief ethics officer to their chief data officer and CIO.

“Have you created a common creed for them to follow?” Andrews asks. “If not, the organization’s executives need to sit down and establish an ethical creed.”

What to read next:

How and why companies should engage with ethical AI

Why companies are training AI for local markets

AI liability risks to consider
