Most companies today have a large amount of data at their fingertips. They also have the tools to mine this information. But with that strength comes responsibility. Before using the data, technologists need to step back and assess the need. In our data-driven age, the question is not whether you have the information, but whether and how you should use it.
Consider the implications of big data
Artificial intelligence (AI) tools have revolutionized data processing, turning vast amounts of information into actionable insights. It’s tempting to think that all data is good and that AI makes it even better. Spreadsheets, graphs, and visualizations make data “real.” But as any good technologist knows, the old computing sentiment, “Garbage in, garbage out,” still applies. Now more than ever, organizations need to question where the data comes from and how algorithms interpret that data. Beneath all of those charts lie potential ethical risks, biases, and unintended consequences.
It’s easy to ask your technology partners to develop new features or capabilities, but as more and more companies adopt machine learning (ML) processes and tools to streamline and inform decisions, there is potential for bias. For example, do algorithms inadvertently discriminate against people of color or women? What is the data source? Is there permission to use the data? All of these considerations should be transparent and closely monitored.
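One simple way to surface the kind of inadvertent bias described above is to compare a model’s outcomes across demographic groups. The sketch below is illustrative, not a complete fairness audit: the group names, outcome data, and tolerance threshold are all hypothetical assumptions, and real thresholds are policy decisions, not engineering ones.

```python
# Hypothetical sketch: flagging large gaps in a model's approval rates
# across demographic groups (a rough demographic-parity check).
# All data and the 0.2 tolerance are illustrative assumptions.

def approval_rate(decisions):
    """Fraction of positive (1 = approve) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model outputs, keyed by demographic group
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = parity_gap(outcomes)
if gap > 0.2:  # illustrative tolerance, not a legal or regulatory standard
    print(f"Flag for review: approval-rate gap of {gap:.0%} across groups")
```

A check like this does not prove or disprove discrimination; it only flags disparities for the kind of human review and transparency the questions above call for.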
Consider how current law applies to artificial intelligence and machine learning
The first step in this journey is to develop data privacy guidelines. This includes, for example, policies and procedures that address considerations such as notification and transparency of data use for artificial intelligence, policies on how information is protected and updated, and how to manage data sharing with third parties. We hope that these guidelines will build on an existing comprehensive data privacy framework.
Beyond privacy, other bodies of law may affect your development and deployment of artificial intelligence. For example, in the field of human resources, it is important to consult federal, state, and local employment and anti-discrimination laws. Similarly, the financial sector has its own rules and regulations that must be taken into account. Existing laws continue to apply and be enforced, just as they do outside the context of AI.
Stay ahead while using new technologies
Beyond current law, as technologies such as artificial intelligence and machine learning accelerate, the considerations become more complex. In particular, AI and ML present new opportunities to derive insights from data that were previously unattainable, and in many ways they can do so better than humans. But AI and ML systems are ultimately created by humans, and without close supervision they risk introducing bias and undesirable outcomes. Creating an AI and Data Ethics Council can help companies anticipate problems with these new technologies.
Start by developing guiding principles to govern the use of AI, machine learning, and automation specifically in your company. The goal is to ensure that your models are reliable and fit for purpose, and do not “drift” from their intended purpose inadvertently or inappropriately. Consider these five tips:
1. Accountability and transparency. Perform audits and risk assessments to test your models, and actively monitor and improve your models and systems to ensure that changes in underlying data or model conditions do not unduly affect desired outcomes.
2. Privacy by design. Ensure that your organization-wide approach integrates privacy and data security into ML and associated data processing systems. For example, do ML models seek to reduce access to personally identifiable information to ensure that you are using only the personal data you need to generate insights? Do you give individuals a reasonable opportunity to have their personal data checked and updated if it is inaccurate?
3. Clarity. Design straightforward, interpretable AI solutions. Are your machine learning models and their data usage designed with interpretability as a key feature, and measured against a defined desired outcome?
4. Data management. Understanding how you use data, and the sources from which you get it, should be central to your AI and machine learning principles. Maintain processes and systems to track and manage data usage and retention. If you use external information in your models, such as government reports or industry data, understand the processes behind that information and its impact on your models.
5. Ethical and practical use of data. Establish governance to provide direction and oversight for the development of products, systems, and applications that incorporate artificial intelligence and data.
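Tip 1 above calls for actively monitoring models so that changes in the underlying data do not quietly skew outcomes. One common way to operationalize that is a drift check such as the Population Stability Index (PSI), which compares a baseline feature distribution against the current one. The sketch below is a minimal illustration; the bin proportions and the 0.2 alert threshold are assumptions (0.2 is a widely cited rule of thumb, not a standard).

```python
# Hypothetical sketch: detecting data drift with the Population Stability
# Index (PSI). Distributions and the alert threshold are illustrative.
import math

def psi(expected, actual, eps=1e-6):
    """PSI between two binned distributions (lists of proportions summing to 1).
    Higher values indicate a larger shift from the expected distribution."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

# Baseline bin proportions captured when the model was trained (assumed)
baseline = [0.25, 0.25, 0.25, 0.25]
# Proportions observed in this month's production data (assumed)
current = [0.40, 0.30, 0.20, 0.10]

score = psi(baseline, current)
if score > 0.2:  # common rule of thumb: > 0.2 suggests drift worth investigating
    print(f"Drift alert: PSI = {score:.3f}")
```

A scheduled check like this gives an audit trail a council can review: when the alert fires, the question is not just technical ("retrain the model?") but also one of governance ("is the model still serving its intended purpose?").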
Principles like these can guide discussion of these issues and help create policies and procedures for how data is handled in your business. On a larger scale, they will set the tone for the entire organization.
Create an AI and Data Ethics Council
Guidelines are great, but they need to be implemented in order to be effective. An AI and Data Ethics Council is one way to ensure these principles are incorporated into product development and internal data uses. But how can companies do that?
Start by gathering a multidisciplinary team. Consider including both internal and external experts, such as IT, product development, legal and compliance, privacy, security, audit, diversity and inclusion, industry analysts, external legal counsel, and/or a consumer affairs expert. The more diverse and knowledgeable the team, the more effective your discussions of the potential implications and feasibility of different use cases will be.
Next, spend some time discussing the bigger issues. Here it is important to step away from the process for a minute and immerse yourself in a lively and fruitful discussion. What are the core values of your organization? How should they guide your policies around the development and deployment of artificial intelligence and machine learning? All of this discussion lays the foundation for the actions and processes you define.
Setting a regular meeting rhythm to review projects can also be helpful. Again, the bigger issues should drive the discussion. For example, most product developers will raise the technical aspects, such as how to protect or encrypt data. The council’s role should be to analyze the project at a more fundamental level. Some questions to guide the discussion could be:
- Do we have the right to use the data in this way?
- Should we share this data at all?
- What is the use case?
- How does this serve our customers?
- How does this serve our core business?
- Is this in line with our values?
- Could it lead to any risks or damages?
As artificial intelligence and ethics become an issue of increasing importance, there are many resources to help your organization navigate these waters. Connect with vendors, consulting firms, trade groups, and consortia, such as the EDM Council (Enterprise Data Management Council). Adopt what fits your business, but remember that tools, checklists, processes, and procedures should not replace the value of the discussion.
The ultimate goal is to make these considerations a part of the company culture so that every employee who touches a project, works with a vendor, or consults with a customer keeps data privacy at the fore.