DEIA | Artificial Intelligence: Friend or Foe to DEIA Efforts
May 18th, 2023
Have you heard about or used Artificial Intelligence (AI) solutions like conversational AI, recommendation engines, fraud/anomaly detection software, demand/time series forecasting, hiring tools, image recognition, smart or autonomous systems, robotic process automation, or prescriptive analytics? If your organization isn’t using these solutions yet, it most likely will in the near future.
On a sunny day in April, I was walking to lunch when a peer in the DEIA (diversity, equity, inclusion, and access) field called to ask, “Have you heard about all of these new AI solutions? How will they impact the DEIA work we’re doing?” I didn’t answer the question in the moment. I asked for time to think about it. That thinking time resulted in the blog post you’re reading right now.
AI solutions can advance DEIA efforts by helping identify and address hiring biases, supporting language translation, enabling personalized education and development, and detecting hate speech and online harassment. Implementing AI solutions can reduce cost, enhance performance and efficiency, and minimize bias in hiring and other job-related decisions. More than one thing can be true simultaneously, so while these benefits can’t be denied, the use of AI solutions could also pose some challenges for those driving equity and inclusion in their work environments. Here are a few examples.
| Challenge | Description | Mitigation |
| --- | --- | --- |
| AI Solution Access | Financial, educational, and infrastructure barriers can create a digital divide, making it harder for business owners from underserved communities to access AI solutions. | Governments and large corporations can invest in training programs, affordable access to AI solutions, and initiatives that bridge the digital divide. |
| Job Displacement | Automation-focused AI can potentially disrupt various industries and job markets, which could disproportionately impact underserved communities. | Provide reskilling and upskilling opportunities, focusing on equitable access to training programs and support for career transitions. |
| Algorithm Bias | Algorithms are trained on large amounts of data. If that data is biased and reflects societal prejudice, so will the output. | Ensure diverse and representative datasets, apply equality and equity techniques during model development, and conduct ongoing audits of AI systems to identify and rectify biases. |
| Transparency and Accountability | AI solutions can make it difficult to understand how decisions are made, creating a lack of transparency that can undermine trust and exacerbate concerns about bias and discrimination. | Employ explainable AI techniques to make decision-making more interpretable, enabling users to understand and question the outcomes. |
| Compromised Ethics | A lack of diversity in AI development teams can lead to biased and incomplete models that do not consider ethics. | Encourage diversity in the AI field by promoting interdisciplinary collaboration to ensure that AI solutions are designed to be inclusive and aligned with ethical principles. |
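To make the “ongoing audits” idea above concrete, here is a minimal sketch of one common audit check: comparing selection rates across demographic groups and applying the well-known four-fifths rule for disparate impact. The group labels, data, and function names are hypothetical illustrations, not a prescribed auditing method.

```python
# Minimal bias-audit sketch using the "four-fifths rule":
# compare selection rates across groups; a ratio below 0.8
# between the lowest and highest rate flags potential adverse impact.
# All group labels and data below are hypothetical, for illustration only.

from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group, selected_bool) pairs.
    Returns each group's selection rate (selected / total)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}


def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())


# Hypothetical hiring outcomes: (group, was_selected)
decisions = (
    [("A", True)] * 40 + [("A", False)] * 60
    + [("B", True)] * 20 + [("B", False)] * 80
)

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)            # {'A': 0.4, 'B': 0.2}
print(round(ratio, 2))  # 0.5 -> below 0.8, flag for human review
```

A check like this does not fix a biased model on its own; it is one signal an audit team can track over time, alongside reviews of the training data and the decision process itself.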
Implementing the mitigation strategies above helps ensure that AI solutions are developed with a focus on equality, equity, transparency, accountability, and diversity of perspective, while actively addressing access barriers and promoting inclusivity. I want to thank my friend for posing the question and making me think. Now let’s carry the message forward and continue our DEIA work.
Vice President of Diversity, Equity, Inclusion & Access