We will discuss the assumptions and choices that shape the design and implementation of algorithms, applying what we've learned about equity and ethics to understand the impact of machine learning and AI on stakeholders and communities.

Prior to this session, read and watch the following materials (60 minutes total).
Whether or not your project or organization is using AI, it's essential to think through how AI technologies do or could impact your work. These technologies are becoming increasingly accessible, often built into systems you already use.

This case study of a government agency in Pennsylvania illustrates the many ways these systems can cause significant harm, even when they are implemented with the best of intentions.
<aside> 💡 Key Point: This case study raises many ethical issues. One poignant example is how mental healthcare history is factored into the algorithm. Consider a caregiver who seeks mental healthcare from a publicly funded therapist. Because the county maintains a data warehouse of all public mental health records, and because the designers of the system decided it was acceptable to include this data, the algorithm identified mental health services as correlated with child maltreatment.
But here's the ethical issue: what's actually correlated with child maltreatment in the data is not "caregiver seeks mental healthcare" but "caregiver receives county health or mental health treatment" — caregivers who can afford private therapy won't appear in this database at all, and neither will caregivers who may need mental healthcare but haven't sought it. Missing this data, the algorithm effectively ignores wealthier or untreated caregivers and penalizes those who seek care from the public system — the very system that is supposed to help keep them and their families healthy and safe.
</aside>
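The selection bias described above can be made concrete with a tiny simulation. The sketch below uses entirely invented numbers (30% of caregivers seek care, 50% can afford private therapy) purely to illustrate the mechanism: the county database records only public-system clients, so private-therapy clients and untreated caregivers look identical to the algorithm.

```python
import random

random.seed(0)

# Synthetic caregivers (hypothetical rates, for illustration only).
# Whether someone seeks mental healthcare is independent of whether
# they can afford private care -- the point is what the county
# database can and cannot see.
population = []
for _ in range(10_000):
    seeks_care = random.random() < 0.30          # seeks mental healthcare
    can_afford_private = random.random() < 0.50  # could pay for a private therapist
    population.append({
        "seeks_care": seeks_care,
        "private": seeks_care and can_afford_private,
        # The county warehouse only records PUBLIC treatment:
        "in_county_db": seeks_care and not can_afford_private,
    })

care_seekers = [p for p in population if p["seeks_care"]]
flagged = [p for p in care_seekers if p["in_county_db"]]
missed = [p for p in care_seekers if not p["in_county_db"]]

# Every caregiver the algorithm "sees" is a public-system client;
# every private-therapy client is indistinguishable from someone
# who never sought care at all.
print(f"care seekers: {len(care_seekers)}")
print(f"visible to the algorithm: {len(flagged)}")
print(f"invisible (private care): {len(missed)}")
```

Any model trained on `in_county_db` is therefore learning "uses public services" — a proxy for income — rather than "seeks mental healthcare," which is exactly the flaw the case study describes.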

These types of issues (missing chunks of data that cause major logical flaws in the resulting machine learning output) are stunningly common in algorithm design. This report, from the Center for Equity, Gender & Leadership at the UC Berkeley Haas School of Business, provides practical tools to help address these issues within organizations.
Building from the Berkeley report, this case study will be the basis of our interactive exercise during this week’s session.

Participants in this workshop often ask us for examples of "data equity done right" — work that successfully implements good data practices. These are tough to find, especially examples where organizations transparently share the issues they identified and how they addressed them to prevent harm. This is one of those rare examples.

The next four minutes of this video, which you watched last week, will help tie AI and algorithms together with the content we've discussed in previous weeks:

OPTIONAL: Review with this short, big-picture video.
[ ] WATCH: An Equity Lens on Artificial Intelligence 7 min
[ ] WRITE: a post in our Slack channel
Explain your thinking on one of these reflection questions:
a. If your organization is currently using algorithms, machine learning, and/or AI: How did your organization evaluate the algorithm/AI tool before implementing it? Is it evaluated on an ongoing basis? If so, how?
b. If your organization is not using any algorithms, machine learning, or AI: If a colleague proposed bringing an AI tool into your organization, what criteria would you want to use to evaluate it?