Satsuko VanAntwerp on Designing Human-Centered, Equitable AI
Satsuko VanAntwerp is a design researcher and service designer working on creating human-centered AI. She previously founded a social-sector design consultancy serving government agencies, and in this episode you'll hear how she brings her experience in business and the social sector to making AI more just and equitable. In this conversation we talk about:
- why she sees large-scale AI as an opportunity to shift power structures that were, until recently, invisible
- data and its implications for businesses and customers
- how a cross-functional AI product team balances desirability, feasibility, and viability
- what happens when you optimize through a business/tech lens, how to optimize through a human lens instead, and why doing so generates better results
- why AI needs ongoing learning from humans, making it important to keep humans in the loop
- the critical but distinct roles that tech and design play in developing explainable AI
- how the social contract, both micro (within a company) and macro (across a country), impacts the adoption of AI
Recommended Resources
- COMPAS recidivism rate research
- Explainable AI
- Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble
- Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil
- Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin
- Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez
- Satsuko VanAntwerp on LinkedIn
Select Highlights
- “Design, what I’m representing, is the desirability. What does the human want, how do we make this usable? Does this make sense for society, and is it humane and responsible?”
- “In the end, AI is just a tool, but it has the opportunity to do a lot of good if we can get our values in there. Tech is not neutral, right? And so I’m most excited by AI, enterprise level AI, AI that has the opportunity to be used by a lot of people, because it’s just another slice of society that we can actually embed values that are good for all people.”
- “Values that are good for all people like equality, fairness, accountability, safety, privacy, all of these things. If we don’t intentionally embed this into the technology, it’s not going to happen on its own.”
- “As a design researcher, in the end we really believe the user is king or queen. My client may be a cosmetics company, but who I really think of as my client is the end user society, humanity.”
- “AI can be problematic if we don’t highlight some of these issues and deal with them. At the same time it shines a light on everything that has already been going on that’s not working in society.”
- “What’s so helpful about the AI technology, now we have the proof, the data. And now we can’t ignore it. Now we have to do something about this. Yes, it’s problematic, and it’s all of our collective responsibility to act and do something.”
- “It’s so granular. I think that’s one of the things that’s interesting about AI. I think people think of it as Ex Machina, or Her, or Westworld. And we think of sentient beings, we think of it being very powerful stuff. In the end AI is often very narrow, it’s applied to mundane tasks.”
- “Tech is not neutral and what AI can do, what’s exciting about it, is that it can shift power. It can rebalance a system. There’s so much potential for using AI to create a more just world, the kind of world we want to see. If we just take the data as is, the historical data, we’ll just keep perpetuating all these disparities and discrepancies and biases we have until now. But now we do have a chance to rebalance that and to shift power. That’s the most exciting part about AI.”
Have questions? Want to explore an opportunity?
Get in touch with Lauren Sinreich directly at lauren [at] wearewhole [dot] com to find a time to talk.