Opinion: The hidden danger of letting AI help you find a mental health therapist
Companies have learned the hard way that their artificial intelligence tools produce unforeseen outputs, like Amazon’s favoring men’s resumes over women’s or Uber’s disabling the user accounts of transgender drivers. When not astutely overseen by human intelligence, AI can bend into an unseemly rainbow of discriminatory tendencies, including ageism, sexism, and racism. That’s because biases unnoticed in the input data can become amplified in the outputs.
Another underappreciated hazard is AI’s potential to cater to our established preferences. You can see this in apps that manage everything from news sources to new music and prospective romance. Once an algorithm gets a sense of what you like, it delivers the tried and true, making the world around you more homogeneous than it really is.