STAT

Opinion: The hidden danger of letting AI help you find a mental health therapist

Finding a mental health therapist who looks like you may sound like a good move, but choosing someone who is different has the potential to gently challenge preconceptions. That's a…

Companies have learned the hard way that their artificial intelligence tools produce unforeseen outputs, like Amazon's resume screener favoring men's resumes over women's or Uber's system disabling the accounts of transgender drivers. When AI is not astutely overseen by human intelligence, its outputs can bend into an unseemly rainbow of discrimination: ageism, sexism, and racism. That's because biases that go unnoticed in the input data can become amplified in the outputs.

Another underappreciated hazard is AI's potential to cater to our established preferences. You can see this in apps that manage everything from news sources to new music to prospective romance. Once an algorithm gets a sense of what you like, it delivers the tried and true, making the world around you more homogeneous than it
