AI therapy is a surveillance machine in a police state


May 13, 2025 - 16:16
Mark Zuckerberg wants you to be understood by the machine. The Meta CEO has recently been pitching a future where his AI tools give people something that "knows them well," not just as pals, but as professional help. "For people who don't have a person who's a therapist," he told Stratechery's Ben Thompson, "I think everyone will have an AI."

The jury is out on whether AI systems can make good therapists, but this future is already taking shape. Plenty of people are, anecdotally, pouring their secrets out to chatbots, sometimes in dedicated therapy apps, but often on big general-purpose platforms like Meta AI, OpenAI's ChatGPT, or xAI's Grok. And unfortunately, this is starting to seem extraordinarily dangerous, for reasons that have little to do with what a chatbot is telling you, and everything to do with who else is peeking in.

This might sound paranoid, and it's still hypothetical. It's a truism that someone is always watching on the internet, but for many people the worst that comes of it is some unwanted targeted ads. Right now in the US, though, we're watching the impending collision of two alarming trends. In one, tech executives are encouraging people to reveal ever more inti …

Read the full story at The Verge.