Tech

AI Is More Convincing Than You — Should We Be Worried?

What's going on: A new study suggests that, yes, AI might now be more persuasive than actual people — and that’s raising major red flags about manipulation, misinformation, and just how little it takes for a chatbot to win us over. Researchers in Italy tested ChatGPT’s persuasive chops and found it outperformed human opponents in two-thirds of online debates. The wild part? It got even better when fed basic personal info about its opponent. (Not mildly terrifying or anything.) The bot used political leaning, gender, and race details to tailor its arguments to hit harder, like emphasizing hard work when debating a white male Republican. Scientists had 900 people in the US argue about abortion, climate change, and the death penalty. Some debated fellow humans, others sparred with ChatGPT. Participants rated their stance before and after each exchange — and the bots won more converts.

What it means: One Oxford professor told The Washington Post she found the results “quite alarming,” especially given how persuasive AI could be in spreading lies and disinformation. On social media platforms, where algorithms already feed us what we want to hear, chatbots could quietly reinforce our beliefs — or nudge them in more extreme directions. And because AI tools like ChatGPT often focus more on sounding helpful than sticking to facts, it’s getting even harder to tell what’s true. So that’s…comforting.

Bottom line: If AI is better at persuasion than humans, we may need to rethink how we protect ourselves — and our opinions — in an increasingly automated world.

Related: Who Gets To Regulate AI? House Republicans Say Not the States (CNN)