
Can we really trust AI to make better decisions than humans? A new study says: not always. Researchers have found that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes some of the same decision-making mistakes as humans, showing biases such as overconfidence and the hot-hand (gambler’s) fallacy, yet behaving unlike humans in other situations (for example, not suffering from base-rate neglect or the sunk-cost fallacy).
Published in the journal Manufacturing & Service Operations Management, the study reveals that ChatGPT doesn’t just crunch numbers: it “thinks” in ways eerily similar to humans, complete with mental shortcuts and blind spots…