AI thinks like us—flaws and all: Study finds ChatGPT mirrors human decision biases in half the tests

AI bias. Credit: AI-generated image

Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations, showing biases such as overconfidence and the hot-hand (gambler’s) fallacy, yet behaves unlike humans in others (for example, it does not suffer from base-rate neglect or the sunk-cost fallacy).

Published in the journal Manufacturing & Service Operations Management, the study reveals that ChatGPT doesn’t just crunch numbers; it “thinks” in ways eerily similar to humans, including mental shortcuts and blind spots...