Novel technique overcomes spurious correlations problem in AI

Credit: Unsplash/CC0 Public Domain

AI models often rely on "spurious correlations," making decisions based on unimportant and potentially misleading information. Researchers have now discovered that these learned spurious correlations can be traced to a very small subset of the training data, and they have demonstrated a technique that overcomes the problem. The work has been published on the arXiv preprint server.

“This technique is novel in that it can be used even when you have no idea what spurious correlations the AI is relying on,” says Jung-Eun Kim, corresponding author of a paper on the work and an assistant professor of computer science at North Carolina State University.

“If you already have a good idea of what the spurious features are, our technique is an efficient and effective...
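The finding that spurious correlations can often be traced to a small subset of training examples can be illustrated with a toy numpy sketch. This is not the paper's actual method (which locates such a subset without knowing the spurious features in advance); here the shortcut subset is planted by hand so that the effect of pruning it is visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: column 0 is a genuinely predictive feature; column 1 is
# spurious -- it only matches the label inside a small "shortcut" subset.
n, n_spur = 500, 50
y = rng.integers(0, 2, size=n).astype(float)
core = y + rng.normal(0, 0.5, size=n)          # noisy copy of the label
spur = rng.normal(0, 1.0, size=n)              # uninformative by default
shortcut = rng.choice(n, size=n_spur, replace=False)
spur[shortcut] = 3.0 * (2 * y[shortcut] - 1)   # strong spurious signal
X = np.column_stack([core, spur])

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (no intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_full = train_logreg(X, y)

# "Prune" the suspect subset and retrain. Here we cheat and use the known
# indices; the paper's contribution is locating such a subset automatically.
keep = np.setdiff1d(np.arange(n), shortcut)
w_pruned = train_logreg(X[keep], y[keep])

print(abs(w_full[1]), abs(w_pruned[1]))  # weight on the spurious feature
```

After removing the 10% of examples carrying the shortcut, the retrained model's weight on the spurious feature shrinks toward zero while the weight on the genuine feature survives, which is the intuition behind pruning-based fixes.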

Read More

AI thinks like us—flaws and all: Study finds ChatGPT mirrors human decision biases in half the tests

Credit: AI-generated image

Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations, showing biases like overconfidence and the hot-hand (gambler’s) fallacy, yet behaving unlike humans in others (e.g., not suffering from base-rate neglect or the sunk-cost fallacy).

Published in the Manufacturing & Service Operations Management journal, the study reveals that ChatGPT doesn’t just crunch numbers—it “thinks” in ways eerily similar to humans, including mental shortcuts and blind spots...

Read More

Chain of Draft approach allows AI models to carry out tasks using far fewer resources

Comparison of Claude 3.5 Sonnet’s accuracy and token usage across different tasks with three different prompt strategies: direct answer (Standard), Chain of Thought (CoT), and Chain of Draft (CoD). Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.18600

A small team of AI engineers at Zoom Communications has developed a new approach to prompting AI systems that uses far fewer resources than the standard approach now in use. The team has published their results on the arXiv preprint server.

The new approach developed at Zoom is called Chain of Draft (CoD), an update to the widely used Chain of Thought (CoT) approach. CoT solves a problem step by step, similar in many ways to human problem-solving...
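The difference between the two prompting styles can be sketched as follows. The prompt wording is paraphrased from the paper's general description (not quoted verbatim), the arithmetic traces are invented for illustration, and no model API is called; the point is only that a CoD-style trace spends far fewer tokens than a CoT-style one:

```python
# Illustrative prompt templates contrasting CoT and CoD.
COT_PROMPT = (
    "Think step by step to answer the question. "
    "Give the final answer after the separator ####."
)
COD_PROMPT = (
    "Think step by step, but keep each thinking step to a minimal draft "
    "of five words at most. Give the final answer after the separator ####."
)

def rough_token_count(text: str) -> int:
    """Crude whitespace proxy for token usage (real counts need a tokenizer)."""
    return len(text.split())

# A typical CoT trace vs. a CoD trace for "Q: 23 + 58 = ?"
cot_trace = ("First, I take the number 23. Then I add 58 to it. "
             "23 plus 58 equals 81. #### 81")
cod_trace = "23 + 58 = 81. #### 81"

print(rough_token_count(cot_trace), rough_token_count(cod_trace))
```

Because the model is billed and rate-limited per token generated, shrinking the intermediate reasoning while keeping the final answer is where the resource savings in the paper's comparison come from.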

Read More

Leaner large language models could enable efficient local use on phones and laptops

Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server—a process that’s expensive, energy-intensive and often slow.

Now, researchers have introduced a technique for compressing an LLM’s reams of data, which could increase privacy, save energy and lower costs. Their findings are published on the arXiv preprint server.

The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of an LLM’s layers of information...
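As a rough illustration of the two ingredients the article names, trimming redundancy and reducing precision, here is a generic numpy sketch that factors a weight matrix with a truncated SVD and stores the factors in a lower-precision type. This is a common compression pattern, not the Princeton/Stanford team's specific algorithm, and the matrix here is a small synthetic stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for one LLM weight matrix: mostly low-rank structure
# plus a little noise (real layers are far larger).
W = (rng.normal(size=(256, 64)) @ rng.normal(size=(64, 256)) / 8
     + 0.01 * rng.normal(size=(256, 256))).astype(np.float32)

def compress(W, rank=64, dtype=np.float16):
    """Truncated SVD trims redundant directions; casting the factors to a
    lower-precision dtype halves the bits stored per value."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = (U[:, :rank] * s[:rank]).astype(dtype)   # 256 x rank factor
    B = Vt[:rank].astype(dtype)                  # rank x 256 factor
    return A, B

A, B = compress(W)
W_hat = A.astype(np.float32) @ B.astype(np.float32)

orig_bytes = W.nbytes
comp_bytes = A.nbytes + B.nbytes
rel_err = float(np.linalg.norm(W - W_hat) / np.linalg.norm(W))
print(orig_bytes, comp_bytes, round(rel_err, 4))
```

On this example the two factors take a quarter of the original storage while reconstructing the matrix to within about 1% relative error; the trade-off between rank, precision, and accuracy is exactly what makes local deployment on phones and laptops plausible.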

Read More