AI Models tagged posts

Why faster AI isn’t always better

AI Latency Perception

In the race to make AI models not just reason better but respond faster, latency—the delay before an answer appears—is often treated as a purely technical constraint, something to minimize and move past. But how is this relentless push for speed actually impacting the people using these systems every day?

There is a rich body of work in human–computer interaction linking faster response times to better usability. But AI models are fundamentally different from the deterministic systems that previous research was built on. When you wait for a file to download or a page to load, the outcome is fixed and predictable.

AI models are probabilistic—you cannot anticipate the precise response...

Read More

Novel technique overcomes spurious correlations problem in AI

Credit: Unsplash/CC0 Public Domain

AI models often rely on “spurious correlations,” making decisions based on unimportant and potentially misleading information. Researchers have now discovered these learned spurious correlations can be traced to a very small subset of the training data and have demonstrated a technique that overcomes the problem. The work has been published on the arXiv preprint server.

“This technique is novel in that it can be used even when you have no idea what spurious correlations the AI is relying on,” says Jung-Eun Kim, corresponding author of a paper on the work and an assistant professor of computer science at North Carolina State University.

“If you already have a good idea of what the spurious features are, our technique is an efficient and effective...

Read More

AI thinks like us—flaws and all: Study finds ChatGPT mirrors human decision biases in half the tests

Credit: AI-generated image

Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations—showing biases like overconfidence or the hot-hand (gambler’s) fallacy—yet behaving unlike humans in others (e.g., not suffering from base-rate neglect or the sunk cost fallacy).

Published in the Manufacturing & Service Operations Management journal, the study reveals that ChatGPT doesn’t just crunch numbers—it “thinks” in ways eerily similar to humans, including mental shortcuts and blind spots...

Read More

Chain of Draft approach allows AI models to carry out tasks using far fewer resources

Comparison of Claude 3.5 Sonnet’s accuracy and token usage across different tasks with three different prompt strategies: direct answer (Standard), Chain of Thought (CoT), and Chain of Draft (CoD). Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.18600

A small team of AI engineers at Zoom Communications has developed a new approach to prompting AI systems that uses far fewer resources than the standard approach now in use. The team has published its results on the arXiv preprint server.

The new approach developed at Zoom is called Chain of Draft (CoD), a refinement of the widely used Chain of Thought (CoT) technique. CoT prompts a model to solve a problem step by step, similar in many ways to human problem-solving...
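The contrast between the two prompting styles can be sketched in a few lines of Python. The prompt wording and the sample answers below are illustrative assumptions, not text from the paper; the point is that a CoD-style transcript keeps only a terse draft of each reasoning step, so it consumes far fewer tokens than a full CoT explanation.

```python
# Illustrative sketch of CoT vs. CoD prompting. The exact prompt
# wording here is hypothetical, not quoted from the CoD paper.

cot_prompt = (
    "Think step by step. Explain each step of your reasoning in "
    "full sentences, then give the final answer after '####'."
)

cod_prompt = (
    "Think step by step, but keep only a minimal draft of each step, "
    "at most five words per step, then give the final answer after '####'."
)

def rough_token_count(text: str) -> int:
    """Crude proxy for token usage: count whitespace-separated words."""
    return len(text.split())

# Hypothetical model outputs for the same arithmetic word problem.
cot_answer = (
    "First, the store had 23 apples. Then 20 apples were used to make "
    "lunch, leaving 23 - 20 = 3 apples. Next, 6 more apples were bought, "
    "so 3 + 6 = 9 apples. #### 9"
)
cod_answer = "23 - 20 = 3; 3 + 6 = 9. #### 9"

print("CoT tokens:", rough_token_count(cot_answer))
print("CoD tokens:", rough_token_count(cod_answer))
```

Both transcripts reach the same final answer, but the drafted version is a fraction of the length, which is where the reported savings in token usage come from.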

Read More