LLMs tagged posts

Chain of Draft approach allows AI models to carry out tasks using far fewer resources

Comparison of Claude 3.5 Sonnet’s accuracy and token usage across different tasks with three different prompt strategies: direct answer (Standard), Chain of Thought (CoT), and Chain of Draft (CoD). Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.18600

A small team of AI engineers at Zoom Communications has developed a new approach to training AI systems that uses far fewer resources than the standard approach now in use. The team has published their results on the arXiv preprint server.

The new approach developed at Zoom, called Chain of Draft (CoD), is a refinement of the widely used Chain of Thought (CoT) technique. CoT solves a problem step by step, similar in many ways to human problem-solving...
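To make the contrast concrete, here is a minimal Python sketch of how the two prompting strategies might be framed. The template wording and the `build_prompt` helper are illustrative assumptions based on the idea described above, not the exact prompts from the Zoom paper; CoD's defining move is to cap each reasoning step at a few words.

```python
# Illustrative prompt templates contrasting Chain of Thought (CoT) with
# Chain of Draft (CoD). The wording is an assumption based on the general
# idea described in the article, not the paper's exact prompts.

COT_PROMPT = (
    "Think step by step to answer the following question. "
    "Explain each step of your reasoning, then give the final answer "
    "after the marker ####.\n\nQuestion: {question}"
)

COD_PROMPT = (
    "Think step by step, but keep only a minimum draft for each thinking "
    "step, with five words at most per step. Give the final answer after "
    "the marker ####.\n\nQuestion: {question}"
)

def build_prompt(question: str, strategy: str = "cod") -> str:
    """Return a prompt for the chosen strategy: 'cot' or 'cod'."""
    template = COD_PROMPT if strategy == "cod" else COT_PROMPT
    return template.format(question=question)

if __name__ == "__main__":
    q = "A jug holds 4 liters. How many jugs fill a 20-liter tank?"
    print(build_prompt(q, "cod"))
```

Because the model emits terse drafts instead of full explanatory sentences, it can reach the same answer with far fewer output tokens, which is where the resource savings come from.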

Read More

New study identifies differences between human and AI-generated text

Credit: Pixabay/CC0 Public Domain

A team of Carnegie Mellon University researchers set out to see how accurately large language models (LLMs) can match the style of text written by humans. Their findings were recently published in the Proceedings of the National Academy of Sciences.

“We humans, we adapt how we write and how we speak to the situation. Sometimes we’re formal or informal, or there are different styles for different contexts,” said Alex Reinhart, lead author and associate teaching professor in the Department of Statistics & Data Science.

“What we learned is that LLMs, like ChatGPT and Llama, write a certain way, and they don’t necessarily adapt to the writing style...

Read More

Test of ‘poisoned dataset’ shows vulnerability of LLMs to medical misinformation

Credit: Nature Medicine (2025). DOI: 10.1038/s41591-024-03445-1

By running tests in an experimental scenario, a team of medical researchers and AI specialists at NYU Langone Health has demonstrated how easy it is to taint the data pool used to train LLMs.

For their study, published in the journal Nature Medicine, the group generated thousands of articles containing misinformation, inserted them into an AI training dataset, and then ran general LLM queries to see how often the misinformation appeared.
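As a rough sketch of that setup, the Python snippet below mixes fabricated documents into an otherwise clean corpus. The corpus contents, the 0.1% poisoning rate, and the `poison_corpus` helper are hypothetical illustrations, not the study's actual data or figures.

```python
import random

def poison_corpus(clean_docs, poisoned_docs, rate=0.001, seed=0):
    """Replace roughly `rate` of a training corpus with misinformation articles."""
    rng = random.Random(seed)
    corpus = list(clean_docs)
    n_poison = max(1, int(len(corpus) * rate))
    # Overwrite a small random subset of documents with fabricated ones.
    for i in rng.sample(range(len(corpus)), n_poison):
        corpus[i] = rng.choice(poisoned_docs)
    return corpus

clean = [f"accurate medical article {i}" for i in range(10_000)]
fake = [f"fabricated medical article {i}" for i in range(50)]
corpus = poison_corpus(clean, fake, rate=0.001)
print(sum(doc.startswith("fabricated") for doc in corpus), "poisoned documents")
```

A model trained on such a corpus can then be probed with general queries, counting how often the planted misinformation surfaces in its answers.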

Prior research and anecdotal evidence have shown that the answers given by LLMs such as ChatGPT are not always correct and, in fact, are sometimes wildly off-base...

Read More

Leaner large language models could enable efficient local use on phones and laptops

Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server—a process that’s expensive, energy-intensive and often slow.

Now, researchers have introduced a technique for compressing an LLM’s reams of data, which could increase privacy, save energy and lower costs. Their findings are published on the arXiv preprint server.

The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of an LLM’s layers of information...
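Reducing precision is a standard compression idea, and uniform weight quantization is a generic illustration of it. The sketch below is a textbook example under that assumption, not the Princeton and Stanford researchers' actual algorithm, which the article says also trims redundancies.

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int = 4) -> np.ndarray:
    """Round float weights to 2**bits evenly spaced levels and map them back."""
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels
    codes = np.round((weights - w_min) / scale)  # integer codes in 0..levels
    return codes * scale + w_min                 # low-precision approximation

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)
W4 = quantize(W, bits=4)
print("mean absolute error at 4 bits:", float(np.abs(W - W4).mean()))
# Storing 4-bit codes instead of 32-bit floats cuts weight memory roughly 8x,
# small enough to make local inference on a phone or laptop plausible.
```

The accuracy cost shows up as a small rounding error in each weight; the engineering challenge is keeping that error low enough that the compressed model's answers stay useful.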

Read More