LLMs tagged posts

AI outperforms humans in emotional intelligence tests, study finds


Is artificial intelligence (AI) capable of suggesting appropriate behavior in emotionally charged situations? A team from the University of Geneva (UNIGE) and the University of Bern (UniBE) put six generative AIs—including ChatGPT—to the test using emotional intelligence (EI) assessments typically designed for humans.

The outcome: these AIs outperformed average human performance and were even able to generate new tests in record time. These findings open up new possibilities for AI in education, coaching, and conflict management. The study is published in Communications Psychology.

Large language models (LLMs) are AI systems capable of processing, interpreting and generating human language...

Read More

Chain of Draft approach allows AI models to carry out tasks using far fewer resources

Comparison of Claude 3.5 Sonnet’s accuracy and token usage across different tasks with three different prompt strategies: direct answer (Standard), Chain of Thought (CoT), and Chain of Draft (CoD). Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.18600

A small team of AI engineers at Zoom Communications has developed a new approach to prompting AI systems that uses far fewer resources than the standard approach now in use. The team has published their results on the arXiv preprint server.

The new approach developed at Zoom, called Chain of Draft (CoD), is an update of the widely used Chain of Thought (CoT) technique. CoT solves a problem step by step, similar in many ways to human problem-solving...
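The difference between the two strategies can be illustrated with a small sketch. The prompt wording and the five-word-per-step budget below are illustrative assumptions based on the article's description, not the paper's exact templates; `rough_token_count` is a deliberately crude stand-in for a real tokenizer.

```python
# Illustrative sketch of Chain of Thought (CoT) vs. Chain of Draft (CoD)
# prompt styles. CoT asks for full written-out reasoning; CoD asks the
# model to keep only a terse draft of each step, which is what cuts
# token usage at inference time.

COT_PROMPT = (
    "Think step by step to answer the question. "
    "Write out your full reasoning, then give the final answer.\n"
    "Q: {question}\nA:"
)

COD_PROMPT = (
    "Think step by step, but keep only a minimal draft of each step, "
    "at most five words per step. Then give the final answer.\n"
    "Q: {question}\nA:"
)


def build_prompt(style: str, question: str) -> str:
    """Return the prompt for the chosen strategy ('cot' or 'cod')."""
    template = COT_PROMPT if style == "cot" else COD_PROMPT
    return template.format(question=question)


def rough_token_count(text: str) -> int:
    """Crude whitespace-based token estimate (real systems use a tokenizer)."""
    return len(text.split())
```

In use, the savings come from the model's *response*, not the prompt: a CoD response of terse drafts ("15 apples; ate 6; 9 left") is far shorter than a CoT paragraph, and `rough_token_count` applied to each response would show the gap the paper's figures report.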

Read More

New study identifies differences between human and AI-generated text

Credit: Pixabay/CC0 Public Domain

A team of Carnegie Mellon University researchers set out to see how accurately large language models (LLMs) can match the style of text written by humans. Their findings were recently published in the Proceedings of the National Academy of Sciences.

“We humans, we adapt how we write and how we speak to the situation. Sometimes we’re formal or informal, or there are different styles for different contexts,” said Alex Reinhart, lead author and associate teaching professor in the Department of Statistics & Data Science.

“What we learned is that LLMs, like ChatGPT and Llama, write a certain way, and they don’t necessarily adapt to the writing style...

Read More

Test of ‘poisoned dataset’ shows vulnerability of LLMs to medical misinformation

Credit: Nature Medicine (2025). DOI: 10.1038/s41591-024-03445-1

In tests of an experimental scenario, a team of medical researchers and AI specialists at NYU Langone Health has demonstrated how easy it is to taint the data pool used to train LLMs.

For their study, published in the journal Nature Medicine, the group generated thousands of articles containing misinformation, inserted them into an AI training dataset, and then ran general LLM queries to see how often the misinformation appeared.
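The poisoning setup described above can be sketched in a few lines. This is a hypothetical illustration of the general idea (mixing a small fraction of misinformation documents into a clean training corpus), not the study's actual pipeline; the function name and parameters are assumptions for the sake of the example.

```python
# Hypothetical sketch of a data-poisoning setup: blend a chosen fraction
# of misinformation documents into a clean corpus before training.
import random


def poison_corpus(clean_docs, poisoned_docs, fraction, seed=0):
    """Return a shuffled corpus where roughly `fraction` of the clean
    corpus size is replaced-by-addition with poisoned documents."""
    rng = random.Random(seed)
    n_poison = int(len(clean_docs) * fraction)
    # Draw the poisoned documents to inject (capped by availability).
    injected = rng.sample(poisoned_docs, min(n_poison, len(poisoned_docs)))
    corpus = clean_docs + injected
    rng.shuffle(corpus)
    return corpus
```

A striking point of the study was how small such a fraction can be while still measurably shifting a model's answers; the sketch makes it easy to see that, say, a 0.05 fraction adds only a handful of documents to a corpus of thousands.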

Prior research and anecdotal evidence have shown that the answers given by LLMs such as ChatGPT are not always correct and, in fact, are sometimes wildly off-base...

Read More