Large language models: tagged posts

New study identifies differences between human and AI-generated text

Credit: Pixabay/CC0 Public Domain

A team of Carnegie Mellon University researchers set out to see how accurately large language models (LLMs) can match the style of text written by humans. Their findings were recently published in the Proceedings of the National Academy of Sciences.

“We humans, we adapt how we write and how we speak to the situation. Sometimes we’re formal or informal, or there are different styles for different contexts,” said Alex Reinhart, lead author and associate teaching professor in the Department of Statistics & Data Science.

“What we learned is that LLMs, like ChatGPT and Llama, write a certain way, and they don’t necessarily adapt to the writing style...


As LLMs grow bigger, they're more likely to give wrong answers than admit ignorance

Performance of a selection of GPT and LLaMA models with increasing difficulty. Credit: Nature (2024). DOI: 10.1038/s41586-024-07930-y

A team of AI researchers at Universitat Politècnica de València in Spain has found that as popular LLMs grow larger and more sophisticated, they become less likely to admit to a user that they do not know an answer.

In their study, published in the journal Nature, the group tested the latest versions of three of the most popular AI chatbots, examining their responses, the accuracy of those responses, and how good users are at spotting wrong answers.

As LLMs have become mainstream, users have grown accustomed to using them to write papers, poems, or songs, to solve math problems, and for many other tasks, and the issue of accuracy has become a bigger...


Language Agents Help Large Language Models ‘Think’ Better and Cheaper

An example of the agent producing task-specific instructions (highlighted) for the IMDB classification dataset. The agent runs only once to produce the instructions, which are then used for all models during reasoning. Credit: arXiv (2023). DOI: 10.48550/arxiv.2310.03710

The LLMs that have increasingly taken over the tech world are not "cheap" in many ways. The most prominent LLMs, such as GPT-4, cost some $100 million to build, counting the legal costs of accessing training data, the computational costs of training what can be billions or trillions of parameters, the energy and water needed to power that computation, and the many coders developing the training algorithms that must run cycle after cycle so the machine will "learn."

But, if a researcher needs to do a specializ...


SMU researchers to present new tool for enhancing AI transparency and accuracy at IEEE conference

Clark and Buongiorno’s research explores GAME-KG’s potential across two demonstrations. The first uses the video game Dark Shadows. Credit: SMU

While large language models (LLMs) have demonstrated remarkable capabilities in extracting data and generating connected responses, there are real questions about how these artificial intelligence (AI) models reach their answers. At stake is the potential for unwanted bias or for nonsensical or inaccurate "hallucinations," either of which can lead to false data.

That’s why SMU researchers Corey Clark and Steph Buongiorno are presenting a paper at the upcoming IEEE Conference on Games, scheduled for August 5-8 in Milan, Italy...
