
LLMs choose friends and colleagues like people, researchers find

When large language models (LLMs) make decisions about networking and friendship, the models tend to act like people, across both synthetic simulations and real-world network contexts.

Marios Papachristou and Yuan Yuan developed a framework to study network formation behaviors of multiple LLM agents and compared these behaviors against human behaviors. The paper is published in the journal PNAS Nexus.

How LLMs form network connections
The authors ran simulations in which several large language models were placed as agents in a network and asked to choose which other nodes to connect with, given each node's number of connections, common neighbors, and shared attributes, such as arbitrarily assigned "hobbies" or "location."

The authors varied the network context, including simulations of fri...
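The decision the agents face can be sketched in a few lines. This is purely an illustration, not the paper's actual framework: the scoring function, weights, node names, and attributes below are invented, but the three signals (a candidate's degree, common neighbors with the chooser, and shared attributes) mirror the factors the agents were given.

```python
# Hypothetical sketch: an agent scores candidate nodes by degree
# (preferential attachment), common neighbors (triadic closure),
# and shared attributes (homophily), then connects to the top scorer.

def score(candidate, agent, network, attrs, w_deg=1.0, w_cn=1.0, w_attr=1.0):
    degree = len(network[candidate])                      # candidate's connection count
    common = len(network[candidate] & network[agent])     # shared neighbors
    shared = len(attrs[candidate] & attrs[agent])         # shared hobbies/location
    return w_deg * degree + w_cn * common + w_attr * shared

# Toy network: adjacency sets and arbitrarily assigned attributes.
network = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B", "D", "E"},
    "D": {"C"},
    "E": {"C"},
}
attrs = {
    "A": {"hiking", "paris"},
    "B": {"hiking", "tokyo"},
    "C": {"chess", "paris"},
    "D": {"hiking", "paris"},
    "E": {"chess", "tokyo"},
}

# Agent "A" picks the best new connection among current non-neighbors.
candidates = [n for n in network if n != "A" and n not in network["A"]]
best = max(candidates, key=lambda c: score(c, "A", network, attrs))
print(best)  # → C
```

Here the well-connected node "C" wins on degree and a common neighbor, even though "D" shares more attributes; changing the weights shifts the trade-off, which is the kind of behavioral signature the study compares against humans.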


Cyber defense innovation could significantly boost 5G network security

Proposed FedLLMGuard Architecture. Credit: University of Portsmouth

A framework for building tighter security into 5G wireless communications has been created by a Ph.D. student working with the University of Portsmouth’s Artificial Intelligence and Data Center.

With its greater network capacity and ability to rapidly transmit huge amounts of information from one device to another, 5G is a critical component of intelligent systems and services—including those for health care and financial services.

However, the dynamic nature of 5G networks, the high volumes of data shared and the ever-changing types of information transmitted mean that these networks are highly vulnerable to cyber threats and face an increasing risk of attack.

Hadiseh Rezaei, a Ph.D...


Approach improves how new skills are taught to large language models

ChatGPT. Credit: Unsplash/CC0 Public Domain

Researchers have developed a technique that significantly improves the performance of large language models without increasing the computational power necessary to fine-tune the models. The researchers demonstrated that their technique improves the performance of these models over previous techniques in tasks including commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.

Large language models are artificial intelligence systems that are pretrained on huge data sets. After pretraining, these models predict which words should follow each other in order to respond to user queries...
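The next-word objective described above can be shown with a toy model. This is a deliberately simplified sketch (a bigram frequency table over a made-up corpus), not how LLMs are actually implemented; real models use neural networks over tokens, but the prediction objective is analogous.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent follower of a given word.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the most common word observed after `word`.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat ("cat" follows "the" twice, others once)
```

Fine-tuning, the subject of the article, adjusts a pretrained model's predictions toward a new task; the researchers' contribution is doing that without extra computational cost.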


New study identifies differences between human and AI-generated text

Robot typing. Credit: Pixabay/CC0 Public Domain

A team of Carnegie Mellon University researchers set out to see how accurately large language models (LLMs) can match the style of text written by humans. Their findings were recently published in the Proceedings of the National Academy of Sciences.

“We humans, we adapt how we write and how we speak to the situation. Sometimes we’re formal or informal, or there are different styles for different contexts,” said Alex Reinhart, lead author and associate teaching professor in the Department of Statistics & Data Science.

“What we learned is that LLMs, like ChatGPT and Llama, write a certain way, and they don’t necessarily adapt to the writing style...
