Approach improves how new skills are taught to large language models

Credit: Unsplash/CC0 Public Domain

Researchers have developed a technique that significantly improves the performance of large language models without increasing the computational power necessary to fine-tune the models. The researchers demonstrated that their technique improves the performance of these models over previous techniques in tasks including commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.

Large language models are artificial intelligence systems that are pretrained on huge data sets. After pretraining, these models predict which words should follow each other in order to respond to user queries...
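The next-word prediction described above can be illustrated with a toy sketch. This is not the researchers' fine-tuning technique, just a minimal, hypothetical illustration of greedy next-token decoding: the probability table stands in for a trained model, and all names here are invented for the example.

```python
# Toy sketch of next-word prediction: after pretraining, a language model
# repeatedly picks the most probable next word given the words so far.
# NEXT_WORD_PROBS is a hypothetical stand-in for a trained model.

NEXT_WORD_PROBS = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"down": 0.9, "up": 0.1},
}

def generate(prompt, max_words=3):
    """Greedily extend the prompt one word at a time."""
    words = list(prompt)
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(tuple(words))
        if probs is None:  # the toy "model" has no continuation: stop
            break
        # Pick the highest-probability next word (greedy decoding).
        words.append(max(probs, key=probs.get))
    return " ".join(words)

print(generate(["the"]))  # → "the cat sat down"
```

A real model replaces the lookup table with a neural network that assigns probabilities over its entire vocabulary, and fine-tuning adjusts those probabilities for new tasks.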

Read More

Centaur: AI that thinks like us—and could help explain how we think

Evaluation in different held-out settings. Credit: Nature (2025). DOI: 10.1038/s41586-025-09215-4

Researchers at Helmholtz Munich have developed an artificial intelligence model that can simulate human behavior with remarkable accuracy. The language model, called Centaur, was trained on more than ten million decisions from psychological experiments—and makes decisions in ways that closely resemble those of real people. This opens new avenues for understanding human cognition and improving psychological theories.

For decades, psychology has aspired to explain the full complexity of human thought. Yet traditional models could offer a transparent explanation of how people think or reliably predict how they behave. Achieving both has long seemed out of reach.

The team le...

Read More

RisingAttacK: New technique can make AI ‘see’ whatever you want

Credit: AI-generated image

Researchers have demonstrated a new way of attacking artificial intelligence computer vision systems, allowing them to control what the AI “sees.” The research shows that the new technique, called RisingAttacK, is effective at manipulating all of the most widely used AI computer vision systems.

At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system to control what the system sees, or does not see, in an image. For example, someone might manipulate an AI’s ability to detect traffic signals, pedestrians or other cars—which would cause problems for autonomous vehicles. Or a hacker could install code on an X-ray machine that causes an AI system to make inaccurate diagnoses.
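The general idea behind such attacks can be sketched in a few lines. This is not the RisingAttacK method itself, whose details are not given here, but a generic gradient-sign perturbation on a toy linear classifier: a small, targeted change to each input feature flips the model's output. All weights and inputs below are invented for the example.

```python
# Generic adversarial-perturbation sketch (NOT the RisingAttacK method):
# a tiny, targeted change to the input flips a toy linear classifier.

w = [1.0, -2.0, 0.5, 1.5]  # hypothetical trained weights

def classify(v):
    """Linear classifier: sign of the dot product with the weights."""
    score = sum(wi * vi for wi, vi in zip(w, v))
    return 1 if score > 0 else -1

x = [0.1, -0.2, 0.05, 0.15]  # an input the model classifies as +1

# Nudge each feature slightly in the direction that lowers the score.
eps = 0.2
sign = lambda t: (t > 0) - (t < 0)
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x), classify(x_adv))  # → 1 -1
```

Each individual change is small, yet together they push the input across the model's decision boundary, which is why such perturbations can be imperceptible to humans while completely changing what the AI "sees."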

“We wanted to find an eff...

Read More

Entropy engineering opens new avenue for robust quantum anomalous Hall effect in 2D magnets

A research team from the University of Wollongong's (UOW) Institute for Superconducting and Electronic Materials (ISEM) has addressed a 40-year-old quantum puzzle, unlocking a new pathway to creating next-generation electronic devices that operate without losing energy or wasting electricity.

Published in Advanced Materials, the study is the work of UOW researchers led by Distinguished Professor Xiaolin Wang and Dr. M Nadeem, with Ph.D. candidate Syeda Amina Shabbir and Dr. Frank Fei Yun.

It introduces a new design concept to realize the elusive and highly sought-after quantum anomalous Hall (QAH) effect.

Advances in quantum materials could cut global energy consumption and transform everyday life for people around the world.

Using a technique called entropy engineering, t...

Read More