LLMs tagged posts

Leaner Large Language Models could enable Efficient Local Use on Phones and Laptops

Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server—a process that’s expensive, energy-intensive and often slow.

Now, researchers have introduced a technique for compressing the vast stores of numerical data that make up an LLM, which could increase privacy, save energy and lower costs. Their findings are published on the arXiv preprint server.

The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of an LLM’s layers of information...
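The teaser doesn't spell out the authors' algorithm, but the "reducing the precision" half of the idea can be illustrated with a generic sketch of symmetric 8-bit quantization. This is a minimal illustration of precision reduction in general, not the researchers' specific method; the function names are invented for this example:

```python
# Minimal sketch of symmetric 8-bit quantization: store float weights as
# small integer codes plus one scale factor, shrinking memory roughly 4x
# versus 32-bit floats. Not the paper's algorithm, just the general idea.

def quantize_int8(weights):
    """Map floats to integer codes in [-127, 127] plus a scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from codes and the scale."""
    return [c * scale for c in codes]

weights = [0.42, -1.27, 0.05, 0.9]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
```

Each recovered weight differs from the original by at most half a quantization step (`scale / 2`), which is the precision the compression trades away.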

Read More

Integer Addition Algorithm could Reduce Energy Needs of AI by 95%

Image of a computer screen running an AI application, connected to a large energy source
Credit: AI-generated image

A team of engineers at AI inference technology company BitEnergy AI reports a method to reduce the energy needs of AI applications by 95%. The group has published a paper describing their new technique on the arXiv preprint server.

As AI applications have gone mainstream, their use has risen dramatically, leading to a notable rise in energy needs and costs. LLMs such as ChatGPT require a lot of computing power, which in turn means a lot of electricity is needed to run them.

As just one example, ChatGPT now requires roughly 564 MWh daily, or enough to power 18,000 American homes...

Read More

AI Study reveals Dramatic Reasoning Breakdown in Large Language Models

Strong fluctuations across AIW problem variations. Even for higher performers, e.g. GPT-4o, GPT-4 and Claude 3 Opus, correct response rates vary strongly from close to 1 to close to 0, despite only slight changes introduced in the AIW variations (one color per variation 1–4). This clearly shows a lack of model robustness, hinting at basic reasoning deficits. Credit: arXiv (2024). DOI: 10.48550/arxiv.2406.02061

Even the best AI large language models (LLMs) fail dramatically when it comes to simple logical questions. This is the conclusion of researchers from the Jülich Supercomputing Centre (JSC), the School of Electrical and Electronic Engineering at the University of Bristol and the LAION AI laboratory.

In their paper posted to the arXiv preprint server, titled “Alice in Wonderland: S...

Read More

Researchers develop AI-driven Machine-Checking Method for Verifying Software Code

Software code
Credit: Pixabay/CC0 Public Domain

A team of computer scientists led by the University of Massachusetts Amherst recently announced a new method for automatically generating whole proofs that can be used to prevent software bugs and verify that the underlying code is correct.

This new method, called Baldur, leverages the power of large language models (LLMs) and, when combined with the state-of-the-art tool Thor, automatically proves nearly 66% of theorems, an unprecedented rate. The team was recently awarded a Distinguished Paper award at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.
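The workflow described above, a model proposing a whole proof and a machine checker accepting or rejecting it, can be sketched as a simple generate-and-check loop. The "model" and "checker" below are toy stand-ins written for this illustration, not the actual LLM or a real proof assistant such as the one Thor targets:

```python
# Toy sketch of a whole-proof generate-and-check loop. A real system would
# call an LLM and a proof assistant; here both are hypothetical stand-ins.

def toy_model(theorem, feedback=None):
    """Stand-in for an LLM that proposes an entire proof as text."""
    if feedback is None:
        return "proof: 1 + 1 = 3"   # first attempt is wrong on purpose
    return "proof: 1 + 1 = 2"       # repaired attempt after feedback

def toy_checker(proof):
    """Stand-in for a machine checker: accept only a correct equation."""
    lhs, rhs = proof.removeprefix("proof: ").split(" = ")
    ok = eval(lhs) == int(rhs)      # toy check; a real checker verifies logic
    return ok, None if ok else "equation does not hold"

def prove(theorem, attempts=2):
    """Generate, check, and retry with error feedback until verified."""
    feedback = None
    for _ in range(attempts):
        candidate = toy_model(theorem, feedback)
        ok, feedback = toy_checker(candidate)
        if ok:
            return candidate
    return None
```

The key property the real system shares with this sketch is that a wrong proof can never be accepted: the checker, not the model, is the arbiter of correctness.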

“We have unfortunately come to expect that our software is buggy, despite the fact that it is everywhere and we all use it every day...

Read More