
New Security Protocol Shields Data from Attackers during Cloud-Based Computation

Caption: MIT researchers have developed a security protocol that leverages the quantum properties of light to guarantee that data sent to and from a cloud server remain secure during deep learning computations.
Credit: Christine Daniloff, MIT; iStock

MIT researchers have developed a technique that guarantees data remain secure during multiparty, cloud-based computation while preserving the accuracy of a deep-learning model. The method, which leverages the quantum properties of light, could enable organizations such as hospitals or financial companies to use deep learning to securely analyze confidential patient or customer data.

Deep-learning models are being used in many fields, from health care diagnostics to financi...

Read More

As LLMs Grow Bigger, They're More Likely to Give Wrong Answers than Admit Ignorance

Performance of a selection of GPT and LLaMA models with increasing difficulty. Credit: Nature (2024). DOI: 10.1038/s41586-024-07930-y

A team of AI researchers at the Universitat Politècnica de València in Spain has found that as popular large language models (LLMs) grow larger and more sophisticated, they become less likely to admit to a user that they do not know an answer.

In their study, published in the journal Nature, the group tested the latest versions of three of the most popular AI chatbots, examining their responses, their accuracy, and how good users are at spotting wrong answers.

As LLMs have become mainstream, users have become accustomed to relying on them for writing papers, poems or songs, solving math problems, and other tasks, and the issue of accuracy has become a bigger...
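The study's central comparison is the share of responses that are correct, incorrect, or avoidant (the model declines to answer). The sketch below illustrates that breakdown with a small tally; the function name, labels, and data are hypothetical, invented for illustration, and are not the authors' evaluation code.

```python
from collections import Counter

def rate_breakdown(graded_responses):
    """Compute the fraction of correct, incorrect, and avoidant answers.

    graded_responses: one label per prompt, each one of
    'correct', 'incorrect', or 'avoidant' (model declined to answer).
    """
    counts = Counter(graded_responses)
    total = len(graded_responses)
    return {label: counts[label] / total
            for label in ("correct", "incorrect", "avoidant")}

# Hypothetical grades for a smaller and a larger model on the same prompts.
small_model = ["correct", "avoidant", "avoidant", "incorrect", "correct"]
large_model = ["correct", "incorrect", "incorrect", "incorrect", "correct"]

print(rate_breakdown(small_model))  # sizable 'avoidant' share
print(rate_breakdown(large_model))  # wrong answers replace admissions of ignorance
```

The pattern the paper reports corresponds to the 'avoidant' share shrinking as models scale up, while the 'incorrect' share grows.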

Read More

Language Agents Help Large Language Models ‘Think’ Better and Cheaper

An example of the agent producing task-specific instructions (highlighted) for the IMDB classification dataset. The agent runs only once to produce the instructions, which are then used for all models during reasoning. Credit: arXiv (2023). DOI: 10.48550/arxiv.2310.03710

The LLMs that have increasingly taken over the tech world are not "cheap" in many ways. The most prominent LLMs, such as GPT-4, cost some $100 million to build: the legal costs of accessing training data, the computational costs of training models with billions or trillions of parameters, the energy and water needed to fuel that computation, and the many coders who develop the training algorithms that must run cycle after cycle so the machine will "learn."

But, if a researcher needs to do a specializ...

Read More

Compact ‘Gene Scissors’ Enable Effective Genome Editing, May Offer Future Treatment of High-Cholesterol Gene Defect

In Gerald Schwank’s lab, researchers from the University of Zurich have used protein engineering and an AI model to make the protein TnpB much more effective for genome editing. Credit: Christian Reichenbach

CRISPR-Cas is used broadly in research and medicine to edit, insert, delete or regulate genes in organisms. TnpB is an ancestor of this well-known “gene scissors” but is much smaller and thus easier to transport into cells.

Using protein engineering and AI algorithms, University of Zurich researchers have now enhanced TnpB's capabilities to make DNA editing more efficient and versatile, paving the way for treating a genetic defect that causes high cholesterol in the future. The work has been published in Nature Methods.

CRISPR-Cas systems, which consist of protein and RNA components, we...

Read More