ChatGPT tagged posts

ChatGPT Creates Persuasive, Phony Medical Report

Credit: Pixabay/CC0 Public Domain

A common truism among statisticians is that "the data don't lie." However, recent findings by Italian researchers may make those who study data think twice before accepting that assumption.

Giuseppe Giannaccare, an eye surgeon at the University of Cagliari in Italy, reports that ChatGPT has conjured reams of persuasive phony data to support one surgical eye procedure over another.

“GPT-4 created a fake dataset of hundreds of patients in a matter of minutes,” Giannaccare said. “This was a surprising—yet frightening—experience.”

There have been countless stories of ChatGPT’s great achievements and potential since the model was unveiled to the world a year ago...

Read More

Scientists Begin Building AI for Scientific Discovery Using Tech Behind ChatGPT

Credit: Pixabay/CC0 Public Domain

An international team of scientists, including researchers from the University of Cambridge, has launched a new research collaboration that will leverage the same technology behind ChatGPT to build an AI-powered tool for scientific discovery.

While ChatGPT deals in words and sentences, the team’s AI will learn from numerical data and physics simulations from across scientific fields to aid scientists in modeling everything from supergiant stars to the Earth’s climate.

The team launched the initiative, called Polymathic AI, earlier this week, alongside the publication of a series of related papers on the arXiv open-access repository.

“This will completely change how people use AI and machine learning in science,” said Polymathic AI principal investigator S...

Read More

Researchers Trick Large Language Models into Providing Prohibited Responses

Credit: Pixabay/CC0 Public Domain

ChatGPT and Bard may well be key players in the digital revolution currently underway in computing, coding, medicine, education, industry and finance, but they can also easily be tricked into providing subversive data.

Articles in recent months detail some of the leading problems. Disinformation, inappropriate and offensive content, privacy breaches and psychological harm to vulnerable users all raise questions about whether and how such content can be controlled.

OpenAI and Google have, for instance, designed protective barriers to stanch some of the more egregious incidents of bias and offensive content. But it is clear that a complete victory is not yet in sight.

Researchers at Carnegie Mellon University in Pittsburgh are...

Read More