LLM tagged posts

Generative AI Brings Us Closer to Automating Investment Expertise


Large language models (LLMs) such as ChatGPT and Google Gemini are trained on vast datasets and excel at generating informative responses to prompts. Yi Cao, an assistant professor of accounting at the Donald G. Costello College of Business at George Mason University, and Long Chen, associate professor and area chair of accounting at Costello, are actively exploring how individual investors can use LLMs to glean market insights from the dizzying array of available data about companies.

Their new working paper, appearing in the SSRN Electronic Journal and co-authored with Jennifer Wu Tucker of the University of Florida and Chi Wan of the University of Massachusetts Boston, examines AI’s ability to identify “peer firms,” or product-market competitors in an industry.

Read More

Engineers Recreate Star Trek’s Holodeck Using ChatGPT and Video Game Assets

Essentially, Holodeck engages a large language model (LLM) in a conversation, building a virtual environment piece by piece. Credit: Yue Yang

In “Star Trek: The Next Generation,” Captain Picard and the crew of the U.S.S. Enterprise leverage the Holodeck, an empty room capable of generating 3D environments, to prepare for missions and entertain themselves, simulating everything from lush jungles to the London of Sherlock Holmes.

Deeply immersive and fully interactive, Holodeck-created environments are infinitely customizable, using nothing but language; the crew has only to ask the computer to generate an environment, and that space appears in the Holodeck.
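
For readers curious about the mechanics hinted at in the caption above, here is a minimal Python sketch of the "build a virtual environment piece by piece through conversation" idea. The `query_llm` helper is a hypothetical stand-in for whatever model API is actually used; this is not the Holodeck code itself.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real model API here."""
    raise NotImplementedError

def build_scene(description: str, max_objects: int = 10) -> list:
    """Grow a scene one object at a time from a natural-language description."""
    scene = []
    for _ in range(max_objects):
        placed = [obj["name"] for obj in scene]
        prompt = (
            f"Scene request: {description}\n"
            f"Objects placed so far: {placed}\n"
            "Name one more object to add, or reply DONE."
        )
        reply = query_llm(prompt).strip()
        if reply.upper() == "DONE":
            break
        # A real system would also ask for position, size, and which game asset to use.
        scene.append({"name": reply})
    return scene
```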

Today, virtual interactive environments are also used to train robots prior to real-world deployment in a process called “Sim2...

Read More

Microsoft’s Small Language Model Outperforms Larger Models on Standardized Math Tests

Grade School Math
Credit: Deepak Gautam from Pexels

A small team of AI researchers at Microsoft reports that the company’s Orca-Math small language model outperforms other, larger models on standardized math tests. The group has published a paper on the arXiv preprint server describing their testing of Orca-Math on the Grade School Math 8K (GSM8K) benchmark and how it fared compared to well-known LLMs.

Many popular LLMs such as ChatGPT are known for their impressive conversational skills—less well known is that most of them can also solve math word problems. AI researchers have tested their abilities at such tasks by pitting them against the GSM8K, a dataset of 8,500 grade-school math word problems that require multistep reasoning to solve, along with their correct answers.
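
As a rough illustration of how such a benchmark is scored, here is a minimal Python sketch, assuming the public Hugging Face copy of GSM8K ("gsm8k", "main"), whose reference answers end with a line like "#### 18". The `model_answer` function is a hypothetical stand-in for the model being evaluated, not Orca-Math itself.

```python
import re
from datasets import load_dataset  # Hugging Face `datasets` library

def final_number(text: str) -> str:
    """Extract the final numeric answer that follows the '####' marker."""
    match = re.search(r"####\s*([-0-9.,]+)", text)
    return match.group(1).replace(",", "") if match else ""

def model_answer(question: str) -> str:
    """Hypothetical: return the model's worked solution, ending in '#### <number>'."""
    raise NotImplementedError

def gsm8k_accuracy(limit: int = 100) -> float:
    """Score the model on the first `limit` GSM8K test problems."""
    test_set = load_dataset("gsm8k", "main", split="test").select(range(limit))
    correct = sum(
        final_number(model_answer(ex["question"])) == final_number(ex["answer"])
        for ex in test_set
    )
    return correct / limit
```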

In this new study, th...

Read More

Multiple AI Models Help Robots Execute Complex Plans More Transparently

The HiP framework developed at MIT CSAIL creates detailed plans for robots by drawing on the expertise of three different foundation models, helping them execute tasks in households, factories, and construction that require multiple steps. Credit: Alex Shipps/MIT CSAIL

Your daily to-do list is likely pretty straightforward: wash the dishes, buy groceries, and other minutiae. It’s unlikely you wrote out “pick up the first dirty dish,” or “wash that plate with a sponge,” because each of these miniature steps within the chore feels intuitive. While we can routinely complete each step without much thought, a robot requires an explicit plan that spells out each of these smaller steps.
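
To make that contrast concrete, here is a toy Python sketch that expands a human-level chore into the primitive steps a robot planner would actually need. The decomposition table is illustrative only and is not taken from the HiP framework.

```python
# Illustrative mapping from a high-level chore to low-level robot steps.
PRIMITIVE_STEPS = {
    "wash the dishes": [
        "pick up the first dirty dish",
        "wash that plate with a sponge",
        "rinse the plate",
        "place the plate on the drying rack",
    ],
}

def expand(todo):
    """Replace each high-level chore with the low-level steps a robot needs."""
    plan = []
    for chore in todo:
        plan.extend(PRIMITIVE_STEPS.get(chore, [chore]))
    return plan

print(expand(["wash the dishes", "buy groceries"]))
```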

MIT’s Improbable AI Lab, a group within the Computer Science and Artificial Intelligence Laboratory (CSAIL), has offered t...

Read More