neural networks tagged posts

Face Recognition for Galaxies: Artificial Intelligence brings new tools to astronomy

A 'deep learning' algorithm trained on images from cosmological simulations is surprisingly successful at classifying real galaxies in Hubble images. Top row: High-resolution images from a computer simulation of a young galaxy going through three phases of evolution (before, during, and after the "blue nugget" phase). Middle row: The same images from the computer simulation of a young galaxy in three phases of evolution as it would appear if observed by the Hubble Space Telescope. Bottom row: Hubble Space Telescope images of distant young galaxies classified by a deep learning algorithm trained to recognize the three phases of galaxy evolution. The width of each image is approximately 100,000 light years. Credits: Top two rows: Greg Snyder (Space Telescope Science Institute) and Marc Huertas-Company (Paris Observatory); bottom row: Hubble images from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS).

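The core workflow described above — train a classifier on labeled simulation output, then apply it to real observations — can be sketched in a few lines. This is only an illustrative toy (the synthetic "images", the phase labels, and the nearest-centroid classifier are stand-ins; the actual research used deep convolutional networks on CANDELS imagery):

```python
# Illustrative sketch, NOT the researchers' pipeline: train on "simulated"
# galaxy images with known evolutionary phases, then classify an "observed"
# image. Real work used CNNs; here a nearest-centroid classifier suffices.
import numpy as np

rng = np.random.default_rng(0)
PHASES = ["pre-nugget", "blue nugget", "post-nugget"]

def make_images(phase, n, size=8):
    """Toy stand-in for simulated galaxy images: each phase gets a
    different mean brightness so a simple classifier can separate them."""
    base = {"pre-nugget": 0.2, "blue nugget": 0.5, "post-nugget": 0.8}[phase]
    return base + 0.05 * rng.standard_normal((n, size * size))

# "Training": summarize each phase's simulated images by their centroid.
train = {p: make_images(p, 50) for p in PHASES}
centroids = {p: x.mean(axis=0) for p, x in train.items()}

def classify(image):
    """Assign the phase whose training centroid is closest to the image."""
    return min(PHASES, key=lambda p: np.linalg.norm(image - centroids[p]))

# Apply the simulation-trained model to a new "observed" image.
observed = make_images("blue nugget", 1)[0]
print(classify(observed))
```

The key idea the sketch preserves is that the labels come entirely from simulations, where the true evolutionary phase is known, while the classifier is then applied to real data where it is not.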

Read More

Artificial Intelligence Analyzes Gravitational Lenses 10 Million Times Faster

Neural Nets and Gravitational Lenses

KIPAC scientists have for the first time used artificial neural networks to analyze complex distortions in spacetime, called gravitational lenses, demonstrating that the method is 10 million times faster than traditional analyses. (Greg Stewart/SLAC National Accelerator Laboratory)

Brain-mimicking ‘neural networks’ can revolutionize the way astrophysicists analyze their most complex data. Researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks – a form of artificial intelligence — can accurately analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods...
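The speedup comes from replacing an iterative, per-image model fit with a learned mapping that needs only a single forward pass. The following is a heavily simplified sketch of that idea (the `render_lens` toy images, the single "Einstein radius" parameter, and the linear-regression "network" are all assumptions for illustration — the SLAC/Stanford work used real neural networks on far more complex lensing data):

```python
# Hedged sketch of the concept, not KIPAC's actual method: learn a direct
# mapping from simulated lens images to a lens parameter, so analyzing a
# new image is one matrix product instead of an iterative model fit.
import numpy as np

rng = np.random.default_rng(1)

def render_lens(radius, size=16):
    """Toy 'lensed image': a bright ring whose radius stands in for a
    hypothetical Einstein radius. Real lensing images are far richer."""
    y, x = np.mgrid[:size, :size] - size / 2
    r = np.hypot(x, y)
    return np.exp(-((r - radius) ** 2)).ravel()

# Build a training set from simulated lenses with known parameters.
radii = rng.uniform(2.0, 6.0, 500)
X = np.stack([render_lens(r) for r in radii])
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], radii, rcond=None)

def estimate_radius(image):
    """The 'fast' step: a single linear pass replaces iterative fitting."""
    return float(np.append(image, 1.0) @ w)

print(estimate_radius(render_lens(4.0)))
```

The expensive part (building the training set and fitting the model) happens once, up front; every subsequent image is then analyzed at essentially the cost of a matrix multiply, which is where the orders-of-magnitude speedup over per-image fitting comes from.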

Read More

Neural Networks allow Classic Painting Styles to be applied to Modern Video


Scene from Ice Age (2002) processed in the style of The Starry Night. Comparing independent per-frame processing to the authors' time-consistent approach, the latter is clearly preferable. Credit: Manuel Ruder, et al.

Three researchers at the University of Freiburg have taken the science of using neural networks to learn the style of paintings done by human hands and apply it to modern photographs one step further, by applying it to video.

Imagine filming a friend with your phone as he wanders aimlessly down a road on the edge of a quaint village, then using an app to make it look like your friend was wandering through Vincent van Gogh's The Starry Night. It appears the day may be coming when amateur and professional video/movie makers alike may be able to do such things wit...
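Why does video need more than running style transfer on each frame? Stylizing frames independently produces slightly different results each time, which shows up as flicker; the Freiburg approach adds a temporal-consistency constraint tying each stylized frame to the previous one. A toy numeric sketch of that trade-off (the `stylize` function and the blending step are crude stand-ins — the actual method uses optical flow and a temporal loss inside the network, neither of which is shown here):

```python
# Conceptual sketch (assumed simplification of Ruder et al.): compare
# per-frame stylization against a temporally consistent variant and
# measure frame-to-frame change, the numeric analogue of flicker.
import numpy as np

rng = np.random.default_rng(2)
frames = [np.full((4, 4), 0.5) for _ in range(10)]  # a static toy video

def stylize(frame):
    """Stand-in for neural style transfer: a deterministic transform plus
    the run-to-run variation that causes flicker in per-frame processing."""
    return frame * 0.8 + 0.1 * rng.standard_normal(frame.shape)

def flicker(stylized):
    """Mean frame-to-frame change; visible as flicker in real video."""
    return np.mean([np.abs(a - b).mean() for a, b in zip(stylized, stylized[1:])])

# Independent per-frame processing: each frame stylized from scratch.
independent = [stylize(f) for f in frames]

# Temporally consistent variant: pull each stylized frame toward the
# previous one (a crude proxy for a temporal-consistency loss).
consistent = [stylize(frames[0])]
for f in frames[1:]:
    consistent.append(0.2 * stylize(f) + 0.8 * consistent[-1])

print(flicker(independent), flicker(consistent))
```

Even on a completely static scene, the independently stylized frames keep changing between frames while the consistent version settles down, which is exactly the artifact the video-specific method is designed to suppress.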

Read More

Chip Could Bring Deep Learning to Mobile Devices

MIT researchers have designed a new chip to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing. Credit: MIT News


Advance could enable mobile devices to implement ‘neural networks’ modeled on the human brain, running powerful artificial-intelligence algorithms locally rather than uploading data to the Internet for processing.

Neural networks are typically implemented using graphics processing units (GPUs), special-purpose graphics chips found in all computing devices with screens...

Read More