March 25, 2023


Research in machine learning and artificial intelligence, now a key technology in nearly every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers (particularly in, but not limited to, artificial intelligence) and explain why they matter.

An “earable” that uses sonar to read facial expressions is one of the projects that caught our eye over the past few weeks. So did ProcTHOR, a framework from the Allen Institute for Artificial Intelligence (AI2) that procedurally generates environments that can be used to train real-world robots. Among the other highlights, Meta created an AI system that can predict a protein’s structure from a single amino acid sequence, and MIT researchers developed new hardware that they claim gives AI faster computation with less energy.

Developed by a team at Cornell University, the “earable” looks like a bulky pair of headphones. Speakers send sound signals to the side of the wearer’s face, while microphones pick up almost imperceptible echoes created by the nose, lips, eyes and other facial features. These “echo profiles” enable the headset to capture movements such as brow lifts and eye darts, which AI algorithms translate into full facial expressions.

The sonar-equipped “earable” headset. Image Source: Cornell
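The recognition step is conceptually simple: treat each echo profile as a small time-frequency map and classify it into an expression. Below is a minimal sketch of that idea in PyTorch; the tensor shapes, layer sizes and expression count are illustrative assumptions, not the architecture from the Cornell paper.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: each echo profile is a small time-frequency map
# (1 channel x 32 frequency bins x 64 time steps) and labels index a fixed
# set of facial expressions. These are illustrative, not the dimensions
# used in the Cornell work.
NUM_EXPRESSIONS = 8

class EchoProfileClassifier(nn.Module):
    def __init__(self, num_classes: int = NUM_EXPRESSIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# A short per-user calibration session (the paper reports ~32 minutes of
# facial data) would yield (echo profile, expression) pairs for training.
model = EchoProfileClassifier()
dummy_batch = torch.randn(4, 1, 32, 64)   # four fake echo profiles
logits = model(dummy_batch)               # shape: (4, NUM_EXPRESSIONS)
```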

The earable has its limitations. It only lasts three hours on a battery charge, processing has to be offloaded to a smartphone, and the echo-translating AI algorithm has to be trained on 32 minutes of facial data before it can start recognizing expressions. But the researchers argue it makes for a much smoother experience than the recording rigs traditionally used for animation in movies, TV and video games. For the mystery game L.A. Noire, for example, Rockstar Games built a rig that trained 32 cameras on each actor’s face.

Perhaps one day Cornell’s wearable will be used to animate humanoid robots. But those robots must first learn how to navigate a room. Fortunately, AI2’s ProcTHOR takes a step (no pun intended) in this direction, procedurally generating thousands of custom scenes, including classrooms, libraries and offices, in which simulated robots must perform tasks such as picking up objects and moving around furniture.

The idea is to expose the simulated robots to as much variety as possible, with each generated scene getting its own simulated lighting, a subset of surface materials (e.g., wood, tile, etc.) and household objects. It’s well established in AI that performance in simulated environments can improve the performance of real-world systems; self-driving car companies like Alphabet’s Waymo simulate entire neighborhoods to fine-tune how their real-world cars behave.

ProcTHOR-generated environments. Image Source: Allen Institute for Artificial Intelligence

As for ProcTHOR, AI2 claims in a paper that scaling the number of training environments consistently improves performance. That bodes well for robots headed for homes, workplaces and beyond.
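To make the idea concrete, here is a toy Python sketch of what procedural scene generation looks like at its core: sampling room types, materials, lighting and object sets from distributions. It is a deliberately simplified stand-in, not the ProcTHOR API, and every name below is made up for illustration.

```python
import random
from dataclasses import dataclass, field

# A toy stand-in for procedural scene generation in the spirit of ProcTHOR.
# The real framework builds full interactive 3D houses; this sketch only
# samples a lightweight scene specification to illustrate the idea.

ROOM_TYPES = ["classroom", "library", "office", "kitchen"]
MATERIALS = ["wood", "tile", "carpet", "concrete"]
OBJECTS = ["chair", "desk", "bookshelf", "mug", "laptop", "plant"]

@dataclass
class SceneSpec:
    room_type: str
    floor_material: str
    light_intensity: float
    objects: list = field(default_factory=list)

def sample_scene(rng: random.Random) -> SceneSpec:
    """Randomly sample one training-scene specification."""
    return SceneSpec(
        room_type=rng.choice(ROOM_TYPES),
        floor_material=rng.choice(MATERIALS),
        light_intensity=rng.uniform(0.3, 1.0),
        objects=rng.sample(OBJECTS, k=rng.randint(2, 5)),
    )

# Scaling the number of distinct training environments is then just a matter
# of sampling more specs, each instantiated in the simulator for the robot.
rng = random.Random(0)
training_scenes = [sample_scene(rng) for _ in range(10_000)]
print(training_scenes[0])
```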

Of course, training these types of systems requires a lot of computing power. But that may not always be the case. MIT researchers say they have created an “analog” processor that can be used to build super-fast networks of “neurons” and “synapses,” which in turn can perform tasks such as recognizing images and translating languages.

The researchers’ processor uses “protonic programmable resistors” arranged in an array to “learn” skills. Increasing and decreasing a resistor’s electrical conductance mimics the strengthening and weakening of synapses between neurons in the brain, a part of the learning process.

Conductance is controlled by an electrolyte that governs the movement of protons. As more protons are pushed into a channel in the resistor, conductance increases; as protons are removed, conductance decreases.
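A rough way to picture this is a crossbar of resistors whose conductances play the role of neural-network weights: a read is just Ohm’s law, and learning nudges each conductance up or down. The NumPy sketch below models that behavior numerically; the numbers and the update rule are illustrative assumptions, not a simulation of MIT’s protonic device.

```python
import numpy as np

# Toy numerical model of an analog crossbar: each "weight" is the conductance
# of a programmable resistor. Values and update rules are illustrative only.

rng = np.random.default_rng(0)
n_inputs, n_outputs = 4, 2
conductance = rng.uniform(0.1, 1.0, size=(n_inputs, n_outputs))

def forward(voltages: np.ndarray) -> np.ndarray:
    # Ohm's law plus Kirchhoff's current law: output currents are the input
    # voltage vector multiplied through the conductance matrix.
    return voltages @ conductance

def update(voltages: np.ndarray, target: np.ndarray, lr: float = 0.05) -> None:
    # Pushing protons in raises conductance (a stronger "synapse"); pulling
    # them out lowers it. A simple gradient step plays that role here.
    global conductance
    error = forward(voltages) - target
    conductance -= lr * np.outer(voltages, error)
    conductance = np.clip(conductance, 0.01, 2.0)  # physical devices saturate

x = np.array([1.0, 0.5, -0.3, 0.8])
for _ in range(100):
    update(x, target=np.array([0.2, -0.1]))
print(forward(x))  # moves toward the target output currents
```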

A processor on a computer circuit board.

An inorganic material, phosphosilicate glass, is what makes the MIT team’s processor extremely fast: it contains nanometer-sized pores whose surfaces provide ideal paths for protons to diffuse. As an added bonus, the glass works at room temperature and isn’t damaged by the protons as they move along the pores.

“Once you have an analog processor, you will no longer be training the networks everyone else is working on,” lead author and MIT postdoc Murat Onen said in a press release. “You will be training networks with unprecedented complexity that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft.”

Speaking of acceleration, machine learning is now being put to work managing particle accelerators, at least in experimental form. At Lawrence Berkeley National Laboratory, two teams have shown that ML-based simulation of the full machine and beam delivers predictions up to 10 times more precise than ordinary statistical analysis.

Image Source: Thor Swift/Berkeley Lab

“If you can predict the beam properties with a precision that surpasses their fluctuations, then you can use that prediction to improve the performance of the accelerator,” said the lab’s Daniele Filippetto. Simulating all the physics and equipment involved is no small feat, but the various teams’ early efforts to do so yielded promising results.
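The general recipe behind such “virtual accelerators” is to train a fast surrogate model that maps machine settings to beam properties, then query it in place of (or alongside) the slow physics simulation. Here is a generic sketch of that recipe with scikit-learn on synthetic data; the settings, beam properties and toy physics are invented for illustration and are not Berkeley Lab’s actual models or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# A generic "virtual accelerator" sketch: learn a fast surrogate that maps
# machine settings (magnet currents, RF phase, ...) to beam properties
# (e.g., beam size and energy spread). The synthetic physics below exists
# purely so the example runs.

rng = np.random.default_rng(42)
n_samples, n_settings = 5_000, 6
settings = rng.uniform(-1, 1, size=(n_samples, n_settings))

# Fake ground-truth beam properties with some nonlinearity and noise.
beam_size = np.sin(settings[:, 0]) + settings[:, 1] ** 2 + 0.05 * rng.standard_normal(n_samples)
energy_spread = settings[:, 2] * settings[:, 3] + 0.1 * settings[:, 4] + 0.05 * rng.standard_normal(n_samples)
targets = np.column_stack([beam_size, energy_spread])

X_train, X_test, y_train, y_test = train_test_split(settings, targets, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# Once trained, the surrogate predicts beam properties nearly instantly,
# fast enough to inform shot-to-shot tuning of the machine.
print("R^2 on held-out shots:", surrogate.score(X_test, y_test))
```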

Over at Oak Ridge National Laboratory, an AI-powered platform is letting researchers perform hyperspectral computed tomography using neutron scattering, finding the optimal… maybe we should just let them explain.

In the medical world, machine learning-based image analysis is finding new applications in neurology, where researchers at University College London have trained a model to detect early signs of epilepsy-causing brain lesions.

Brain MRI scans used to train the UCL algorithm.

A common cause of drug-resistant epilepsy is so-called focal cortical dysplasia (FCD), an area of the brain that has developed abnormally but, for whatever reason, doesn’t show up obviously on MRI. Detecting it early can be enormously helpful, so the UCL team trained an MRI-inspection model called Multicentre Epilepsy Lesion Detection (MELD) on thousands of examples of healthy and FCD-affected brain regions.

The model was able to detect two-thirds of the FCDs it was shown, which is actually quite good given how subtle the signs are. In fact, it found 178 cases in which doctors were unable to locate an FCD but the model could. Of course, the final call rests with the specialists, but sometimes all it takes is a computer hinting that something might be there for them to take a closer look and reach a confident diagnosis.
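Conceptually, the detection step is a binary classifier over per-region MRI-derived features (cortical thickness, grey-white contrast, curvature and the like), tuned for sensitivity on a rare, subtle class. The sketch below shows that shape of problem on synthetic data; the features, labels and model choice are assumptions for illustration, not the published MELD pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# A stand-in for the lesion-detection step: classify brain-surface regions as
# healthy vs. FCD from per-region MRI features. The data here is synthetic;
# the real MELD model is trained on thousands of labelled MRI examples
# collected from many centres.

rng = np.random.default_rng(1)
n_regions = 20_000
features = rng.standard_normal((n_regions, 5))   # hypothetical per-region features
# Make lesional regions rare and only subtly different, as FCD is in practice.
labels = (features[:, 0] * 0.8 + features[:, 1] * 0.3
          + rng.standard_normal(n_regions)) > 2.5

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, stratify=labels, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Sensitivity (how many true FCD regions get flagged) is what matters
# clinically; the published model catches roughly two thirds of cases.
preds = clf.predict(X_test)
print("sensitivity:", recall_score(y_test, preds))
```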

“We put an emphasis on creating an explainable AI algorithm that can help doctors make decisions. Showing doctors how the MELD algorithm makes its predictions was an essential part of that process,” said UCL’s Mathilde Ripart.


