An AI that lights up the moon, improvises grammar and teaches robots to walk like humans

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to bring together some of the most relevant recent discoveries and papers – particularly in, but not limited to, artificial intelligence – and explain why they matter.

Over the past few weeks, scientists have developed an algorithm to uncover fascinating details about the Moon’s dimly lit – and in some cases pitch-black – impact craters. Elsewhere, MIT researchers trained an AI model on textbooks to see whether it could independently figure out the rules of a specific language. And teams from DeepMind and Microsoft investigated whether motion capture data could be used to teach robots to perform specific tasks, such as walking.

With the impending (and predictably delayed) launch of Artemis I, lunar science is once again in the spotlight. Ironically, however, it’s the darker regions of the Moon that are potentially the most interesting, as they may harbor water ice that can be used for countless purposes. It’s easy to spot the darkness, but what’s in it? An international team of image experts applied ML to the problem with some success.

Although the craters lie in the deepest darkness, the Lunar Reconnaissance Orbiter still captures the occasional photon from within them, and the team gathered years of these underexposed (but not totally black) images and processed them with a “physics-based, deep learning-based post-processing tool” described in Geophysical Research Letters. The result is that “visible routes into permanently shadowed regions can now be designed, greatly reducing the risks to Artemis astronauts and robotic explorers,” according to David Kring of the Lunar and Planetary Institute.

Let there be light! The interior of the crater is reconstructed from stray photons.

They’ll have flashlights, we imagine, but it’s good to have a general idea of where to go beforehand, and of course that could affect where robotic exploration or landers focus their efforts.
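For a rough intuition of why years of faint exposures add up to a usable image, here is a toy stacking experiment in Python. It uses plain frame averaging on invented numbers, not the team’s physics-based deep learning pipeline, but it shows how sparse photons accumulate into signal while random sensor noise averages out.

```python
# Toy illustration (not the team's actual tool): average many noisy,
# underexposed frames of the same shadowed region so the faint signal
# accumulates while random sensor noise cancels out.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "ground truth" crater wall: a faint gradient, near-black.
truth = np.linspace(0.001, 0.02, 256).reshape(16, 16)

def capture(scene, read_noise=0.01):
    """Simulate one underexposed exposure: photon (shot) noise plus read noise."""
    photons = rng.poisson(scene * 1000) / 1000.0               # shot noise
    return photons + rng.normal(0, read_noise, scene.shape)    # sensor noise

frames = np.stack([capture(truth) for _ in range(2000)])       # many exposures
stacked = frames.mean(axis=0)                                  # simple average

print("single-frame error:", np.abs(frames[0] - truth).mean())
print("stacked-frame error:", np.abs(stacked - truth).mean())
```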

As useful as it is, there’s nothing mysterious about turning sparse data into an image. But in the world of linguistics, AI is making fascinating inroads into how and whether language models really know what they know. In the case of learning a language’s grammar, an experiment at MIT found that a model trained on several textbooks was able to build its own model of how a given language works, to the point that its grammar for Polish, for example, could successfully answer textbook problems on the subject.

“Linguists thought that to really understand the rules of a human language, to understand what makes the system work, you have to be human. We wanted to see if we could mimic the kinds of knowledge and reasoning that humans (linguists) bring to the task,” Adam Albright of MIT said in a press release. This is very early research on this front, but it is promising in that it shows that subtle or hidden rules can be “understood” by AI models without explicit instruction.
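As a much humbler cousin of that idea, here is a toy sketch of inferring a linguistic rule purely from examples. The word pairs and the “append a suffix” rule space are invented for illustration and have nothing to do with the MIT system’s actual machinery; the point is only that a rule can be recovered from data rather than stated explicitly.

```python
# Toy rule induction: guess the most common "append X" transformation
# that turns one word form into another, given a few example pairs.
from collections import Counter

pairs = [("cat", "cats"), ("dog", "dogs"), ("book", "books"), ("tree", "trees")]

def infer_suffix_rule(pairs):
    """Return a function applying the most frequent suffix seen in the pairs."""
    candidates = Counter()
    for a, b in pairs:
        if b.startswith(a):
            candidates[b[len(a):]] += 1   # e.g. "s" for English plurals
    suffix, _ = candidates.most_common(1)[0]
    return lambda word: word + suffix

pluralize = infer_suffix_rule(pairs)
print(pluralize("robot"))   # -> "robots", a rule nobody wrote down by hand
```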

But the experiment did not directly address a key open question in AI research: how to prevent language models from producing toxic, discriminatory or misleading language. New work from DeepMind does tackle this problem, taking a philosophical approach to aligning language models with human values.

The lab’s researchers say there’s no “one size fits all” path to better language models, because models need to embody different traits depending on the contexts in which they’re deployed. For example, a model designed to aid in scientific study would ideally only make true statements, while an agent acting as a moderator in public debate would exercise values like tolerance, civility and respect.

So how can these values be instilled in a language model? The DeepMind co-authors don’t suggest a specific way. Instead, they imply that models can cultivate more “robust” and “respectful” conversations over time through processes they call context building and elucidation. As the co-authors explain: “Even when a person is unaware of the values that govern a given conversational practice, the agent can still help the human understand those values by foreshadowing them in the conversation, making the course of communication deeper and more fruitful for the human speaker.”


Google’s LaMDA language model answering a question.

Finding the most promising methods for aligning language models takes immense time and resources, financial and otherwise. But in areas beyond language, particularly scientific fields, that may not be the case for much longer, thanks to a $3.5 million National Science Foundation (NSF) grant awarded to a team of scientists from the University of Chicago, Argonne National Laboratory and MIT.

With the NSF grant, the grantees plan to build what they describe as “model gardens,” or repositories of AI models designed to solve problems in fields including physics, math, and chemistry. The repositories will link the models with data and computational resources along with automated tests and screens to validate their accuracy, ideally making it simple for scientific researchers to test and deploy the tools in their own studies.
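To make the “model garden” idea concrete, here is a minimal sketch of a registry that couples each model with an automated accuracy screen before it can be listed. The entry fields, names and validation check are all hypothetical, not the project’s actual interface or tooling.

```python
# Hypothetical "model garden" registry: a model is only listed if it passes
# its own automated validation screen, and can then be looked up and invoked.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class GardenEntry:
    name: str
    authors: list
    predict: Callable      # the model's inference function
    validation: Callable   # automated test that screens for accuracy

garden: Dict[str, GardenEntry] = {}

def register(entry: GardenEntry):
    """Add a model to the garden only if its validation check passes."""
    assert entry.validation(entry.predict), f"{entry.name} failed validation"
    garden[entry.name] = entry

# Toy example: a made-up model plus its screening test.
register(GardenEntry(
    name="toy-boiling-point-estimator",
    authors=["example@uchicago.edu"],
    predict=lambda pressure_atm: 100.0 + 28.0 * (pressure_atm - 1.0),  # toy formula
    validation=lambda f: abs(f(1.0) - 100.0) < 1e-6,  # water boils at 100 °C at 1 atm
))

# A researcher can then find the model and invoke it directly.
model = garden["toy-boiling-point-estimator"]
print(model.name, "->", model.predict(1.0))
```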

“A user can come to the [model] garden and see all this information at a glance,” said Ben Blaiszik, a data science researcher at Globus Labs involved in the project, in a press release. “They can cite the model, they can learn about the model, they can contact the authors, and they can invoke the model themselves in a web environment, on leadership computing facilities, or on their own computer.”

Meanwhile, in the field of robotics, researchers are building a platform for AI models not with software, but with hardware – neuromorphic hardware, to be exact. Intel says the latest generation of its experimental Loihi chip can allow an object recognition model to “learn” to identify an object it has never seen before using up to 175 times less power than if the model were running on a processor.


A humanoid robot equipped with one of Intel’s experimental neuromorphic chips.

Neuromorphic systems attempt to mimic the biological structures of the nervous system. Where traditional machine learning systems tend to be either fast or energy-efficient, neuromorphic systems aim for both speed and efficiency by using nodes to process information and connections between nodes to transfer electrical signals over analog circuitry. The systems can modulate the amount of power flowing between nodes, allowing each node to perform processing, but only when needed.
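As a rough illustration of that event-driven principle, here is a textbook leaky integrate-and-fire neuron in Python. It is not Intel’s Loihi design, just the standard model: a node accumulates and leaks input, and only “does work” (emits a spike) when its potential crosses a threshold, rather than computing on every clock tick.

```python
# Leaky integrate-and-fire neuron: a sketch of event-driven, power-frugal compute.
import numpy as np

def simulate_lif(inputs, threshold=1.0, leak=0.95):
    """Accumulate input with leakage; fire a spike only when the threshold is crossed."""
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current   # integrate incoming signal, with leak
        if potential >= threshold:               # process/transmit only when needed
            spikes.append(t)
            potential = 0.0                      # reset after firing
    return spikes

rng = np.random.default_rng(1)
weak = simulate_lif(rng.uniform(0.0, 0.05, 200))    # weak input: few or no spikes
strong = simulate_lif(rng.uniform(0.0, 0.40, 200))  # strong input: frequent spikes
print(len(weak), "spikes vs.", len(strong), "spikes")
```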

Intel and others believe neuromorphic computing has applications in logistics, such as powering a robot designed to aid manufacturing processes. It’s theoretical at this point – neuromorphic computing has its downsides – but maybe one day this vision will come true.

Embodied AI DeepMind

Image Credits: DeepMind

Closer to reality is DeepMind’s recent work on “embodied intelligence,” or using human and animal motion to teach robots to dribble a ball, carry boxes and even play soccer. Researchers at the lab devised a setup to record data from motion trackers worn by humans and animals, from which an AI system learned to infer how to perform new actions, such as walking in a circular motion. The researchers say the approach has transferred well to real-world robots, for example allowing a four-legged robot to walk like a dog while simultaneously dribbling a ball.
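The general recipe behind this kind of work is imitation learning. The sketch below shows plain behavioral cloning on made-up stand-in data (the state and action dimensions are invented), not DeepMind’s actual method: fit a policy that maps observed body states to the actions the tracked subject took next.

```python
# Hedged sketch of behavioral cloning from motion-capture-style data
# (invented dimensions and random stand-in data, for illustration only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Stand-in for mocap-derived training data: states are pose features
# (joint angles, velocities), actions are what the subject did one frame later.
states = rng.normal(size=(5000, 24))                    # hypothetical 24-D pose features
actions = np.tanh(states @ rng.normal(size=(24, 8)))    # hypothetical 8 actuated joints

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
policy.fit(states, actions)                             # imitate the demonstrated motion

# At run time, the robot (or simulated character) queries the cloned policy.
next_action = policy.predict(states[:1])
print(next_action.shape)                                # (1, 8)
```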

Coincidentally, Microsoft released a library of motion capture data earlier this summer intended to spur research into robots that can walk like humans. Called MoCapAct, the library contains motion capture clips that, combined with other data, can be used to create agile bipedal robots – at least in simulation.

“[Creating this data set] took the equivalent of 50 years across many GPU-equipped [servers] … a testament to the computational hurdle MoCapAct removes for other researchers,” the work’s co-authors wrote in a blog post. “We hope the community can build on our data set and work to do amazing research on humanoid robot control.”

Peer review of scientific papers is invaluable human labor, and AI is unlikely to take it over, but it may be able to help ensure that peer reviews are actually useful. A Swiss research group has been looking at model-based evaluation of peer reviews, and their early results are mixed – in a good way. There was no obviously good or bad method or trend, and a publication’s impact rating did not seem to predict whether a review was thorough or helpful. That’s fine, because although review quality differs, you wouldn’t want good reviews to be systematically scarce everywhere except in major journals, for example. Their work is ongoing.

Finally, for anyone concerned about creativity in this field, here is a personal project from Karen X. Cheng that shows how a bit of ingenuity and hard work can be combined with AI to produce something truly original.
