Wherever you go today, there is talk of how artificial intelligence (AI) can transform or disrupt a process or system. But today’s artificial intelligence is not really intelligent, because it does not have the ability to deal with the unknown. A typical AI model uses a lot of data and computing power, but it remains a black box with no way of explaining how it makes decisions.
This is something Bruno Maisonnier, CEO and founder of French startup AnotherBrain, emphasized several times in an interview with EE Times Europe. Eager to achieve true intelligence in AI, Maisonnier quit his previous startup, Aldebaran Robotics, when it was acquired by SoftBank for $100 million in 2013 so that he could continue his quest for “artificial general intelligence.”
“Robots are great, but at the same time, they’re stupid,” Maisonnier said. “They can’t understand the basics, and they can’t understand the environment around them. In 2010, I started working on artificial intelligence and on giving robots the ability to behave more naturally, making them behave more like we would expect.” After researching deep learning systems, he concluded that “deep learning, which everyone calls AI today, is fundamentally deceptive.”
“Deep learning, neural networks, big data, artificial intelligence: none of these has anything to do with intelligence,” Maisonnier said. “They are very powerful and open up a lot of possibilities, and I’m not trying to discredit them, but they really have nothing to do with intelligence.”
Bruno Maisonnier, CEO of AnotherBrain (Image credit: AnotherBrain)
Inspired by PalmPilot inventor Jeff Hawkins’ book On Intelligence, Maisonnier considered implementing some of Hawkins’ ideas. “I started AnotherBrain with a promise — to create more general intelligence.” He already had the framework, but some electronics were missing; he also wanted to embed the system in a chip. To that end, Maisonnier hired brain vision expert Patrick Pirim as chief scientific officer.
Maisonnier went to great lengths to make it clear that the world is unpredictable and that we simply cannot build self-driving cars, or even complex robots, with existing technology. “Real intelligence is a system that can analyze and understand in real time, the way our brains do; it doesn’t require a lot of data, and it operates in a very frugal way,” he said. “Real intelligence needs to be implemented in a chip that consumes less than 1 W, compared with 15 or 20 W for the inference phase of deep learning and several kilowatts for the training phase.”
Maisonnier added that a truly smart chip does not require big data, and its behavioral results should be interpretable. To illustrate what he means by interpretability, he cites the example of a self-driving car recognizing a motorcycle. The car’s cameras and sensors spot an object and determine that it is a motorcycle, but can they explain why it is a motorcycle, rather than simply matching something in a database? As of now, they can’t. But if a human driver were asked to explain why an object was a motorcycle, he might point out that it had thick black wheels, a roaring motor in the middle, a gas tank, and a helmeted rider. “That’s interpretability,” Maisonnier said. “That’s what we do at AnotherBrain. We’ve demonstrated our proof of concept on a quality-control line at a French automaker.”
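To make the distinction concrete, the sketch below contrasts a bare label with a label accompanied by its supporting evidence. The feature names and the three-of-four rule are invented for illustration; this is a toy, not AnotherBrain’s actual system.

```python
# A minimal sketch of feature-based interpretability. The features and the
# decision rule are hypothetical, chosen only to mirror the motorcycle
# example above.

def classify_with_reasons(features: dict) -> tuple[str, list[str]]:
    """Return a label plus the human-readable evidence supporting it."""
    evidence = []
    if features.get("two_thick_wheels"):
        evidence.append("two thick black wheels")
    if features.get("exposed_engine"):
        evidence.append("a motor mounted in the middle")
    if features.get("fuel_tank"):
        evidence.append("a gas tank")
    if features.get("helmeted_rider"):
        evidence.append("a helmeted rider")
    label = "motorcycle" if len(evidence) >= 3 else "unknown"
    return label, evidence

label, why = classify_with_reasons({
    "two_thick_wheels": True, "exposed_engine": True,
    "fuel_tank": True, "helmeted_rider": True,
})
print(f"{label}, because it has: {', '.join(why)}")
```

A black-box model stops at the first element of that return value; an interpretable system can also surface the second.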
What is artificial intelligence based on?
Interpretability enables systems to work in a more natural way. Explainable AI is becoming necessary as AI models are increasingly used to augment or replace human decision-making; in such circumstances, companies need to be able to judge whether the decisions these models make are reasonable. However, most advanced AI models are complex black boxes that cannot explain why they made a particular recommendation or decision.
The concept of explainable AI (XAI) (Image credit: DARPA)
Prompted by concerns about the black-box nature of today’s AI systems, there has been a push to create systems that better explain the reasons for their decisions. The U.S. Defense Advanced Research Projects Agency (DARPA) is working on an explainable-AI (XAI) program to enable “third-wave AI systems,” in which AI can understand the context and environment in which it operates and, over time, build underlying explanatory models that allow it to characterize real-world phenomena.
Researchers at the Technical University of Berlin, the Fraunhofer Heinrich Hertz Institute (HHI), and the Singapore University of Technology and Design (SUTD) published a paper last month in Nature examining how AI systems reach their conclusions and whether their decisions are truly intelligent or merely “average” successes. The paper explores the importance of understanding the decision-making process and questions the validity and generality of machine learning, asking whether models are simply making decisions based on spurious correlations in their training data.
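That failure mode is easy to reproduce on synthetic data. The toy below (my own illustration, not the paper’s experiments) plants a “watermark” feature that is perfectly correlated with the label during training but absent at deployment; a model that shortcuts to it looks perfect in the lab and collapses to chance in the field.

```python
# A toy "spurious correlation" demonstration on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)                 # the ground-truth class
true_signal = labels + rng.normal(0, 2.0, n)   # weak but genuine feature
watermark = labels.astype(float)               # spurious tag, perfectly
                                               # correlated in training data
X_train = np.column_stack([true_signal, watermark])

def accuracy(col, X, y):
    """Score a one-feature 'model' that just thresholds column `col`."""
    return np.mean((X[:, col] > 0.5) == y)

print("train, real feature:", accuracy(0, X_train, labels))  # ~0.6
print("train, watermark:  ", accuracy(1, X_train, labels))   # 1.0

# At deployment the watermark is gone, so the shortcut drops to chance.
X_test = np.column_stack([labels + rng.normal(0, 2.0, n), np.zeros(n)])
print("test, watermark:   ", accuracy(1, X_test, labels))    # ~0.5
```

Without an explanation of which feature drove the decision, the two “models” are indistinguishable on the training set.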
Bionic AI learns to walk in five minutes
Meanwhile, bio-inspired AI promises to overcome the limitations of current machine learning methods, which rely on pre-programming a system for all potential scenarios, a complex, computationally intensive, and inefficient process. DARPA is pursuing another program, Lifelong Learning Machines (L2M), aimed at systems that can learn during execution and become increasingly specialized as they perform tasks. First announced in 2017, the L2M program is conducting research and development on next-generation AI systems and their components, as well as research into translating the learning mechanisms of biological organisms into computational processes. The program supports the work of up to 30 research groups through grants and contracts of varying duration and size.
One of L2M’s grantees is the University of Southern California’s (USC) Viterbi School of Engineering, which has released the results of its research into biologically inspired AI algorithms. In the March cover story of Nature Machine Intelligence, the USC research team details its successful creation of an AI-controlled robotic limb, driven by animal-like tendons, that can teach itself walking tasks and even recover automatically when its balance is disturbed.
Behind the USC researchers’ robotic limb is a bio-inspired algorithm that learns walking tasks on its own after just five minutes of “unstructured play”: random movements that allow the robot to learn its own structure and its surroundings. The ability of robots to learn by doing is a major advance toward lifelong learning in machines. The USC work shows that an AI system can learn from relevant experience and, over time, find and adapt its own solutions to challenges.
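The team’s published algorithm is more sophisticated, but the learn-by-play loop can be sketched generically: issue random commands to an unknown mechanism, record what happens, then fit an inverse model mapping desired outcomes back to commands. Everything below, including the made-up plant dynamics, is a simplified stand-in, not the USC method.

```python
# A generic motor-babbling sketch: learn to control an unknown "plant"
# purely from random play.
import numpy as np

rng = np.random.default_rng(1)

def plant(command):
    """Dynamics unknown to the learner: motor command -> joint angle."""
    return 0.8 * command + 0.1 * np.sin(3.0 * command)

# 1. "Unstructured play": try random commands and observe the outcomes.
commands = rng.uniform(-1.0, 1.0, 200)
outcomes = plant(commands)

# 2. Fit an inverse model (outcome -> command), here a cubic polynomial.
inverse = np.polynomial.Polynomial.fit(outcomes, commands, deg=3)

# 3. Exploit the model: to reach a target angle, look up the command.
target = 0.5
cmd = inverse(target)
print(f"command {cmd:.3f} reaches angle {plant(cmd):.3f} (target {target})")
```

The 200 random samples play the role of the five minutes of babbling: enough experience to build a usable model of the body, with no labeled dataset in sight.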
Another recent paper, on swarm intelligence, proposed a model that could have implications for enabling more intuitive AI. In the February issue of the Proceedings of the National Academy of Sciences, researchers from Harvard University’s John A. Paulson School of Engineering and Applied Sciences and its Department of Organismic and Evolutionary Biology outline a new framework showing how the physics of the environment and simple behavioral rules among animals combine to create complex structures in nature. Focusing on the most famous example of animal architecture, the termite mound, the theoretical framework shows how living systems use simple rules to create microenvironments that channel matter into complex architecture.
Reflecting on the architectural achievements of termites, Maisonnier said, “Nature is clearly perfect when it comes to system optimization.” No complex system drives the behavior of termites, only simple rules applied by basic agents. If the simple rules are chosen correctly, the result can be a termite mound that perfectly controls the temperature and humidity inside it, even when the outside temperature rises to 42°C. The result “is a system that appears to have been designed, but in fact there is no king, no queen, no leader, not even an engineer. It emerges entirely from natural behavior.”
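A stigmergy toy in this spirit (my own sketch, not the Harvard framework) shows the mechanism: agents deposit material with a probability that rises wherever material has already accumulated, and an initially bare strip develops uneven mounds with no coordination at all.

```python
# A minimal stigmergy sketch: local positive feedback, no global plan.
import numpy as np

rng = np.random.default_rng(2)
sites = np.zeros(50)                 # deposits along a 1-D strip of ground

for _ in range(4000):
    i = rng.integers(0, 50)          # an agent wanders to a random site
    # One simple local rule: the more material already nearby, the more
    # likely the agent is to add its own grain there.
    nearby = sites[max(0, i - 1): i + 2].sum()
    if rng.random() < 0.05 + 0.9 * nearby / (nearby + 5.0):
        sites[i] += 1

# Early lucky deposits attract more, so the strip ends up uneven:
# mounds emerge although no agent ever saw the global picture.
print(np.round(sites / sites.max(), 1))
```

No agent in the loop has a blueprint or a leader; the structure comes from the rule plus the environment, which is exactly the point Maisonnier draws from the termites.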
Understanding Computers Through Logic Gates
So, what do termites have to do with developing AI systems? Maisonnier explains from first principles: “If you want to understand how a computer works, you can say it’s nothing more than a network of transistors. But while you may know the basics of how transistors work, it’s hard to connect that knowledge with how a computer works at a higher level, because the conceptual gap between the two is so large.”
“But if you think about it in terms of intermediate-level functions, like logic gates, it’s easier to understand how a computer works. These intermediate-level functions emerge from networks of transistors. So the most important thing here is the intermediate-level function: the logic gate.”
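The abstraction argument is easy to demonstrate in code. In the sketch below, an idealized transistor pair is modeled as a single NAND “switch”; every other gate, and then a half adder, is built from it. The half adder is readable at the gate level but would be nearly opaque described as a raw network of switches.

```python
# Levels of abstraction, from switch to gate to arithmetic.

def nand(a: int, b: int) -> int:
    """Idealized pair of series transistors pulling the output low."""
    return 0 if (a and b) else 1

# Intermediate-level functions built purely from the NAND primitive.
def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor_(a, b):  return and_(or_(a, b), nand(a, b))

# One level higher still: a half adder is obvious in terms of gates.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)    # (sum bit, carry bit)

print(half_adder(1, 1))              # -> (0, 1)
```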
The same goes for the brain, Maisonnier said: its basic components, neurons, play the role of transistors, and the brain as a whole resembles the computer. Again, the gap between the two levels is huge; you can’t infer how the brain works based solely on what you know about neurons, he said.
AnotherBrain has developed so-called organic AI technology that converts sensors into smart sensors. (Image credit: AnotherBrain)
“But there are structures in the brain called cortical columns. If you understand what these columnar organizations are doing, and that is known science, then you can reproduce those functions either through networks of neurons, that is, neural networks, which is very difficult, or directly through classical electronics. At AnotherBrain, we use classical electronics to reproduce what those cortical columns do.”
Unlike deep learning, which explores the human brain at the micro/neuron level, AnotherBrain replicates the behavior of the brain at a more macro level, where large populations of neurons have specialized functions, such as the perception of motion or curvature. “We’re focused on creating an ecosystem, not just technology,” Maisonnier said.
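AnotherBrain has not published its algorithms, but a generic example of such a “mid-level” visual function, motion detection, fits in a few lines. This frame-differencing toy is my own illustration, not the company’s technology.

```python
# A toy "mid-level" visual function: detect which pixels moved between
# two consecutive grayscale frames.
import numpy as np

def motion_map(prev_frame: np.ndarray, frame: np.ndarray, thresh=0.2):
    """Flag pixels whose intensity changed by more than `thresh`."""
    return (np.abs(frame - prev_frame) > thresh).astype(np.uint8)

# Two 6x6 frames: a bright blob shifts one pixel to the right.
f0 = np.zeros((6, 6)); f0[2:4, 1:3] = 1.0
f1 = np.zeros((6, 6)); f1[2:4, 2:4] = 1.0
print(motion_map(f0, f1))   # 1s only at the blob's trailing/leading edges
```

The output is already meaningful to a person ("something moved here"), the kind of intermediate, explainable building block the company describes assembling its systems from.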
AnotherBrain isn’t trying to replace machine learning techniques; it’s trying to complement them. “There are a lot of applications that deep learning and neural networks can serve very well today,” Maisonnier said. Deep learning will remain relevant for many applications, he said, but others require true intelligence to understand what’s going on and how to behave in a chaotic, unpredictable world.
The technology developed by AnotherBrain, called organic AI, converts sensors into smart sensors for use in the industrial automation, Internet of Things (IoT) and automotive markets. Maisonnier said the company is already running the technology on GPUs to solve problems in industrial automation for customers, and its next target application is autonomous driving.
AnotherBrain’s technology is explainable by design and operates at the edge using minimal energy and data, allowing AI to be deployed more efficiently in factories and logistics. The company’s products include real-time robotic guidance for manufacturing and one-shot-learning visual inspection. The technology is based on a suite of bio-inspired algorithms that allow sensors to understand their surroundings and learn autonomously while explaining their decisions to users.