Gold: Intel is building a silicon brain

The battle to enable AI computing is intense. Every chip vendor touts AI accelerators it claims are better than anything else on the market.

Yet the AI market is highly diverse, and each product has its own place and its own benefits – from Nvidia's massively parallel systems focused on machine learning, to Intel's chips more focused on data center inference workloads, down to smartphones and other mobile devices, where Qualcomm has incorporated significant AI acceleration for things like image processing and security capabilities.

The truth is, AI is not a single market; rather, it is a series of different submarkets that cannot all be served by one type of chip.

Intel is exploring a different approach as an addition to its current AI acceleration capabilities, not a replacement for its current devices. While Intel is not the only company exploring this approach, it is building neuromorphic chips that emulate the brain. Instead of the traditional massively parallel designs based on the Von Neumann architecture – processing elements and shared memory systems that have powered computers from the start – Intel is experimenting with an approach that combines many individual computing elements (processing capacity with dedicated memory cells) and connects them via high-speed mesh networks. It is now on its second generation of neuromorphic chips with the faster, more densely configured Loihi 2.
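The contrast between the two architectures can be sketched in a few lines of code. The following is a hypothetical, highly simplified illustration – not Intel's actual design, and all class and variable names are invented: each "core" keeps its neuron state in dedicated local memory and exchanges only spike events with other cores, rather than sharing one memory system the way a Von Neumann machine does.

```python
import random

# Hypothetical sketch (not Intel's actual design): each core holds its own
# neuron state locally; cores exchange only spike events over the "mesh".

class NeuroCore:
    """One computing element with dedicated local memory for neuron state."""
    def __init__(self, n_neurons, threshold=1.0, leak=0.9):
        self.v = [0.0] * n_neurons  # membrane potentials (local memory)
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        """Leaky integrate-and-fire: leak old state, add input, emit spikes."""
        spikes = []
        for i, current in enumerate(input_current):
            self.v[i] = self.leak * self.v[i] + current
            fired = self.v[i] >= self.threshold
            if fired:
                self.v[i] = 0.0  # reset neurons that fired
            spikes.append(fired)
        return spikes

# Two cores on a mesh: only spike events (booleans) cross the link.
rng = random.Random(42)
core0, core1 = NeuroCore(4), NeuroCore(4)
# Synaptic weights stored locally at the receiving core.
weights = [[rng.uniform(0.2, 0.6) for _ in range(4)] for _ in range(4)]

spikes0 = core0.step([rng.uniform(0.0, 1.5) for _ in range(4)])
# Core 1 weights the incoming spike events with its own local synapses.
fan_in = [sum(w[j] for j, s in enumerate(spikes0) if s) for w in weights]
spikes1 = core1.step(fan_in)
print(spikes0, spikes1)
```

The point of the sketch is that no raw data or shared memory crosses between cores – only sparse, binary events – which is why such designs can, in principle, operate at far lower power.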

By emulating the biological functions of the brain with densely packed neurons and synaptic equivalents, Intel expects to be able to build chips for certain types of AI functions that are very low power compared to traditionally designed massively parallel AI chips – with chip power measured in milliwatts versus tens to hundreds of watts for traditional chips. While not a replacement for all AI learning systems, the approach shows promise in specialized workloads such as routing, scheduling, and audio/video processing, where biological brains are far more capable than current AI systems.

Intel is not currently planning to sell the Loihi chips. Rather, it plans to build a limited number of systems and "lease" them to groups and institutions that can experiment with and advance the neuromorphic ecosystem by creating specialized programming and similar enhancements that can ultimately be shared and integrated into the open source ecosystem. At present, Loihi and neuromorphic computing are clearly in the experimental stages of development, with mass deployments at least five years away.

Perhaps even more important than the hardware is the software Intel is making available to enable neuromorphic capability: Lava and Magma.

Lava is an open source framework intended to enable the modeling and development of neuromorphic code. It is ultimately intended to produce programs that run on Loihi 2 chips, but it can also generate a version of the code that runs on a standard processor, allowing code to be evaluated on commonly available computers. While the initial version of the platform is still under development, it should ultimately be similar in approach to what Intel has done with its oneAPI code: create an environment that lets code be written once and then run on the most compatible and/or optimized platform (e.g., CPU, GPU, FPGA, AI accelerator). Additionally, since Lava is open source, community improvements will make it more relevant to users over time.
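The write-once, run-anywhere pattern described above can be sketched roughly as follows. This is a hypothetical illustration of the pattern, not Lava's actual API – every class and method name here is invented: the same model object can be handed to either a CPU simulator backend or a (stubbed) hardware backend.

```python
# Hypothetical illustration of the write-once, run-anywhere pattern the
# article describes -- NOT Lava's actual API; all names here are invented.

class CPUSimBackend:
    """Runs a neuromorphic model step by step on an ordinary processor."""
    def run(self, model, steps):
        for _ in range(steps):
            model.step()
        return model.state()

class LoihiBackend:
    """Placeholder for compiling the same model down to hardware."""
    def run(self, model, steps):
        raise NotImplementedError("requires neuromorphic hardware")

class LeakyAccumulator:
    """A trivial 'model': one leaky state variable driven by constant input."""
    def __init__(self, leak=0.5, drive=1.0):
        self.v, self.leak, self.drive = 0.0, leak, drive
    def step(self):
        self.v = self.leak * self.v + self.drive
    def state(self):
        return self.v

# The same model definition works with either backend; today only the CPU
# simulator runs on commonly available computers.
result = CPUSimBackend().run(LeakyAccumulator(), steps=10)
print(round(result, 4))  # -> 1.998
```

The design choice mirrored here is the one the article attributes to both Lava and oneAPI: the model is described once, and the choice of execution target is deferred to a separate, swappable layer.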

Magma is the code that runs in the Loihi chip environment and will remain Intel's exclusive property, much like the x86 instruction set. Translating Lava to Magma will be a key part of delivering the higher-level capabilities of the Lava environment, much as higher-level languages like C and Python are translated down to machine instructions today.

All of this means that in five to ten years you can expect to see a range of neuromorphic subsystems integrated into SoCs and optimized for specific AI and deep learning workloads. Neuromorphic chips are unlikely to become stand-alone servers; instead, they will serve as complementary, special-purpose accelerators within dedicated SoCs, much as GPUs accompany a CPU today.

Conclusion: AI will ultimately not be served by a single type of processing architecture. By adding the ability to include neuromorphic computing, Intel is adding another tool to the growing set of specialized functions that will ultimately speed up AI processing dramatically. Intel is betting on a technology that has yet to be proven, but if successful, it adds a key piece to a growing array of AI-specific processing capabilities that will allow a much broader deployment of AI-powered systems over the longer term. This will ultimately result in a wider range of systems deployed by businesses and end users. Neuromorphic computing is definitely a technology to watch over the next few years.

Jack Gold is Founder and Senior Analyst at J. Gold Associates, LLC, an information technology analyst firm based in Northborough, Massachusetts. He has over 25 years of experience as an analyst and covers many aspects of business and consumer computing and emerging technologies. He works with many companies. Follow Jack on Twitter and on LinkedIn.


