New hardware offers faster computation for artificial intelligence, with much less energy

As scientists push the boundaries of machine learning, the amount of time, energy, and money required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage.

Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial “neurons” and “synapses” that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.
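As a rough illustration of how such an array computes, a crossbar of programmable resistors performs a matrix-vector multiply in a single physical step: conductances act as the weights, applied voltages as the inputs, and the current summed along each row is the output (Ohm's law plus Kirchhoff's current law). The sketch below is an idealized model with made-up values, not the MIT device:

```python
# Idealized crossbar: conductances G[i][j] play the role of synaptic
# weights, voltages V[j] are applied to the columns, and the current
# collected on row i is the dot product I[i] = sum_j G[i][j] * V[j].
# All values are illustrative, not device parameters from the paper.

def crossbar_mvm(conductances, voltages):
    """Current summed on each row of the resistor array."""
    return [sum(g * v for g, v in zip(row, voltages))
            for row in conductances]

G = [[0.2, 0.5],   # siemens (illustrative)
     [0.1, 0.4]]
V = [1.0, 2.0]     # volts

print(crossbar_mvm(G, V))  # row currents, approximately [1.2, 0.9]
```

In a physical array, the summation costs no extra steps: the currents simply add on the shared row wire, which is where the speed and energy advantage comes from.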

A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.

Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This change has enabled fabricating devices at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.

(Video) Study finds new hardware offers faster computation for artificial intelligence with much less energy

“With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

“The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime,” explains senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.

“The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”
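For a sense of scale, the field strengths implied by those numbers can be compared directly. The film and membrane thicknesses below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope field comparison using the voltages quoted
# above. The thicknesses (~10 nm film, ~5 nm cell membrane) are
# assumed round numbers for illustration, not device specifications.

V_device, t_film = 10.0, 10e-9       # volts, meters (assumed)
V_cell, t_membrane = 0.1, 5e-9       # biological action potential

E_device = V_device / t_film         # ~1e9 V/m
E_cell = V_cell / t_membrane         # ~2e7 V/m

print(f"device field   ~{E_device:.0e} V/m")
print(f"membrane field ~{E_cell:.0e} V/m")
print(f"ratio          ~{E_device / E_cell:.0f}x")
```

Under these assumptions the solid electrolyte sustains a field tens of times stronger than a biological membrane can, which is what drives the protons so much faster.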

These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.

“Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” adds lead author and MIT postdoc Murat Onen.

Co-authors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student. The research is published today in Science.


Accelerating deep learning

Analog deep learning is faster and more energy-efficient than its digital counterpart for two main reasons. First, computation is performed in memory, so enormous loads of data are not transferred back and forth between memory and a processor. Second, analog processors conduct operations in parallel: if the matrix size expands, an analog processor doesn't need more time to complete new operations, because all the computation occurs simultaneously.
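A toy comparison of sequential step counts makes that scaling argument concrete. This is an idealized model, not a benchmark of real hardware:

```python
# Idealized step counts for an n x n matrix-vector multiply: a
# sequential digital processor performs n*n multiply-accumulates one
# after another, while an analog crossbar settles all row currents
# in a single parallel physical step, whatever the matrix size.

def digital_mac_steps(n):
    return n * n   # sequential multiply-accumulate operations

def analog_steps(n):
    return 1       # one read of the settled currents

for n in (64, 1024):
    print(f"n={n}: digital={digital_mac_steps(n)}, analog={analog_steps(n)}")
```

Real digital hardware parallelizes heavily, of course; the point is that the analog step count does not grow with the matrix at all.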

The key element of MIT’s new analog processor technology is known as a protonic programmable resistor. These resistors, which are measured in nanometers (one nanometer is one billionth of a meter), are arranged in an array, like a chess board.

In the human brain, learning happens due to the strengthening and weakening of connections between neurons, called synapses. Deep neural networks have long adopted this strategy, where the network weights are programmed through training algorithms. In the case of this new processor, increasing and decreasing the electrical conductance of protonic resistors enables analog machine learning.

The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease conductance protons are taken out. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons.
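A minimal sketch of this programming scheme, with an assumed linear step size and conductance bounds (real devices are not this tidy):

```python
# Sketch of programming a protonic resistor's weight: a voltage pulse
# of one polarity pushes protons into the channel (raising the
# conductance), the opposite polarity pulls them out. The linear step
# and the 0..1 bounds are illustrative assumptions, not measured data.

class ProtonicResistor:
    def __init__(self, g=0.5, step=0.05, g_min=0.0, g_max=1.0):
        self.g, self.step = g, step
        self.g_min, self.g_max = g_min, g_max

    def pulse(self, polarity):
        """polarity=+1 inserts protons (potentiate), -1 removes them."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))
        return self.g

r = ProtonicResistor()
r.pulse(+1); r.pulse(+1)   # two potentiating pulses
print(round(r.g, 2))       # 0.6
r.pulse(-1)                # one depressing pulse
print(round(r.g, 2))       # 0.55
```

Training then amounts to nudging each resistor's conductance up or down with pulses, in place of updating a stored digital weight.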

To develop a super-fast and highly energy efficient programmable protonic resistor, the researchers looked to different materials for the electrolyte. While other devices used organic compounds, Onen focused on inorganic phosphosilicate glass (PSG).


PSG is basically silicon dioxide, the powdery desiccant found in the tiny bags that come in the box with new furniture to remove moisture. It has been studied as a proton conductor under humidified conditions for fuel cells, and it is also the most well-known oxide used in silicon processing. To make PSG, a tiny bit of phosphorus is added to the silicon dioxide, which gives it special characteristics for proton conduction.

Onen hypothesized that an optimized PSG could have a high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.

Surprising speed

PSG enables ultrafast proton movement because it contains a multitude of nanometer-sized pores whose surfaces provide paths for proton diffusion. It can also withstand very strong, pulsed electric fields. This is critical, Onen explains, because applying more voltage to the device enables protons to move at blinding speeds.

“The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting,” he says.

“The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.


Because the protons don’t damage the material, the resistor can run for millions of cycles without breaking down. This new electrolyte enabled a programmable protonic resistor that is a million times faster than their previous device and can operate effectively at room temperature, which is important for incorporating it into computing hardware.

Thanks to the insulating properties of PSG, almost no electric current passes through the material as protons move. This makes the device extremely energy efficient, Onen adds.

Now that they have demonstrated the effectiveness of these programmable resistors, the researchers plan to reengineer them for high-volume manufacturing, says del Alamo. Then they can study the properties of resistor arrays and scale them up so they can be embedded into systems.

At the same time, they plan to study the materials to remove bottlenecks that limit the voltage that is required to efficiently transfer the protons to, through, and from the electrolyte.

“Another exciting direction that these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence,” adds Yildiz.

“The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” del Alamo says.


“Intercalation reactions such as those found in lithium-ion batteries have been explored extensively for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance,” says William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this research. “It lays the foundation for a new class of memory devices for powering deep learning algorithms.”

“This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates,” says Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who was not involved with this work. “I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices.”

This research is funded, in part, by the MIT-IBM Watson AI Lab.
