Hardware platform for AI accelerators reduces energy use

Hardware platform for AI accelerators based on photonic integrated circuits.

While the disruptive nature of AI is clearly visible across industries, significant challenges remain, including growing workloads and the energy consumption that comes with them. Given the demands of deep learning and big data, AI must access sufficient processing power to train its models, and today’s graphics processing units (GPUs) cannot keep up with that power demand, according to researchers. A 2025 IEEE study demonstrates a new hardware platform for AI accelerators, built on silicon photonics, that handles larger workloads while lowering energy use.

AI applications such as natural language processing, autonomous driving, large language models, and edge computing are driving the need to process huge datasets efficiently. In a trend reminiscent of Moore’s Law, the number of model hyperparameters keeps doubling, but the doubling period is now every 3.5 months, according to researchers.

A novel approach to solving key challenges

A recent study, “Large-Scale Integrated Photonic Device Platform for Energy-Efficient AI/ML Accelerators,” published in the IEEE Journal of Selected Topics in Quantum Electronics, offers a potential solution: an AI acceleration platform based on photonic integrated circuits (PICs) that meets all of the advanced computational power, sustainability, and energy-efficiency demands.

Leading the project at Hewlett Packard Labs, senior research scientist Bassem Tossoun focused on how PICs that leverage III-V compound semiconductors can execute AI workloads efficiently. The project showed that photonic AI accelerators, which use optical neural networks (ONNs), operate at the speed of light with minimal energy loss, unlike traditional AI hardware that runs deep neural networks (DNNs) on electronics.

Tossoun asserts that while silicon photonics is easy to manufacture, it is difficult to scale into complex ICs. The AI acceleration platform can serve as a set of building blocks for photonic accelerators with superior energy efficiency and scalability compared with GPU-based solutions, he said.

The report explains that fundamental limitations in modern computing make the execution of AI algorithms inefficient. For example, most computers are built on the von Neumann architecture, in which memory and processing units are separate and interconnects between them carry the data. This arrangement creates the von Neumann bottleneck: the interconnect limits the rate at which data can move between processor and memory.

In addition, the “memory wall” represents an obstacle in this architecture: it caps system performance at the performance of the memory, the report noted, and processor chips improve faster than memory chips year after year. Most of the energy computers consume while executing neural network operations goes to data movement, which charges and discharges metal wires and dissipates heat and power in the process.
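To put that imbalance in rough numbers, the short Python sketch below compares the energy of the arithmetic in a matrix-vector product against the energy of fetching its weights from off-chip DRAM. The per-operation energies are commonly cited ballpark figures for ~45 nm CMOS, not values from the study, and the no-reuse access pattern is a deliberate simplification.

```python
# Back-of-the-envelope illustration of the "memory wall" energy gap.
# The per-operation energies are rough, commonly cited figures for
# ~45 nm CMOS; they are assumptions for illustration, not numbers
# from the IEEE study.

E_MAC_PJ = 3.7      # ~energy of one 32-bit float multiply, in pJ (assumed)
E_DRAM_PJ = 640.0   # ~energy of one 32-bit off-chip DRAM read, in pJ (assumed)

# A 1,024 x 1,024 matrix-vector product: ~1M MACs and, naively
# (with no on-chip weight reuse), ~1M weight fetches from DRAM.
n = 1024
macs = n * n
fetches = n * n

compute_energy_uj = macs * E_MAC_PJ * 1e-6
movement_energy_uj = fetches * E_DRAM_PJ * 1e-6

print(f"compute:  {compute_energy_uj:8.1f} uJ")
print(f"movement: {movement_energy_uj:8.1f} uJ")
print(f"data movement costs ~{movement_energy_uj / compute_energy_uj:.0f}x the arithmetic")
```

Even under these simplified assumptions, moving the weights costs two orders of magnitude more energy than computing with them, which is the inefficiency the photonic approach targets.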

To implement a neural layer in photonics, both linear and nonlinear operations must be performed, along with light generation or amplification stages that keep ONNs scalable, according to the researchers. Because no comprehensive platform existed for integrating complete DNN layers, demonstrations relied on prototypes that implemented individual stages of a neural layer, limiting ONN scalability and preventing ONNs from achieving performance competitive with digital ANNs.
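As a rough illustration of what a complete photonic neural layer computes, the NumPy sketch below follows the common ONN recipe: factor the weight matrix by SVD (two MZI meshes physically realize the unitaries, and a diagonal stage of attenuators or amplifiers realizes the singular values), then apply a saturating optoelectronic activation. The function names and the saturable response are illustrative assumptions, not the paper’s devices.

```python
# Minimal numerical sketch of one photonic neural layer, assuming the
# usual ONN decomposition W = U @ diag(S) @ Vh. Names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def onn_layer(x, W, sat_power=1.0):
    # Linear part: the SVD mirrors how two MZI meshes (U, Vh) plus a
    # diagonal gain/attenuation stage (S) physically realize W.
    U, S, Vh = np.linalg.svd(W)
    fields = U @ (S * (Vh @ x))   # optical fields after the meshes

    # Nonlinear part: photodetection is square-law; model the
    # electro-optic activation as a simple saturable response, an
    # assumed stand-in for the platform's optoelectronic neurons.
    power = np.abs(fields) ** 2
    return power / (1.0 + power / sat_power)

x = rng.normal(size=8)
W = rng.normal(size=(8, 8)) / np.sqrt(8)
print(onn_layer(x, W))
```

The point of the decomposition is that each factor maps onto hardware the platform can integrate: meshes for the unitaries, amplifiers for gain, and optoelectronic devices for the nonlinearity.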

Scaling to large-scale (e.g., 1,024 × 1,024) ONNs demands O(N²) Mach-Zehnder interferometers (MZIs) and O(N) cascaded stages, creating huge optical losses, control complexity, and large circuit footprints, according to the report. Even though low-loss waveguides on silicon photonic platforms can now achieve below 0.1 dB/cm, each device contributes its own insertion loss, adding to the total system loss.
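A quick loss-budget estimate shows why this scaling is punishing. The sketch below assumes a rectangular (Clements-style) mesh, an illustrative 0.1 dB insertion loss per MZI, and the 0.1 dB/cm waveguide figure from the report; the routing length is a placeholder assumption.

```python
# Rough loss budget for a large MZI mesh, using assumed per-device figures.
# A rectangular (Clements-style) mesh for an N x N unitary needs N(N-1)/2
# MZIs and has an optical depth of N cascaded MZI stages.
N = 1024
mzi_count = N * (N - 1) // 2
depth = N                    # cascaded stages on the longest path

IL_PER_MZI_DB = 0.1          # assumed insertion loss per MZI, in dB
WG_LOSS_DB_PER_CM = 0.1      # low-loss silicon waveguide (from the report)
PATH_LENGTH_CM = 10.0        # assumed on-chip routing length

total_db = depth * IL_PER_MZI_DB + PATH_LENGTH_CM * WG_LOSS_DB_PER_CM
print(f"{mzi_count:,} MZIs, {depth} cascaded stages")
print(f"worst-case path loss: ~{total_db:.0f} dB")  # ~103 dB at N = 1,024
```

A ~103 dB path loss is far beyond any realistic optical power budget, which is why on-chip amplification and lower-loss devices are central to the platform.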

Another major challenge cited in the report is the lack of a device platform that can monolithically integrate optical neurons with photodetectors (PDs), electrical neuron circuits, light emitters, memory, and synaptic interconnects on silicon. Because silicon is an indirect-bandgap material, silicon light emitters are inefficient, researchers said, and aligning III-V diode laser chips to silicon photonic chips induces coupling losses and packaging complexity, limiting energy efficiency and integration density.

Arriving at an AI acceleration platform

Hewlett Packard Labs developed a scalable III-V-on-silicon photonic platform as a device-level foundation for new photonic computing architectures. The platform makes it possible to physically instantiate each fundamental building block of DNNs at wafer scale. Nonlinear activations were created using active optoelectronic devices such as quantum-dot avalanche photodiodes, lasers, and amplifiers, eliminating the need to move data off the chip between neural layers.

Researchers have developed a new hardware platform for AI accelerators using PICs on a silicon chip. (Source: IEEE Photonics Society)

The team used a heterogeneous integration approach to fabricate the hardware, combining silicon photonics with III-V compound semiconductors that integrate lasers and optical amplifiers to reduce optical losses and improve scalability. According to the researchers, the III-V semiconductors enable PICs of greater density and complexity that can run all of the operations required to support neural networks.

The heterogeneous III-V-on-SOI (silicon-on-insulator) platform provides the essential components needed to develop photonic and optoelectronic computing architectures for AI/ML acceleration, according to Tossoun.

The platform also features unique functionalities, including novel optical memory based on optical memristors with nonvolatile MOSCAP (metal-oxide-semiconductor capacitor) layers. These offer an alternative to conventional nonvolatile optical memories based on optical phase-change materials, with higher switching speeds and lower power consumption.
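The power advantage of nonvolatility is easy to see in a toy model: a conventional thermo-optic phase shifter must be biased continuously to hold a stored weight, while a memristor-style nonvolatile shifter retains its state unpowered. The holding-power figure below is an order-of-magnitude assumption for illustration, not a measured value from the platform.

```python
# Sketch of why nonvolatile (MOSCAP memristor-style) phase shifters save
# power versus thermo-optic ones: once programmed, they hold their state
# with no static bias, while a heater must be driven continuously.
# The power figure is an assumption for illustration only.

HEATER_MW_PER_PI = 20.0  # assumed static power (mW) to hold a pi phase shift thermally

def static_power_mw(n_weights, nonvolatile):
    """Holding power for n_weights stored phase settings (avg. pi/2 each)."""
    if nonvolatile:
        return 0.0                         # state retained with bias removed
    return n_weights * HEATER_MW_PER_PI * 0.5

n = 1024 * 1024                            # weights of a 1,024 x 1,024 layer
print(f"thermal:     {static_power_mw(n, False) / 1e6:.1f} kW")  # untenable at scale
print(f"nonvolatile: {static_power_mw(n, True):.1f} mW")
```

Under these assumptions, simply holding a million thermally tuned weights would burn roughly 10 kW, which is why nonvolatile weight storage matters for large optical computing PICs.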

The researchers found that running workloads on application-specific PICs that make proper use of nonvolatile devices can significantly reduce the overall power consumption of optical computing PICs.

This photonic platform achieves wafer-scale integration of the devices required to build an optical neural network on a single photonic chip, according to the researchers. These include active devices such as on-chip lasers and amplifiers, high-speed PDs, energy-efficient modulators, and nonvolatile phase shifters.

The platform enables the development of tensorized ONN-based accelerators with a footprint energy efficiency 2.9 × 10² times higher than that of other photonic platforms and 1.4 × 10² times higher than that of the most advanced digital electronics.

Researchers believe this platform will enable data centers to accommodate more AI workloads and solve several optimization challenges, providing a breakthrough technology for AI/ML acceleration while reducing energy costs and improving computational efficiency.
