On 19 November, the German startup Q.ANT announced the launch of its first commercial product—a photonics-based Native Processing Unit (NPU) for high-performance computing and artificial intelligence (AI) applications in a 19-inch, rack-mountable server.
The company, a spinoff of the global laser and machine-tool giant Trumpf, says the new NPU will deliver a 30-fold improvement in energy efficiency along with significantly faster computational speeds compared with traditional CMOS technology. This leap could have far-reaching implications for reducing the carbon footprint and operational costs of AI-driven industries and data centers around the world.
“For the first time, developers can create AI applications and explore the capabilities of photonic computing, particularly for complex, nonlinear calculations,” said Michael Förtsch, CEO of Q.ANT. “For us, this is not just a processor—it’s a statement of intent: Sustainability and performance can go hand in hand,” he added.
Leveraging light for speed
Based on Q.ANT’s proprietary LENA (light-empowered native arithmetic) architecture, the Q.ANT NPU uses light rather than electrons to transmit and process data. Photonics can run at a bandwidth of a few tens of GHz, compared with a few GHz for digital electronics, allowing more operations per second.
For example, while a conventional CMOS processor requires 1,200 transistors to perform a simple 8-bit multiplication, Q.ANT’s NPUs achieve this with a single optical element. Using multiple wavelengths of light to run calculations on one chip also increases compute density.
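To see how these factors combine, the back-of-the-envelope sketch below estimates analog multiply throughput as the product of modulation bandwidth, the number of optical multiplier elements on a chip and the number of wavelength channels. All figures in it are illustrative assumptions chosen only to show the scaling, not Q.ANT specifications.

```python
# Illustrative back-of-the-envelope estimate (not Q.ANT specifications):
# analog photonic throughput scales with modulation bandwidth, the number
# of optical multiplier elements on the chip, and the number of wavelength
# channels multiplexed through each element.

def analog_multiplies_per_second(bandwidth_hz: float,
                                 n_multipliers: int,
                                 n_wavelengths: int) -> float:
    """Each element performs one multiplication per modulation cycle per wavelength."""
    return bandwidth_hz * n_multipliers * n_wavelengths

# Hypothetical parameter values, picked purely to illustrate the comparison:
photonic = analog_multiplies_per_second(bandwidth_hz=30e9, n_multipliers=64, n_wavelengths=8)
electronic = analog_multiplies_per_second(bandwidth_hz=3e9, n_multipliers=64, n_wavelengths=1)

print(f"photonic:   {photonic:.2e} multiplies/s")
print(f"electronic: {electronic:.2e} multiplies/s")
print(f"ratio:      {photonic / electronic:.0f}x")
```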
In addition, because Q.ANT’s NPU is delivered on an industry-standard PCI Express (PCIe) card, it is compatible with current devices and can be upgraded with additional PCIe cards for more processing power in the future. The LENA platform is built on thin-film lithium-niobate-on-insulator chips, a photonic material Q.ANT has been developing since its founding in 2018 to enable precise control of light.
In September of this year, Q.ANT gave users cloud access to demonstrate how its photonic chip technology can perform complex AI-based tasks. In the showcase system, users can select an image of a handwritten number from the Modified National Institute of Standards and Technology (MNIST) database. Running a trained neural network, the NPU performs the matrix–vector multiplication on the photonic chip and predicts the digit (from 0 to 9).
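The matrix–vector multiplication in that demonstration is the same operation at the heart of most neural-network inference. The short sketch below reproduces it in plain NumPy for a single-layer MNIST classifier; the weights and input are random placeholders rather than the trained network in Q.ANT’s showcase, and the photonic chip would take over only the matrix–vector product itself.

```python
import numpy as np

# Minimal sketch of the math offloaded to the photonic chip in the MNIST demo:
# a trained single-layer classifier reduces to one matrix-vector multiplication
# followed by an argmax over the ten digit classes. Weights and input here are
# random placeholders, not the trained network used in Q.ANT's showcase.

rng = np.random.default_rng(0)

W = rng.normal(size=(10, 784))   # trained weights: 10 digit classes x 784 pixels
b = rng.normal(size=10)          # per-class bias
x = rng.random(784)              # flattened 28x28 grayscale image, values in [0, 1]

logits = W @ x + b               # the matrix-vector multiply performed optically
predicted_digit = int(np.argmax(logits))

print(f"predicted digit: {predicted_digit}")
```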
Need for efficient solutions
“This new processor generation finally gives access to superior mathematical operations, which have been too energy-demanding on traditional GPUs,” said Eric Mounier, chief analyst of photonics and sensing at Yole Group, France.
The rapid growth of AI has driven demand for energy efficiency in an already-strained computing industry. Alongside the training of new large language models, AI inference is a particularly energy-intensive workload. “The first impact is expected in AI inference and training performance, paving the way for high-efficiency, sustainable AI computing,” Mounier said.
A challenging landscape
While myriad companies have pursued photonic processors—including Intel Corp., USA, and Lightmatter, USA, which was recently valued at US$4 billion—some have scaled back or pivoted to other technologies due to the complexity of manufacturing photonic components, difficulties associated with system integration and the cost of precise alignment.
At a minimum, Q.ANT’s new product rekindles hope that these challenges can be overcome, enabling more environmentally friendly computing solutions moving forward. “Imagine a future where high-performance computing operates with minimal energy and at least as powerful as our brain,” said Förtsch. “This is the vision behind native computing.”