Hot on the heels of two massive announcements last year, Nvidia and Cerebras showed again last week that the pace of computing is still accelerating.
The first CS-2-based Condor Galaxy AI supercomputers went online in late 2023, and Cerebras is already unveiling their successor, the CS-3. It is built around the newly launched Wafer Scale Engine 3 (WSE-3), a 5 nm update to the WSE-2 boasting a staggering 900,000 AI-optimized cores with sparse compute support. The CS-3 also incorporates Qualcomm AI 100 Ultra processors to speed up inference.
Note: sparse compute is an optimization that exploits the fact that multiplying by zero always yields zero, so any product with a zero operand can be skipped outright rather than computed. On sparse data sets like neural networks, where many weights and activations are zero, skipping those operations can produce a huge speedup.
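To make the idea concrete, here is a minimal sketch (illustrative only, not Cerebras hardware logic) of a dot product that skips every multiply-accumulate step where either operand is zero:

```python
def dense_dot(weights, activations):
    """Baseline: multiply every pair, even when one side is zero."""
    return sum(w * a for w, a in zip(weights, activations))

def sparse_dot(weights, activations):
    """Skip any term where either operand is zero; the product is
    guaranteed to be zero, so it cannot change the sum."""
    total = 0.0
    for w, a in zip(weights, activations):
        if w != 0 and a != 0:  # zero-detect: skip the multiply entirely
            total += w * a
    return total

# Hypothetical values; 5 of the 8 products involve a zero and are skipped.
weights = [0.5, 0.0, 0.0, 2.0, 0.0, 0.0, 1.5, 0.0]
activations = [4.0, 3.0, 0.0, 0.5, 7.0, 0.0, 2.0, 1.0]
print(dense_dot(weights, activations))   # 6.0
print(sparse_dot(weights, activations))  # 6.0 -- same result, fewer multiplies
```

Both functions return the same answer; the sparse version simply does less work, and the savings grow with the fraction of zeros in the data.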