NVIDIA’s upcoming Feynman GPUs will be the first to include Co-Packaged Optics (CPO), a significant shift from earlier plans. This advanced technology uses light instead of traditional copper to transmit signals, a crucial step for the next generation of AI development.
CPO, built on silicon photonics, reduces the reliance on copper cabling by packaging optical components directly alongside hardware accelerators such as GPUs. The technology is set to be a key interconnect for future “AI factories,” promising faster connections and high-bandwidth links between CPUs and GPUs.
Originally, CPO was not expected to be ready for widespread use until 2033. NVIDIA, however, has pulled that timeline in by five years, now aiming for 2028 with its Feynman GPUs. A report from Nikkei Xtech explains that as AI systems grow larger, the distances between platforms can stretch beyond 10 kilometers. Data must move across those links at hundreds of gigabits per second or more, speeds that traditional copper cables struggle to sustain over such distances. Optics provide the solution.
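To see why distance, not raw link speed, becomes the dominant cost at this scale, a back-of-the-envelope calculation helps. The sketch below assumes a standard single-mode fiber group index of roughly 1.47 and an illustrative 800 Gb/s lane rate; both figures are representative assumptions, not values from the report.

```python
# Back-of-the-envelope latency math for a 10 km optical link.
# Assumptions (not from the source article): single-mode fiber with a
# group index of ~1.47, and an 800 Gb/s lane rate as a representative speed.
C = 299_792_458            # speed of light in vacuum, m/s
GROUP_INDEX = 1.47         # typical group index of single-mode fiber
LINK_LENGTH_M = 10_000     # 10 km, the distance cited in the report
LANE_RATE_BPS = 800e9      # 800 Gb/s illustrative lane rate

# One-way propagation delay: distance divided by the speed of light in fiber.
propagation_delay_s = LINK_LENGTH_M / (C / GROUP_INDEX)

# Time to push a 1 KiB message onto the wire at the lane rate.
serialization_s = (1024 * 8) / LANE_RATE_BPS

print(f"One-way propagation delay: {propagation_delay_s * 1e6:.1f} us")
print(f"1 KiB serialization time:  {serialization_s * 1e9:.2f} ns")
```

The point of the comparison: serializing a small message at 800 Gb/s takes about 10 nanoseconds, while simply traversing 10 km of fiber takes roughly 49 microseconds, several thousand times longer. Faster signaling cannot shrink that floor, which is why the interconnect medium itself, copper versus optics, becomes the deciding factor at data-center scale.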
Recognizing this need, the Optical Compute Interconnect Multi-Source Agreement (OCI-MSA) was formed in March. Major AI companies, including NVIDIA, Broadcom, AMD, Meta, OpenAI, and Microsoft, are part of this group. NVIDIA, being a leader in the field, plans to introduce its first Co-Packaged Optics solution with Feynman GPUs in 2028.
At GTC 2026, NVIDIA also confirmed that Feynman GPUs will use 3D Die Stacking technology. This could mean we will see NVIDIA’s first GPUs with multiple dies stacked in three dimensions. It also appears that NVIDIA will work with Intel as a manufacturing partner, using Intel’s advanced packaging techniques like EMIB to produce Feynman chips.
Another exciting detail is that Feynman GPUs will feature custom HBM (High Bandwidth Memory) technology, moving beyond the standard next-gen HBM. While Rubin GPUs will use HBM4 and Rubin Ultra will use HBM4E, Feynman’s solution might be a customized or enhanced version of HBM4E, or even a unique HBM5 solution, setting it apart from standard offerings.
NVIDIA also confirmed the name of its next-generation data center CPU architecture. Feynman will not pair with the Vera CPU; instead, it will feature a brand new CPU called Rosa, named after Rosalyn Sussman Yalow, the American medical physicist and Nobel laureate. While no specific details are available yet, given NVIDIA’s track record, we can expect significant advancements.
Alongside these major components, NVIDIA will continue to release a full range of chips for its AI platforms, including BlueField-5, NVLink 8 CPO, Spectrum 7 204T CPO, and CX10. NVIDIA’s Rosa Feynman solutions are expected in 2028. AMD is also reportedly developing its own Co-Packaged Optics with GlobalFoundries, with an initial rollout expected around the same 2028 timeframe for its MI500 GPUs.