Nvidia Unveils NVLink Fusion to Accelerate AI Chip Connectivity, Expands Footprint in Taiwan

Nvidia has announced a major expansion of its chip technology offerings with the launch of NVLink Fusion, a new interconnect platform designed to enhance the performance of custom-built artificial intelligence (AI) systems by improving chip-to-chip communication. The company made the announcement on Monday during the opening keynote of Computex 2025 in Taipei, with CEO Jensen Huang presenting from the Taipei Music Center.

NVLink Fusion: A Step Forward in AI Chip Interconnectivity

NVLink Fusion represents the latest evolution of Nvidia’s long-standing NVLink technology, which allows massive data exchange between processors. Unlike its predecessors, NVLink Fusion will be made available to third-party chip designers, enabling them to build custom AI systems with highly efficient, multi-chip architectures.

Industry players Marvell Technology and MediaTek have already committed to adopting NVLink Fusion for their future chip development projects, highlighting the growing demand for advanced interconnect solutions in the AI hardware landscape.

Huang emphasized that NVLink Fusion will help designers overcome the bottlenecks typically encountered when linking numerous chips for AI workloads. This is especially crucial in building large-scale AI models and deploying inference systems that demand ultra-fast data transfer and synchronized processing across multiple chips.

Nvidia’s Shift From Gaming to AI Dominance

During his keynote, Huang reflected on Nvidia’s transformation. Once known primarily for its graphics processing units (GPUs) used in gaming, Nvidia has now emerged as the leading supplier of AI chips, driving the boom in generative AI technologies since ChatGPT’s breakthrough in 2022.

“Not long ago, 90% of my presentations were focused on graphics chips,” Huang remarked. “But the world has changed, and so has Nvidia.” Today, the company leads in designing AI accelerators, supercomputing systems, and the software ecosystem that supports AI deployment at scale.

Nvidia’s flagship GB200 chip, which integrates two Blackwell GPUs and a Grace CPU, already utilizes NVLink to transfer large volumes of data at high speed. The company sees NVLink Fusion as a vital enabler for more flexible and expansive custom chip solutions for AI workloads.

Taiwan Headquarters and a Long-Term Commitment to Asia

In a move underscoring the company’s growing global presence, Huang also announced plans to build a new Taiwan headquarters in the northern suburbs of Taipei. The facility will serve as a hub for regional operations, research collaborations, and support for Taiwan’s vibrant chip manufacturing ecosystem.

This expansion solidifies Nvidia’s strategic presence in Asia at a time when geopolitical tensions and trade policies are prompting many tech firms to diversify supply chains and production bases outside the U.S.

New Chip Roadmap: Blackwell Ultra, Rubin, and Feynman

Huang used the opportunity at Computex to outline Nvidia’s long-term roadmap for AI chips, indicating the company’s deep pipeline of future innovation. Key developments include:

  • Blackwell Ultra: An enhanced version of Nvidia’s current-generation Blackwell AI chip, expected to roll out later in 2025.

  • Rubin chips: A next-generation architecture following Blackwell, focused on greater energy efficiency and performance.

  • Feynman processors: Set for release in 2028, these chips are designed to power the next era of AI computing.

Nvidia also revealed that its DGX Spark, a desktop AI system aimed at researchers and smaller institutions, has entered full production and will begin shipping within weeks. The system is part of Nvidia’s ongoing effort to democratize access to high-performance AI computing beyond large data centers.

Designing CPUs for Windows and Expanding the Ecosystem

Adding to its growing ambitions, Nvidia is reportedly designing CPUs compatible with Microsoft’s Windows operating system, using technology from Arm Holdings. These chips could position Nvidia to compete more directly with Intel and AMD in the general-purpose computing market, especially for AI-driven applications.

This move would significantly expand Nvidia’s role in the broader computing landscape, bridging the gap between AI-specialized hardware and conventional desktop and server environments.

The Return of “Jensanity” and Global Attention

Jensen Huang’s presence at Computex once again drew intense public and media attention, reminiscent of last year’s phenomenon dubbed “Jensanity” in Taiwan. As one of the most prominent figures in global tech today, Huang was mobbed by attendees, underlining the excitement surrounding Nvidia’s innovations.

Computex 2025, which runs from May 20 to 23, is expected to host 1,400 exhibitors and serves as the first major gathering of semiconductor and computing executives in Asia since the U.S. proposed new tariffs aimed at incentivizing domestic production.

A Broader Vision for AI

During Nvidia’s annual developer conference in March, Huang emphasized that the company’s focus had shifted from merely building large AI models to enabling a wide array of applications that run on these models. This includes everything from healthcare diagnostics to autonomous vehicles, digital twins, and advanced robotics.

Monday’s announcements at Computex reinforced that vision. Nvidia’s new technologies—from NVLink Fusion to DGX Spark—highlight a broader ecosystem approach: enabling the AI revolution not just with raw power but with interconnected, scalable, and accessible infrastructure.

With these new developments, Nvidia is not only maintaining its lead in AI hardware but also setting the pace for how the global tech industry will build, deploy, and scale intelligent systems in the years ahead.
