Thursday, May 23, 2024

Sundar Pichai on Gemini, AI progress and more


Infrastructure for the AI era: Introducing Trillium

Training state-of-the-art models requires a lot of computing power. Industry demand for ML compute has grown by a factor of 1 million in the last six years. And every year, it increases tenfold.
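As a quick sanity check on those figures, tenfold annual growth compounded over six years does yield the stated million-fold increase:

```python
# Illustrative arithmetic only: tenfold growth per year,
# compounded over six years, gives the claimed ~1,000,000x.
annual_growth = 10
years = 6
total_growth = annual_growth ** years
print(total_growth)  # 1000000
```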

Google was built for this. For 25 years, we've invested in world-class technical infrastructure, from the cutting-edge hardware that powers Search to our custom tensor processing units that power our AI advances.

Gemini was trained and served entirely on our fourth- and fifth-generation TPUs. And other leading AI companies, including Anthropic, have trained their models on TPUs as well.

Today, we're excited to announce our sixth generation of TPUs, called Trillium. Trillium is our most performant and most efficient TPU to date, delivering a 4.7x improvement in compute performance per chip over the previous generation, TPU v5e.

We'll make Trillium available to our Cloud customers in late 2024.

Alongside our TPUs, we're proud to offer CPUs and GPUs to support any workload. That includes the new Axion processors we announced last month, our first custom Arm-based CPU, which delivers industry-leading performance and energy efficiency.

We're also proud to be one of the first Cloud providers to offer Nvidia's cutting-edge Blackwell GPUs, available in early 2025. We're fortunate to have a longstanding partnership with NVIDIA, and are excited to bring Blackwell's breakthrough capabilities to our customers.

Chips are a foundational part of our integrated end-to-end system, from performance-optimized hardware and open software to flexible consumption models. This all comes together in our AI Hypercomputer, a groundbreaking supercomputer architecture.

Businesses and developers are using it to tackle more complex challenges, with more than twice the efficiency relative to just buying the raw hardware and chips. Our AI Hypercomputer advances are made possible in part by our approach to liquid cooling in our data centers.

We've been doing this for nearly a decade, long before it became state of the art for the industry. And today our total deployed fleet capacity for liquid cooling systems is nearly 1 gigawatt and growing: that's close to 70 times the capacity of any other fleet.

Underlying this is the sheer scale of our network, which connects our infrastructure globally. Our network spans more than 2 million miles of terrestrial and subsea fiber: over 10 times the reach of the next leading cloud provider.

We'll keep making the investments necessary to advance AI innovation and deliver state-of-the-art capabilities.
