Meta AMD 6GW GPU Deal: Massive AI Build-Out

AD News Live

February 24, 2026


Meta has signed a long-term agreement with AMD to secure a major supply of artificial intelligence chips.

    
Data center racks featuring AMD MI450 GPUs and EPYC CPUs for AI infrastructure.

The arrangement will enable Meta to acquire more than 6 gigawatts of AI chip capacity over several years.


The move reflects Meta’s aggressive expansion of its AI infrastructure to support future products and services.


AMD confirmed the agreement on Tuesday. As part of the deal, AMD will award Meta 160 million shares of its common stock.


The stock will vest in stages based on AMD meeting specific delivery and performance targets.


The first portion will vest after AMD completes shipment of its initial 1 gigawatt of AI chips to Meta.


The agreement underscores the intensifying competition among technology companies to secure high-performance chips.


It also strengthens AMD’s position as a key supplier in the rapidly growing AI hardware market.


Shares of AMD climbed sharply in early trading following the announcement.


The stock rose by as much as 10% in premarket activity. However, the rally slowed after the opening bell, with shares giving back some of the initial gains.


AMD Chief Financial Officer Jean Hu said the partnership is expected to deliver strong revenue growth over the coming years.


She stated that the deal will also have a positive impact on the company’s adjusted earnings per share.


Hu said the agreement represents meaningful progress toward AMD’s long-term financial objectives.


She emphasized that the deal’s performance-based structure connects both companies’ success to execution.


Hu said the arrangement is designed to ensure sustained collaboration and long-term value for both AMD and Meta.


AMD announced that its next-generation MI450 graphics processing units will begin rolling out later this year.


The chips will be installed in the company’s Helios rack-scale data center platforms. These systems will also feature AMD’s EPYC server processors.


The deployment is expected to begin in the second half of the year.


Meta said it plans to expand its processor purchases as part of its broader artificial intelligence strategy.


The company will acquire AMD’s Venice central processing units.


Meta will also adopt AMD’s upcoming Verano processor once it becomes available.


Central processing units are becoming essential in modern AI data centers. They support critical workloads tied to advanced AI applications.


This includes powering inference operations, which allow trained AI systems to generate responses and perform real-time tasks.


The growing demand reflects the industry’s shift toward scalable and efficient AI deployment.


Meta announced another long-term partnership last week with chipmaker Nvidia to strengthen its artificial intelligence computing capacity.


Under the agreement, Nvidia will supply Meta with millions of its advanced Blackwell and Rubin graphics processors.


These chips are designed to handle complex AI workloads across large data center networks.


Nvidia also said Meta will serve as the first company to deploy its Grace CPU server systems at scale.


The Grace CPU is typically paired with two Blackwell GPUs to form Nvidia’s GB200 superchip, or with two Blackwell Ultra GPUs to form the GB300.


These platforms are engineered to support high-density, rack-scale AI infrastructure.


The partnership reflects Meta’s strategy to secure cutting-edge hardware from multiple suppliers.


It also reinforces Nvidia’s position as a key provider of processors powering the next generation of AI data centers.


Meta is planning to invest more than $135 billion in capital projects through 2026 to support its artificial intelligence growth strategy. The funding will be used to build new data center facilities.


It will also finance the purchase of advanced semiconductor hardware. In addition, the investment will support the training and scaling of AI models.


Other major technology companies are also increasing their AI spending. Amazon, Google, and Microsoft have announced similar plans.


Combined, the four companies are projected to spend approximately $650 billion on artificial intelligence infrastructure and development.


The scale of these investments reflects the intensifying race to expand AI capabilities.


Technology leaders are moving quickly to secure the computing power required for future AI products and services.


Heavy spending on artificial intelligence has prompted caution among some investors.


Many are evaluating whether these large financial commitments will translate into long-term gains.


Meta has held up better than its peers in the market. Its stock has slipped just 2.5% since the company released its earnings on Jan. 28 and detailed its AI expansion plans.


Google has seen a sharper pullback, with shares down 8.7% since announcing its own artificial intelligence spending strategy.


Amazon has also experienced a notable drop. Its stock has fallen 11.9% following its investment disclosure.


Microsoft has recorded the steepest decline among the major technology firms, with shares down 15.5% since outlining its AI investment roadmap.


The differing stock performance highlights ongoing investor scrutiny as companies commit billions to artificial intelligence development.


Shares of major semiconductor companies have also struggled to maintain their earlier momentum.


Investor sentiment has weakened after a strong rally driven by artificial intelligence demand.


Market participants are increasingly questioning whether AI spending levels can be sustained.


There are also concerns about the long-term impact of custom-designed chips created by large technology firms.


Amazon, Google, Meta, and Microsoft are investing in proprietary processors for their data centers.


These custom chips are intended to reduce reliance on external suppliers and improve efficiency.


This trend has raised questions about the future dominance of traditional graphics processor manufacturers.


According to a report published in November by The Information, Meta held discussions with Google about using its Tensor Processing Unit chips for artificial intelligence workloads.


TPUs are designed specifically to accelerate machine learning performance.


Despite these developments, analysts believe it will be difficult for custom chips to fully replace existing solutions.


Processors produced by Nvidia and AMD remain essential for a wide range of AI applications. Their ability to handle diverse computing tasks continues to make them critical components in modern data center infrastructure.


Image: AI‑generated illustration, created and published by AD News Live.
