The SemiAnalysis AI accelerator model tracks historical and forecast accelerator production by company and type. Our clients include hyperscalers and major semiconductor companies using it for competitive analysis and supply chain planning, as well as large investors in public and private markets. The model captures upstream and downstream supply chain information, from equipment requirements to deployed capacity and FLOPS, and the revenue of many firms across that supply chain can be estimated from our data.
Our data is provided for 2023 to 2027 on a quarterly basis.
Shipment volumes and ASPs of AI accelerators by SKU. Covered product lines include:
Nvidia (A100, H100, H200, H20 (China), B100, B200, GB200 NVL72 & NVL36, B200 NVL72 & NVL36, B200A, B200A Ultra, GB200A NVL36, B20 (China), R100, R200, VR200, VR200 Ultra, Y100, Z100),
Google TPU v4 Pufferfish, v4i Pufferlite, v5p Viperfish, v5e Viperlite, v6p Ghostfish, v6e Ghostlite, v7p Sunfish / Zebrafish, v8 Halofish
Meta MTIA Gen 2, MTIA Gen 3, MTIA Gen 4,
AWS Inferentia2, Inferentia3, Inferentia4, Trainium1, Trainium2, and Trainium3,
Microsoft Maia Athena and Braga,
Tesla Dojo 1, 1.5, 2, and 3,
AMD MI300X, MI325X, MI350X, MI400X,
Intel Habana Gaudi2, Gaudi3, and Falcon Shores
Bytedance Gen 1, Gen 2
Huawei Ascend 910B, Ascend 910C
Iluvatar CoreX
The model also provides accelerator revenue forecasts for merchant and semi-custom providers: Nvidia, AMD, Broadcom, Marvell, Intel, Alchip, and MediaTek.
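As a rough illustration of how per-SKU shipment counts and ASPs translate into revenue estimates, the Python sketch below multiplies hypothetical unit volumes by hypothetical ASPs. The SKU names echo the list above, but every figure is an illustrative placeholder rather than model output.

```python
# Minimal sketch (not SemiAnalysis methodology): estimating quarterly
# accelerator revenue from unit shipments and ASPs. Every number below is
# an illustrative placeholder, not data from the model.

illustrative_shipments = {
    # SKU: (units shipped in the quarter, average selling price in USD)
    "H100": (500_000, 30_000),    # hypothetical volume and ASP
    "MI300X": (100_000, 15_000),  # hypothetical volume and ASP
}

for sku, (units, asp) in illustrative_shipments.items():
    revenue = units * asp
    print(f"{sku}: {units:,} units x ${asp:,} ASP = ${revenue / 1e9:.1f}B")
```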
Supply chain and capacity orders for these chips, which include:
Number of foundry wafers
Number of 2.5D wafers, package sizes, and yields (TSMC, Samsung, Intel, Amkor, ASE SPIL)
Total number of die attach steps (BESI, ASMPT, etc.)
We include the above from both a supply perspective (total potential units based on capacity orders) and a demand perspective (actual shipments).
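To make the supply-versus-demand distinction concrete, here is a minimal sketch under assumed placeholder inputs: potential units are derived from wafer capacity orders, die yield, and packaging yield, then compared against a hypothetical actual-shipment figure. None of these numbers come from the model.

```python
# Minimal sketch of the two perspectives described above: supply (potential
# units implied by capacity orders) versus demand (actual shipments).
# All inputs are hypothetical placeholders, not figures from the model.

wafer_starts_per_quarter = 10_000   # hypothetical foundry wafer capacity order
good_dies_per_wafer = 60            # hypothetical yielded dies per wafer
packaging_yield = 0.90              # hypothetical 2.5D packaging yield

supply_side_units = wafer_starts_per_quarter * good_dies_per_wafer * packaging_yield
demand_side_units = 450_000         # hypothetical actual shipments in the quarter

print(f"Potential units from capacity orders: {supply_side_units:,.0f}")
print(f"Actual shipments:                     {demand_side_units:,}")
print(f"Implied utilization of capacity:      {demand_side_units / supply_side_units:.0%}")
```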
HBM details for these chips, including:
HBM type
Total capacity
Layer count
Total number of stacks
Total bits
Manufacturer
The above has implications for upstream fabrication, packaging, and equipment demand.
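The HBM fields listed above are related by simple arithmetic; the sketch below shows one way to derive per-accelerator capacity and total bits from stack count, layer count, and an assumed per-layer die density. All values are hypothetical placeholders, not model data.

```python
# Minimal sketch relating the HBM fields listed above, using hypothetical
# placeholder numbers: per-accelerator capacity and total bits from stack
# count, layer count, and per-layer die density.

stacks_per_accelerator = 8    # hypothetical number of HBM stacks
layers_per_stack = 8          # hypothetical DRAM layer count (8-Hi stacks)
gigabytes_per_layer = 2       # hypothetical 16 Gb (2 GB) die per layer

capacity_gb = stacks_per_accelerator * layers_per_stack * gigabytes_per_layer
total_bits = capacity_gb * 8 * 1024**3   # GB -> bits

print(f"HBM capacity per accelerator: {capacity_gb} GB")
print(f"HBM bits per accelerator:     {total_bits:.3e}")
```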
AI accelerator shipments and installed base for over 60 major customers, including:
US and Chinese hyperscalers
Enterprises
Neoclouds, such as CoreWeave
Other startups and sovereign AI initiatives
Total compute installed base for the above, including peak theoretical FLOPS and effective FLOPS based on training Model FLOPS Utilization (MFU) by chip. A subscription to this model grants the user quarterly updates for one year.
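For a sense of how these installed-base metrics relate, the sketch below scales a hypothetical installed base's peak theoretical FLOPS by an assumed training MFU to obtain effective FLOPS. None of the inputs are taken from the model.

```python
# Minimal sketch of the installed-base compute metrics described above:
# effective FLOPS as peak theoretical FLOPS scaled by an assumed training
# Model FLOPS Utilization (MFU). All inputs are hypothetical placeholders.

installed_units = 100_000        # hypothetical installed accelerator count
peak_flops_per_chip = 989e12     # hypothetical per-chip peak (BF16-class TFLOPS)
training_mfu = 0.40              # hypothetical 40% training MFU

peak_flops = installed_units * peak_flops_per_chip
effective_flops = peak_flops * training_mfu

print(f"Peak theoretical compute: {peak_flops:.2e} FLOPS")
print(f"Effective compute at {training_mfu:.0%} MFU: {effective_flops:.2e} FLOPS")
```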
Contact us at Sales@SemiAnalysis.com for more details.