20 Comments
Jul 25, 2022 · Liked by Dylan Patel

To keep verification and respin costs low, the best ASICs repeat simple elements heavily. Even if you customize the heck out of those elements, as Nvidia does with its GPUs, the tools and design rules operate at a local scale, and that local block is then repeated to fill out the chip.

AMD is not that much faster than Intel. Sapphire Rapids had delays due to moving from Intel 7 to Intel 4. A good decision, but it is a reason for the maybe 7 respins. AMD have been talking up Genoa for years, so it was not super-quick. The interesting comparisons with Intel will be in the generation after SPR, when it looks like Intel will have the same kind of per-chiplet process specialization that AMD have pioneered, with similar advantages.

SPR is far more complex than an A100, in the sense that it has more specialized accelerators and more distinct kinds of functional units than Nvidia needs to worry about. It is more interesting to compare Intel to Apple or to Qualcomm, other designers who have a ton of different functionality on their chips. These add to verification complexity.

The physical cost of making a mask is not so huge, if you separate it from design and verification cost. It takes a day or two of time on a dedicated ebeam machine to write the leading-edge masks, plus some setup time to calculate how to modulate the ebeam machine to match the design. When a mask is respun for a correction, normally 99% of the mask does not change. This means that the respins should be a minor fraction of the costs associated with the mask set. Most of the cost is in the first mask set, which requires full validation, a full rules check, and comprehensive corrections for predicted manufacturability. Those keep a large team of engineers and their most expensive tools busy for months. The iterations are a smaller loss of time, plus payments for running the extra pilot set of wafers to check the design.
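The cost split described above can be sketched as a back-of-envelope calculation. All figures below are illustrative assumptions (arbitrary units, an assumed mask count and write-cost share), not real foundry pricing:

```python
# Back-of-envelope sketch of the respin-vs-first-set cost split.
# Every number here is an illustrative assumption, not real data.

FIRST_SET_COST = 100.0  # first mask set incl. full validation/rules check (arbitrary units)
WRITE_SHARE = 0.10      # assume physical ebeam writing is ~10% of that total
MASK_COUNT = 70         # assumed mask count for a leading-edge node
PILOT_WAFERS = 2.0      # assumed cost of one extra pilot-wafer run

def respin_cost(masks_rewritten: int) -> float:
    """One correction respin: rewrite only the affected masks,
    then pay for a pilot run to check the fix."""
    per_mask_write = FIRST_SET_COST * WRITE_SHARE / MASK_COUNT
    return masks_rewritten * per_mask_write + PILOT_WAFERS

# Even 7 respins touching a few masks each stay a minor fraction
# of the up-front design/verification/first-mask-set cost.
seven_respins = sum(respin_cost(3) for _ in range(7))
print(f"7 respins: {seven_respins:.1f} vs first set: {FIRST_SET_COST:.1f}")
```

Under these assumptions, even the "maybe 7 respins" mentioned for SPR come to a small fraction of the first-set cost, which is why the iteration loop is dominated by engineering time rather than mask-writing cost.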

Jul 25, 2022 · Liked by Dylan Patel

Looks like even for 7nm, total fixed costs converge around a 10k-wafer run; that is, 10k wafers is probably the floor for any independent ASIC. There's also an opinion that, given the current market situation and its versatility and programmability, Nvidia's A100 is still the cheapest option for data-center AI service, which is bad news for start-ups. An interesting architecture is still a long way from mass adoption. By the way, why is Intel's verification and validation cost per node so much higher, and why does it take longer, than AMD's and Nvidia's? Is it because Intel did it in-house and competitors outsourced?

Jul 24, 2022 · Liked by Dylan Patel

Very helpful to understand where this is headed.


SWOT Yokogawa minimal fab


Right now is exactly the right time to study substrate producer AXTI. The market for specialized chips will keep growing, and that demand should really increase AXTI's revenue and profit. The company has increased production capacity and is in great shape. Study both its production capacity and what the recent news implies for future valuation. Plenty of info on the website; do the homework.


Guess who can have economies of scale, an efficient supply chain, and a huge number of STEM graduates whose second language is mathematics...

Hint 1: It's not the US.

Hint 2: It's not Taiwan.

Hint 3: It's not South Korea.

Please share your opinion :)


"Semiconductors are an industry of economies of scale, and this is only becoming more apparent with each new technology generation."

China has scale, and more money than God.
