One of the most important and under-rated tasks in a semiconductor company is creating the cost model. It is needed in order to price products, and the need is especially acute in an ASIC or foundry business, where there is no real sense of a market price: the customer, not the manufacturer, owns the intellectual property and so captures the profit due to differentiation.
For a given design in a given volume, the cost model tells you how much it will cost to manufacture. Since a design can (usually) only be manufactured a whole wafer at a time, this usually splits into two parts: how many good die you can expect to get on a wafer, and what the cost per wafer is. The first part is fairly easy to calculate from defect densities and die size, and is not controversial.
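As a concrete illustration, here is a minimal sketch in Python of that first calculation, using a common gross-die-per-wafer approximation and the simple Poisson yield model. The formulas are textbook generic and all the numbers are made up for illustration, not anyone's actual model.

```python
import math

def gross_die_per_wafer(wafer_diameter_mm, die_side_mm):
    """Whole die fitting on a circular wafer (common approximation:
    wafer area / die area, minus an edge-loss term)."""
    die_area = die_side_mm ** 2
    whole_wafer = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
    return int(whole_wafer - edge_loss)

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Fraction of die with zero killer defects (simple Poisson model)."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

# Illustrative numbers: a 350 mil (~8.9 mm) die on a 150 mm wafer,
# with a defect density of 0.5 defects/cm^2.
die_side_mm = 350 * 0.0254                       # mils to mm
gross = gross_die_per_wafer(150, die_side_mm)
y = poisson_yield((die_side_mm / 10) ** 2, 0.5)  # mm^2 to cm^2
print(f"gross die: {gross}, yield: {y:.1%}, good die: {gross * y:.0f}")
```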
In fabs that run only very long runs of standard products there may be a standard wafer price. As long as so many lots are run in a row that the setup costs of a design are dwarfed by the other costs, this is a reasonable reflection of reality: every wafer is simply assumed to cost the standard wafer price.
In fabs that run ASIC or foundry work, many runs are relatively short. Not every product runs in enormous volume. For a start, prototypes run in tiny volumes, and a single wafer yields far more die than are needed, although it used to be (and may still be) that a minimum of three wafers was run as insurance against misprocessing, making it less likely that the prototype run would have to restart from scratch.
Back when I was at VLSI we initially had a fairly simple cost model, and it made it look like we were making money on all sorts of designs. Everyone knew, however, that although the cost model didn't say so explicitly, the company made lots of money when we ran high volumes of wafers of die about 350 mils on a side, which seemed to be some sort of sweet spot. Then we hired a full-time expert on cost models and upgraded the model to be much more accurate, in particular to do a better job of accounting for the setup cost of all the equipment when switching from one design to the next, which happened a lot. VLSI brought a design into production roughly daily on average and would be running lots of designs, plus some prototypes, on any given day. The expensive fab equipment spent a lot of the day depreciating while the steppers were switched from the reticles for one design to those for the next. Other equipment would have to be switched to match the appropriate process, because VLSI wasn't large enough to have a fab for each process generation, so all processes were run in the same fab (for a time there were two, so this wasn't completely true). Intel, TSMC and other high-volume manufacturers would typically build a fab for each process generation and rarely run any other process in that fab.
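To see why design changeovers mattered so much, here is a minimal sketch of the effect the upgraded model captured: charging a run's fixed setup time to that run's wafers. The hourly rate and times below are made-up illustrative numbers, not VLSI's actual figures.

```python
def effective_cost_per_wafer(run_wafers, fab_cost_per_hour=10_000.0,
                             setup_hours=2.0, hours_per_wafer=0.25):
    """Wafer cost when a run's fixed setup (reticle changes, equipment
    reconfiguration) is amortized over the wafers in that run."""
    setup_cost = setup_hours * fab_cost_per_hour
    processing_cost = run_wafers * hours_per_wafer * fab_cost_per_hour
    return (setup_cost + processing_cost) / run_wafers

# A prototype lot pays the same setup as a high-volume run,
# spread over far fewer wafers.
for n in (3, 25, 500):
    print(f"{n:4d} wafers -> ${effective_cost_per_wafer(n):,.0f}/wafer")
```

Under these made-up numbers a three-wafer prototype lot costs several times as much per wafer as a 500-wafer run, which is exactly the effect a simple per-wafer model hides.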
The new cost model shocked everyone. At last it showed explicitly that the sweet spot of the fab was high-volume runs of die about 350 mils on a side: large enough that the design was complex and difficult (which we were good at) but small enough not to get into the part of the yield curve where too many die were bad. But the most shocking result was that all the low-volume runs, I think about 80% of VLSI's business at the time, lost money.
This changed the ASIC business completely, since everyone realized that in reality there were only about 50 sockets a year in the world with volume high enough to be worth competing for. The rest were a gamble: a gamble that they might be chips from an unknown startup that became the next Apple or the next Nintendo. VLSI could improve its profitability by losing most of its customers.
Another wrinkle in any cost model is that in any given month the cost of the fab turns out to be different from what it should be. If you add up the cost of all the wafers for the month according to the cost model, they don't total to the actual cost of running the fab when you look at the big picture: depreciation, maintenance, power, water, chemicals and so on. The difference is called the fab variance. There seemed to be two ways of handling this. One, which Intel did at least back then, was to scale everyone's wafer price for the month so the total matched the actual cost. So anyone running a business would have wafer prices that varied from one month to the next depending on just how well the fab was running. The other is simply to treat the variance as company overhead, handled the same way as any other company overhead. In the software group at VLSI we used to be annoyed to miss our expense budget due to our share of the fab variance, since not only did we have no control over it (like everyone else), it didn't have anything to do with our business at all.
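As a hedged sketch with made-up numbers, the two treatments look something like this; the scaling function is the Intel-style approach described above.

```python
def scale_to_actual(modeled_costs, actual_fab_cost):
    """Scale each product's modeled monthly cost so the total
    matches what the fab actually spent (the Intel-style approach)."""
    factor = actual_fab_cost / sum(modeled_costs.values())
    return {product: cost * factor for product, cost in modeled_costs.items()}

# Illustrative modeled monthly wafer costs by product.
modeled = {"design_a": 400_000.0, "design_b": 250_000.0, "prototypes": 50_000.0}
actual = 770_000.0  # the fab actually spent 10% more than modeled this month

print(scale_to_actual(modeled, actual))
# The alternative treatment: leave the modeled costs alone and book
# the $70,000 variance (770k - 700k) as general company overhead.
```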