What do you think the dominant design paradigm for electronic systems is going to be going forward?
As I’ve said before, I believe it is going to be taking software, probably written in C and C++, synthesizing parts of it into FPGAs and compiling the rest into binary to run on processors in the FPGA. This is what I’ve been calling software signoff for a long time. It’s more than just the software necessary to run on the FPGA or SoC; it is signing off hardware that is co-optimized with the software. The idea is that we need to get the software that specifies the system right, and then hardware design is just creating a silicon fabric (SoC or FPGA) able to run that software at high enough performance and low enough power (because otherwise why bother to do anything other than simply execute it). Power, performance and price: the 3Ps again.
There are two key pieces of technology here. The first is high-level synthesis, which should be thought of as a type of compilation of behavior into hardware. In the end, the system product delivers a behavior or an application. High-level synthesis is not simply a productivity tool for RTL designers moving up to the next level of abstraction: RTL designers will be bypassed, not made more productive.
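To make the idea concrete, here is a minimal sketch, entirely my own illustration rather than an example from any particular tool, of the kind of C function that sits at this boundary: compiled normally it runs as software on a processor in the FPGA, while fed to an HLS tool it becomes a hardware datapath. The function name and tap count are hypothetical.

/* Illustrative sketch only: a simple FIR filter kernel of the sort HLS tools
 * accept as input. Compiled with an ordinary C compiler, it runs on an
 * embedded processor; pushed through high-level synthesis, the loop becomes
 * a multiply-accumulate datapath. Names and sizes here are hypothetical, and
 * synthesis directives are tool-specific, so none are shown. */
#define TAPS 16

int fir(const int coeff[TAPS], const int sample[TAPS])
{
    int acc = 0;
    /* An HLS tool would typically unroll or pipeline this loop and map the
     * arrays to registers or block RAM. */
    for (int i = 0; i < TAPS; i++) {
        acc += coeff[i] * sample[i];
    }
    return acc;
}

The point is that the same behavioral source serves as both the software and the hardware specification; what changes is only which parts get synthesized and which get compiled.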
The other key technology is FPGA technology itself. Today FPGAs offer almost unlimited capacity and unlimited pins. FPGAs will become the default implementation medium. The classic argument against using FPGAs used to be that the lower unit cost of a dedicated chip would amortize the cost of designing it. But very few designs will run in high enough volume to amortize the cost of doing an SoC or ASIC in today’s most leading-edge processes, and the cost and risk of dealing with variability (in terms of simulating hundreds of “corners” and the difficulty of getting design closure) are rising fast. FPGAs take a lot of the silicon risk out of the implementation.
Did you know that FPGAs represent more than half the volume at leading-edge process nodes at the big foundries like TSMC and Samsung? FPGAs are the first logic in a new foundry process and drive the semiconductor learning curve. This is because FPGAs have a structural regularity much like memories, but in a standard CMOS logic process.
If you need to do a 45nm design, then far and away the easiest approach is to go and talk to Xilinx or Altera. Designing your own chip is a $50M investment at a minimum, so you had better want tens of millions of units when you are done. Only the highest-volume consumer markets, such as cell phones, or the most cutting-edge performance needs, such as graphics processors, can justify it.
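As a back-of-the-envelope check on that volume argument: the $50M figure is from above, but the per-unit prices below are assumptions I have made up purely to illustrate the arithmetic, not real quotes.

/* Break-even arithmetic: how many units before a custom chip's lower unit
 * cost pays back its NRE? The $50M NRE is from the text; the unit prices
 * are illustrative assumptions, not real pricing. */
#include <stdio.h>

int main(void)
{
    double nre       = 50e6; /* minimum cost of designing your own chip */
    double fpga_unit = 20.0; /* assumed FPGA price per unit             */
    double asic_unit = 15.0; /* assumed ASIC price per unit             */

    /* The custom chip only wins once its per-unit saving has repaid the NRE. */
    double breakeven = nre / (fpga_unit - asic_unit);
    printf("break-even volume: %.0f units\n", breakeven); /* 10,000,000 */
    return 0;
}

With those assumed numbers the crossover is around ten million units, which is why only the highest-volume markets clear the bar.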
The decline in the FPGA market in the current downturn conceals the fact that new designs in the largest and most complex devices are growing at over 30% CAGR. This segment may only be 12% of the market (which, by the way, is something over 15,000 designs per year) but it generates 40% of FPGA revenue. These designs, and the methodology for creating them, will go mainstream until they represent the bulk of the market: not just the FPGA market, but the electronic system market. Designing your own chip will become an esoteric niche methodology, akin to analog design today. However, these new high-complexity FPGAs require an ASIC-like design methodology, not just a bunch of low-end tools from the FPGA vendor.
The challenge for EDA companies in this new world is to transition their technology base to account for this new reality and go where system-scale designs are implemented in FPGAs. That is largely not in the big semiconductor companies that currently represent the 20% of customers bringing 80% of EDA revenue. It is much more dispersed, similar to the last time design was democratized, when the invention of ASIC in the early 1980s pushed design out into the system companies.
A lot of RTL-level simulation will be required. And one of the high-level synthesis companies will be a big winner. In the startup world there are a few companies attempting to offer HLS: Synfora, Forte and AutoESL. Synfora and Forte have been at it for a while (although Forte may be disqualifying itself in this vision of the future by only supporting SystemC). AutoESL has started to make some progress as well, with one group at Microsoft using just this methodology. Mentor is the current leader with its Catapult synthesis; Cadence has created its own C-to-Silicon technology. But Synopsys, which has synthesis running through its veins, has no real high-level synthesis product (and, unless it is doing it with people unknown in the field, does not have one in development). Synopsys does have FPGA DNA through its acquisition of Synplicity. My opinion is that once it becomes clear which HLS company is going to win, Synopsys will likely acquire it, and at a serious price, to complete its FPGA offering.