SaaS for EDA

SaaS, or software as a service, is a model for delivering software over the net. In the Web 1.0 years this was called the ASP model, for application service provider. The archetypal SaaS company is Salesforce.com, which provides customer relationship management (CRM) software for businesses, initially small and medium-sized ones. This contrasted with companies like Siebel (now part of Oracle), which used the traditional installed-software model, basically the same model as almost all EDA companies use.

So will SaaS take over EDA? The advertised advantages of SaaS are a lower cost of sales, a faster update cycle for the software, and the fact that it can be adopted incrementally, one seat at a time. And several EDA companies, most of them small, one of them called Cadence, have already announced SaaS offerings.

I think the attraction of SaaS for users comes from a misunderstanding: that with SaaS, which is one kind of “metered use” of tools, the tool bill would go down. This seems unlikely. One problem with all kinds of metered use for EDA is that the large users have licenses that run 24 hours per day, while small users use tools only occasionally (for example, during tapeout). If the tool-hour is priced so that the heavy users pay a little less than before, then occasional users pay almost nothing. If the tool-hour is priced so that the occasional users pay a little less than before, then the heavy users’ prices go up by multiples of the current cost. There is no good in-between price either. SaaS doesn’t get around this issue. And people who won’t pay less than before are not going to adopt metered use, SaaS or otherwise.
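To see why there is no good in-between price, here is a back-of-the-envelope sketch in Python (all the numbers are hypothetical, chosen only to illustrate the shape of the problem):

```python
# Back-of-the-envelope model of the metered-use pricing dilemma.
# All numbers are hypothetical, not real EDA prices.

LICENSE_COST = 100_000        # annual cost of a traditional license, in $
HEAVY_HOURS = 24 * 365        # heavy user: the license runs around the clock
LIGHT_HOURS = 200             # occasional user: a few weeks around tapeout

def annual_cost(rate_per_hour, hours):
    """Annual tool bill under per-hour metering."""
    return rate_per_hour * hours

# Price the tool-hour so the heavy user pays a little less than today...
rate_for_heavy = 0.95 * LICENSE_COST / HEAVY_HOURS
print(f"occasional user pays ${annual_cost(rate_for_heavy, LIGHT_HOURS):,.0f}")
# ...and the occasional user pays about $2,200 instead of $100,000.

# Price the tool-hour so the occasional user pays a little less than today...
rate_for_light = 0.95 * LICENSE_COST / LIGHT_HOURS
print(f"heavy user pays ${annual_cost(rate_for_light, HEAVY_HOURS):,.0f}")
# ...and the heavy user's bill explodes to about $4.2M instead of $100,000.
```

Any rate between the two extremes leaves at least one group paying more than it does today, which is exactly why metered use fails to get adopted.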

I think that the dynamics of the business process are very different for EDA. One assumption behind SaaS is that by lowering the barrier to entry to a single seat bought over the net, as opposed to a corporate deal, the market can be expanded to people whom corporate deals don’t reach, or at the very least significant market share can be stolen from the big guys. This was Salesforce.com’s base: initially selling to people who would not otherwise have bought a CRM system, and then stealing users from the big guys. It is classic Innovator’s Dilemma disruption, starting not by stealing business from the established market leaders, but by going where the competition is “non-use.”

But most EDA is not like that. There is no crowd of unsatisfied IC designers just waiting to build chips if only place & route were cheaper. And as to undercutting the big guys, any innovative business model that turns EDA into a smaller market is likely to reduce investment in EDA, which gets problematic very quickly. Stealing a market segment Craigslist-style, by turning a billion-dollar market into a million-dollar market and dominating it, will not sustain the R&D needed. The reality is that you can’t compete much on price in EDA: there is no market to expand into, and if you succeed in stealing a lot of market at a low price then you had better genuinely have lower costs (especially in R&D) to be able to do it again for the next process node. It is a similar problem to the one in pharmaceuticals. Drugs cost a lot less to make than they sell for, but if you artificially (by fiat or excessive discounting) reduce the prices too much then current drugs are cheap but no new ones will be forthcoming. Unlike drugs that never get developed, though, if there are no workable tools for the next process node then we will all know what we are missing; it is not just a profit opportunity foregone, it is a disaster.

The next problem with EDA is that you can’t get the job done with tools from only one vendor. So if you use SaaS to deliver all your EDA tools, you will repeatedly need to move the design from one vendor to another. But these files are gigabytes in size and not so easily moved. So it seems to me that if SaaS is going to work, it has to be through some sort of intermediary who has all (or most) tools available, not just the tools from a single vendor. If you use a Cadence flow but use PrimeTime (Synopsys) for timing signoff and Calibre (Mentor) for physical verification, then SaaS doesn’t seem workable unless all of them are available without copying the entire design around.

Another problem is that SaaS doesn’t work well for highly interactive software. Neither Photoshop nor a layout editor seems like a good candidate for SaaS, since the latency kills the user experience compared with a traditional local application. Yes, I know Adobe has a version of Photoshop available through SaaS, but go to any graphics house and see if anyone uses it.

There are some genuine advantages to SaaS. One is that software update is more painless since it is handled at the server end. You don’t normally notice when Google tweaks its search algorithm. But designers are rightly very wary of updating software mid-design: better the bugs you know than some new ones. So again, EDA seems to be a bit different, at least historically.

The early part of the design process and FPGA design are perhaps better places for SaaS. The files are smaller even if they need to be moved, and the market is more elastic (not everyone is already using the best productivity tools). But this part of the market already suffers from difficulty in extracting value, and SaaS risks reducing the price without a corresponding true reduction in cost. Walmart is not the low-price supplier because it has everyday low prices; it has everyday low prices because it has got its costs lower than anyone else’s. Perhaps the ideal market is FPGA design, where the main tool suppliers are not really trying to make money on the tools directly, and where few of the designs are enormous.

So if SaaS is going to succeed in EDA, my guess is that it will either be as a virtual CAD organization offering tools from many vendors, or else in the FPGA world where single-vendor flows are common.

Posted in eda industry, marketing | Comments Off

Royalties

Venture capitalists love royalties. They love royalties because of the potential for unexpected upside: they are hoping that a customer, in effect, signs up for a royalty, sells far more of its product than anyone forecast, and thus has to pay much more royalty than anyone expected. Since a normal successful EDA business is predictable (license fees, boring), it doesn’t have a lot of unlimited upside.

My experience is that you need to be careful how you structure royalty deals to have any hope of that happening. At VLSI I remember we once had a deal to build a chip for one of the Japanese game companies if we could do it really fast (we were good at that sort of thing). It needed lots of IP, so we just signed everyone’s deal, deferring license fees for as long as possible but accepting ridiculous royalty rates. We probably had a total of about 25% royalty on the part, more than our margin. But we reasoned as follows: “One in three, the project gets canceled and never goes into production (it did); one in three, it goes into production but we never ship enough volume to reach the point where we pay more in royalties than we would in license fees; one in three, it is a big success and we tell everyone the royalty they are going to get, and if they don’t accept, we design them out.”
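The logic of that bet is easy to make concrete with a little scenario arithmetic. Here is a sketch; the probabilities and dollar figures are invented for illustration and are not VLSI’s actual numbers:

```python
# Scenario sketch of the "sign anything, renegotiate later" royalty bet.
# Dollar figures are invented for illustration; not VLSI's actual numbers.

UPFRONT_FEES = 500_000   # hypothetical license fees avoided by going royalty-only
ROYALTY_RATE = 0.25      # the "ridiculous" aggregate rate we signed up for

scenarios = {
    "canceled": 0,           # no production, so no royalties at all
    "modest": 1_500_000,     # some revenue, but royalties stay below the fees saved
    "big hit": 50_000_000,   # a success: renegotiate from strength, or design out
}

for name, revenue in scenarios.items():
    royalty = ROYALTY_RATE * revenue
    verdict = ("pay it" if royalty <= UPFRONT_FEES
               else "renegotiate or design the IP out")
    print(f"{name:>8}: royalty ${royalty:>12,.0f} vs fees ${UPFRONT_FEES:,} -> {verdict}")
```

In two of the three cases the royalties never bite, and in the third you are renegotiating on the back of a hit product.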

IP is more mature now, so royalty rates and contracts are more realistic; perhaps the problem has gone away. Back then, the lawyers could consume the whole design cycle negotiating terms, and there wasn’t enough time to wait. Everyone thought their non-differentiated IP was worth a big royalty of a few percent, even though a big SoC (even then) might have dozens of IP blocks that size. If you were on a short time-to-market, you simply didn’t have time to negotiate terms or royalty rates. You had to get going. Hence our strategy of accepting everything on the basis that we’d renegotiate if it became important.

The best form of royalty is one that is paid elsewhere in the value chain. If TSMC pays a royalty to Artisan (now ARM) or Blaze (now Tela), then the customer ends up paying, since TSMC adds it to the bill. But if the chip is a success, the customer doesn’t get to renegotiate from a position of strength. If the overall deal is a huge success, then TSMC has probably negotiated percentage breaks anyway (I don’t know any details, so I’m guessing) and can choose either to reduce the cost it passes on or to take a bigger profit.

However, when Artisan was bought by ARM for $900M, every VC wanted a deal like that, and they saw royalties as the secret sauce. But royalties only work really well when they pay out much more on unexpected success; otherwise they are just moving payments around in time. And they only capture that unexpected success if they can’t be renegotiated as a result of it.

The reality is that royalties are often disappointing. Mike Muller, CTO of ARM, once told me that “royalties are always later and less than you expect.” It took Artisan 15 years to get to the stage where it was sold to ARM, and those royalties were largely license fees foregone in the early years. ARM was, in effect, paying for money pushed into the future that it would then collect.

At one level, many people say that it is a pity that EDA doesn’t get a percentage of semiconductor revenue. But in a way, it does. EDA has been around 1.5-2% of semiconductor revenue for years. Of course EDA would like that number to be 4%, but it’s unlikely that semiconductor companies would have signed up for a deal of no upfront money and big royalties, even though they would probably have been well served by it. They would have avoided the business impact that many companies suffer from having inadequate numbers of licenses.

Posted in finance, marketing | Comments Off

Wall Street values

Wall Street does a terrible job of valuing investment. I’ve talked earlier about how financial accounting standards do a poor job of capturing the value of many modern companies on their balance sheets. But Wall Street is driven by people who only know how to read a balance sheet (and a P&L, which is just an explanation of changes to the balance sheet), and they act as if the balance sheet is the truth.

As a result, Wall Street loves acquisitions and hates investment, even if the investment is much cheaper than an acquisition.

Assume bigCo acquires startupCo for $100M. FASB treats the acquisition as an asset of the business going forward, as if it were a piece of land just purchased that needs to be carried on the books and have its value adjusted from time to time. But in reality the prior quarters’ P&Ls should be adjusted to reflect the fact that all the R&D that was done should really have been set against revenue back then. When we were allowed to account for acquisitions through pooling of interests, the treatment was closer to this, but it still got stuck with the goodwill, which really should also be partially set against prior quarters. Anyway, Wall Street loves this sort of deal, whatever the price, since it is seen as a write-off (the purchase price) leaving a leaner, cleaner company to make more profit going forward. Wall Street doesn’t care about prior quarters anyway.

By contrast, if bigCo had instead spent $1M per quarter for the previous couple of years ($8M in total, far less than the $100M it paid to acquire startupCo), then Wall Street would have penalized it by lowering its stock price due to the lower profitability. Since the investment doesn’t show on the balance sheet, it is a pure loss with no visible increase in anything good. Of course it is hard for anyone, especially financial types on Wall Street, to know whether the investment is going to turn out to be Design Compiler (good) or Synergy (not good), Silicon Ensemble (good) or Route66 (not good). But the same could be said about any other investment: is that expensive factory for iPods (good) or Segways (bad)?
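To make the distortion concrete, here is the build-versus-buy arithmetic as a trivial sketch, using the hypothetical figures from the example above:

```python
# Build-vs-buy arithmetic that the accounting treatment obscures.
# Figures are the hypothetical ones from the example above.

QUARTERS = 8                     # a couple of years of internal development
RND_PER_QUARTER = 1_000_000      # $1M of R&D expense per quarter
ACQUISITION_PRICE = 100_000_000  # price paid to buy startupCo instead

build_cost = QUARTERS * RND_PER_QUARTER  # $8M, dragging down eight quarterly P&Ls
print(f"build: ${build_cost:,}")
print(f"buy:   ${ACQUISITION_PRICE:,}")  # a 'one-time' event Wall Street shrugs off
print(f"premium for buying: {ACQUISITION_PRICE / build_cost:.1f}x")  # 12.5x
```

Wall Street punishes the $8M, a visible drag on earnings for eight straight quarters, and waves through the $100M as a one-time event.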

When a company goes public, it sells some shares for cash, so ends up with lots of cash in the bank. But it then finds that it is hard to spend that cash except on acquisitions. If it invests it in R&D, then the net income will be lower than before and so the share price will decline from the IPO price due to the reduced profitability. If it uses it to acquire companies, then prior to the acquisition its profit is high (no investment) so its stock price is high. After the acquisition its profit is high (new product to sell with largely sunk costs). At the acquisition, Wall Street doesn’t care because it is a one-time event and Wall Street never cares about one-time events. Even if, like emergency spending in Congress, they happen every year.

I think it is bad when accounting rules in effect force a company to make tradeoffs that are obviously wrong. It is obviously better to develop a tool for $10M than to buy a company that builds the same tool for $100M. Yet Wall Street prefers the acquisition, so that’s what happens. Of course there is a difference between developing a product that might succeed and buying a company that is already succeeding, and I’ve talked before about the issues of getting products into the channel. So the acquisition might make sense anyway. But accounting skews those decisions too much.

Posted in investment | Comments Off

FPGA software

Why isn’t there a large thriving FPGA software market? After all, something like 95% of semiconductor designs are FPGAs, so there should be scope for somebody to be successful in that market. If the big EDA companies have the wrong cost structure, then perhaps a new entrant with a lower cost structure could succeed.

In the early 1980s, if you wanted to get a design done then you got tools from your IC vendor. But gradually the EDA market came into being as a separate market, driven on the customer side by the fact that third-party tools were independent of the semiconductor vendor and so avoided the threat of paying excessive silicon prices through being locked into one vendor’s tool flow. Once the third-party EDA industry captured enough of the investment dollars, it could advance its tools much faster than any single company, and the captive tool market was largely doomed.

For FPGAs that is not the case. If you want to do a design with Xilinx, then you get tools from Xilinx. With Altera, tools from Altera, and so on. Yes, there are some third-party tools, like Synplicity’s (now part of Synopsys) and Mentor’s FPGA suite, but they are focused only on the high end. And it is hard to make money only at the high end. When, over time, the high end goes mainstream, the FPGA vendors produce good-enough tools at free or low prices. So the R&D costs need to be recovered from just those few high-end users, since the tools never become cash cows the way IC design tools for a particular process node do as time progresses. Like the Red Queen in Through the Looking-Glass, it takes all the running you can do to stay in the same place.

There may be change coming as more and more FPGA designs are actually prototypes for ASIC designs, or might want to be cost-reduced into ASIC designs, and so on. This means that people want to use the same tools for ASIC and FPGA design, which on the surface is one reason Synopsys acquired Synplicity.

One other issue is that FPGA architectures and their tools are more intimately bound together than is the case for IC design. It is a dirty secret of synthesis (and maybe place and route) that despite the lower price point, FPGA synthesis is harder, not easier, than mainstream synthesis for standard-cell libraries. Solving the general problem for all the different varieties of FPGA architecture seems to be extremely costly. By contrast, Xilinx only has to build tools that work with Xilinx, and can ignore the fact that its algorithms might not work with Altera’s arrays.

But probably the biggest factor is that there are not enough FPGA companies. If there were a dozen FPGA companies, then enough of them would compete by supporting a third-party FPGA tool industry, and eventually the investment there would overpower the companies that tried to keep tool development internal. This is just what happened when the Japanese and everyone else tried to get into ASIC. They had no internal tools, so they leveled that playing field by making Daisy/Mentor/Valid, and subsequently Cadence and then Synopsys, successful. Eventually companies like VLSI Technology and LSI Logic felt they should try to spin out their tools and adopt EDA industry tools.

It is unclear whether this was good financially. I told Al Stein, CEO of VLSI, when we spun out Compass, that he shouldn’t expect his tool bill to go down: he would pay more to the EDA industry that Compass was entering (some of it to Compass) than he had been paying just to fund the division that became Compass. And the prediction turned out to be true.

For ASIC designs today, IBM’s backend tools are the only ones still internally developed. But IBM is #1 in cell-based design, so it is hard to argue that the strategy of keeping that development internal is demonstrably bad.

And Xilinx and Altera are doing OK keeping their tools internal.

Posted in eda industry, marketing, methodology | Comments Off

Ready for liftoff?

I talked earlier about how it seems to take $6M to build a channel in EDA once you get to the “just add water” stage, where all you need to do is ramp up a salesforce and distribution. However, typically you are not really ready for this when you first think you are. More EDA (and other) companies are killed by premature scaling than by anything else. Ramping up a channel is very expensive and will burn a lot of money very fast for little return if the product is not ready, either killing the company completely or requiring an additional unexpected round of funding at an unimpressive valuation, diluting everyone’s stock significantly.

When to scale is the most difficult decision a startup CEO faces. Too early and the company dies due to lack of financial runway. Too late and the company risks either missing the market window or losing out to competition during the land-grab phase of a new market.

There are two ways the product can be “not ready,” and there will usually be a mixture of the two. The first, and most obvious, is that the first release won’t include every feature that every user requires and so isn’t ready to serve the entire market. This is probably even known and acknowledged; it’s not as if engineering doesn’t have a long list of stuff for version 2.

The more dangerous way the product is “not ready” is that you are not completely sure precisely which is the most important problem it solves for the user, and which subsets of users will value this the most. Value it enough, that is, to consider engaging with you, an unproven startup with an immature, buggy first release. For instance, you might have a product that you think serves the entire market: everyone will need one, you can’t do 45nm designs without it. In fact, if you are just starting to engage with real customers, you might never have run a real 45nm design through your product, just doubled up 65nm designs and switched the library or something. It is often easier to get great results on those older designs. For example, at the last company where I worked, Envis, we got great power reduction results on 130nm designs, and on public-domain cores that had been around for even longer. After all, nobody cared that much about power back then, so designers didn’t put much effort into keeping power under control. When we tried our tool on 90nm and 65nm designs, the results were initially less impressive. Designers had already done many obvious things by hand, meaning that we had to work harder to produce compelling incremental savings.

The reality is that your initial product certainly doesn’t serve the whole market. If it does, you should have gone to market earlier with a less complete product. Worse, the precise feature set you have implemented might not serve any submarket completely either. But you need a product that serves at least some of the market 100%, even at the cost of being useless to a large part of the market, rather than a solution that is 90% for everyone. In the first case there is at least one customer who might buy the tool; in the second case, nobody is going to buy the tool, because everyone is going to wait for you to add the remaining 10%. A different 10%, perhaps, for every customer.

It is a fallacy to think that taking a product to market is a linear process: do the engineering, prepare sales collateral, start selling. Taking a product to market is more of an iterative, exploratory process. There is a phrase, “throwing mud against the wall to see what sticks,” that sounds derogatory. But, in fact, the early stage of going to market should be like that. In the best of all worlds you’ll have had one or two customer partners since the early stages of development, helping you spec the product and guiding the early parts of development. But it doesn’t always work out that way. Sometimes you don’t have early partners, or your champion leaves, or those early partners turn out to be atypical in some important way, so that you are forced to choose between satisfying their unique requirements and developing features with wider applicability.

That leaves you with a product where you have to explore how to take it to market, and which of the many aspects of the product you should emphasize in your positioning. This is the City Slickers marketing problem: discovering the one thing that your customers value enough to buy the product, and focusing your marketing and engineering on making that value proposition strong, before worrying about other aspects of the product that might broaden the appeal to a larger segment of the whole market.

Posted in eda industry, management | Comments Off

Downturn

Superficially, the present downturn is similar to the “technology” crash of 2001. I put technology in quotes since very little of that first internet boom involved true technology, and many people who called themselves programmers were writing plain HTML. As somebody, I forget who, said to me at the time: “surely one day technology will count again.” Of course some companies, like Amazon, Webvan, eBay or Napster, had a differentiated technology foundation to go with what was mainly a business-model play, but most did not.

But undeniably the boom of investment created a huge number of jobs. When the crash finally came, large numbers of them were destroyed. A lot of those people had come to the Bay Area attracted by the boom, and when their jobs went away they went home again. The SoMa warehouses in San Francisco emptied out as fast as they had filled, and apartment rents came back down. Many people who had left the EDA industry to make their fortune returned to a place where their knowledge was still valued. As is often the case, the people in EDA (at least the ones I know) who made it big in the internet companies were people who left early, before it was obvious that it was a good idea: people who joined Yahoo before it went public, who formed eBay’s finance organization, or who founded small companies that built important pieces of the plumbing.

This downturn seems different. Many of the people being laid off (and I don’t just mean in EDA, but in Silicon Valley in general) are people who have been here for decades, not people who came here in the last few years as part of the gold rush. Of course, veterans have been laid off before and then immediately re-hired when the eventual upturn came.

But again this downturn seems different. I don’t think that many of these jobs are coming back again. Ever. EDA in particular is undergoing some sort of restructuring, as is semiconductor. We can argue about precisely what we will see when the dust settles, but I don’t think many of us expect to see the 2007 landscape once again.

I’ve pointed out before that it is obvious that EDA technology is required since you can’t design chips any other way. But the EDA industry as it was configured will not be the way that tools continue to be delivered. It is hard to imagine that Cadence will employ 5000 people again any time soon, to pick the most obvious example.

The many dozens of EDA startups that used to employ significant numbers of people in aggregate aren’t coming back either. Any startups that do get formed will be extremely lean, with just a handful of people. Partially this is driven by technology: with modern tools and open infrastructure, it no longer takes an EDA startup half a dozen people and a year or two to build (duplicate) the infrastructure they need on which to create differentiated technology. It takes a couple of guys a couple of months. Partially, size is driven by investment. With public markets closed to EDA companies (to everyone right now, but to small software companies probably forever), the only investments in EDA that make sense are ones that still make sense with $25M as a target acquisition price, not $250M.

A recent report by Accenture (it’s called “How Semiconductor Companies Can Achieve High Performance by Simplifying Their Businesses” but you have to pay lots of $ to read it) reveals that some semiconductor engineers are “disenchanted” with their work, and “fearful of losing their jobs.” That’s the kind of revelation that really makes you want to reach for your wallet.

Facetiousness aside, the report also points out that as semiconductor companies go fabless (or at least fab-lite in the meantime), the dynamics of what is valuable change. Most obviously, if you are in technology development (i.e. semiconductor process development), your role is no longer required. And as I’ve pointed out, once you don’t have a fab, there is often not a lot of justification for the particular combination of businesses that find themselves in the same semiconductor company. The strong nuclear force has gone to zero and all those nuclei are going to fly apart.

Silicon Valley is a unique ecosystem, the center of the universe for technology. But it is changing in form, in ways that are not yet clear.

Posted in silicon valley | Comments Off

Creating demand in EDA

I talked earlier about how EDA marketing can’t create demand. Small companies cannot afford much marketing, and large companies are stuck in a vicious cycle: they cannot get innovation into the channel because they can’t create demand, even though they would have money to spend if spending it were effective.

EDA used to be rich enough that it would advertise anyway, at least to get the company name out in front of people (remember all those in-flight magazine ads for Cadence and the “curfew key” and suchlike). But as times got tighter, EDA stopped advertising since it was ineffective. In turn, the books that used to cover EDA, like EE Times and EDN, cut back their coverage and laid off their specialist journalists like Richard Goering and Mike Santarini. To be honest, I think that for some time before that the major readers of EDA coverage were the other EDA companies, not the customers. I don’t have any way to know, but I suspect the readership of this blog is the same.

Trade shows seem to be a dying breed too, and not just in EDA. DATE seems to be dead as a trade show, with almost no exhibitors any more. I wouldn’t be surprised if this year it has almost no visitors either, and gives up next year. EDA seems like it can support one real trade show, which is DAC. It is mainly for startups, for whom it is really the only way to get discovered by customers beyond having a half-reasonable website. The large EDA companies run their own trade shows, in an environment that leverages their costs better than paying a ridiculous rate for floor space, paying rapacious convention-center unions to set up the booth, and putting up with whatever restrictions show management has chosen for this year (“you can’t put a car on the booth, just because” was one memorable example I ran into).

The large EDA companies, with some justification, feel that a big presence at DAC subsidizes their startup competitors and is not the most cost-effective way to reach their customers and show them the portfolio of new products. The best strategy is to avoid the political noise by at least showing up, but the booths with 60 demo suites running continuously and a 600-employee presence are gone.

That leaves websites and search engines as the main way that customer engineers discover what is available. So you’d think that EDA company websites, especially for startups who have no other channels, would be good. But there are very few websites that do a good job of explaining the product line, the focus of the company and so on in a way that is customer-oriented.

If you talk to PR agencies, they’ll tell you that the new thing is using social networks and blogging to reach customers. But they don’t really seem to know just how that would work. I mean, I’m reaching you through a blog because you are reading this. But if every other blog item were a thinly disguised, regurgitated press release, you’d soon give up reading. And it’s not really possible to do anything more technically in-depth here. I don’t have the knowledge to be the Roger Ebert of EDA, even if I had the time to go to all the “screenings.” That leaves the problem that there isn’t an easy way to find out what is coming soon to a theatre, sorry, server-farm, near you, and whether it is worth the investment of time to take a serious look.

Posted in eda industry, marketing | Comments Off

Presentations without bullets

I talked earlier about the typical hi-tech presentation where the content is largely on the slides. In that case you must add color by what you say rather than simply reading what is on the slides.

The alternative approach is essentially to make a speech. The real content is in what you say. The slides then should be graphical backup (pictures, graphs, key points) to what you are saying. Watch a Steve Jobs keynote from MacWorld (example) to see this type of presentation done really well, or presentations from TED (but beware, not all of them have slides at all).

But just like Steve Jobs or the TED presenters, to carry this off well you need to rehearse until you have your speech perfect, either basically memorizing it or doing it from notes. Whatever you do, don’t write it out word for word and read it. The slides are not going to help you remember what to say; they are another complication that you have to keep synchronized with your speech. So rehearse without the slides until you have the speech perfect. Then rehearse it with the slides. Then rehearse it some more. As with a good actor, it takes a lot of repetition to make ad libs look spontaneous.

This approach will not work when presenting to foreigners who don’t speak fluent English. There is simply not enough context in the visuals alone, and the brain has a hard time processing both visuals and speech in a second language. If you know a foreign language somewhat, but are not bilingual, try watching the news in that language. It is really hard work, even though you already know the basic story, since it covers the same items as the regular network news.

If you are giving a keynote speech, then this is the ideal style to use. You don’t, typically, have a strong “demand” like you do when presenting to investors (fund my company) or customers (buy my product). Instead you might want to intrigue the audience, hiding the main point until late in the presentation. So instead of opening with a one-slide version of the whole presentation, you should try to find an interesting hook to get people’s interest up. Preferably not that Moore’s Law is going to make our lives harder, since I think we’ve all heard that one.

I find the most difficult thing to achieve when giving speeches to large rooms of people is to relax and be myself. If I’m relaxed then I’m a pretty good speaker. If I’m not relaxed, not so much. Also, my natural speed of speaking is too fast for a public speech, but again, if I force myself to slow down it is hard to be myself. This is especially bad when presenting to foreigners, since I have to slow down even more.

I also hate speaking from behind a fixed podium. Sometimes you don’t get to choose, but when I do I’ll always take a wireless lavalier (lapel) mike over anything else, although the best ones are not actually lapel mikes but go over your ear so that the mike comes down the side of your head. That leaves my hands free, which makes my speaking better. Must be some Italian blood somewhere.

Another completely different approach, difficult to carry off, is what has become known as the Lawrence Lessig presentation style, after the Stanford law professor who originated it. An example is here, where he talks about copyright and gets through 235 slides in 30 minutes; or watch a great presentation on identity by Dick Hardt using the same approach here. Each slide is on the screen for sometimes just a fraction of a second, maybe containing just a single word. I’ve never dared to attempt a presentation like this; the level of preparation and practice required seems daunting. I’d be interested to hear if anyone else has experience of trying it.

Posted in presentations | Comments Off

Swiffering new EDA tools

Why isn’t a new EDA tool like Swiffer?

One point that I’ve made before is that big EDA companies suffer from being unable to get new products into their channel. As I said earlier:

“When so much is riding on just keeping the customer on-board using the existing tools, the salesperson becomes very risk averse about selling new products.”

The effect of this is that big EDA companies can only sell to customers once there is market demand. But that is the same problem Procter & Gamble faced with, say, Swiffer. Nobody was demanding a mop with replaceable sheets; nobody knew one was available. So traditional marketing showed how useful it could be and that it was available at your local supermarket, and now Swiffer is on track to be a billion-dollar business.

Why can’t marketing do much to create demand in EDA? I don’t entirely know, but here are some plausible contributing factors.

Firstly, the EDA market (for IC design, not for FPGA or embedded software) is inelastic. No matter how much advertising is done, no matter how low the price, no matter how appealing the packaging, the market for EDA tools is fixed. Sure, we can steal market share from each other, maybe we can increase ASPs, and we can expand the definition of EDA. But there is no untapped market of people out there who never knew they wanted to design a chip, in the way that we all turned out to be a market of people who never knew we needed a Post-it note. So we are only marketing to people who already know they are designers.

EDA is not even like other software industries. It values different things because it moves so fast. All users complain, with justification, about the bugginess of EDA software, but they can’t get by with the old solid version in the same way as in slower moving software industries. In books like Crossing the Chasm and The Innovator’s Dilemma, marketers are told to worry about the job that the customer hires you to do. The customer doesn’t want a drill, they want a hole. The job is holes. But as Tom Kozas and Mike Sanie point out here, when the EDA engineer goes to Home Depot, he’s not looking for ways to make a hole. He’s already decided that he wants an 18V cordless drill with two gear ratios. Maybe he’ll pick between DeWalt and Bosch but he’s not looking at those ads for explosive nail-guns.

Next, the design engineer has been burned before. Because the technology treadmill moves so fast, tools don’t always work well (or sometimes at all), but the purchaser doesn’t have the luxury of waiting for code to mature, for standards to be in place, for the landscape of winners and losers to become clear. And a lot of IC design is about reducing risk (because we can’t just fab the chip repeatedly the way a software engineer compiles and tests code). One component of risk is using a new tool, so there is always the push of the new tool’s potential advantage against the pull of potential disaster if it fails. So designers have learned to evaluate new tools in enormous detail, to understand not just what they should do, but what they actually do and how they work internally. Other people don’t take the cylinder head off the engine before buying a car.

Brand name counts for very little in EDA. To the extent it counts for anything in this context, it stands for a large organization of application engineers who can potentially help adoption. It certainly doesn’t stand for rock-solid reliability. The speed of development means that every large EDA company has had its share of disastrous releases that didn’t work and products that never made it to market. There are no Toyotas and Hondas in EDA with a reputation for unmatched quality. I don’t think anyone knows how it would be possible to create one without it also having a reputation for the unmatched irrelevance of many of its products due to lateness.

So there are a few theories. Like all after-the-fact stories, they are plausible, but it is not clear whether they are the real reason. The facts, though, are clear: traditional marketing, such as advertising, doesn’t work for EDA products.

Posted in eda industry, management | Comments Off

The art of presentations

As a marketing guy, and even when I was an engineering manager, I have made a lot of presentations. I’ve also been on a couple of presentation courses over the years, most recently one by Nancy Duarte, whose biggest claim to fame is doing Al Gore’s slides for his An Inconvenient Truth presentation. The most amazing thing was not the course itself but the location: a whole building of professional slide designers doing nothing but presentations for large companies, for tens of thousands of dollars a time.

Most problems with presentations come about from making the slides serve too many purposes. They are what will be on the screen for the audience to see, they may be your own way of keeping track of what you need to say, and they may be a handout that is meant to stand on its own for people who missed the presentation. The problem is that the first function, adding to what you are saying, requires different content from the other two, reminding you what to say or serving as a substitute for what you say.

The reality is that your audience can only concentrate on one verbal thing at a time. If you put a lot of text on your slide, then your audience will be reading it and not listening to you. You need to decide which is going to win. You cannot have it both ways and make a detailed, content-rich speech accompanied by a detailed, content-rich presentation. If the content is identical in both places, it is very boring. If it is different, it is very confusing. There are even studies showing that if everything you say is on the slides, then you are better off either giving a speech (without slides) or handing out the slides (without saying anything).

The rest of this entry assumes that you are doing the most common form of hi-tech presentation, where a good part of the content is on the slides. When you deliver it you should emphasize the key points but don’t go over every line. Instead, tell anecdotes that back up the dry facts on the screen. Personalize them as much as you can to make them more powerful and memorable. This approach works well for presentations that you are not going to rehearse extensively, or where someone else may be the presenter. If it’s not on the slide it doesn’t exist.

When putting together a presentation, like any sort of writing, the most important thing is to have a clear idea in your own mind of what you want to say. So the first rule is to write the one-slide version of the presentation first. If you can’t do this, then you haven’t decided what point you are trying to make, or what your company’s value proposition is, or how to position your product. Until you get this right, your presentation is like a joke where you have forgotten the punch line. Once you have it, it should be very close to the first slide of your eventual presentation. After all, it is the most important thing, so you should open with it; and probably close with it too.

When you have the one-slide version worked out, you can go to 3 or 4 slides. Get that right before you go to the full-length presentation. When you expand the few points from those few slides into a full-length presentation, make sure that your presentation “tells a story.” Like a good story, it should have a theme running through it, not just be a collection of random slides. How many slides? No more than one every 2 minutes. If you have 20 minutes to speak, 10 slides or so.

In the consulting work I do, I find that failing to get these two things right is very common: presentations where the basic message is not clear, and presentations that do not flow from beginning to end. Not to mention people trying to get through 20 slides in 10 minutes.

If you are presenting to foreigners who don’t speak good English, you must make sure that everything important is on the slides since you can assume they will not catch everything that you say (maybe anything you say). You will also need to avoid slang that non-Americans might not understand (although you’d be surprised how many baseball analogies Europeans use these days without knowing what they really mean in a baseball context). I remember the people at a Japanese distributor being confused by “low-hanging fruit.” They thought it must have some sort of sexual connotation!

So make sure you know the main point, and make sure that the presentation tells a story that starts from and finishes with the main point.

Oh, and here is another rule of thumb. Print out your slides. Put them on the floor. Stand up. If you can’t read them, the type is too small. Or go with Guy Kawasaki’s rule of using a minimum font size of at least half the age of the oldest person in the room.

Posted in presentations | Comments Off