Guest blog: Soha Hassoun

DAC is coming up in late July, as I’m sure you know. For the first time, this year there is a real user track in which users, unmoderated by EDA marketing droids, can talk about their experiences. Soha Hassoun of Tufts University was the person who didn’t step three paces back fast enough and so found herself volunteered as Design Community chair. Of course, her first question to answer was: what is a Design Community chair? And the last question: what flavors of ice cream do users prefer?

My design community: the users

It’s three and a half weeks before DAC, and we (the executive committee and the organizer, MP Associates) are in the final stages of preparing for the User Track. It’s been a joy ride, with some ups and downs.

I was recruited about a year ago to be the Design Community Chair for DAC. I said I would take the job only if I could redefine what that role means. Andrew Kahng, DAC’s chair, agreed and signed me up.

The questions began: What is a design community? Does it even exist? And, in what forms? How far does it stretch? And, if these design communities already are associated with established conferences, why would they be interested in connecting with DAC? Can we characterize the profile of a typical “design community member”?

Well, after dwelling on these questions for a few days, looking at years’ worth of survey data from DAC, chatting with members of the DAC Executive Committee and many colleagues designing hardware and software, two ideas crystallized.  First, DAC in the past has had a hard time identifying with a “design community.”  Second, the common characteristic of members of this “design community” was that they “use EDA tools.”

Bingo! That was it. The “design community” label was potentially a mismatch: we were really talking about EDA tool users. Relief. At least now we had decoded the word “Design” in “Design Community,” and we started talking about the “User Community.”

The quest shifted to find a way of connecting a large community, scattered geographically and topically, yet focused on EDA tool use.  What would bring members of this EDA User Community to DAC?  Certainly not to shop for tools.   The user community is much larger than the folks that typically show up at DAC to visit the exhibit floor.  The question became, what value can DAC provide to the User Community? 

More questions and discussions. Clearly, users would not be willing to spend the time writing full-length DAC papers, and user contributions would not be looked upon with enthusiasm given DAC’s main focus on algorithmic contributions and methodologies.

So the User Track was born: by users, for users, and chosen by users. We decided on a conference format. The submissions would be shorter. The quality had to be high to attract others. The event would be more inclusive and viewed as a community event to share knowledge and experiences. It had to offer networking opportunities. And submissions would be evaluated by experts in tool use.

It is now eight months later. We put together a committee of more than 20 super EDA users. We put the word out that we were looking for submissions: just one-page abstracts. We received 117 submissions spanning both front-end and back-end design topics. We accepted about one third as presentations and one third as posters; the remaining third did not make the quality cut for various reasons.

I’ve already reviewed some of the presentations and posters that will be given at DAC, and they look GREAT. Presenters include users from Infineon Technologies, Cisco, TI, Xilinx, STMicroelectronics, Intel, Virtutech, ClueLogic, Samsung, Qualcomm, Fujitsu, IBM, Sun, and others.

The User Track is a three-day event with 40 presentations running in parallel with the regular technical sessions, plus a poster session, held Wednesday 1:30-3 p.m., featuring 42 posters. Did I mention there will be ice cream, too, at the poster session?

Where else can you get such an experience (the sessions, of course, not the ice cream)? See you there!

Posted in guest blog

Patents

Patent law is a controversial subject and keeps popping up unexpectedly (for instance Ron Wilson writes about a case at Applied Materials here). I talked about it here in the context of CDMA and Qualcomm.

The basic “tradeoff” in having a patent system is that without the promise of some sort of state-sanctioned monopoly, innovation would be underprovided. Let’s not argue about that dubious point today, and just take it as a given. Another positive for the system is that requiring the inventor who receives the monopoly to disclose the details of the invention means that once the monopoly period ends, the details are freely available for everyone to copy.

Let’s see how that seems to work in practice in the two industries I know well, EDA and semiconductors.

I knew nothing about patents until the mid-1980s. I was at VLSI Technology and we didn’t bother patenting stuff since we were small and patenting was expensive. Once VLSI reached about $100M in revenue, other semiconductor companies with large patent portfolios (IBM, Motorola, TI, AT&T, Philips, Intel and so on) came knocking on our door with a suitcase of patents, saying we probably infringed some of them and would we please pay several million dollars in licensing fees. We probably were infringing some of them, who was even going to bother to try and find out, so that first year the only negotiation was how much we would pay. VLSI started a crash program to patent everything we could, especially in EDA where we were ahead of the work going on inside other semiconductor companies. When the patent licenses came up for renewal we were in a much stronger position. They were infringing our patents and how much were they going to pay us. Well, how about we license your patents and you license ours and no money (or at least a lot less) needs to change hands? No lawyers on either side had any intention of actually reading the patents, or disturbing their own engineers to find out if they were infringed. It was patent licensing by the ton.

To me, in these industries patents seem to be entirely defensive, created purely on the basis that other people have patents and therefore might seek license revenue. If there were no patent system, both EDA and semiconductor would proceed exactly as they do today. There may be the occasional patent that is so valuable that it is created to attempt to extract monopoly licensing from the rest of the industry (Rambus, Blu-ray), but these seem to be mainly political issues around trying to get proprietary technology into standards. Most patents are incremental improvements on existing technology that are created only for defensive reasons, with no expectation of ever truly licensing anyone or even going looking for infringement. Every company needs a portfolio of patents so that when other players in the industry come seeking license royalties, the “victim” has a rich portfolio that the would-be licensor is probably violating, and so the resolution is some sort of cross-license pact. There is some genuine licensing of patents in semiconductor, but none that I know of in EDA.

As to patents being a way of disseminating information, there are two problems. The first is that in semiconductor and EDA, waiting 20 years for a patent to expire and then implementing the protected invention using the patent as a guideline is laughable; the timescales are just too long to matter in this industry. The second: have you ever read a patent? There is no way you can really discern what it even covers, let alone use it as a blueprint for implementation. Take, for example, Kernighan and Lin’s 1969 patent on their well-known partitioning algorithm. My guess is that every placement tool in every EDA suite violated this patent, but was written without ever looking at the patent. It’s standard graduate-level graph optimization and has probably been independently invented several times.

Patent law provides for damages in the event of patent infringement. But willful patent infringement, when you know that the patent exists, carries punitive triple damages. So the advice I’ve always been given by lawyers is to tell my engineers never to read any patents. That way, even if a patent is infringed it is not being willfully infringed since there is no way for whoever wrote the code, or whatever, to know that it was violating that particular patent.

So the situation comes down to this: companies patent inventions in order to have currency to negotiate with other companies that hold patent portfolios, not to disclose important techniques to the general public, and not because innovation in semiconductor and EDA would grind to a halt without the protection of a patent. It is like mutually assured destruction with nuclear weapons. The purpose of all that effort and investment in nuclear weapons was purely to ensure that the other guy’s weapons weren’t a threat.

Companies that purchase a few patents simply to demand licensing fees, so-called patent trolls, violate this game. They are like a terrorist with a nuclear bomb. No matter how many missiles we have to “cross-license,” the terrorist isn’t interested. At least when it was just companies threatening each other and then cross-licensing, the game wasn’t played with real money. The shakedown of RIM (BlackBerry) a year or so ago was a complete indictment of the ridiculous situation we have reached.

So in EDA and semiconductor, patents are largely a joke. If they didn’t exist, people would not be clamoring for them. There was plenty of innovation in software in the 1960s when software was not even patentable. Nobody cares about patents except for defense, so for our industry patents are a cost, not a benefit, and a distraction for engineers who could better be spending their time engineering. In fact, I’d go further. If patents were actually enforced, in the sense of requiring a license to be negotiated for every patent actually violated, then innovation would grind to a halt.

Posted in eda industry, management, semiconductor

Hiring and firing in startups

Startups have unique problems in human resources. For a start, they don’t have human resource departments or even, in the earliest days, anyone to do the mechanical stuff of making sure the right forms are filled out. You have to do that yourself.

There’s some obvious stuff that is unlikely to trip anyone up: people need to have a legal right to work at the company, meaning they are US citizens or permanent residents. In the earliest days you are not likely to want to go through the visa application process, so that is probably the end of the list of people you’d want to bring on board. One exception might be someone who already has an H-1 (or other appropriate) visa; it is fairly straightforward to transfer it to a new company, it doesn’t run into any quota issues, and it only takes a few weeks.

One thing that is incredibly important is to create a standard confidentiality and invention assignment agreement and make sure that, without fail, every employee signs it. This binds employees to keep company confidential information confidential (and survives their quitting), and also assigns to the company the copyright and patent rights in the code (or whatever) they create. If an employee leaves to go to a competitor, that is not the moment to discover that the employee never signed his or her employment agreement and that it is legally murky what rights they have to ideas they came up with on your watch.

But the most difficult challenge is building the right team. This is probably not a problem with the first handful of hires. They are likely either to be founders or else already known to the founders from “previous lives,” working together in a similar company.

One small point to be aware of: if any of the founders was packaged out from a previous company (as part of a layoff, for example) and signed a release, almost certainly that release will explicitly prohibit the ex-employee from recruiting people from the old company for a period of time. However, that doesn’t mean you can’t hire them; people have a right to work where they want (at least in California, ymmv). The best way to play it safe is for such ex-employees not to interview the candidate. That way they can’t be accused of “recruiting.”

The first problem with hiring, especially if the founders are doing their first startup, is the deer-in-headlights phenomenon of not being able to make a decision about whom to hire. A candidate will never want to work for you more than they do right after the interview, having heard the rosy future, seen the prototype, met the team and everything. The quicker you can get them an offer, the more likely they are to accept. Firstly, they won’t have had time to interview with anyone else equally attractive, and secondly they won’t have had time to reach the sour-grapes stage of rationalizing why you haven’t given them an offer already. One advantage startups have over bigger companies is that they can make people an offer very fast. It can make a big difference: when I first came to the US I was promised an offer from Intel and H-P. But VLSI Technology gave me an offer at the end of the day I interviewed, so I never even found out what the others might have offered (Intel had a hiring freeze before they’d have been able to get me an offer, as it happened). Don’t neutralize the fast-offer advantage that startups have by being indecisive.

The second problem with hiring is hiring the wrong people. Actually, not so much hiring them; it goes without saying that some percentage of hires will turn out to be the wrong person, however good your screening. The problem comes when they start work. They turn out to be hypersmart, but think actually delivering working code is beneath them. They interview really well but turn out to be obnoxious to work with. They don’t show up to work. They are really bright but have too much still to learn. Whatever. Keeping such people is one of the reasons startups fail or progress grinds to a halt.

Firing people is an underrated skill that rarely gets prominence in books or courses on management. Even in large companies, by the time you fire someone, everyone around you is thinking, “what took you so long?” In a startup, you only have a small team. You can’t afford to carry deadweight or, worse, people who drag down the team. It doesn’t matter what the reason is, they have to go. The sooner the better. One thing to realize is that it is actually good for the employee. They are not going to make it in your company, and the sooner they find a job at which they can excel, the better. You don’t do them any favors by keeping them on once you know that they have no future there.

It may be the first time that you’ve fired someone in your life, which means that it will be unpleasant and unfamiliar for you. Whatever you do, don’t try and make that point to the employee concerned. No matter how uncomfortable you might feel, he or she is going to be way more uncomfortable. It doesn’t get much easier with experience. It will always be more fun to give someone a bonus than to terminate them.

Make sure to have someone else with you when you terminate someone. In a big company that will be someone from HR, in a small company you just want someone to be a witness in case of a lawsuit (“he told me he fired me because I was a woman”). In California you must give them a check for all pay due there and then (actually I think you have until the end of the day) so make sure you have it ready. Normally you will want the employee to sign a release saying that they won’t sue you and so on. If you give the employee severance (a good idea to give at least a little so the other employees feel that they work for a company that is fair) then the severance is actually legally structured as payment for that release. So don’t give them the check until they sign (and if they are over 40, there is a waiting period during which they have the right to rescind their signature, so don’t give them the check until that expires).

Posted in management

More Moore: that iSuppli report

The recent iSuppli report has been getting a lot of coverage (EDN, Wall Street Journal if you have a subscription). It somewhat predicts the end of Moore’s law. If you look at the graph you can see that no process is ever predicted to make as much money at its peak as 90nm but that all the different subsequent process generations live on for a long time as a many-horse race.

I’ve often said that Moore’s law is an economic law as much as a technical one. Semiconductor manufacturing is a mass-production technology, and the mass (volume) required to justify it is increasing all the time because the cost of the fabs is going up all the time. This is Moore’s second law: the cost of the fab is also increasing exponentially over time.

So the cost of fabs is increasing exponentially over time and the number of transistors on a chip is increasing exponentially over time. In the past, say the 0.5um to 0.25um transition, the economics were such that the cost per transistor dropped, in this case by about 50%. This meant that even if you didn’t need to do a 0.25um chip, if you were quite happy with 0.5um for area, performance and power, then you still needed to move to 0.25um as fast as possible or else your competitors would have an enormous cost advantage over you.
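To see the shape of that argument with some purely illustrative numbers (these are not real wafer costs): halving the feature size roughly quadruples the number of transistors you can fit on a wafer, so even if the finer process makes each wafer considerably more expensive to run, the cost per transistor still falls sharply.

```python
# Illustrative numbers only, to show the economics; they are not real wafer costs.
def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

old = cost_per_transistor(wafer_cost=1000.0, transistors_per_wafer=1e9)  # "0.5um" node
# Halving the feature size gives roughly 4x the transistors per wafer; assume the
# finer process also makes each wafer twice as expensive to process.
new = cost_per_transistor(wafer_cost=2000.0, transistors_per_wafer=4e9)  # "0.25um" node

print(new / old)  # 0.5: cost per transistor drops by about half
```

With those assumptions the ratio comes out at 0.5, the roughly 50% drop mentioned above.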

We are at a different point on those curves now. Consider moving a design from 65nm to 32nm. The performance is better, but not by as much as it used to be when moving from one process node to another. The power is a bit better, but we can’t reduce the supply voltage enough, so it is not as big a saving as it used to be, and the leakage is probably worse. The cost is less, but only at high enough volumes to amortize the huge engineering cost, so again not as much of a gain as it used to be. This means that the pressure to move to a new process generation is much less than it used to be, and this is showing up in the iSuppli graph as those flattening lines.

Some designs will move to the most advanced process since they have high enough margins, need every bit of performance and every bit of power saving, and are manufactured in high enough volume to make the new process cheaper. Microprocessors and graphics chips are obvious candidates.

FPGAs are the ultimate way to aggregate designs that don’t have enough volume to get the advantages of new process nodes. But there is a “valley of death,” where there is no good technology, and it is widening. The valley of death is where volume is too high for an FPGA price to be low enough (say, for some consumer products) but not high enough to justify designing a special chip. Various technologies have tried to step into the valley of death: quick-turnaround ASICs like LSI’s RapidChip, FPGAs that can be mass produced with metal-mask programming, laser programming, e-beam direct-write. But they all have died in the valley of death too. Canon (steppers) to the left of them, Canon to the right of them, into the valley of death rode the six hundred.

Talking of “The Charge of the Light Brigade,” light and charge are the heart of the problem. Moore’s law involves many technologies, but at the heart of them all is lithography and the wavelength of light used. With lithography we are running into real physical limitations writing 22nm features with 193nm light, with no good way to build lenses for shorter wavelengths. And on the charge side, there is no good way to speed up e-beam write enough.

So today, the most successful way to live in the valley of death is to use an old process. Design costs are cheap, mask costs are cheap, the fab is depreciated. Much better price per chip than FPGA, better power than FPGA, nowhere near the cost of designing in a state-of-the-art process. For really low volumes, you can never beat an FPGA, for really high volumes you won’t beat the most advanced process, but in the valley of death different processes have their advantages and disadvantages.

However, if we step back a bit and look at “Moore’s law” over an even longer period, we can look at Ray Kurzweil’s graph of computing power growth over time. This shows pretty much continuous exponential growth (a straight line on a logarithmic scale) for over a century through five different technologies (electromechanical, relay, tube/valve, transistor, integrated circuit). If this exponential growth continues then it might turn out to be bad news for semiconductor, just as it was bad news for vacuum tube manufacturers by the 1970s. Something new will come along. Alternatively, it might be something different in the same way as integrated circuits contain transistors but are just manufactured in a way that is orders of magnitude more effective.

We don’t need silicon. We need the capabilities that most recently silicon has delivered as the substrate of choice. On a historical basis, I wouldn’t bet against human ingenuity in this area. Software performance will increase somehow.

Posted in semiconductor

Friday puzzle: cube

Last week’s puzzle was the “two switch” puzzle. To solve it, the group of prisoners must identify one person, the counter. Ignore switch B; it is there just to give people something to toggle if they don’t want to toggle switch A. Everyone except the counter follows the rule: if switch A is up, and you’ve never previously put it down, then put it down. The counter follows the rule: if switch A is down, put it up and add one to the count. When the count matches the number of prisoners (less one, to account for the counter), announce that everyone has visited the room.

With these rules, each prisoner (except the counter) will put switch A down exactly once, and the counter will eventually notice, increment the count and reset the switch.

It is possible to adapt these rules slightly for the more difficult puzzle where the initial position of switch A is unknown, so as to avoid the off-by-one error that results from the counter thinking there is one extra person if switch A starts off down. Exercise for the reader.
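If you want to convince yourself that the basic protocol works, here is a minimal simulation sketch, assuming switch A is known to start in the up position and that the warden picks a prisoner uniformly at random each day; prisoner 0 plays the counter.

```python
import random

def simulate(n_prisoners, seed=None):
    """Simulate the basic two-switch protocol with a known starting position.
    Returns the number of days until the counter (safely) announces."""
    rng = random.Random(seed)
    switch_a_up = True                    # assumed known initial position
    flipped = [False] * n_prisoners       # has each non-counter flipped A down yet?
    count, visited, days = 0, set(), 0
    while True:
        days += 1
        p = rng.randrange(n_prisoners)    # the warden's daily pick
        visited.add(p)
        if p == 0:                        # prisoner 0 is the counter
            if not switch_a_up:
                switch_a_up = True        # reset the switch
                count += 1                # one more prisoner accounted for
            if count == n_prisoners - 1:
                assert len(visited) == n_prisoners  # the announcement is always safe
                return days
        elif switch_a_up and not flipped[p]:
            switch_a_up = False           # signal the counter, exactly once
            flipped[p] = True
        # otherwise the prisoner toggles switch B, which carries no information

print(simulate(23, seed=1))
```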

Today, something a bit more electrical, since this is an EDA blog. What is the resistance between diagonally opposite corners of a cube of twelve identical resistors, one along each edge?

Posted in puzzles

Chicken and the egg

Which came first, the chicken or the egg? This question often gets posed as an example of a question that is impossible to answer, since plainly chickens come from eggs and eggs come from chickens. In EDA, there are chicken and egg business issues: how do you get people to use synthesis when there are no libraries? So then how do you get people to create libraries when nobody uses synthesis? How do you get software engineers to use virtual platforms when the models are not created in advance? And get the component vendors to create the models when the user base is not large enough?

However, the answer to the actual chicken and egg problem seems obvious to me. Dinosaurs come from eggs. Reptiles come from eggs. All birds come from eggs. So if you take a chicken today and follow its ancestry back through egg and chicken all the way to primordial soup, then there is some point at which you have to decide that the creatures are sufficiently different from chickens that you can no longer call them chickens: you have identified the first chicken. For any reasonable definition of chicken, that creature came from an egg. So at that point you have a non-chicken laying an egg, and that egg hatching into a chicken. So the egg came first.

Going back up the line of ancestry has some interesting aspects. As you almost certainly know, only males have a Y-chromosome. So, if you are male, your Y-chromosome came from your father, not your mother. And his Y-chromosome came from your paternal grandfather. If you follow that line back thousands of generations, every one of those men has something atypical about them. They managed to avoid fatal childhood disease, avoid dying in war, find a mate and have a male child. In one sense that is really obvious, in another sense it is really deep. After all, the odds weren’t that great in the days with low life expectancy, high levels of violence in society and so on.

You may also know that the mitochondria in your cells come from your mother. So in the same way, hers came from your maternal grandmother, and so on all the way back. All those women also managed to avoid dying young (less likely to die in war, but more likely to die in early adulthood during childbirth), and they all had at least one daughter who lived to adulthood and had a daughter of her own.

If you go back about 170,000 years (about 8,000 generations) you arrive at Mitochondrial Eve, the most recent ancestor (female, obviously) from whom all humans today inherit their mitochondria. And about 60,000 years back (only about 3,000 generations) is Y-chromosomal Adam, the most recent ancestor from whom all men inherit their Y-chromosome.

Firstly, note that these two lived at very different times, 100,000 years apart. There isn’t a single Adam and Eve from whom everyone is descended in Biblical style. But we know, by definition, that all the other males in whatever group Y-chromosomal Adam lived in failed to have an unbroken line of male heirs, whereas he did. Similarly, the other female contemporaries of Mitochondrial Eve failed to have unbroken lines of female descendants, whereas she did.

The most recent common ancestor of all mankind is actually more recent than either of these two, since that is a much less restrictive condition (the line of descent can pass through both males and females). Depending on which groups of people fail to make it in the future due to war, catastrophe or epidemic, the most recent common ancestor (and, in a similar way, the identity of Mitochondrial Eve and Y-chromosomal Adam) might shift to different, more recent individuals as some lines of descent die out.

Posted in culture

EDA on the iPhone

One thing that I’ve done in the last few months during my involuntary unemployment, other than writing this blog, has been to teach myself how to program the iPhone. Despite having been in marketing for over a decade, my background is as a software engineer. I wanted to bring my programming skills up to date on a state-of-the-art platform.

One other thing I was interested in was the extent to which software productivity tools had improved. We all know that IC design productivity has improved by about a million times since the early 1980s, when you would do well to design and lay out a single gate in a day. Software development hadn’t seemed to be on the same sort of track when I was a programmer. Now, mainly because of the libraries that are available, applications need a lot less new code. Knowing how to program the iPhone is more about knowing the library calls than it is about writing code in Objective-C. Once you don’t need to run to the help files every time you want to call a procedure, you get very productive. With infrastructure like OpenAccess now available, EDA tools have become more like that too, but that is a bigger topic for another day.

I wrote some toy applications and got them running on my iPhone. Of course I bought a couple of books (at the time, that was all of them; lots more seem about to come out in the next month or two). Having tried several of them, if you too are interested in writing an iPhone app, easily the best book I came across for getting started is the Apress one, Beginning iPhone Development. Other books are probably better once you are up to speed; as with any domain, what you want as a tutorial to get started is different from what you want as a reference later. Stanford has also put the course materials for CS193P on iPhone programming online.

However, it never occurred to me to write an EDA application for the iPhone. It did, however, occur to Michael Sanie (disclosure: Michael is a friend and he has worked for me as both employee and consultant in the past). Obviously nobody is going to be doing place and route on their iPhone. Michael has written a tool, iWafer, for estimating die size, how many die fit on a wafer, and yield. It is not all that different from a workstation-based tool I wrote back in the mid-1980s called the DesignAssistant. It did die size estimation and would have done yield estimation too, except that yield data was considered commercially sensitive back then. iWafer is a handy calculator for working out how big a die you are likely to need for a certain number of gates, how many will fit on various sized wafers, and how many of those die (dice) are likely to turn out good. Various foundry salespeople are already finding it to be a useful tool to have in their pocket (pun intended).
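I don’t know exactly what models iWafer uses internally, but the kind of estimate such a calculator produces can be sketched with the standard textbook approximations: a gross die-per-wafer formula that discounts edge losses, and a simple Poisson yield model. The numbers in the example below are purely illustrative.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common first-order estimate of gross die per wafer:
    wafer area / die area, minus a term for partial die at the edge."""
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Simple Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

# Illustrative only: a 50 mm^2 die on a 300 mm wafer with D0 = 0.5 defects/cm^2.
gross = dies_per_wafer(300, 50)
good = gross * poisson_yield(50, 0.5)
print(gross, round(good))   # roughly 1319 gross die, about 1027 good
```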

Anyway, everyone always complains that EDA tools cost too much. But this one doesn’t merit that criticism: it’s just $9.99.

Posted in eda industry, engineering

Guest blog: Rob Dekker

Today’s guest blog is by Rob Dekker, president and principal developer at Verific, which produces front-ends used by many EDA tool suppliers, both big and small. Prior to founding Verific, Rob was at Exemplar, where he was the principal developer of their Leonardo FPGA synthesis product (Exemplar was acquired by Mentor in 1995). He started his career at Philips Research in the Netherlands after graduating from Delft University of Technology.

SystemVerilog and the triangle of truth

With the advent of the IEEE 1800-2009 SystemVerilog standard, currently in the proposal stage, we thought it would be a fun exercise to take stock of support for the 1800-2005 version of this mighty hardware description language (HDL). For this purpose, we compared the two leading SystemVerilog simulators, Questa (from Mentor Graphics) and VCS (from Synopsys), with our own Verific SystemVerilog parser, a front end widely used in EDA applications marketed by our customers.

As an aside for those not familiar with Verific Design Automation: we build VHDL, Verilog, and SystemVerilog parsers, analyzers and elaborators, and license those to a wide variety of semiconductor, FPGA and EDA companies. A representative list of our customers can be found on our website (www.verific.com), so there is no need to bore the reader with it here. If you do RTL design, chances are you have used one of our parsers. We never disclose which components our customers license from us, but we have our fair share of SystemVerilog licensees and carry the battle scars to prove it.

We also have a SystemVerilog test suite that we run internally to validate our own results, and several of our customers have licensed this test suite as well. It contains 2,500 tests, both positive tests (“this should pass”) and negative tests (“this should fail”). The majority of these are semantics tests, and 40% are synthesizable. By no means do we claim that this test suite is complete or even sufficient. As a matter of fact, its size is dwarfed by our Verilog 2001 and VHDL regression suites. We are sure that over time we will be able to extend our test suite as more SystemVerilog design activity starts occurring in the industry.

Our experiment was simple. We took the tests from our SystemVerilog test suite and ran them simultaneously through VCS version 2008.12, Questa version 6.5a, and Verific 2009.01. We aptly named it the “Triangle of Truth” and categorized the results into eight bins, ranging from “0 0 0” (all three tools reject the test with an error) through “1 1 1” (all three tools parse and pass the test). The good news is that on 73% of the tests, all three parsers fully agreed. The bad news, of course, is that there are discrepancies in the remaining 27%.
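We won’t bore you with the bookkeeping, but the binning itself is simple enough to sketch; something along these lines (with hypothetical test names and verdicts) folds the pass/fail results from the three tools into the eight categories:

```python
from collections import Counter

def triangle_of_truth(results):
    """results maps test name -> (vcs_ok, questa_ok, verific_ok), each a bool
    meaning 'the tool accepted the test'. Returns a count per bin,
    from "0 0 0" (all reject) to "1 1 1" (all accept)."""
    bins = Counter()
    for verdicts in results.values():
        bins[" ".join(str(int(ok)) for ok in verdicts)] += 1
    return bins

# Hypothetical test names and verdicts, for illustration only.
example = {
    "t1": (True, True, True),    # full agreement: the 73% case
    "t2": (True, True, False),   # Verific alone rejects: treated as a defect to fix
    "t3": (False, True, True),   # the two simulators disagree with each other
}
print(triangle_of_truth(example))  # each of the three bins gets a count of 1
```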

Tests in the bin where Questa and VCS both successfully parsed the test but Verific failed were considered defects and were quickly fixed in the Verific parser. There is also a category where both VCS and Questa reported an error in the design, as expected, but where Verific failed to flag the error. These results are called “false positives” and require improved semantic checking in the Verific parser. They are less onerous but should get fixed over time as well.

And then there were situations where either Verific and Questa agreed, or Verific and VCS agreed, but not Questa and VCS. Remember the saying about the person with two watches who never knows what time it is? Well, that’s how we feel in situations like this. Predictably, Questa customers nudge us toward their simulator’s implementation and VCS users like it exactly the other way around. Where does that put us? Between a rock and a hard place!

Worse yet are those situations where one of those leading EDA tools clearly violates the standard.  What are the others to do?  Fortunately, that happens only rarely.  Conflicts between the three vertices of the “Triangle of Truth” mainly relate to design checking, where tool A issues an error while tool B parses the same design just fine, or vice versa.  It will be interesting to see what the 1800-2009 version of the SystemVerilog standard will do to the “Triangle of Truth.”  Improved clarity of the standard should make it easier for EDA tools to agree on interpretation, but additions to the language will undoubtedly generate their own category of mismatches.  Whichever way it goes, the “Triangle of Truth” will remain a useful tool to measure SystemVerilog interoperability.

Posted in guest blog

Board games

I talked earlier about changing the CEO in startups. The board in any company really has two main functions. One is to advise the CEO, since the board often has complementary experience. For example, older venture capital investors have probably seen something very similar to any problem that may come up before, or board members with industry experience may have a more “realistic” view of how taking a product to market is likely to turn out than the spreadsheets describing the company’s business plan.

The second, and most important, job of the board is to decide when and whether to change the CEO. In one way of looking at things, this is really the only function of the board. The CEO can get advice from anywhere, not just the board. But only the board can decide that the company leadership needs to change. It is the rare CEO that falls on his own sword, and even then it is the board that decides who the new CEO is going to be.

Usually there is some controversy that brings a crisis to a head. The CEO wants to do one thing. There is some camp, perhaps in the company, or perhaps outside observers, or perhaps on the board itself, that thinks that something else should be done. The issues may be horribly complicated. But in the end the board has a binary choice. It can either support the CEO 100%, or it can change the CEO. It can’t half-heartedly support the CEO (“go ahead, but we don’t think you should do it”). It can’t vote against the CEO on important issues (“let’s vote down making that investment you proposed as essential for the future”).

I was involved in one board-level fight. I was about to be fired as a vice-president even though the board supported my view of what the company needed to do and told me that they wouldn’t let the CEO fire me. But in the end, they only had those two choices: support the CEO, or fire the CEO. The third choice, don’t fire the CEO but don’t let him fire me, didn’t actually exist. So I was gone. The new CEO search started that day, and the old CEO was gone within the year.

Boards don’t always get things right, of course. I don’t know all the details, but there is certainly one view of the Carly Fiorina to Mark Hurd transition at H-P in which Carly was right, and Mark has managed to look good since all he had to do was manage with a light hand on the wheel as Carly’s difficult decisions (in particular the Compaq merger) started to bear fruit. If she had been allowed to stay, she’d have got the glory, in this view.

Almost certainly, Yahoo’s board got things wrong with the Microsoft acquisition offer. Jerry Yang wanted to refuse it (and did). The board supported him. Their only other choice was to find a new CEO, which they eventually did.

When Apple’s board fired Gil Amelio and brought Steve Jobs back, hindsight has shown that it was a brilliant decision. But in fact it was extraordinarily risky. There are very few second acts in business, where a CEO has left a company (and remember, an earlier Apple board had stripped Steve Jobs of all operational responsibility, effectively driving him out of the company) and then returned to run it successfully later. Much more common is the situation at Dell or Starbucks, where the CEO returns when the company is struggling and the company continues to struggle.

Posted in management

Your comp plan is showing

I talked recently about setting up separate channels and when it made sense to do it, and about some aspects of channel conflict. One area where separate channels are usually required is when a business is global. Most EDA products, even from quite small companies, have business in Japan, Taiwan, Korea and Europe as well as the US. Most of these regions cannot be serviced with a direct sales organization until the company is pretty sizeable, and maybe not even then. But customers don’t always view the world the way your sales compensation structure does. It is really important not to let the way you structure and compensate your internal organizations, especially sales, show through and limit how customers can do business with you.

When I was at VLSI Technology, we wanted to standardize on a single Verilog simulator and we had decided that it would be ModelSim. So we wanted to negotiate a deal for using ModelSim throughout the company. At the time, Mentor had already acquired ModelSim, but it was still sold partially through the old ModelSim channels, which were distributors and VARs (value-added resellers). I don’t think ModelSim ever had any direct sales force. We met with our Mentor account manager.

Mentor basically refused to do any sort of global deal because they felt unable to go around their distributors in each region; we would have to do a separate deal with each region. Also, licenses sold in one region would not be usable in other regions, since the distributors and VARs provided first-line support. The US alone was several different regions, so this wasn’t very attractive.

Part of the reason for doing a global deal was that we could get better pricing, we thought, since Mentor’s costs would also be lower if we wrote one contract for a large amount, as opposed to negotiating lots of smaller contracts with each region. We also didn’t want to have to worry about where a license was used; we wanted a certain amount of simulation capacity for a certain number of dollars. Internally we didn’t even track where tools were used. There are always issues about using licenses in regions other than the one where they were sold. Firstly, the salespeople get annoyed if someone in region A sells a lot of software that is largely used in region B, especially when the salespeople for region B start to get lots of calls from “their” customer. Even if the customer promises that all support will go through region A, this usually doesn’t stick, especially once different languages are involved. It is just not credible that all Japanese customers will be supported through, say, Dallas, whatever the software license says.

It can be a major problem when the internal scaffolding of the sales organization shows through to customers like that. “I can’t sell you that because I won’t get any commission” is not a very customer-focused response. You get the same problem, on a smaller scale, in many restaurants if you ask someone who is not your waiter for another glass of wine. The server won’t ignore you totally, but they won’t bring the wine either; they’ll just tell your server, if they remember.

Whenever possible, you want your channel to look as unified as possible to the customer, no matter what battles are going on internally. Like a swan, serene on top and paddling like hell underneath.

At the other extreme, my girlfriend works for a medical education company. It has largely grown by acquisition but has the (to me, insane) strategy of keeping each acquired company’s sales force and branding intact. So any given hospital may have half-a-dozen people calling on it, selling different products under different brand names, but all from the same company. The financial inefficiency of doing this is huge, and as more of their business moves into the electronic space and is integrated into electronic medical record systems, more of it will be through indirect channels in any case. But they don’t see this as either inevitable or a good thing (since it is less profitable), and they worry a lot about channels that conflict with their own salespeople. Some of their competitors have bitten the bullet, got rid of their direct sales force and only sell indirectly. Lower costs, one brand name, and no channel conflict. The straps of their compensation scheme aren’t showing.

As for VLSI and ModelSim, we ended up doing a deal with another company, Cadence I think. It’s not just a minor inconvenience to let your sales compensation drive the business. It can drive it away.

Posted in sales