Tales of CDMA

It is very rare for a company to develop a new standard and establish it as part of creating differentiation. Usually companies piggy-back their wares on existing standards and attempt to implement them better than the competition in some way. There were exceptions with big companies. When AT&T was a big monopoly it could simply decide what the standard would be for, say, the modems of the day or the plug your phone would use. IBM, when it was an effective monopoly in the mainframe world, could simply decide how magnetic tapes would be written. I suppose Microsoft can just decide what .NET is and millions of enterprise programmers jump.

Qualcomm, however, created the basic idea of CDMA, made it workable, owned all the patents, and went from being a company nobody had heard of to being the largest fabless semiconductor company; it has even broken into the list of the top 10 largest semiconductor companies.

The first time I ran across CDMA it seemed unworkable. CDMA stands for code-division multiple access, and the basic technique relies on mathematical oddities called Walsh functions. These are functions that everywhere take one of just two values (0 or 1, or equivalently +1 and -1) and are essentially pseudo-random codes. But they are very carefully constructed pseudo-random codes. If you encode a data stream (voice) with one Walsh function and process it with another at the receiver you get essentially zero. If you process it with the same Walsh function you recover the original data. This allows everyone to transmit at once using the same frequencies, and only the data stream you are trying to listen to gets through. It is sometimes explained as being like a noisy party, where you can pick out a particular voice by tuning your ear to it.
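To make the orthogonality trick concrete, here is a minimal sketch in Python (using numpy), with the codes in the equivalent +/-1 convention and a made-up four-bit data stream; it illustrates the idea only, not real CDMA.

```python
# A minimal sketch of Walsh-code spreading/despreading, using the +/-1
# convention (equivalent to the 0/1 form described above).
import numpy as np

def walsh_matrix(n):
    """Build an n x n Walsh-Hadamard matrix; each row is one spreading code."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh_matrix(8)                    # 8 mutually orthogonal codes of length 8
data = np.array([1, -1, -1, 1])            # one user's bits, as +/-1
tx = np.concatenate([b * codes[3] for b in data])   # spread the bits with code #3

def despread(signal, code):
    """Correlate each code-length chunk of the signal with a given code."""
    return signal.reshape(-1, len(code)) @ code / len(code)

print(despread(tx, codes[3]))   # [ 1. -1. -1.  1.]  -- the original bits come back
print(despread(tx, codes[5]))   # [ 0.  0.  0.  0.]  -- a different code sees nothing
```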

Years ago I had done some graduate work in mathematics, so I'd actually come across Walsh functions, and the idea of CDMA seemed very elegant. However, my experience of very elegant ideas is that they get really messy when they meet real-world issues. Force-directed placement, for example, seems an elegant concept but it gets messier once your library cells are not points and once you have to take into account other constraints that aren't easily represented as springs. So I felt CDMA would turn out to be unworkable in practice. CDMA has its share of complications on top of the basic elegant underpinning: needing to adjust the transmit power every few milliseconds, needing to cope with multiple reflected (and so time-shifted) signals, and so on.

At the highest level what is going on is that GSM (and other TDMA/FDMA standards) could get by with very simple software processing since they put a lot of complexity in the air (radio) interface and didn’t make optimal use of bandwidth. CDMA has a very simple radio interface (ignore everyone else) but requires a lot of processing at the receiver to make it work. But Moore’s law means that by the time CDMA was introduced, 100 MIPS digital signal processors were a reality and so it was the way of the future.

Of course, my guess that CDMA was too elegant to be workable was completely wrong. Current and future standards for wireless are largely based on wide-band CDMA, using a lot of computation at the transmitter and, especially, receiver to make sure that bandwidth is used as close to the theoretical maximum as possible.

But before CDMA turned out to be a big success Qualcomm was struggling. In about 1995 VLSI tried to license CDMA to be able to build CDMA chips as well as the GSM chips that it already built. Qualcomm had "unreasonable" terms: they charged license fees to people who licensed their software, to people who built phones (even if all the CDMA was in chips purchased from Qualcomm themselves) and to people who built chips (even if they only sold them to people who already had a Qualcomm phone license). They were hated by everyone. Now that's differentiation. The royalty rates were too high for us and we ended up walking away from the deal.

I was in Israel two days from the end of a quarter when I got a call from Qualcomm. They wanted to do a deal, but only if all royalties were non-refundably pre-paid up front in a way they could recognize that quarter. Sounds like an EDA license deal! We managed to do a deal on very favorable terms (I stayed up all night two nights in a row, after a full day's work, since I was ten hours ahead of San Diego, finally falling asleep before we took off from Tel Aviv and having to be awakened after we'd landed in Frankfurt). The license was only about $2M or so in total, I think, but that was the relatively tiny amount Qualcomm needed to avoid posting a quarterly loss, hitting their stock price, and so damaging their ability to raise the funds they would need to make CDMA a reality. Which they proceeded to do.

Posted in semiconductor | Comments Off

Oasys

I see a fair number of EDA startups. Most of them have some potentially innovative technology that solves a problem that is getting, or is going to get, worse at future process nodes. But it is really hard to assess whether the technology works well, whether it will work on the type of designs that will be done in the future, and whether the approach will turn out to address a sizeable market. At my last few jobs I had to work as a consultant for a time before I could even assess whether the technology was good enough to make me want to sign on with the company.

Sometimes, though, I see a company that is instantly interesting. One such company I talked to recently is Oasys, where I spent an interesting hour or two with Paul van Besouw, the CEO. A disclosure: all the founders of Oasys worked for me at Ambit, although I haven't done any work for them and don't own any stock. But I'd like to.

Oasys has, to my mind, developed a true next-generation approach to synthesis. All synthesis tools to date, Synopsys Design Compiler and its equivalents, along with all the FPGA synthesis tools, take the same basic approach. The RTL is read and analyzed into a control-dataflow graph. This is naïvely synthesized into essentially a network of nand gates with the correct functionality. Optimization then takes place at the gate level, perhaps involving some gate-level placement so that physical information is taken into account during optimization. The algorithms can be clever but they are limited because they are operating at the gate level. If the netlist is several million gates then it is bound to require a huge amount of memory and bound to be slow. Eventually the optimization achieves the best result it can, the netlist is written out along with placement information, and then the place and route tools take over.

Oasys takes a different approach. They take the view that optimization should start at the RTL level, on the usual rationale that higher-level tradeoffs bring bigger changes, and so getting it right at the RTL level saves a lot of time-consuming small-scale gate-level optimization that might not even get there due to the usual local minimum and other problems.

However, in a modern process there is little point in trying to do any timing without first having placement, and so it is necessary to start by placing the hierarchical blocks and then the RTL-level objects such as registers and adders even before they are reduced to gates. Most of the effort goes in at this level, and only right at the end of the process is the netlist finally reduced to placed gates.

So what is the result of this? The first result is that it is tens of times faster. The open-source Sun UltraSPARC core, which takes a day or so to synthesize using current tools, completed in about 20 minutes on an old laptop while I talked to Paul, maybe 30 times faster. I didn't measure it directly, but the amount of memory required was about one hundredth of what was required by Design Compiler, which would never have been able to handle a design of that size on a regular laptop. People who have used the tool on real designs report that it produces equal or better QoR (quality of results, usually some measure of total or worst negative slack). I'm not one of those people, of course, so your mileage may vary.

The good news doesn’t even stop there. The place and route is faster than before because the placement is already so good (it is the dirty secret of place and route tools that it’s all about the placement) and the good timing results are not lost during physical design.

Today people have to break down designs in order to get them through synthesis in a reasonable time. Unless the blocks correspond to physical blocks in the eventual hierarchical layout it is not really even possible to do realistic timing at this point; that has to be left for the place and route tool to tackle. Oasys's RealTime Designer can just gulp down the whole design and optimize everything simultaneously, leaving the place and route tool with a little bit of cleanup and the detailed routing to complete.

Posted in eda industry, methodology | Comments Off

Fab rankings

The latest semiconductor rankings are out for Q1 and there are a few changes. Intel of course remains #1 at $6.5B for the quarter, nearly twice the size of #2 Samsung at $3.6B. In turn, Samsung is almost twice the size of the next cluster of Toshiba, TI and ST, all around the $1.6B to $2B level, slightly ahead of the next cluster of Qualcomm, Sony, Renesas, AMD and, surprisingly, TSMC, all with revenues around $1.1B to $1.3B.

TSMC was at around 40% of capacity in Q1 as all the fabless companies canceled orders while the semiconductors already manufactured were built into final products and moved through the retail pipeline. Supposedly they are up 50% in Q2, which sounds great until you realize that it means they are still only at 60% of capacity.

One interesting metric is to look at Q1 revenue as a percentage of (a quarter of) 2008 revenue. Almost everybody in the top 10 is at around 65-75% of where they were, with only Qualcomm and AMD breaking the 80% barrier on the upside, and TSMC the big loser at just 44% of their average quarterly revenue from last year. Even Hynix and NEC, the next worst performers measured this way, managed to be at 60% of their 2008 level.
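For clarity, the metric is just Q1 revenue divided by a quarter of the 2008 total; a trivial sketch, with made-up numbers rather than the actual figures from the rankings:

```python
# A trivial sketch of the metric above; the numbers are made up for
# illustration, not the actual figures from the rankings.
def q1_vs_2008(q1_revenue, full_year_2008_revenue):
    """Q1 2009 revenue as a percentage of an average 2008 quarter."""
    return 100.0 * q1_revenue / (full_year_2008_revenue / 4.0)

print(round(q1_vs_2008(1.0, 6.0), 1))   # 66.7 -- running at two thirds of the 2008 rate
```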

Further down the list, both Sharp and Mediatek are running at about 100% of their 2008 levels, which counts as excellent performance in these trying times. Interestingly too, if you simply add NEC and Renesas together, to naively reflect their proposed merger, then they take the #3 spot, just pushing ahead of Toshiba.

I'm not entirely sure why the falloff is so much worse at TSMC than anywhere else. Since they are a foundry, this presumably reflects poor performance at their customers. It may simply be a bug (or feature) of the fabless business model. If you don't have a fab then you can cancel orders, although some customers get better pricing with "pay or play" deals, where they irrevocably commit to a minimum number of wafers. Canceling the order is a way to avoid having to ship product at a loss. On the other hand, if you have a fab to fill, then first you cancel anything outsourced (to, say, TSMC) and then you sell product at whatever price you can. Much of the cost of a wafer is depreciation of the fab, and you have that whether you build product or not. Better to sell at a small loss than not sell at all and have a large loss. As I've said before, a wafer start in a fab is like a seat on a plane or a room-night in a hotel. You can't put it in inventory, and if the plane leaves with an empty seat, that is a ticket that could have been sold "profitably" at any price that covers the marginal cost.
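Here is a minimal sketch of that sunk-cost arithmetic, with made-up numbers purely for illustration:

```python
# A minimal sketch of the sunk-cost arithmetic, with made-up numbers.
# Depreciation is incurred whether or not the wafer runs, so any price
# above the marginal (variable) cost shrinks the loss.
fixed_cost = 900.0      # depreciation etc. per wafer-start, sunk either way
variable_cost = 300.0   # materials, chemicals, incremental labour
offered_price = 700.0   # below full cost (1200), above marginal cost (300)

loss_if_idle = fixed_cost                                  # 900: the fab sits empty
loss_if_sold = fixed_cost + variable_cost - offered_price  # 500: "sell at a loss"

print(loss_if_idle, loss_if_sold)   # 900.0 500.0 -- the small loss beats the large one
```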

Another thing. I’m not quite sure how fabless companies like Qualcomm and Mediatek are handled in these rankings, since their silicon can potentially show up twice, once under their own name and once under the name of whoever manufactured it. As more and more companies, but not all, go completely fabless, it’s not clear what the best way to measure the market is.

Posted in semiconductor | Comments Off

Friday puzzle: series

Friday is the day before the weekend, of course, so time for something less directly related to EDA, design, semiconductors and, well, work. In the UK, there is an expression about being POETS, which stands for "p*** off early, tomorrow's Saturday." So here is some Friday poetry for you, in the form of a puzzle.

Continue the following infinite series: 110, 20, 12, 11, 10…

Answer next Friday

Posted in puzzles | Comments Off

San Francisco: Silicon Valley’s dormitory

San Francisco is a dormitory town for Silicon Valley. Not completely, of course. But unless you regularly drive between Mountain View and San Francisco you probably aren't aware of the huge fleet of buses that now drives people from San Francisco down to companies in other cities: Google in Mountain View, Yahoo all over, Genentech in South San Francisco, eBay in San Jose. I have a friend who knows Gavin Newsom, the mayor, and keeps trying to get him to come and stand on a bridge over the freeway one morning to see just what is happening: lots of people (me included) work in Silicon Valley but live in the city. The traffic is still more jammed entering the city than leaving but it's getting close. Bauer, who used to just run limos I think, now has a huge fleet of buses with on-board WiFi that they contract out to bring employees down to the valley from San Francisco. They cram the car-pool lane between all those Priuses making the not-so-green 40-mile trip.

San Francisco seems to have a very anti-business culture. Anything efficient and profitable is bad. So if, like me, you live in San Francisco you have to drive for 15 minutes and give your tax dollars to Daly City if you want to go to Home Depot, which finally gave up trying to open a store in San Francisco after 9 years of trying. Of course a Walmart, Ikea or Target is unthinkable. And even Starbucks has problems opening new stores since they (big) compete too effectively against local coffee shops (small, thus good by definition). The reality is that some small coffee shops (like Ritual Roasters) are among the best in the US, and a Starbucks next door wouldn't do well; for others, a Starbucks in the area would be an improvement. But in any case it makes more sense to let the customers of coffee shops decide who is good rather than have the board of supervisors trying to burnish their progressive credentials.

Those two things together—much commerce is out of the city, many inhabitants work outside the city—are warnings that San Francisco is not heeding. San Francisco already has two big problems (as do many cities): housing is really expensive (at least partially due to economically illiterate policies like rent control and excessive political interference in the planning process making it difficult to build any new housing) and the public schools are crappy. So when a resident has a family, they have to be rich enough to afford a large enough house and a private school, or they move out. So every year San Francisco can close some schools since there are ever fewer children in the city; famously there are more dogs than kids.

The trend, which is not good, is for San Francisco to depend increasingly on three things: rich people who live elsewhere (often in Nevada due to California's taxes) but like to keep an apartment in San Francisco (about 20% of the people in the building where I live are like that); people who live in San Francisco and work somewhere else; and tourism. The first two of those groups are spending a lot of money and generating a lot of tax that San Francisco doesn't get to see, but the city does bear a lot of the costs associated with them. Of course, tourism brings dollars in from outside, but most of the employment it creates is not at the high value-added end of the scale: restaurants, hotels and retail largely generate low-productivity, low-pay jobs.

Busboys for San Francisco; on-chip buses in Silicon Valley; wi-fi equipped buses in between.

Posted in silicon valley | Comments Off

Test cases

I talked recently about customer support and how to handle it. One critical aspect of this is the internal process by which bugs get submitted. The reality is that if an ill-defined bug comes in, nobody wants to take the time to isolate it. The AEs want to be out selling and feel that if they just throw it over the wall to engineering then it becomes engineering's job to sort it out. Engineering feels that any bug that can't easily be reproduced is not their problem to fix. If this gets out of hand then the bug languishes, the customer suffers and, eventually, the company too. As the slogan correctly points out, "Quality is everyone's job."

The best rule for this that I’ve ever come across was created by Paul Gill when we were at Ambit. To report a bug, an application engineer must provide a self-checking test case, or else engineering won’t consider it. No exceptions. And he was then obstinate enough to enforce the “no exceptions” rule.

This provides a clear separation between the AE's job and the development engineer's job. The AE must provide a test case that illustrates the issue. Engineering must correct the code so that it is fixed. Plus, when all that activity is over, there is a test case to go into the regression suite.

Today, most tools are scripted with Tcl, Python or Perl. A self-checking test case is a script that runs on some test data and gives a pass/fail answer as to whether the bug exists. Obviously, when the bug is submitted the test case will fail (or it wouldn't be a bug). When engineering has fixed it, it will pass. The test case can then be added to the regression suite and it should stay fixed. If it fails again, then the bug has been re-introduced (or another bug with similar symptoms has been created).
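As a minimal sketch of what such a script might look like (the tool name, its report format and the file paths here are hypothetical, not any real tool's interface):

```python
# A minimal sketch of a self-checking test case. "mysyn" and its report
# format are made up for illustration -- substitute your tool's scripting
# API or command line.
import re
import subprocess
import sys

def test_bug_1234():
    """Bug 1234: the tool drops an inverter on this small design."""
    run = subprocess.run(
        ["mysyn", "-script", "testcases/bug_1234.tcl"],   # hypothetical tool invocation
        capture_output=True, text=True)
    # The correct netlist has three primary outputs; the buggy build reports two.
    match = re.search(r"primary outputs:\s*(\d+)", run.stdout)
    return match is not None and int(match.group(1)) == 3

if __name__ == "__main__":
    ok = test_bug_1234()
    print("PASS" if ok else "FAIL")
    sys.exit(0 if ok else 1)   # non-zero exit lets a regression runner spot the failure
```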

There are a few areas where this approach won't really work. Most obvious are graphics problems: the screen doesn't refresh correctly, for example. It is hard to build a self-checking test case since it is too hard to determine whether what is on the screen is correct. However, there are also things which are on the borderline between bugs and quality-of-results issues: this example got a lot worse in the last release. It is easy to build the test case, but what should the limit be? EDA tools are not algorithmically perfect, so it is not clear how much worse is acceptable if an algorithmic tweak makes most designs better. But it turns out that for an EDA tool, most bugs are in the major algorithms under control of the scripting infrastructure, and it is straightforward to build a self-checking test case.

So when a customer reports a bug, the AE needs to take some of the customer’s test data (and often they are not allowed to ship out the whole design for confidentiality reasons) and create a test case, preferably small and simple, that exhibits the problem. Engineering can then fix it. No test case, no fix.

If a customer cannot provide data to exhibit the problem (the NSA is particularly bad at this!) then the problem remains between the AE and the customer. Engineering can’t fix a problem that they can’t identify.

With good test infrastructure, all the test cases can be run regularly, and since they report whether they pass or fail it is easy to build a list of all the failing test cases. Once a bug has been fixed, it is easy to add its test case to the suite and it will automatically be run each time the regression suite is run.
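A minimal sketch of such a runner might look like this, assuming each test case is a script that exits non-zero on failure (the directory layout is made up):

```python
# A minimal sketch of a regression runner over self-checking test cases,
# assuming each test is a script that exits 0 on pass (paths are made up).
import glob
import subprocess
import sys

failures = []
for test in sorted(glob.glob("regress/test_*.py")):
    if subprocess.run([sys.executable, test]).returncode != 0:
        failures.append(test)

print(f"{len(failures)} failing test case(s)")
for t in failures:
    print("  FAIL:", t)
```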

That brings up another aspect of test infrastructure. There must be enough hardware available to run the regression suite in reasonable time. A large regression suite with no way to run it frequently is of little use. We were lucky at Ambit that we persuaded the company to invest in 40 Sun servers and 20 HP servers just for running the test suites.

A lot of this is fairly standard these days in open-source and other large software projects. But somehow it still isn't standard in EDA, which tends to provide productivity tools for designers without using state-of-the-art productivity tools itself.

On a related point, the engineering organization needs to have at least one very large machine too. Otherwise customers will inevitably run into problems with very large designs where there is no hardware internally to even attempt to reproduce the problem. This is less of an issue today, when hardware is cheap, than it used to be when a large machine was costly. It is easy to forget that ten years ago it cost a lot of money to have a server with 8 gigabytes of memory; few hard disks were even that big back then.

And with perfect timing, here’s yesterday’s XKCD on test-cases:

Posted in engineering | Comments Off

Old standards

About 12 years ago I went on a three-day seminar about the wireless industry presented by the wonderfully named Herschel Shosteck (who unfortunately died of cancer last year although the company that bears his name still runs similar workshops). It was held at an Oxford college and since there were no phones in the rooms, they didn’t have a way to give us wake-up calls. So we were all given alarm clocks. But not a modern electronic digital one. We were each given an old wind-up brass alarm clock. But there was a message behind this that Herschel had long espoused: old standards live a lot longer than you think and you can’t ignore them and hope that they will go away.

In the case of the wireless industry, he meant that despite the then-ongoing transition to GSM (and in the US to CDMA and US-TDMA), the old analog standards (AMPS in the US, a whole hodge-podge of different ones in Europe) would be around for a long time. People with old phones would expect them to continue to work, and old base stations would continue to be a cheap way of providing service in some areas. All in all it would take a lot longer than most people were predicting before handset makers could drop support for the old standards, before base stations would no longer need to keep at least a few channels reserved for them and, in particular, before business models could fold in the cost saving from dropping them.

My favorite old standard is the automobile "cigarette lighter" outlet. According to Wikipedia it is actually a cigar lighter receptacle (hence the size, much larger than a cigarette). The current design first started appearing in vehicles in 1956. Originally, they were simply intended to be a safer way for drivers to light their cigars than using matches. After all, "everyone" smoked back then. Since cars had no other power outlet, anything that needed power used that socket as a way of getting it without requiring any special wiring. Who knew that in an age where few of us smoke, and where we can't smoke on planes, we'd be plugging our computers into outlets on (some) planes that are designed to match that old socket. If you'd told some engineer at GM in the 1950s that the cigarette lighter socket would be used by people like him to power computers on planes, he'd have thought you insane. Computers were million-dollar room-sized things that only a handful of big companies used, and planes were too expensive for ordinary people. Talking of planes, why do we always get on from the left-hand side? Because it is the "port" side that ships would put against the port for loading, unobstructed by the steering-oar that was on the right-hand side before the invention of the rudder, hence steer-board or "starboard". The first commercial planes were sea-planes, so they naturally followed along. Another old standard lives on, a thousand years after steering-oars became obsolete.

We see some of the same things in EDA. OK, the 1970s weren’t a thousand years ago, but in dog years it seems like it. For physical layout, it is still the case that a lot of designs are moved around in what is basically the Calma system tape backup format, a standard that dates back to the mid 1970s. Verilog is not going away any time soon to be replaced with something more “modern.” Sometimes new standards come along but it is rare for the old ones to die completely. We can probably drop Tegas netlist support, I suppose, but I’m sure somebody somewhere has a legacy design where that is the only representation available.

In simulation, there are so many standards around that simply providing support for them all is more than a startup company in the simulation space can do easily. I was talking to Venk Shukla, CEO of Nusym, a few weeks ago and he was grousing about the amount of engineering that it took to support the various languages, the various incarnations of the Verilog API, and the various verification languages. That’s before his engineering team could go about delivering the cleverness the company was founded to provide. In this age of IP-based design you ignore the standards at your peril. Nobody (well, hardly anybody) is writing VHDL models any more, but for sure there are blocks of IP around that have no other model except VHDL, so it had better be supported properly.

So new standards come along all the time, but the old standards simply don’t die. At least not for a lot longer than you would expect. Rrrrrnnnnggggg.

Posted in culture | Comments Off

Customer support

Customer support in an EDA company goes through three phases, each of which actually provides poorer support than the previous phase (as seen by the long-term customer who has been there since the beginning) but which is at least scalable to the number of new customers. I think it is obvious that every designer at a Synopsys customer who has a problem with Design Compiler can’t simply call a developer directly, even though that would provide the best support.

There is actually a zeroth phase, which is when the company doesn’t have any customers. As a result, it doesn’t need to provide any support. It is really important for engineering management to realize that this is actually happening. Any engineering organization that hasn’t been through it before is completely unaware of what is going to hit them once the immature product gets into the hands of the first real customers who attempt to do some real work with it. They don’t realize that new development is about to grind to a complete halt for an extended period. “God built the world in six days and could rest on the seventh because he had no installed base.”

The first phase of customer support is to do it out of engineering. The bugs being discovered will often be so fundamental that it is hard for the customer to continue to test the product until they are fixed, so they must be fixed fast and new releases delivered to the customer every day or two at most. By fundamental I mean that the customer library data cannot be read, or the coding style is different from anything seen during development and brings the parser or the database to its knees. Adding other people between the customer engineer and the development engineer just slows the cycle of finding a problem and fixing it, which means that it reduces the rate at which the product matures.

Eventually the product is mature enough for sales to start to ramp up the number of customers. Mature both in the sense that sales have a chance of selling it and the company has a chance of supporting it. It is no longer possible to support customers directly out of engineering. Best case, no engineering other than customer support would get done. Worst case, there wouldn’t even be enough bandwidth in engineering to do all the support. Engineering needs to focus on its part of the problem, fixing bugs in the code, and somebody else needs to handle creating test cases, seeing if bugs are fixed, getting releases to the customer and so on. That is the job of the application engineers.

During this second phase, a customer’s primary support contact is the application engineer who they work with anyway on a regular basis. But as the company scales further, each application engineer ends up covering too many customers to do anything other than support them. Since their primary function is to help sales close new business, this is a problem. Also, AEs are not available 24 hours per day which can start to be a problem as real designs with real schedules enter crunch periods. So the third phase of customer support is to add a hotline.

The hotline staff are typically not tool users, they are more akin to 911 dispatchers. Customers hate them since they are not as knowledgeable as they are themselves. Their job is to manage the support process, ensure that the problem is recorded, ensure that it eventually gets fixed, and that the fix gets back to the customer and so on. It is not to fix anything except the most trivial of problems themselves.

However, it turns out there is one problem the hotline can do a lot to help with: licenses, license keys and the license manager. In every EDA company I've been involved with, this has represented almost half of all support calls. EDA product lines are very complex and as a result there are a lot of calls about licenses that don't require the intervention of engineering to get fixed.

At each phase of support, the quality (and knowledge) of the engineer directly interfacing to the customer goes down but the bandwidth of available support increases. Engineering can only directly support a handful of customers themselves. Each AE can only directly support a handful of customers, but more AEs can easily be added as sales increase. A hotline can scale to support a huge number of customers 24 hours per day, and it is easy to add more hotline engineers. The hotline can also be located in an area where it is cheaper to staff, since it doesn't need to be in Silicon Valley.

This isn’t specifically an EDA problem. I’m sure we’ve all had experience of calling customer support for Comcast or our wireless router, and been told to do all the things we’ve already tried. It’s frustrating, but it’s also obvious that they can’t simply put us through to the guy who wrote the code in the cable modem or our router.

Posted in management | Comments Off

50th anniversary of the IC

On Friday the IEEE unveiled a plaque commemorating the 50th anniversary of the first practical IC, which was created at Fairchild’s original building at 844 Charleston Road in Mountain View (it’s just off San Antonio Road near 101). The plaque was unveiled by Margaret Abe-Koga, the mayor of Mountain View who wasn’t even born back then.

The story of the founding of Fairchild is pretty well known. Shockley co-invented the transistor at Bell Labs in New Jersey (for which he eventually shared the Nobel prize in physics) and then moved to California to commercialize it. This was truly the founding of Silicon Valley.

Unable to persuade any of his colleagues to join him, he hired young graduates. But his abrasive management style and his decision to discontinue research into silicon-based transistors led eight key engineers, the “traitorous eight,” to leave and form Fairchild Semiconductor (Fairchild Camera and Instrument put up the money).

Two of the eight, Gordon Moore and Jay Last, spoke at the ceremony, which commemorated the work of two more of the eight, Robert Noyce and Jean Hoerni. Jean Hoerni invented the planar process that was (and is) the foundation of integrated circuit manufacture, and Robert Noyce took it and ran with it to create the first true integrated circuit in 1959, 50 years ago. Unfortunately, both have passed away, Robert Noyce in 1990 and Jean Hoerni in 1997.

Of course those are some famous names. Robert Noyce and Gordon Moore went on to co-found Intel and employee #3 was Andy Grove. If you drive down 101 past Montague Expressway, that huge Intel building to the east side of the freeway is the RNB, the Robert Noyce Building.

After the unveiling of the plaque, the commemoration moved to the Computer History Museum. If you've never been there, it is highly recommended. Right now they also have a working version of Babbage's Difference Engine, one of only two in existence. Babbage never completed the manufacture in his lifetime, but about twenty years ago the Science Museum in London (also highly recommended for a visit) decided to build one to see if it worked. They got it finished in 1991, a month before the 200th anniversary of Babbage's birth. Nathan Myhrvold took some of his Microsoft millions and commissioned a second one for his living room. But right now it is at the Computer History Museum and you can see it in action if you live locally.

So the first integrated circuit, and so the first fab in Silicon Valley, was 50 years ago this year. In one of those nice closed circles, earlier this year the last fab in Silicon Valley closed. To close the circle even more, it was an Intel fab. It was transitioned last year from manufacturing to process development and is now finally closed (or closing). It was a 50-year circle from the first fab to the last in the valley. Of course this is really a success story. Silicon Valley is a poor place for a fab: land is limited and costly, the ground shakes from time to time, there is a lot of traffic vibration and, as fabs have become insanely expensive, the California tax environment is unfavorable.

In any case, the high-value part of building semiconductors is not the manufacturing part. As it says on the back of the iPhone, "Designed by Apple in California. Assembled in China." Semiconductors are often like that too: "Designed by xx in Silicon Valley and manufactured in Taiwan." Much better than the other way round.

Posted in silicon valley | Comments Off

DAC

DAC has just announced the program. You probably already know, but just in case you don't: DAC is late this year, July 26-31, and it is in San Francisco at the Moscone Center (walking distance from where I live, yeah!). The DAC website is at dac.com, as always.

There is some interesting stuff for those planning to go, including 29 panels in addition to the usual technical papers. One panel takes the place of a keynote and brings together Lip-Bu Tan, Wally Rhines and Aart de Geus, the CEOs of the three largest EDA companies; it will be late Monday afternoon. The other keynotes are on changing EDA/semiconductor business models by Fu-Chien Chu of TSMC, bright and early on Tuesday morning, and on parallel, scalable computing by Bill Dally of NVIDIA and Stanford just before lunch on Wednesday.

There are also panels on whether to stay in EDA, green technology, whether Moore’s law is a victim of the financial meltdown and other panels of general interest. There’s even one panel where you get to vote on which topic they cover, picking between wearable sensors, scavenging power, netbooks, and staying relevant in the current job market.

The exhibit list looks as long as ever, with around 20 new companies exhibiting for the first time, most of whom I admit I’ve never heard of. The exhibits are the same days as usual, 9am to 6pm Monday through Wednesday with a half-day on Thursday from 9am to 1pm.

One big change. There is no longer free access to the exhibits, an exhibit-only badge now costs $50 (or $95 if you procrastinate so much you end up buying it on-site), but it also now gives access to the keynotes and some other stuff. Also, exhibitors (or some of them anyway) will have complimentary passes to allow them to invite people, so if you are an EDA customer you might want to whisper in the ear of your neighborhood salesperson. Of course, exhibitors get in free to man their booths and suites, so you also can volunteer to help out a friendly executive on their booth if you aren’t already condemned to booth-duty. And if you are in EDA but between jobs without financial support, you can attend the exhibits for free (but I’m not sure how the logistics of this work yet).

As always, Denali are hosting a party for the EDA and design community on Tuesday evening. It's at Ruby Skye from 8 till late. You need to register to get a ticket and you won't get in without one. They are also having various competitions including best EDA blog. You know who to vote for (once voting opens in June). Hint: it's one that, for some reason, doesn't appear on the front page of the DAC website in their little panel listing all the EDA blogs. I wonder what I said? (This has now been fixed.)

People have been predicting the demise of trade-shows for a long time, but there is no doubt that DAC is the one event of the year where you can take the pulse of the industry, get a perspective on the future and, perhaps, discover something unexpected. Not to mention meet up with a lot of people you haven’t seen since…well, DAC last year. If you are serious about EDA then you really need to attend.

Posted in eda industry | Comments Off