<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>edagraffiti &#187; embedded software</title>
	<atom:link href="http://edagraffiti.com/?cat=21&#038;feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://edagraffiti.com</link>
	<description>EDA, technology, semiconductor</description>
	<lastBuildDate>Mon, 14 Nov 2011 02:32:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6</generator>
		<item>
		<title>EmbeDAC</title>
		<link>http://edagraffiti.com/?p=266</link>
		<comments>http://edagraffiti.com/?p=266#comments</comments>
		<pubDate>Sat, 24 Apr 2010 06:26:01 +0000</pubDate>
		<dc:creator>paulmcl</dc:creator>
				<category><![CDATA[embedded software]]></category>

		<guid isPermaLink="false">http://blogs.cancom.com/elogic_920000692/2010/04/24/embedac/</guid>
		<description><![CDATA[The latest EDAC spring meeting was a bit different from usual. The panel session was all about embedded software. John Bruggeman, now at Cadence but previously at Wind River, lined up his old colleague Tomas Evensen (the CTO of Wind &#8230; <a href="http://edagraffiti.com/?p=266">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><img src="http://www.edagraffiti.com/images/embedded.jpg" title="embedded.jpg" alt="embedded.jpg" align="left" hspace="3" />The latest EDAC spring meeting was a bit different from usual. The panel session was all about embedded software. John Bruggeman, now at Cadence but previously at Wind River, lined up his old colleague Tomas Evensen (the CTO of Wind River, now part of Intel), Jim Ready (the CTO of MontaVista, now part of Cavium Networks) and Jack Greenbaum (director of embedded software engineering for Green Hills Software, still independent).</p>
<p>Jack started off the proceedings by pointing out that for most designs the software engineering costs more than the IC engineering. For him the business drivers were 16-bit designs all going to 32-bit, 32-bit designs going to 64-bit, multicore and virtualization (using a single core to run code binaries from what used to be several separate microprocessors). Tomas added the drive towards open source. And Jim emphasized the thrust of Linux into embedded.</p>
<p>Tomas talked a bit about why Intel had acquired Wind River and why they had also left it to remain independent. To ship a chip these days you need to ship it with a state-of-the-art software stack, so Intel wants to ensure that for its products. But unlike previous acquisitions, such as Freescale (then still Motorola) with Metrowerks, Intel didn&#8217;t want to kill the ecosystem and devalue the software by taking everything in-house and making it Intel-specific.</p>
<p>Everyone complained about the challenge of getting people to pay for  software. In the IC world nobody would dream of trying to design a chip without  software, or even writing all their own software. In the software world people  won&#8217;t make the investment even if the ROI is there. So system companies place  tremendous value on software but refuse to pay, and silicon provides less value  but gets all the revenue.</p>
<p>Everyone thought this was partly due to people not taking quality seriously. Apparently 40% of cars taken into dealers involve a re-flash of an ECU (electronic control unit) because the initial and ongoing quality is so poor. Perhaps the current woes at Toyota will cause a re-think on whether it is appropriate to invest in software quality.</p>
<p>Quality is expensive. DO-178B (a standard for certifying avionics) costs $1000/line to validate. Meanwhile car companies are worrying about the number of resistors on the PCB. Quality will only happen when companies decide it is worth the money. Tomas talked of a system he was involved with that was validated. Since the early 1990s not a single bug has been found in it. And Jim boasted that at Ready Systems he was involved in the 4K bytes of avionics code that were the first to be certified.</p>
<p>Tomas and Jim both emphasized that despite Linux being open-source it was still a viable business. MontaVista has shipped over $250M of &#8220;free&#8221; software. But one cloud was that Android drives out differentiation and so squeezes vendors. It even marginalizes the hardware since it is now very generic.</p>
<p>The DNA of EDA and embedded software is very different. EDA is dominated by physics but software is dominated by psychology. So before EDA ventures into the software world it needs to understand it deeply, since analogies don&#8217;t carry across.</p>
<p>This was brought home when Gary Smith asked about what they were going to do to write power-aware software. Everyone just took this as being a question about battery life rather than realizing that the big problem is going to be having a chip that we can&#8217;t light up all at once. It would help, of course, if the chip people added circuitry to make it easy for the software to monitor power; today&#8217;s chips rarely even have cache-miss counters, even though cache misses are one of the biggest contributors to power.</p>
<p>Power may turn out to be the thing that finally ties the software development  process to the chip development process, since it will become impossible to  develop software without worrying about it. It will even affect software  architecture, not just detailed low-level stuff.</p>
]]></content:encoded>
			<wfw:commentRss>http://edagraffiti.com/?feed=rss2&#038;p=266</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CoWare</title>
		<link>http://edagraffiti.com/?p=234</link>
		<comments>http://edagraffiti.com/?p=234#comments</comments>
		<pubDate>Mon, 08 Feb 2010 18:00:00 +0000</pubDate>
		<dc:creator>paulmcl</dc:creator>
				<category><![CDATA[eda industry]]></category>
		<category><![CDATA[embedded software]]></category>

		<guid isPermaLink="false">http://blogs.cancom.com/elogic_920000692/2010/02/08/coware/</guid>
		<description><![CDATA[It must be something in the water in silicon valley right now. But no sooner have I written about VaST and Virtutech being acquired than Synopsys acquires CoWare as well. Since I never worked there, I don&#8217;t have any sort &#8230; <a href="http://edagraffiti.com/?p=234">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><img align="left" alt="" src="http://www.edagraffiti.com/images/syncow.jpg">It must be something in the water in silicon valley right now. But no sooner have I written about VaST and Virtutech being acquired than Synopsys acquires CoWare as well. Since I never worked there, I don&#8217;t have any sort of inside track on what is going on. CoWare started life many years ago with a product called n2c, which stood for napkin to C, an attempt at turning high-level system ideas into models quickly. They then did a deal with Cadence whereby they took over the old Comdisco SPW product line and the engineering team that supported it, in return for equity and royalties I think. Then more recently they decided to develop their own virtual platform technology which, I believe, is the heart of their business today. For a long time a lot of their revenue was service revenue in Japan (remember, they are the most advanced in system-level thinking so tended to be the first place where there was any revenue to be had) although I presume that is no longer the case.</p>
<p>To sum up, there have been a number of different instruction set simulation technologies developed: Axys (which ARM acquired and then spat out again to Carbon when they decided they didn&#8217;t want to do all that modeling any more), Virtio (which Synopsys acquired a couple of years ago), VaST (which Synopsys acquired 2 weeks ago), Virtutech (which Intel/WindRiver acquired 1 week ago) and CoWare (which Synopsys acquired today). Phew.</p>
<p>Simon Davidmann and Brian Bailey both pointed out, correctly, that I&#8217;d forgotten about Imperas when I talked earlier about which technologies were left out there. There is also, of course, Qemu which is an open-source instruction set simulator. The base technology of Imperas is also open-source, the open virtual platform, www.OVPworld.org although that is not how they started out. So in an act of contrition for forgetting Imperas, let me tell you where they are: they have 2000 registered users, adding about 150 per month, spread around about 400 companies. They have over 40 processor models ranging from the bizarre ones used in automotive up to quadcore MIPS processors running at 200-1200 MIPS. They are still self-funded but are growing and adding staff.</p>
<p>So going forward there is Imperas/OVP, still independent; Synopsys with the Virtio, VaST and CoWare technologies, which it will presumably try to pull together into a single environment; and WindRiver/Virtutech. One distributes basically over the web, and the other two now have much bigger distribution channels than they had before, although less focused ones. Let the games begin.</p>
]]></content:encoded>
			<wfw:commentRss>http://edagraffiti.com/?feed=rss2&#038;p=234</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Virtutech</title>
		<link>http://edagraffiti.com/?p=121</link>
		<comments>http://edagraffiti.com/?p=121#comments</comments>
		<pubDate>Fri, 05 Feb 2010 15:00:00 +0000</pubDate>
		<dc:creator>paulmcl</dc:creator>
				<category><![CDATA[eda industry]]></category>
		<category><![CDATA[embedded software]]></category>

		<guid isPermaLink="false">http://blogs.cancom.com/elogic_920000692/2010/02/05/virtutech/</guid>
		<description><![CDATA[I&#8217;d heard rumors that Intel was acquiring Virtutech and I presumed that the purpose was to put it together with Wind River that they acquired last year. I mentioned this in my post about VaST earlier this week but I &#8230; <a href="http://edagraffiti.com/?p=121">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><img vspace="3" hspace="8" align="left" src="http://www.edagraffiti.com/images/vte.jpg" alt="">I&#8217;d heard rumors that Intel was acquiring Virtutech and I presumed that the purpose was to put it together with Wind River that they acquired last year. I mentioned this in my post about VaST earlier this week but I wasn&#8217;t expecting to come back quite so soon to write about Virtutech. Anyway, it is now official, <a href="http://www.windriver.com/news/press/pr.html?ID=7841">Intel is indeed acquiring Virtutech</a>. The price hasn&#8217;t been disclosed (and I haven&#8217;t received any paperwork yet so I&#8217;m not even pretending to be clueless this time). Unlike with VaST, I do get my money back, and more.</p>
<p>I think I was probably the only person who worked for both VaST and Virtutech (and I even did a consulting contract for the two companies jointly a couple of years ago) so it is interesting to look at why one company was so much more successful than the other. I haven&#8217;t seen the finances of either company recently, so I don&#8217;t know the revenue levels, profitability etc.</p>
<p>The companies targeted different markets. VaST did most of its business in Japan, almost all of it when I first joined them. They had customers in automotive, wireless and consumer. Consumer is mostly ARM. Automotive is mostly processors you&#8217;ve never heard of (NEC V850 anyone?). Virtutech went for the big iron, modeling complete base-stations for Ericsson, whole routers for Cisco and Huawei, servers for Sun and IBM, and aerospace systems. Almost coincidentally, most Virtutech customers were PowerPC based which leveraged the models and led to a close relationship with Freescale. These types of projects are much bigger and so have much bigger budgets, meaning larger deals.</p>
<p>Another issue, as I mentioned in my post about VaST earlier in the week, is that VaST&#8217;s cycle accurate models were more complex to build. For about the same revenue, VaST had four times as many people doing modeling as Virtutech which, plainly, means that they need to sell four times as many copies of any given model to get the same return. The fact that the end markets that VaST was targeting had different processor requirements aggravated this.</p>
<p>Virtutech was run much leaner than VaST. A couple of years ago, at a time when both companies&#8217; revenues were the same, VaST had a full-time CFO and four people in finance. Virtutech had one finance person. A company doesn&#8217;t live or die by the size of its finance department, but G&amp;A being too large is always a symptom of lax expense control.</p>
<p>One thing that both VaST and Virtutech did was have their engineering groups offshore. But it wasn&#8217;t an expense play, with groups in India or China; it was more an accident of history. VaST started in Sydney, Australia and the engineering remained based there. Virtutech started in Stockholm, Sweden and their engineering remained there. And I&#8217;ll give VaST the edge here; Sydney is a nicer place to visit than Stockholm, especially in winter!</p>
<p>One story about Virtutech, from before I worked there, was that Microsoft used Simics to port Windows to the 64-bit AMD processor before AMD could deliver any silicon. This was in the days when Intel was all Itanium going forward and Microsoft was supposedly on board. In fact at first AMD tried to conceal the fact that Microsoft was the end user, it was so sensitive. And on the first day AMD delivered silicon, Windows booted. It would have been the ultimate customer success story but we weren&#8217;t allowed to talk about it.</p>
<p>When I joined Virtutech, Peter Magnusson was the CEO. He&#8217;d started the company in Sweden, where the engineering remained, before moving himself and his family to the US. It was a joke inside Virtutech that Simics was Peter&#8217;s Ph.D. thesis gone out of control. He has a reminiscence about Virtutech on <a href="http://petersmagnusson.com/2010/02/05/wind-river-intel-acquires-virtutech/">his own blog</a>.</p>
<p>So that just leaves CoWare out there as the only remaining independent supplier of this type of technology, and Carbon with their RTL acceleration technology.</p>
]]></content:encoded>
			<wfw:commentRss>http://edagraffiti.com/?feed=rss2&#038;p=121</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>VaST</title>
		<link>http://edagraffiti.com/?p=64</link>
		<comments>http://edagraffiti.com/?p=64#comments</comments>
		<pubDate>Wed, 03 Feb 2010 18:00:00 +0000</pubDate>
		<dc:creator>paulmcl</dc:creator>
				<category><![CDATA[eda industry]]></category>
		<category><![CDATA[embedded software]]></category>

		<guid isPermaLink="false">http://blogs.cancom.com/elogic_920000692/2010/02/03/vast/</guid>
		<description><![CDATA[Finally it is public knowledge that Synopsys has acquired VaST Systems Technology. I was VP marketing there for a bit over a year back when Graham was still CEO. Since I exercised my options when I left, I&#8217;ve been inundated &#8230; <a href="http://edagraffiti.com/?p=64">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><img align="left" src="http://www.edagraffiti.com/images/svast.jpg" alt="">Finally it is public knowledge that Synopsys has acquired VaST Systems Technology. I was VP marketing there for a bit over a year back when Graham was still CEO. Since I exercised my options when I left, I&#8217;ve been inundated with paperwork on the merger, although the acquiring company name was redacted everywhere it appeared (though everyone knew who it was). I don&#8217;t suppose I&#8217;m giving away too many secrets to reveal that I&#8217;m not going to get my money back. The common stock is lucky to get anything at all, but they did need to bribe us a little to sign off on the merger.</p>
<p>By the time Alain Labatt came on board as CEO, I was at Virtutech which sorta competes with VaST in having similar technology and sorta doesn&#8217;t since they go after such different markets. We thought that this was great for us since Alain&#8217;s reputation was that he was good at two things: raising VC money, and then ramping up the expense run-rate to spend it all. He&#8217;d done that at Frequency/Sequence and then again at Tera Systems. And true to form he seems to have been successful again at both raising money and then spending it.</p>
<p>When I was at VaST we did most of our business in Japan. I opened up a couple of accounts in Europe and we had just one, Delphi, in the US. The dependence on Japan has apparently reduced, but still VaST is over-represented there. To be fair, the Japanese are ahead of Europe which is ahead of the US in terms of system level thinking, so Willie Sutton style, that&#8217;s where the money was. VaST was also over-represented in automotive. Japan and automotive have unfortunately been especially hard hit in the current downturn, so I&#8217;m guessing that VaST&#8217;s business declined dramatically. I assume the VCs didn&#8217;t want to put in more money since they couldn&#8217;t see a route to a fast growing profitable company and so they got Alain to shop the company around. Synopsys is its new home.</p>
<p>Synopsys already purchased Virtio a couple of years ago, which had similar technology to VaST&#8217;s. VaST&#8217;s is cycle-accurate, which makes for a number of issues. Since VaST had no non-cycle-accurate models, it couldn&#8217;t really get premium pricing for its cycle-accurate ones, having no non-premium models to offer people who didn&#8217;t value cycle-accuracy (when ARM was in the modeling business it sold cycle-accurate models for 6-8 times the price of non-cycle-accurate ones). Also, the verification issues of cycle-accurate models are much harder since not only do they have to be functionally correct, the cycle accounting has to be correct, and there are many corner cases in a modern processor. So the combination of expensive models and an inability to get a premium price for them made for an unattractive combination. Potentially with VaST and Virtio now in the same stable, that problem goes away: non-cycle-accurate models for people who don&#8217;t need more, and premium pricing for people who need the costly cycle-accurate models.</p>
<p>There are plenty of rumors that Virtutech is also in the throes of being acquired, but I&#8217;ve not seen any paperwork for that one yet. But if that is true it leaves only CoWare out there of the companies with virtual platform technology. Axys was acquired by ARM. Virtio and VaST by Synopsys. Virtutech and CoWare, for now, are still out there.</p>
<p>It is interesting to look at why these companies were not as successful as I thought they should have been. In the end, I think, you could get so much done with cross-compilation to your workstation that the market for people who valued the model accuracy was too small. Look at iPhone programming. Despite a litany of complaints about the simulator, which works by compiling your source code to run directly on the Intel processor in your Mac, you don&#8217;t really need a platform able to run the ARM binary to get development done. An inaccurate simulator and the actual phone are enough. There isn&#8217;t a lot you can do with a more accurate platform squeezed into the gap between these two other solutions.</p>
]]></content:encoded>
			<wfw:commentRss>http://edagraffiti.com/?feed=rss2&#038;p=64</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Looking through Critical Blue’s Prism</title>
		<link>http://edagraffiti.com/?p=42</link>
		<comments>http://edagraffiti.com/?p=42#comments</comments>
		<pubDate>Wed, 28 Oct 2009 00:00:00 +0000</pubDate>
		<dc:creator>paulmcl</dc:creator>
				<category><![CDATA[embedded software]]></category>

		<guid isPermaLink="false">http://blogs.cancom.com/elogic_920000692/2009/10/28/looking-through-critical-blues-prism/</guid>
		<description><![CDATA[I caught up with Dave Stewart and Skip Hovsmith of CriticalBlue (from Edinburgh, yay, one of my alma maters). They originally developed technology to take software and pull it out of the code and implement it in gates. They had &#8230; <a href="http://edagraffiti.com/?p=42">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><img vspace="3" hspace="5" align="left" src="http://www.edagraffiti.com/images/critblue.jpg" alt="">I caught up with Dave Stewart and Skip Hovsmith of CriticalBlue (from Edinburgh, yay, one of my alma maters). They originally developed technology to take software and pull it out of the code and implement it in gates. They had some limited success with this. But now they have refocused their technology on the problem of taking legacy code and helping make it multicore ready with their Prism tool.</p>
<p> They do this by running the code and storing a trace of what goes on for later analysis. Previously they have done this only through simulation but now they can also use hardware boards to run the code. They don&rsquo;t need a multicore CPU, just one with the same instruction set.</p>
<p> Having developed the trace they can do &ldquo;what if there were 4 cores, or 32&rdquo; type analysis without needing to run it again. On typical code that wasn&rsquo;t written with concurrency in mind the typical answer is &ldquo;not much would happen&rdquo; because there are too many dependencies. The example in the demo is doing a JPEG compression. Most of the time is spent in the DCT (discrete cosine transform) algorithm but the code doesn&rsquo;t parallelize due to data dependencies. It turns out that the code was written in a way that makes sense in a single-processor world: allocate a single workspace, run the loop 32 times using the workspace, then dispose of the workspace. Obviously if you try to parallelize this, then all iterations of the loop except one must block and wait for the workspace. If you move the workspace allocation into the loop (so that you allocate 32 of them) then all the iterations of the loop can run in parallel.</p>
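<p>To make the refactoring concrete, here is a minimal Python sketch of the same idea; the <code>dct_block</code> stand-in is hypothetical, not CriticalBlue&rsquo;s actual code. A workspace hoisted out of the loop serializes the iterations, while per-iteration workspaces let them run concurrently:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def dct_block(block, workspace):
    # Stand-in for a real DCT: fill the workspace, then reduce it.
    for i, v in enumerate(block):
        workspace[i] = v * 2
    return sum(workspace)

# Single-processor style: one workspace reused across all iterations.
# Correct sequentially, but the shared mutable buffer is a dependency
# that forces the loop to run one iteration at a time.
def compress_serial(blocks):
    workspace = [0] * len(blocks[0])
    return [dct_block(b, workspace) for b in blocks]

# Parallel-ready style: each iteration allocates its own workspace,
# removing the cross-iteration dependency.
def compress_parallel(blocks):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda b: dct_block(b, [0] * len(b)), blocks))
```

<p>Both versions compute the same result; only the second exposes the parallelism to the scheduler.</p>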
<p> They don&rsquo;t actually change the code. It turns out users wisely don&rsquo;t want something mucking around with their code without their understanding what is going on. But they give users the tools to both move code onto multicore processors and simply get it ready. When doing maintenance on code, by using Prism a programmer can also remove unnecessary dependencies and thus get the code so that it will be able to take advantage of multicore processors (or to take advantage of larger numbers of cores) when they become available.</p>
<p> I&rsquo;ve said before that microprocessor vendors, especially Intel, completely underestimated the difficulty of programming multicore processors when power constraints forced them to deliver computing power in the form of more cores rather than faster clock frequencies. Everyone realizes now that it is one of the major challenges in software going forward.</p>
<p> At DAC the nVidia keynote claimed that Amdahl&rsquo;s law wasn&rsquo;t really a limitation any more. I&rsquo;m not a believer in that position. The part of any program that cannot be parallelized sets a firm bound on how much speedup can be obtained. Even if only 5% of the code cannot be parallelized, which seems optimistic, that sets a limit of 20X speedup no matter how many cores are available. Critical Blue have an interesting tool for teasing out whatever parallelization is possible, often with relatively simple changes to the code as in the example I described above.</p>
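<p>Amdahl&rsquo;s bound is simple enough to check with a few lines of Python (purely illustrative):</p>

```python
def amdahl_speedup(serial_fraction, cores):
    """Best-case speedup when serial_fraction of the runtime cannot be
    parallelized and the remainder divides perfectly across the cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With 5% serial code the curve flattens towards a 20X ceiling,
# no matter how many cores are thrown at it.
for cores in (4, 32, 1024, 10**6):
    print(cores, round(amdahl_speedup(0.05, cores), 2))
```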
]]></content:encoded>
			<wfw:commentRss>http://edagraffiti.com/?feed=rss2&#038;p=42</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Software signoff again</title>
		<link>http://edagraffiti.com/?p=150</link>
		<comments>http://edagraffiti.com/?p=150#comments</comments>
		<pubDate>Wed, 23 Sep 2009 00:00:00 +0000</pubDate>
		<dc:creator>paulmcl</dc:creator>
				<category><![CDATA[embedded software]]></category>
		<category><![CDATA[methodology]]></category>

		<guid isPermaLink="false">http://blogs.cancom.com/elogic_920000692/2009/09/23/software-signoff-again/</guid>
		<description><![CDATA[What do you think the dominant design paradigm for electronic systems is going to be going forward? As I&#8217;ve said before, I believe that it is going to be taking software, probably written in C&#160; and C++&#160;, and synthesizing parts &#8230; <a href="http://edagraffiti.com/?p=150">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><img align="left" src="http://www.edagraffiti.com/images/softwaresignoff.jpg" alt="">What do you think the dominant design paradigm for electronic systems is going to be going forward?</p>
<p>As I&rsquo;ve said before, I believe that it is going to be taking software, probably written in C and C++, and synthesizing parts of it into FPGAs while compiling the rest into binary to run on processors in the FPGA. This is what I&rsquo;ve been calling software signoff for a long time. It&rsquo;s more than just the software necessary to run on the FPGA or SoC; it is signing off hardware that co-optimizes the software. The idea is that conceptually we need to get the software that specifies the system right, and then hardware design is just creating a silicon fabric (SoC or FPGA) which is able to run the software at high enough performance and at low enough power (because otherwise why bother to do anything other than simply execute it). Power, performance and price: the 3Ps again.</p>
<p>There are two key pieces of technology here. The first is high-level synthesis, which should be thought of as a type of compilation of behavior into hardware. In the end the system product delivers a behavior or application. It is not simply some sort of productivity tool for RTL designers moving up to the next level. RTL designers will be bypassed, not made more productive.</p>
<p>The other key technology is FPGA technology itself. Today FPGAs offer almost unlimited capacity and unlimited pins. FPGAs will become the default implementation medium. The classic argument for not using FPGAs used to be that you could reduce the cost enough to amortize the cost of designing a chip. But very few designs will run in high enough volume to amortize the cost of doing an SoC or ASIC in today&rsquo;s most leading edge processes, and the cost and risk of dealing with the variability (in terms of simulating hundreds of &ldquo;corners&rdquo; and the difficulty of getting design closure) is rising fast. FPGA takes a lot of the silicon risk out of the implementation.</p>
<p>Did you know that FPGAs represent more than half the volume of leading edge process nodes at the big foundries like TSMC and Samsung? FPGAs are the first logic in a new foundry process and drive the semiconductor learning curve. This is due to FPGAs&rsquo; structural regularity, much like memories but in a standard CMOS logic process.</p>
<p>If you need to do a 45nm design then far and away the easiest approach is to go and talk to Xilinx or Altera. To design your own chip is a $50M investment minimum so you&rsquo;d better be wanting tens of millions of them when you are done. Only the highest volume consumer markets, such as cell-phones, or the most cutting edge performance needs, such as graphics processors, can justify it.</p>
<p>The decline in the FPGA market in the current downturn conceals the fact that the number of new designs in the largest and most complex devices is growing at over 30% CAGR. It may only be 12% of the market (which, by the way, is something over 15,000 designs per year) but it generates 40% of the FPGA revenue. These designs, and the methodology for creating them, will go mainstream until they represent the bulk of the market. Not just the FPGA market, the electronic system market. Designing your own chip will be an esoteric niche methodology akin to analog design today. However, these new high-complexity FPGAs require an ASIC-like design methodology, not just a bunch of low-end tools from the FPGA vendor.</p>
<p>The challenge for EDA in this new world is to transition their technology base to take account of this new reality and go where system-scale designs are implemented in FPGAs. That is largely not in the big semiconductor companies that currently represent the 20% of customers that brings 80% of EDA revenue. It is much more dispersed, similar to the last time design was democratized, when the invention of ASIC in the early 1980s pushed design out into the system companies.</p>
<p>A lot of RTL-level simulation will be required. And one of the high-level synthesis companies will be a big winner. In the startup world there are a few companies attempting to offer HLS: Synfora, Forte and AutoESL. Synfora and Forte have been at it for a while (although Forte may be disqualifying themselves in this vision of the future by only supporting SystemC). AutoESL has started to make some progress as well, with one group at Microsoft using just this methodology. Mentor is the current leader with its Catapult synthesis; Cadence has created their own CtoSilicon technology. But Synopsys, which has synthesis running through its veins, has no real high-level synthesis product (and, unless it is doing it with people who are unknown in the field, doesn&rsquo;t have one in development). Synopsys does have FPGA DNA through the acquisition of Synplicity. My opinion is that once it becomes clear which HLS company is going to win, Synopsys will likely acquire them, and for a serious price, to complete its FPGA offering.</p>
]]></content:encoded>
			<wfw:commentRss>http://edagraffiti.com/?feed=rss2&#038;p=150</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>DAC: denial computing</title>
		<link>http://edagraffiti.com/?p=32</link>
		<comments>http://edagraffiti.com/?p=32#comments</comments>
		<pubDate>Thu, 30 Jul 2009 00:00:00 +0000</pubDate>
		<dc:creator>paulmcl</dc:creator>
				<category><![CDATA[embedded software]]></category>
		<category><![CDATA[semiconductor]]></category>

		<guid isPermaLink="false">http://blogs.cancom.com/elogic_920000692/2009/07/30/dac-denial-computing/</guid>
		<description><![CDATA[I went to the keynote today by nVidia&#8217;s (and Stanford&#8217;s) William Dally. The topic was the end of what he called denial architecture and the rise of throughput computing. Denial architecture was so called since it denied two things: that &#8230; <a href="http://edagraffiti.com/?p=32">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I went to the keynote today by nVidia&rsquo;s (and Stanford&rsquo;s) William Dally. The topic was the end of what he called denial architecture and the rise of throughput computing. Denial architecture was so called since it denied two things: that the world was sequential and that memory was flat. Throughput computing turned out to mean, surprise, surprise, the type of engines produced by nVidia.</p>
<p>As everyone knows the performance of a single processor is increasing only slowly due to power considerations. Instead we have to take our increased computing power in the form of additional processors. Architectures like this, such as nVidia&rsquo;s chips, should continue to increase at about 70% per year for the foreseeable future. That is what I like to call &ldquo;core&rsquo;s law.&rdquo; The number of cores on a chip is increasing exponentially. It&rsquo;s just not all that obvious yet since we are still on the flat parts of the exponential curve.</p>
<p>Dally had some interesting analysis of the energy required to do a computation (such as a floating point multiply) versus the energy required to move the data a short distance, across the chip or off-chip. The bottom line is that computation is very cheap in both area and energy provided the data required is local, already close to the computational unit. When a lot of data is used in any sort of pipelined computation, where the output from one stage is immediately consumed by the next, then cached memory is a particularly bad architecture, something I&rsquo;d never realized before. Writing the data out causes the cache-line to be fetched, then the data is read once. Finally, the value, which will never be used again, is written back to the main memory.</p>
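<p>A back-of-the-envelope model of that cache behavior (the line size and write-allocate policy here are my assumptions, not numbers from the talk): data that is produced, read once and never touched again makes two trips to DRAM through a cache, and none through a scratchpad or direct forwarding path.</p>

```python
CACHE_LINE = 64  # bytes; a typical line size, assumed for illustration

def dram_traffic(bytes_produced, cached=True):
    """DRAM bytes moved when one pipeline stage writes data that the
    next stage reads exactly once and nobody ever reuses."""
    lines = -(-bytes_produced // CACHE_LINE)  # ceiling division
    if cached:
        # Write-allocate fetches each line on the write miss, and the
        # dirty line, never reused, is eventually written back.
        return lines * CACHE_LINE * 2
    # Scratchpad / producer-consumer forwarding: no DRAM round trip.
    return 0
```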
<p>To take advantage of all this compute power, the programmer has to worry about managing the concurrency and worry about which memories are used to store which data. Programmers like to deal in abstractions which is why sequential programming and flat memory work so well. There are only 3 numbers in computer science, 0, 1 and infinity. Numbers like 50 processors each with 2K of memory are not something that the programmer wants to have to worry about.</p>
<p>But it seems there is no choice. The CUDA programming architecture gives a framework for writing these kinds of programs and certainly some of the results on computationally expensive algorithms are impressive. Done right, it is a one time cost to get back onto the performance curve as process generations unfold into the future. But it seems more like assembly language programming in some ways, since so much of the details of the hardware have to be taken into account. Chips like nVidia&rsquo;s (and IBM&rsquo;s cell architecture used in the Playstation) are notoriously hard to deal with because of this mismatch between computational resource and the programmer&rsquo;s mental model of what has to be done.</p>
<p>This stuff is now being taught in universities so it will be interesting to see if a new generation of programmers who think this way finds it any easier. It still seems really hard to take a lot of small computers and put them together so that they behave as one really huge one. But the payoff when it can be done is enormous. However, getting the software right continues to be the biggest problem in software.</p>
]]></content:encoded>
			<wfw:commentRss>http://edagraffiti.com/?feed=rss2&#038;p=32</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
