Running a salesforce

If you get senior enough in any company then you’ll eventually have salespeople reporting to you. Of course, if you are a salesperson yourself this won’t cause you too much of a problem; instead, you’ll have problems when an engineering organization reports to you and appears to be populated with people from another planet.

Managing a salesforce when you’ve not been a salesperson (or “carried a bag” as it is usually described) is hard when you first do it. This is because salespeople typically have really good interpersonal skills and are really good negotiators. You want them to be like that so that they can use those skills with customers. But when it comes to managing them, they’ll use those skills on you.

When I first had to manage a salesforce (and, to make things more complicated, this was a European salesforce with French, German, English and Italians) I was given a good piece of advice by my then-boss. “To do a good job of running sales you have to pretend to be more stupid than you are.”

Sales is a very measurable part of the business because an order either comes in or doesn’t come in. Most other parts of a business are much less measurable and so harder to hold accountable. But if you start agreeing with the salesperson that an order really slipped because engineering missed a deadline, then you start to make them less accountable. They are accountable for their number, and at some level which business they choose to pursue, and how it interacts with other parts of the company, is also part of their job. So you just have to be stupid and hold them to their number. If an order doesn’t come in for some reason, they still own their number, and the right question is not to do an in-depth analysis with them about why the order didn’t come (although you might want to do that offline), but to ask them what business they will bring in to compensate.

Creating a sales forecast is another tricky skill, again because an order either comes or doesn’t come. One way of doing it is to take all the orders in the pipe, along with a percentage chance that each will close, multiply each order by its percentage and add them all up. I’m not a big believer in this at all, since the chance of a 10% order closing in the current period is probably zero and it’s easy to fool yourself. Yes, the occasional bluebird order comes out of nowhere, sometimes so much out of nowhere that it wasn’t even on the list. I’ve never run a huge salesforce with hundreds of salespeople; the law of averages might start to work a bit better then, but typically a forecast is actually built up from the judgement of the various sales managers up the hierarchy.
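To make the arithmetic concrete, here is a minimal sketch of that probability-weighted method; the deals and percentages are invented for illustration, not taken from any real pipeline.

```python
# Probability-weighted pipeline forecast: multiply each deal by its
# estimated chance of closing this period, then sum. All numbers invented.
pipeline = [
    ("Deal A", 300_000, 0.90),
    ("Deal B", 200_000, 0.50),
    ("Deal C", 150_000, 0.10),
]

weighted_forecast = sum(amount * prob for _, amount, prob in pipeline)
print(f"Weighted forecast: ${weighted_forecast:,.0f}")  # $385,000
```

The objection above, in these terms, is that the $15K contributed by Deal C is fiction: that deal will close for $150K or, far more likely this period, for nothing.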

Another rule I’ve learned the hard way is that an order that slips from one quarter to the next is almost never incremental. You’d think that if the forecast for this quarter is $500K, and the forecast for next quarter is $500K, then if a $100K order slips you have a bad $400K quarter now but a good $600K quarter coming up. No, it’ll be $500K. Somehow the effort to finally close the slipped order comes out of the effort available to close other orders, and you are wise not to count on a sudden blip in sales productivity.

Salespeople are a pain to hire because you have to negotiate with them and they are at least as good negotiators as you are, if not better. It’s even worse in Europe where, if you don’t simply lay down the law, you can spend days negotiating about options for company cars ("I insist on the 8-CD changer"). At least in the US most of the negotiation is over salary and stock, which are reasonable things to spend some time on.

Another thing I’ve discovered is that salespeople really only respect sales managers who have themselves been salespeople in the field. Not marketing people who have become sales managers, not business development people who’ve become sales managers. It’s probably partly camaraderie, but sales seems to be something that you have to have done to really understand. You want your sales manager to be respected by the salespeople because you want them to bring him into difficult sales situations to help close them, and they won’t if they don’t trust and respect him.

Mergers and acquisitions

There were three interesting acquisitions in the last week or so: Apache acquired Sequence, Synfora acquired Esterel Studio and Global Foundries acquired Chartered.

The Apache/Sequence acquisition is interesting for a couple of reasons. One is that both companies are private and I think we will see much more of this. A lot of the smaller EDA companies simply will not make it and the big guys don’t have a lot of appetite for acquisition these days. And let’s face it, this is not a success story for Sequence.

Sequence itself has a long history, extending back to its two ancestor companies, Sente and Frequency Technology. A third company, Sapphire, got folded in a little later. Sente goes all the way back to the early 1990s and Frequency to the mid-90s. The company was completely recapitalized at least once, but that didn’t seem to be enough to get it to lift off. Clearly the investors didn’t have an appetite to do it again and pulled the plug. Sequence managed to carve out a good niche with its PowerTheater product, which has become one of the standard tools for power measurement. But Sequence never managed to truly make the transition to focusing on that one successful product, I think because its older products were in use at key customers and it couldn’t afford the loss of revenue from stopping development on them. Maybe inside Apache that will be easier to rationalize.

Synfora acquiring Esterel Studio is, of course, another private company making an acquisition, although this time of technology rather than a company. Esterel was actually developed in Sophia Antipolis at the Ecole des Mines (yes, the school of mining, although it really is much more general engineering these days), to which I’d relocate a couple of years later. The Esterel is the large red rocky peninsula between Cannes and Saint-Tropez. But I digress. Since Synfora apparently has several customers already using both products together, and since the products are largely complementary, I should think integration will be straightforward.

The other acquisition was Global Foundries and Chartered Semiconductor. Technically it is the government of Abu Dhabi that owns them both, but clearly the intention is to attempt to create a more integrated foundry business. I never really understood why Chartered wasn’t more successful than it was. One theory I’ve heard is that they were very slow to make critical decisions, since every decision had to go back to Singapore, and so they failed to get close to their customers (who, pretty much by definition, were not in Singapore since it’s tiny). Both companies are members of IBM’s foundry club although their focus is different. Global Foundries has a lot riding on the success of AMD (whence it was spun out) and I’m afraid I’m skeptical about that. Chartered is in the mainline bulk-CMOS business and struggles against TSMC since wafer costs are largely a matter of scale. TSMC has incrementally been building a gigafab with 100,000 wafer starts per month, which conventional wisdom says gives it at least a 10% cost advantage over merely large fabs. That’s a big difference.

I think we are going to see more consolidation in both EDA and semiconductor. In EDA, I think the strong private companies will swallow the weak. But there aren’t that many strong swallowers so we’ll also see companies just fade away. In semiconductor I think that there may be some consolidation at the level of companies merging, but more likely are whole divisions being sold to create stronger focused companies, as happened with NXP selling their wireless business to ST, and Freescale selling their wireless business to…well, nobody wanted it.

Semiconductor cost models

One of the most important and under-rated tasks in a semiconductor company is creating the cost model. This is needed in order to be able to price products, and is especially acute in an ASIC or foundry business where there is no sense of a market price because the customer and not the manufacturer owns the intellectual property and thus the profit due to differentiation.

For a given design in a given volume the cost model will tell you how much it will cost to manufacture. Since a design can (usually) only be manufactured a whole wafer at a time, this is usually split into two parts: how many good die you can expect to get on a wafer, and what the cost per wafer is. The first part is fairly easy to calculate based on defect densities and die size and is not controversial.
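As an illustration only (the post doesn’t specify which model), the good-die count is commonly estimated by combining a gross die-per-wafer approximation with a defect-density yield model such as the simple Poisson one; the wafer size, die area and defect density below are made-up numbers.

```python
import math

def gross_die_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Common approximation for how many whole die fit on a round wafer."""
    d = wafer_diameter_mm
    return (math.pi * (d / 2) ** 2) / die_area_mm2 - (math.pi * d) / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Probability that a die has no killer defects under a Poisson defect model."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Illustrative numbers: 200 mm wafer, 1 cm^2 die, 0.5 defects/cm^2.
gross = gross_die_per_wafer(200, 100.0)
good = gross * poisson_yield(1.0, 0.5)
print(f"~{gross:.0f} gross die, ~{good:.0f} good die per wafer")
```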

In fabs that run only very long runs of standard products there may be a standard wafer price. As long as the setup costs of each design are dwarfed by other costs, because so many lots are run in a row, this is a reasonable reflection of reality. Every wafer is simply assumed to cost the standard wafer price.

In fabs that run ASIC or foundry work, many runs are relatively short. Not every product is running in enormous volume. For a start, prototypes run in tiny volumes and a single wafer is way more than is needed, although it used to be (and may still be) that a minimum of three wafers is run to provide some backup against misprocessing of a wafer, making it less likely that the prototype run has to be restarted from scratch.

Back when I was at VLSI we initially had a fairly simple cost model and it made it look like we were making money on all sorts of designs. Everyone knew, however, that although the cost model didn’t say it explicitly, the company made lots of money if we ran high volumes of wafers of designs about 350 mils on a side, which seemed to be some sort of sweet spot. Then we got a full-time expert on cost models and upgraded the cost model to be much more accurate, in particular to do a better job of accounting for the setup cost of all the equipment when switching from one design to the next, which happened a lot. VLSI brought a design into production roughly daily on average and would be running lots of designs, and some prototypes, on any given day. The valuable fab equipment spent a lot of the day depreciating while the steppers were switched from the reticles for one design to the next. Other equipment would have to be switched to match the appropriate process because VLSI wasn’t large enough to have a fab for each process generation, so all processes were run in the same fab (for a time there were two, so this wasn’t completely true). Intel and TSMC and other high-volume manufacturers would typically build a fab for each process generation and rarely run any other process in that fab.

The new cost model shocked everyone. At last it showed explicitly that the sweet spot of the fab was high-volume runs of designs about 350 mils on a side: large enough that the design was complex and difficult (which we were good at) but small enough not to get into the part of the yield curve where too many die were bad. But the most shocking thing was that it showed that all the low-volume runs, I think about 80% of VLSI’s business at the time, lost money.

This changed the ASIC business completely since everyone realized that, in reality, there were only about 50 sockets a year in the world that were high enough volume to be worth competing for and the rest were a gamble, a gamble that they might be chips from an unknown startup that became the next Apple or the next Nintendo. VLSI could improve its profitability by losing most of its customers.

Another wrinkle on any cost model is that in any given month the cost of the fab turns out to be different from what it should be. If you add up the cost of all the wafers for the month according to the cost model, they don’t total to the actual cost of running the fab if you look at the big picture: depreciation, maintenance, power, water, chemicals and so on. The difference is called the fab variance. There seemed to be two ways of handling this. One, which Intel did at least back then, was to scale everyone’s wafer price for the month so that the total matched the actual cost. So anyone running a business would have wafer prices that varied from one month to the next depending on just how well the fab was running. The other is simply to take the variance and treat it as company overhead, the same way as other company overhead. In the software group of VLSI we used to be annoyed to have our expenses miss budget due to our share of the fab variance, since not only did we have no control over it (like everyone else), it didn’t have anything to do with our business at all.
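Here is a minimal sketch, with invented numbers, of the two approaches just described: scale every wafer’s modeled cost so the month’s total matches what the fab actually cost, or charge wafers at modeled cost and book the difference as company overhead.

```python
# Invented example: the cost model prices the month's wafers at $9.0M,
# but the fab actually cost $10.0M to run, so the fab variance is $1.0M.
modeled_costs = {"product_A": 5_000_000, "product_B": 3_000_000, "prototypes": 1_000_000}
actual_fab_cost = 10_000_000

modeled_total = sum(modeled_costs.values())
variance = actual_fab_cost - modeled_total  # the fab variance

# Approach 1 (per the post, what Intel did): rescale everyone's wafer cost this month.
scale = actual_fab_cost / modeled_total
scaled_costs = {product: cost * scale for product, cost in modeled_costs.items()}

# Approach 2: leave wafer costs alone and treat the variance as company overhead.
company_overhead = variance

print(scaled_costs)                        # every product's cost is ~11% higher this month
print(f"Overhead: ${company_overhead:,}")
```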

Friday puzzle: states

Today’s puzzle is a word puzzle: find a pair of US states whose letters can be mixed together and re-arranged to make a different pair of US states.
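For the programmatically minded, here is a brute-force sketch of the search: group every pair of states by the sorted letters of their combined names and look for two different pairs that share a key (running it, of course, gives away the answer).

```python
from itertools import combinations
from collections import defaultdict

STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

def letters(*names):
    """Canonical key: the sorted letters of the combined names, ignoring spaces and case."""
    return "".join(sorted("".join(names).lower().replace(" ", "")))

groups = defaultdict(list)
for pair in combinations(STATES, 2):
    groups[letters(*pair)].append(pair)

for matching_pairs in (pairs for pairs in groups.values() if len(pairs) > 1):
    print(matching_pairs)
```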

Last week’s puzzle was the cucumbers drying in the sun. There were 200 pounds of cucumbers originally and they were 99% water. That means there were 198 pounds of water and 2 pounds of real cucumber stuff.

After sitting in the sun all day they were down to 98% water. The 2 pounds of real cucumber stuff was unaffected by this, but it now made up 2% of the weight, so the total weight was 100 pounds (half the weight he started with) and there were 98 pounds of water. It’s counter-intuitive that going from 99% water to 98% water is actually a loss of over half the water.
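The same arithmetic as a quick check, with the numbers straight from the puzzle:

```python
start_weight, start_water = 200.0, 0.99
solids = start_weight * (1 - start_water)   # 2 lb of non-water cucumber stuff
end_weight = solids / (1 - 0.98)            # the same 2 lb is now 2% of the total
print(end_weight)                           # 100.0 lb
```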

Answer next week

Chips and change

I’ve been reading an interesting book on the semiconductor industry. It’s called Chips and Change by Clair Brown and Greg Linden. I got sent a review copy (there are some tiny advantages to being a blogger) and I’m not sure whether it is truly available. Amazon shows it as having a publishing date of 9/30 but also being in-stock with a delay. Anyway, if you have anything to do with semiconductors I recommend you buy a copy immediately.

The book looks at semiconductors as an economic issue rather than from a technological point of view (although that is not ignored), which fits in with my view of the world. Semiconductor process transitions are driven by economics (cheaper transistors) more than technology (better transistors), especially now when leakage and other considerations make it unclear whether you are getting better transistors or only more of them.

The book examines how the semiconductor industry has lurched between major crises that have driven both its success and its restructuring over time. It starts back in the 1980s when the US, having essentially invented the integrated circuit, started to lose the quality war to Japan. It examines eight crises in total.

The first was losing the memory quality war to Japan, which eventually drove most US memory suppliers (Intel, for example; remember they were a memory supplier) out of the market. Most readers probably don’t remember when HP announced how much better the quality of Japanese memories was compared to American ones, and how it shook the industry to the core (they had lots of data).

The second crisis was the rising cost of fabrication. The result of this in the US (but not elsewhere) was the creation of fabless semiconductor companies that used TSMC, UMC and Chartered to manufacture. Also the creation of clubs of companies sharing the cost of process development.

The third crisis was the rising cost of design, which meant that low-volume products just were not economically viable. The fourth crisis was the shift of chip consumption from big corporations largely insensitive to price to consumers who were hyper-sensitive to price. Somewhere in here the FPGA started to play a role.

The fifth crisis was the limits to Moore’s law, in particular limitations in lithography (Moore’s law is more about lithography than any other aspect of semiconductor manufacture). This has been an ongoing issue forever, of course, but has started to become the fundamental limitation on progress.

The sixth crisis was the shortage of design talent: as the number of people involved in a design, and their cost, increased out of control, there was a rush to find new designers in India and China, partly for cost reasons but also because there were too few designers available without looking globally.

The seventh crisis was that fabs got more and more expensive, and price pressure on end-products got more intense, leading to the current situation where most companies cannot afford to build a fab or develop a leading-edge process to run in it. There are just 4 or 5 groupings now that can do this (Intel, Samsung/IBM/ST, TSMC, Japan, UMC/TI) and there is likely to be further consolidation. Even with tapping into low-cost Asian labor, semiconductor companies are not getting the share they feel they should of the electronics value chain.

The eighth crisis is the new level of global competition. Japan, for example, is clearly losing out as a “Galapagos market”: lots of internal competition but, as I’ve said before, turning its back on the world, just as the Galapagos produced giant tortoises. There is also competition between governments, with states attempting to join the industry, keeping global competition feverish.

The book has a great graphic that summarizes the change in the basis of competition over time. Reading from left to right you see the problems come up chronologically; the vertical scale splits them into technological problems, economic problems, and competitive/globalization issues. It pulls together all of the issues facing the semiconductor industry, and how it got here, in one simple chart.

As I said earlier, if you are involved in the challenges of the semiconductor industry, this is a book you should read (and, in case anyone is suspicious, I have no relationship with the publisher other than receiving a free copy).

Hunters and farmers: EDA salesforces

I wrote recently about mergers in the EDA space, mainly from the point of view of engineering, which tends to end up being double-booked: keeping the existing standalone business going while at the same time integrating the technology into the acquiring company’s product line.

The business side of the acquired company has a different set of dynamics. They only have to cope with running the existing business since any integration won’t be available for sale for probably a year after the acquisition. The basic strategy is to take the existing product that has presumably been selling well, and make it sell even better by pumping it through the much larger salesforce of the acquiring company.

The big question is what to do about the salesforce of the acquired company. A big problem is that there are really two types of salespeople that I like to call hunters and farmers. A startup salesforce is all hunters. A big company salesforce is all farmers. Some individuals are able to make the transition and play both roles, but generally salespeople are really only comfortable operating as either a hunter or a farmer.

Hunters operate largely as individuals finding just the right project that can make use of the startup’s technology. Think of a salesperson trying to find the right group in Qualcomm or the right small fabless semiconductor company. Farmers operate usually in teams to maximize the revenue that can be got out of existing relationships with the biggest customers. Think of Synopsys running its relationship with ST Microelectronics.

Given that most of the hunters are not going to become good farmers, or are not going to want to, most of the acquired company’s salesforce will typically not last all that long in the acquiring company. But they can’t all go immediately since they are the only resource in the world that knows how to sell the existing product, has a funnel of future business already in development and probably has deals in flight on the point of closing. One typical way to handle things is to keep some or all of the existing salesforce from the acquired company, and create an overlay salesforce inside the acquiring company specifically focused on helping get the product into the big deals as they close.

The challenge is always that the existing salesforce doesn’t really want a new product to introduce into deals that are already in negotiation. They have probably already been working on the deal for six months, and they don’t want to do anything to disrupt its closing. Adding a new product, even though it might make the deal larger, also adds one more thing that might delay the deal closing: the new, unknown or poorly known, product might not work as advertised. As I’ve discussed before, big company salesforces are very poor at selling a product the customer isn’t clamoring for.

So the typical scenario goes like this: the small acquired-company salesforce is sprinkled into the big acquiring-company salesforce for a quarter or two to make sure that initial sales happen and so that the farmers learn how to sell the product. After a quarter or two, the hunters will either drift away because they find a new startup opportunity, make the transition to being farmers in their own right (they may have been farmers at some point in their career anyway), or else fail to make the transition and end up being laid off.

CEO pay

If you are an investor, what do you think the best predictors for success for a startup are? If you could pick only one metric, which one would you use?

Peter Thiel, who invested in both PayPal and Facebook so seems to know what he is doing, reckons it is to examine how much the CEO is paying him or herself.

Thiel says that “the lower the CEO salary, the more likely it is to succeed.”

A low CEO salary has two effects, both of them important. It means that the CEO is focused on making the equity of the company valuable, rather than attempting to make the company last as long as possible to collect a paycheck.

The second effect is that the CEO’s salary is pretty much a ceiling on the salaries of all the other employees and it means that they are similarly aligned.

The effect of those two things together means that the cash burn-rate of the company is lower, perhaps much lower, and as a result either extra engineers can be hired or the runway to develop and get the product to market is longer.

When Thiel was asked what the average salary was for CEOs of funded startups he came up with the number $100-125K. For an EDA startup this seems pretty low, since it is much lower than what good individual contributor engineers make. I have seen a report that an EDA or semiconductor startup CEO should be paid around $180K (plus some bonus plan too). On the other hand, maybe Peter Thiel is right. How many EDA and semiconductor startups have been that successful recently?

A good rule of thumb in a startup is that the more junior you are, the closer to normal market salary you should get. There are two reasons for this: you can’t afford the cut and you don’t get enough equity to make up for it. If you are on a $100K/year salary at market, you probably can’t afford to work for $50K/year. If you are an executive at a big EDA company making $400K/year, you can afford to work for under $200K/year. If the company makes it, the vice-presidents in the company will have 1-2% equity, which is significant. The more junior people typically don’t get nearly so much (at least partially because they are that much more numerous) unless the company managed to bootstrap without any significant investment.

Thiel has a company, YouNoodle, that (among other things) attempts to predict the value a startup might achieve three years from founding. It is optimized for internet companies that have not yet received funding, so it may not work very well for semiconductor or EDA startups. And guess one of the factors it takes into account when assessing how successful the company will be: CEO pay.

Guest blog: Steve Schulz

Today’s guest blog is by Steve Schulz. These days Steve runs the Silicon Integration Initiative, Si2. Prior to joining Si2 Steve was VP corporate marketing for BOPS. Prior to that he had a long tenure of nearly 20 years at Texas Instruments in a wide variety of positions. He’s been heavily involved in many EDA standards.

The law of unmet needs: embedded software and EDA

Over the years, I have developed many “Belief Bricks”. What are these, you ask? These are the building bricks that define my foundation for explaining the world around me. We all have them – they are a filter by which we accept input and shape our ideas. I’d like to pick out one of them here and connect it to a trend in our industry – it’s the “Law of Unmet Needs”.

I like this one – it is the root of business opportunity. Do this: find what is painful and getting worse, and no one (seems to be) addressing it. Then figure out a (market-feasible) approach for how to solve it, use good business skills to manage the solution, add water – and voila!

Of course it’s not quite that simple. Yet it remains the basic recipe for “business plans” supporting new startups, or new products within existing companies. It is the basis for how we approach our new standardization efforts at Si2 as well. Part of the trick is truly understanding what the market is telling you about the unmet need, while part is navigating hazards that must be deftly avoided to not fall into the abyss along the way.

One benefit of my role at Si2 is the opportunity to listen to a wealth of input from across our member companies. Recently, there has been a noticeable increase in the pain level associated with designing complex silicon that runs embedded software. This trend was already there, as more processors run more software on SoCs. What has changed, however, is the added risk this embedded software “variable” brings to achieving necessary parameters of the hardware design task. 

One working group in Si2’s Low-Power Coalition, while addressing power at the architecture / ESL level (where 80% of the savings are hidden), concluded that a lack of standards for higher-level modeling of power was a barrier to industry progress. Now, even without an embedded software component, the challenges of estimating and managing power consumption during the product’s operation are hard enough. Yet many products today have multiple processors, and this trend will continue. Your smart phone’s silicon burns power dictated by the software that owns the bulk of digital functionality. The energy dissipation resulting from switching transistors is a direct consequence of the software operation… but EDA flows lack a means to factor that into the design trade-off space. What operations must be concurrent? What impact will switching power / frequency modes have on critical response times as timing fluctuates? Which architecture is best suited to the combined (software + hardware) time-varying functionality? How do we cooperatively work with the software team better?

This problem does not lend itself to simple in-house solutions. It’s no wonder that we are hearing so much about the rising cost and complexity of designing silicon – to the point that the venture capital community has “moved on” to other (more attractive) areas. There clearly seems to be a large unmet need, and this trend has nowhere to go but up.

In the past, established EDA vendors have said they rejected this growing aspect of design because there is no money in the “software world” (think free compilers and $995 development kits). However, that logic is flawed. To compete in the software development world is to address a different problem – and one that already has a plethora of solutions. The unmet need here is addressing the current problem scoped by EDA: effective design of silicon to requirements under a number of complex constraints. EDA adapted to adjacent manufacturing issues and integrated DFM concerns; perhaps software is the next adjacency. How much would companies pay for genuine improvement in this problem, where the new world order puts embedded software onto nearly every chip? How can we design to even more stringent requirements 5 years from now if this trend continues?

Perhaps this problem area is not being addressed because we lack a clear vision of any feasible approach for connecting our world of silicon design and the world “on the other side”. Perhaps no single company can deliver a useful solution without more enabling infrastructure to support it. Perhaps we haven’t really tried yet.

I see a continuation of the trend for more embedded processors – and more complex silicon design parameters dependent upon what the software does that drives its operation. What do you see? Is this really an unmet need? If so, how would you propose the industry tackle it? I would be interested to hear your comments.

Friday puzzle: cucumbers

Today’s problem: on a sunny day a greengrocer places 200 lbs of cucumbers in front of his store. At the start of the day they are 99% water. He doesn’t sell any all day but it is a really hot day and by the end of the day the cucumbers are down to 98% water. How many pounds of cucumbers are left in front of the store?

Answer next week

Last week’s puzzle was the Monty Hall problem. It is sufficiently well-known and controversial that it has its own Wikipedia page. The reason it is controversial is that if it is expressed ambiguously then the answer is not clear and erudite professors will write in to complain that you’ve got it wrong. But as I expressed the problem (in the small print), you should definitely switch: you will double your chance of winning to 2/3.

The confusing thing is that intuitively it may seem that after Monty Hall has opened his door your chances are 50:50 between the two remaining doors. The right way to think of it is that the chance that the car is behind the door you picked starts off at 1/3, and that remains the case after Monty Hall has opened a door. After all, he was always going to open a door with a goat behind it (he’s not opening a random door that happens to have a goat behind it) since he knows where the goats are. So the chance that the car is behind the remaining door must be 2/3. It may help to think of the 100-door version. You pick a door, and Monty Hall then opens 98 doors revealing goats; he knows exactly which 98 doors to open and, except in the unlikely event that you’ve actually picked the car, there is one door he must avoid opening. That door almost certainly has the car behind it.
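If the argument still feels slippery, a quick simulation settles it. This is a minimal sketch of the standard rules as stated above (the host always opens a goat door you didn’t pick); nothing about it comes from the original post.

```python
import random

def monty_hall(trials: int = 100_000) -> tuple[float, float]:
    """Estimate win rates for sticking with the first pick vs switching."""
    stick_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither your pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        switched = next(d for d in range(3) if d != pick and d != opened)
        stick_wins += (pick == car)
        switch_wins += (switched == car)
    return stick_wins / trials, switch_wins / trials

print(monty_hall())  # roughly (0.333, 0.667): switching doubles your chances
```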

Kindle

I have had an Amazon Kindle for a few months now. It’s not perfect by any means, but I really like it, and for some sorts of books I prefer reading on the Kindle to reading on paper.

The first great thing, especially when traveling, is that it will hold more books than you want to worry about. It will apparently hold thousands, but I prefer just to keep a couple of dozen max that I’m either reading or have on deck to read.

The screen is really good for reading provided there is light. It is really high contrast, not quite as good as paper but close. It works well in bright sunlight since it is reflective. However, there is no backlight so you can’t read in the dark; you need a light for the screen to reflect. The screen is slow to update, which isn’t a problem when reading a normal book: a quarter of a second or so when you turn the page is fine, quicker than turning a paper page.

However, any book like a reference book or a travel guide that you want to jump around in isn’t really a good match for the Kindle. The screen update is too slow and the navigation just isn’t rich enough (partially because the slow screen makes a good on-screen user interface pretty much impossible). Nor is it good for any book that you want to skip chunks of. I tried reading 1000 Records to Hear Before You Die and it was painful when I came to a record (modern jazz, say) that I wasn’t interested in and wanted to skip 4 or 5 pages to the next record.

The fact that you can download the first few chapters of any book Amazon sells on Kindle for free (not all books are available on Kindle) is great. You really can get into a book a certain amount before you commit to buying it. And when you do buy it, it is always cheaper than the paper version, usually $9.99. The integration with Amazon is really clean. If you decide to buy a book on your Kindle it will be there within a minute (it uses Sprint’s network, although they hide that and call it Amazon’s Whispernet). You can also buy online on your PC (or iPhone) and it will be sent to your Kindle immediately.

The iPhone app works well too, and is synchronized with your Kindle so that you can read a book partially on your Kindle and partially on your iPhone and it will keep track of where you’ve got to. I’m not sure about the privacy implications of Amazon knowing just where you have got to in every book you have read, but they know so much about my reading habits anyway. I’m sure some lawsuit will want it all in discovery and there’ll be the usual arguments that the government can’t let you have privacy because terrorists and pedophiles might be detected by their reading habits on their Kindles.

Apart from the relatively slow screen, which means that skipping around in a book doesn’t really work, the thing I miss most compared to a paper book is that there is no way to tell how far you are from the end of a chapter. At the bottom of the screen there is information that tells you how far you are through the book, but when I read a book on paper I tend to peek ahead a few pages and see how long the current chapter is before I decide whether to read one more chapter (or the rest of the chapter) or go to sleep, leave the café or whatever. That may be just me; we all have little idiosyncrasies about how we read.

Chris Anderson’s book Free is available for Kindle. The main thesis of Free, as I’m sure you know, is that a lot of things that have real value are available for free (this blog, for instance, although the real value bit is in the eye of the beholder) because copying digital data is free. He put his money where his mouth is: Free on paper costs over $20 but when it first came out on Kindle it was free. I’m afraid if you didn’t get it during that first month then it’s now $9.99.
