A recent Forbes article throws some heavy shade at the thousands of institutions that are trying to help new companies grow. The sub-headline on the social media link is “New research from the Kauffman Foundation shows that business incubators may not be helping startups as much as they claim to be.”
First of all, that really isn’t what the Kauffman report found. Their MAIN argument is that not enough work has been done comparing incubated companies versus non-incubated, which is an excellent point. There is a SINGLE study that did that, and the data is not encouraging. But it is only one study, and much more work is needed.
And that’s not going to be easy. To begin with, we need to not confuse incubators with accelerators: they do different things, usually have different sponsors, participants have different expectations, and measuring success is likely to be different too. Next, there are multiple confounding factors that make evaluating the impact of incubators problematic.
This is a very ‘political’ question, even within the tech community. Those who skew more towards free enterprise believe that bootstrapping and succeeding on your own are the ONLY paths to success; they talk about Apple and Facebook, and the fact that almost all tech mega-successes have NOT come from incubators or accelerators. These types tend to be opposed to any kind of dirigiste industrial policy. And many incubators are sponsored by governments.
There may be a selection bias. Companies that are very early hyper-successes often attract significant capital early in their life cycle: as a result, they never end up needing the support of an incubator or accelerator. This kind of “cream skimming” may mean that the pool of incubated companies is INHERENTLY less likely to succeed. The incubators aren’t the problem; it is that the pool of candidates who NEED incubators is statistically less likely to succeed. It doesn’t mean that helping them is a bad idea. One of the big challenges is that robust analyses of incubators globally may show that they are not helpful on average. That wouldn’t surprise me: I suspect some incubators are good, and others are less good. A more interesting question would be “are there any incubators or accelerators that show a consistent pattern of helping start-ups become successful, and if so, what can we learn from them?”
The real problem then becomes the lack of a good statistical universe. Let’s say the average incubator helps 25 companies per year, with 2 of the 25 becoming a big success. If one incubator has three successes three years in a row, while a second incubator has only one success per year for three years, it would be tempting to examine the first for best practices, while dismissing the second as a failure. Bad idea: the difference between one and three over such a short period could easily be due to random chance. This is like picking your mutual fund on the basis of past performance. Just because a portfolio manager picked the right stocks for the last five years doesn’t mean she will do it for the next five.
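To show how easily luck can masquerade as skill with numbers this small, here is a quick binomial sketch in Python (the 2-in-25 success rate is my illustrative assumption, not real data):

```python
import math

# Each incubator backs 25 companies a year; assume each company has an
# independent 2/25 = 8% chance of becoming a big success.
n, p = 25, 2 / 25

def prob_at_least(k: int) -> float:
    """P(at least k successes out of n) under a binomial model."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

p3 = prob_at_least(3)  # a "three-success year" for a perfectly average incubator
print(f"P(3+ successes in one year): {p3:.0%}")
print(f"P(three such years in a row): {p3**3:.1%}")
```

With these assumptions, an utterly average incubator lands three or more winners in a given year roughly one time in three, purely by chance. A three-year streak happens to a few percent of average incubators with no skill involved at all, which is exactly why a short track record proves nothing about best practices.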
BIGGEST PROBLEM: what the f*** do we mean by success? What do we want incubators to do? Is it to turn companies into billion dollar valuations? Or billion dollar revenues? Create jobs? Get academic ideas into products? Employ university graduates? Build a tech ecosystem or critical mass? Backstop a critical skills area? Fill up an abandoned steel foundry that would otherwise be empty? (That last is a real thing.)
What if incubators are shown to produce identical outcomes to non-incubated companies…but allow management and founders to grow to the same level WITHOUT as much dilution through raising capital? That would certainly be seen as a benefit by the founders, who in turn might be worth $2 billion instead of $200 million on the IPO, and then set up a billion dollar research foundation in the city where the incubator was. That feels like a win to me, but how do you measure that sort of thing across the globe, and over short time periods?
I am sure there are more issues, but this feels like some useful starting points.
Why are there so many deer in Toronto’s ravines in 2015? Isn’t it weird that an urban area of six million people has so many large ungulates (and their fawns, see above) wandering around happily?
This was the question posed by a Vancouver friend of mine on Facebook. I grew up on the North Shore in Vancouver, and it felt normal to have wild animals in our back yard: there were thousands of square kilometers of forest and wilderness just the other side of the highway! But it is more surprising in someplace like Toronto, which is extremely unwild, long-settled, and was grotesquely polluted as recently as the 1970s. In the 1930s and 40s, the Don River actually caught fire. Twice.
I have some answers, but before we get there, I need to mention that “our deer friends” are not the only signs that the Toronto ravines are doing well ecologically. In terms of land creatures, I also see rabbits, coyotes, groundhogs (especially near Eglinton, which is groundhog central), tortoises and/or turtles, snakes, muskrats and the very occasional beaver. There are anglers on both the Don and the Humber, and I have seen for myself salmon swimming upstream in the East Don south of Steeles! The bird population seems very good: not just the ducks and geese that one would expect, but herons, egrets, Baltimore orioles, sandpipers, owls, jays, cardinals and way too many of those stupidly territorial red-winged blackbirds that dive bomb your head when you ride past on your bike! Keeping the blackbirds partially in check is the raptor population: three different large hawks and even ospreys.
Why such success across all three biomes?
People were idiots, and then we stopped. The reason the Don caught fire was that refineries dumped oil straight into the river…and it didn’t even get much press coverage! At the time, rivers were seen as giant public utility sewers. Tank trucks would drive across the city and just dump chemicals, toxins, and even heavy metals into the Don, Humber and Black Creek. Individuals weren’t any better: they would pour paint and other poisons down the storm sewers, which fed straight into the rivers. And the storm sewers were cross-connected with the other sewers, so heavy rainfalls led to bad stuff being continuously pumped into the rivers and Lake Ontario. Uck!
Then one day we stopped doing all that. As a city, we made it a priority to separate the two sewer systems and to treat all the storm water. There were laws about dumping, and fines that helped, but much of the change was cultural. Don’t just throw tires and shopping carts into the rivers because you can. Forty years is a long time, but also a short time. And forty years later, the river valleys are clean and thriving again.
People are present. I am convinced that one of the biggest things that helped the psychological shift is the extensive network of paths. Putting runners, walkers, bladers, cyclists, parents with babies, and older people in wheelchairs onto paths along the river bottoms…hundreds of thousands of them per year…makes Torontonians feel connected with their ravines and waterways. Don’t dump that waste down the drain – we picnic there! If those ravines were inaccessible and deserted, maybe tank trucks would still try to dump poisons into the stream, just to save money on proper disposal. But they can’t – there are ALWAYS witnesses on every kilometer of the rivers. Millions of eyeballs over the years have been more vigilant than any environmental enforcement team ever could have been.
People were smart. The people who work for the city aren’t idiots. They kept the attempts to control water flow to a reasonable and non-invasive minimum. No big dams, a few small weirs, erosion control through passive measures. Lots and lots of swamps, with bird friendly marshes and reeds, and automatic filtration of bad bacteria, organics, and even metals. They tore up invasive plant species and replanted with native varieties that support the fauna. Extra long bridges so the pilings don’t sit on the edge of the river banks, avoiding erosion and turbulence problems. Over and over, our ravine system reflects a conscious ethos of “let nature be natural.” It works, and the wildlife boom shows that.
We’re also lucky. Toronto’s ravine system is built out of short rivers cutting through glacial till, and with frequent heavy rains and flash floods. That situation killed a lot of people in Hurricane Hazel, but the physical and meteorological environment has also been helpful to restoring the ravines rapidly. Each summer t-storm flushes the rivers out, sweeping debris both large and small away, and burying the worst under silt, sand and gravel. One section of the Humber was badly flooded two summers ago: the river rose and deposited at least 10 cm of this really gross goo and silt. It looked like a disaster zone for many months. But two years later that same area is a paradise of lush plants, insects, birds…and our friends the deer!
The good folks at comScore have released their Global Mobile Report (free download), which looks at multiplatform audiences by demographic in the US, Canada and the UK. The single most important finding is the data on smartphone usage for Americans 18-34 (aka millennials or Generation Y) versus what Canadians and Britons are doing.
As you can see on the chart below, Americans 18-34 spent 61% of their digital media time on smartphones in March 2015, compared to 47% and 50% for Canadians and Britons of the same age! And that seems to be coming out of their computer use: American millennials are on PCs only 31% of the time, compared to 49-50% for those of us with the Queen on our coins. (Please note that comScore uses the word ‘desktop’ to describe all computer use: laptop or desktop, PC or Mac.)
Put another way, American millennials are on their smartphones 30% more than computers, while the non-American smartphone usage is only 8-10% more. Given that the US market is sometimes a leading indicator or bellwether, this raises a serious question: is the rest of the world going to follow in their footsteps?
Before we go there we need to look at another comScore chart. As you can see below, mobile consumption (the orange square represents smartphones and tablets combined) by American 18-34 year olds is massively out of whack compared to what we see in the UK and Canada: their monthly mobile time spent of 88.6 hours is a full 30% higher than the UK/Canada average of about 68 hours.
And the reason I think this is important is the blue squares in the chart above. Millennials in Canada and the UK are very similar to each other, using computers for digital media about 44.6 hours per month in absolute terms (other computer use isn’t being measured). American millennials are lower at 39.1 hours, but that difference is much smaller than the percentages would suggest – only about ten minutes per day.
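A quick back-of-envelope check of those comScore figures (the numbers are copied from the charts above; the UK/Canada mobile figure is my approximate average):

```python
# Hours per month of digital media time, 18-34 year olds (comScore, March 2015)
us_mobile, ukca_mobile = 88.6, 68.0    # UK/Canada is an approximate average
us_desktop, ukca_desktop = 39.1, 44.6

print(f"US mobile time is {us_mobile / ukca_mobile - 1:.0%} higher")
gap = ukca_desktop - us_desktop
print(f"Desktop gap: {gap:.1f} hours/month, ~{gap * 60 / 30:.0f} minutes/day")
```

The desktop gap of 5.5 hours per month works out to about eleven minutes per day, while the mobile gap is more than 20 hours per month: the difference really is on the mobile side.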
My conclusion? Younger Americans aren’t really using computers that much less for digital media, instead they are using mobile that much more!
And I think that way of looking at it is critical: the computer isn’t going away, the smartphone is instead growing the total digital media pie.
Most people in the media are focused on calling the Apple Watch a success or failure based on units sales or market share. Those are great numbers for driving readership, but they miss an important truth: companies like Apple care much less about those metrics, and much more about whether the new product makes back the money invested in it, and goes on to make even more money. For long term shareholder value enhancement, that return on investment (ROI) is the only metric that really matters.
First we need to know cost: how much money did Apple spend on developing the watch and marketing it? Apple spends over $6 billion a year on R&D, so it is impossible to figure out exactly what portion of that was for the watch, but I just can’t see it being more than $400-500 million. Add another $50 million for marketing (Apple spent $38 million on TV ads for the Watch alone) and we are at roughly a $500 million investment.
We don’t know for sure how many watches Apple has sold, or the sales mix, so we don’t know average selling price (ASP) and therefore revenues. We also have only some guesses about how much the watch really costs to make (likely less than $100), as teardown pricing is an inexact science at best. A reasonable guess would be that they have sold somewhere around 5 million units worldwide in the three months following the launch, at an ASP of $500, and an operating profit of $2 billion. Even if demand dries up entirely, and they sell ZERO watches for the next nine months, the first year of sales would see Apple recoup its entire $500 million product development and marketing cost, and have an incremental gain of 300%.
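For transparency, here is that arithmetic as a sketch; every input is my guess from above, not a disclosed Apple number:

```python
# All inputs are guesses, not Apple disclosures.
units = 5_000_000          # watches sold in the first three months
asp = 500                  # average selling price, USD
unit_cost = 100            # build cost per watch (teardown-style upper bound)
investment = 500_000_000   # development plus marketing

operating_profit = units * (asp - unit_cost)
incremental_gain = (operating_profit - investment) / investment
print(f"Operating profit: ${operating_profit / 1e9:.1f}B")
print(f"Incremental gain on investment: {incremental_gain:.0%}")
```

At these assumed figures, the $2 billion of operating profit repays the $500 million investment and leaves $1.5 billion on top, which is the 300% incremental gain.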
Most companies that develop new products have various ROI hurdles. The lowest is “I hope we get our money back!” More common is “let’s get our money back, and then another 100% return over the product lifecycle.” That is considered a basic success, so 300% return in the first year would fall into the home run category.
Before I go any further, I need to be clear that this is all guesswork on my part. Next, many (many many) people care about this stuff primarily in terms of what it means for the share price, which I have no opinions on. Next, Apple makes about $75 billion per year in operating profit already, so while $1.5B from the watch sounds like a big deal, it may not move the needle much for a company as large and profitable as Apple. My comments ONLY apply to the question of “from Apple’s product development perspective, is the watch a success?”
And here is where it gets really interesting to me. Because the success or failure of the watch is likely not measured by Apple management purely in terms of watch units sold, watch revenues made, or even watch operating profits earned.
First, the media treatment of the launch has been both massive and largely favourable. At $1 per thousand impressions, the brand marketing value is likely in the hundreds of millions of dollars worldwide. I am not referring to Apple ads for the watch; I am talking about “free” media coverage.
Next, the watch is likely to be a useful part of the contactless payment business and Apple Pay. The company has high hopes that smartphone and watch “tap and go” NFC payments are about to be a big thing, and if watch sales of 5 million (or whatever) help encourage mobile payment adoption, then that provides another big boost to the ROI calculation.
But both of those are minor points compared to phones. Although Apple makes a wide variety of products, many of which would be the main deal at other tech companies, they are all dwarfed by the iPhone in terms of primacy for Apple. The iPhone is over 70% of Apple revenues in 2015 and likely over 75% in terms of profits. To put it bluntly – anything that drives more iPhone sales will be considered a success at Apple!
The Apple Watch only works with the iPhone 5 or later, and really only works best with the iPhone 6. Further, the watch only works with an Apple phone; it won’t work if you have a Samsung. If we keep using my nominal 5 million watches sold, then the math gets very good very fast. What if a million of those purchases were from Android users who were just dying to try the watch? What if another million were folks who bought a newer iPhone at the same time, just to catch all the benefits? And what if another million were thinking about switching to Android, but are sticking with Apple because their watch locks them into that ecosystem?
That last million has its benefits: sticky users, all buying stuff on iTunes and apps and so on. But the real ROI comes from the first two million, which are outright wins: a couple of million “bonus” high end iPhone sales at >$600 ASP is another $1.2 billion in revenues, and $400-$500 million in operating profits.
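The “drag along” arithmetic, with my assumed iPhone operating margin spelled out (the 38% figure is my guess, chosen to land in the $400-500 million range above):

```python
bonus_iphones = 2_000_000   # hypothetical watch-driven "bonus" iPhone sales
iphone_asp = 600            # high-end ASP floor, USD, per the >$600 above
operating_margin = 0.38     # assumed iPhone operating margin (my guess)

extra_revenue = bonus_iphones * iphone_asp
extra_profit = extra_revenue * operating_margin
print(f"Extra revenue: ${extra_revenue / 1e9:.1f}B")
print(f"Extra operating profit: ~${extra_profit / 1e6:.0f}M")
```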
In other words, even ignoring the revenues from the watch itself, the “drag along” effects of tethering the watch so tightly to the latest iPhone may pay for the entire watch development and marketing costs.
Once again, that is probably irrelevant from a share price perspective. The media loves a horse race, and conversations around whether the watch is meeting, beating, or failing to meet Wall Street forecasts or live up to the hype are dominating the conversation.
But from the perspective of the Apple product innovation folks, they are almost certainly looking at the kinds of indicators of success above. I don’t know if the watch is a success or not, but I hope looking at it from this angle helps.
Last week, analytics firm Slice Intelligence released some charts that prompted headlines claiming Apple Watch sales had collapsed to well under 5,000 watches per day. I posted the article on Facebook, generating nearly 50 comments. One of them linked to an Apple Insider item and asked: “Duncan, what’s your take on yet another article refuting the claims of poor Apple Watch sales?”
The Apple Insider piece will thrill the heart of any data geek: it is 2,000 words of closely reasoned argument buttressed by numerous charts. But it was NOT a refutation – it failed to present new data of its own that showed the Slice data to be false. Instead, it did an amazing job of pointing out the multiple problems with relying too heavily on the Slice information.
Slice already admitted many of these limitations: their data is based only on eReceipts from a relatively small sample population, it is US-only, online only, and excludes resale sites like eBay. After I finished reading the Apple Insider article, I agreed that the deficiencies are so material that no one should rely on the Slice data to try to predict Apple Watch sales. You CANNOT say “Apple only sold ~3,000 watches on July 2, and there are 365 days per year, so the Watch run rate is only about one million units.” You just can’t do it! But I wasn’t trying to. I actually have no view on whether Apple will sell 5 million or 50 million watches in 2015. Instead, what I am really interested in is “what kind of buyer is buying the Apple Watch?” There are two camps out there today.
Camp One says that all previous smartwatches have been mediocre successes at best, kind of like tablets were before 2010 and the iPad. And, like the iPad, the Apple Watch is the smartwatch done right. The iPad version one wasn’t perfect, but people bought it, liked it, showed it to their friends, version two was better, and Apple now sells over 60 million iPads per year, and the tablet market as a whole is 230 million units in 2015. In the same way, the Apple Watch will be big for Apple, but also transform the wearable space and turn it into the Next Big Thing.
Camp Two agrees with most of that, but worries that the time is not yet right for wearables, especially watches. Maybe one day they will be big, but in 2015 the mass market of consumers isn’t interested. Further, the fact that Apple produces generally great products, has a very loyal customer base, and has enjoyed very favourable media interest (the narrative of “Apple will do to watches what they did to tablets” is pretty hard to resist!) means that people in BOTH Camp One and Camp Two agreed that the opening weeks of Apple Watch sales were likely to be large, and that the Apple Watch would easily become the most successful smartwatch launch ever. Where Camp Two diverges is in believing that, although initial sales will be big, we will relatively quickly see sales drop sharply.
Although we’ve been talking about watches and tablets so far, the phenomenon at work is common to technology devices, movies, music, games, and so on. The entire movie business is built around two numbers: how big is your opening weekend box office…and what happens after that? Whenever the latest Marvel Universe movie (or Woody Allen film, or Tom Cruise vehicle, or James Bond, etc.) opens, the studio can count on a certain number of millions of super-fans to pack the multiplexes’ biggest screening rooms. But some movies fall off a cliff in the second weekend, while others might never match the opening day success, but still generate sufficient box office interest to stay in theatres for weeks or even months. Whether you call it long-term success, staying power, or “it has legs,” they all mean the same thing…and they all look the same on a graph.
And here is where I think the Apple Insider article doesn’t come close to “refuting” the Slice data. Although the Slice methodology has multiple weaknesses and deficiencies, and is possibly useless for calculating annual sales – it is CONSISTENT. To coin a phrase, the chart at the top is “comparing apples to apples.” The absolute numbers shouldn’t be relied upon, but I will tell you that I barely noticed them when I first saw the shape of the curve.
The chart was properly done and reminded me of other charts I have seen: the time period across the bottom with a uniform interval; a seven-day rolling average to eliminate one-day glitches (see the daily chart below for what the unsmoothed version looks like); and (most importantly) it was log-scaled. Anyone who follows media, technology, or any of the natural sciences could look at this graph and say “this is a hyperbolic decay curve. It follows a reverse S-curve shape. There is a big drop at the beginning, then a plateau, then another drop. The two most important things this chart tells me come from a) the duration of that plateau; and b) the steepness (or slope) of the second drop.”
Once again, I ignore the absolute value of the numbers on the right-hand axis. Instead, the chart nearly screams the following: after initially enormous sales on launch, daily transactions fell about 90% and stabilised in mid-April. They stayed highly range-bound for seven (7) weeks, until the second week of June, when they declined another ~90% over a three-week period.
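To make the shape concrete, here is a sketch with entirely synthetic numbers (mine, not Slice’s) tracing the curve just described: a launch spike, a ~90% drop to a plateau, a roughly seven-week plateau, then a second ~90% decline to a low floor.

```python
import math

def daily_sales(day: int) -> float:
    """Synthetic daily transactions tracing the described curve shape."""
    if day < 7:                          # launch spike, decaying fast
        return 20_000 * math.exp(-0.5 * day)
    if day < 56:                         # roughly seven-week plateau
        return 2_000.0
    # second decline, flattening out near a low floor
    return max(200.0, 2_000 * math.exp(-0.15 * (day - 56)))

raw = [daily_sales(d) for d in range(90)]

# A seven-day rolling average, as in the Slice chart, smooths one-day
# glitches before the data is plotted.
smooth = [sum(raw[max(0, d - 6):d + 1]) / len(raw[max(0, d - 6):d + 1])
          for d in range(90)]

print(f"Plateau: {raw[30]:.0f}/day, post-decline floor: {raw[89]:.0f}/day")
```

On a linear axis the launch spike dwarfs everything else; plotting the smoothed series on a log scale (e.g. matplotlib’s `plt.yscale("log")`) is what makes the plateau duration and the slope of the second drop readable.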
There are two interesting things about the shape of the chart. The first is that Camp Two is more likely to be correct. Regardless of how many millions sold, the first generation Apple Watch seems to have very strong launch sales, a relatively brief but strong plateau, and (once the early adopters are finished buying) the second decline is steep and to a low level. The product doesn’t appear to be popular outside the early adopter crowd, and not many people appear to be looking at Watches on the early adopter wrists and saying “Man, I need to get me one of those.” If true, this is NOT like the iPad history, where you could see the number of devices ‘in the wild’ growing pretty steadily over time.
The second thing is maybe even more important. I have no skin in the game, and it doesn’t matter to me whether Camp One or Two is right. But as a long-time data guy, the Slice chart “looks right.” The shape of the Slice curve isn’t “close” to what Camp Two would have predicted…it is almost an exact match. The robustness of that fit, both to predicted curves and to other decline curves we see in tech and media, makes me believe the Slice data is useful at some level, and likely reliable.
New data might change my mind, but for now the “Watch is mainly about early adopters” looks to be the more probable hypothesis.
Guardian headline to the contrary, the Hubble space telescope is not inferior to a smartphone camera. If you want to skip my lengthy reasons below, it’s because your smartphone takes pictures of brightly lit scenes, in a friendly environment, that are meant for showing your friends on Facebook. That’s not what Hubble does, and if your smartphone camera tried to do what Hubble does, it would fail horribly.
To be fair, the article is accurate about an important point: the gear that goes into space is seldom cutting edge. It is hard to service, repair or upgrade once it is up there, and it was usually designed in years before launch – sometimes decades ago.
But the headline is going to be the only thing most people read, and it has important negative consequences. It makes people think that the scientists at NASA and other space agencies are too dumb to use smartphone cameras. Too bureaucratic. Too slow. Too tied up spending billions of “hard earned tax payer dollars” on more-expensive-but-less-good gadgets. All for some low quality pictures?
Let’s take a look at the ‘real picture’, shall we?
It’s all about the photons. Cameras work by gathering photons, focusing them, and capturing them on an imaging plate. All else being equal, not enough photons = grainy picture. As you know, a smartphone camera will take a brilliantly crisp photo outdoors on a sunny day. Indoors is OK if all the lights are on, but usually low quality if even a little bit dim.
The New Horizons probe is taking pictures of Pluto as I write. Pluto has a weird orbit, and right now it is closer to the Sun than it is most of the time, but it is still about 33 times as far from the Sun as Earth is. That means that the Sun is casting about 1,100x LESS light out there than it does at Earth’s distance. New Horizons needs to capture the reflected photons to take pictures, and that’s much harder than it is in your living room. And the Hubble space telescope is normally used to take pictures of astronomical objects that are millions or even billions of times fainter than the surface of Pluto. The reason that so many space pictures look bad is that the objects are really far away and there’s not much light. Imaging a 30th magnitude galaxy is a tough problem: you’re looking at objects that are many orders of magnitude fainter than the random noise in a smartphone camera!
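The dimming factor comes straight from the inverse-square law: the light arriving per unit area falls with the square of the distance from the source.

```python
# Inverse-square law: solar flux scales as 1 / distance^2.
pluto_distance_au = 33            # roughly, at the time of the flyby (1 AU = Earth's distance)
dimming = pluto_distance_au ** 2  # how much fainter sunlight is at Pluto
print(f"Sunlight at Pluto is ~{dimming}x fainter than at Earth")
```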
Space is a rough neighbourhood. Smartphones only work between -10C and +40C, or about 260K (Kelvin, which is like Celsius but with 0K being absolute zero, or -273C) to 310K. Depending on the space mission, likely temperatures range from about 3K to 450K, which is a range about 9x larger than a smartphone camera could handle. G-forces on launch are much worse than dropping your phone onto the road. But the real issue is radiation: both cosmic rays and particles are absolute murder on imaging sensors. Sensors are designed to sense things, so you can’t make them out of lead or seal them up in a perfectly shielded box. There are some people looking at using Commercial off the Shelf (COTS) cameras that are not radiation-hardened, but at this time almost everything in space is specially tailored for years of high radiation exposure. Your smartphone camera might last a week or even a month, but it wouldn’t last years. By the way, one of the world leaders in making space-ready cameras is Canada’s Dalsa. Now owned by US-headquartered Teledyne, they still do great work: they didn’t make Hubble’s main camera, but they are on the Mars Surveyor, for instance.
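The temperature comparison is simple arithmetic (the 3K-450K mission range is the rough figure from above):

```python
phone_min, phone_max = 263, 313   # -10C to +40C, converted to kelvin
space_min, space_max = 3, 450     # rough range across space missions

ratio = (space_max - space_min) / (phone_max - phone_min)
print(f"The space temperature range is ~{ratio:.0f}x wider")
```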
This will get a little technical. There are just so many issues where smartphone cameras are NOT the same as space cameras that it’s hard to know where to start. Speed, resolution, and packaging are the minor issues. The sensor itself is MUCH larger on Hubble’s main camera – you couldn’t just stick a tiny smartphone sensor in the optical path.
One MAJOR issue is that all smartphone cameras today use a kind of sensor called CMOS, while Hubble’s main camera uses something called a CCD. Here is an excellent overview of some of the key differences between the two technologies. Both have their virtues: CMOS is low power, which makes it ideal for battery-powered smartphones. CCDs use more power, but that’s not a problem when you have giant solar panels providing kilowatts whenever you need them. As one minor(ish) issue, CMOS uses something called a rolling shutter, while CCD uses a global shutter, and for most science/space imaging tasks a global shutter is much better. That’s not even the worst problem with CMOS cameras: they don’t produce images the same way CCDs do. They need to do some level of on-board processing to make the pictures look good, which produces good selfies but reduces the scientific usefulness of the images. Astronomers want the “raw data,” and CMOS sensors don’t provide that as their output. Also, CMOS cameras tend to have low dynamic range: they are less good at capturing really bright areas and really dark areas (and you get a lot of that in space!) in the same image, and that matters a great deal in astronomy.
But the other issue is what we mean by “taking pictures.” A smartphone camera does a good job capturing light in the visible spectrum, which has wavelengths of roughly 400-700 nanometers, as seen in the image below. But the Hubble ‘cheats’: it is specially designed to capture the full visible spectrum, but it can also image longer-wavelength infrared light and shorter-wavelength ultraviolet light (the latter is particularly useful in space). Remember the photons? Being able to ‘see’ photons from a broader spectrum gives you more photons to work with, as well as providing other scientifically useful information. Smartphones don’t do that. As an example, the image at the top of the Guardian article is of the Horsehead Nebula. Pretty, isn’t it? But that image is not from the visible light a smartphone camera can capture; it is infrared. In visible light, the nebula is just an opaque black mass!
Processing is not done onboard. One of the odd things about the Guardian article is how the writer keeps talking about processor statistics: “The third servicing mission was in 1999 and that was when the processor was last upgraded, from 1.25MHz to 25MHz, still way below the specifications we are familiar with today.” True enough, but it reflects a serious misunderstanding: the Hubble is doing something very different from a smartphone. We love our smartphones, and we love what they do to our pictures: they compress them, encode them, add filters, and modify them for transmission over cellular or Wi-Fi. These are all processor-intensive tasks, and the Hubble just couldn’t keep up.
But the Hubble isn’t about Instagram filters. It is a scientific mission, and its job is to take the best pictures possible, and then send them down to Earth as accurately as possible. There may be some error-checking on board, but no filtering, compressing, or any of the other things we expect from smartphones. Therefore 25MHz processors are not a gating factor on the picture quality.
Not that NASA is perfect. Don’t get me wrong. Using commercial off-the-shelf equipment where feasible is a great idea. It will save money and get new technology into space faster than in the past. But bad science stories about smartphone cameras being better than Hubble don’t help.
In other words, the Guardian headline and article is not merely slightly inaccurate, it is entirely backwards: smartphone cameras would NOT take better pictures than the Hubble.
Perhaps you doubt me? Perhaps you think it’s just my opinion and research against the author’s?
But it isn’t just my opinion. I wrote most of the above, and then sent a copy to my friend Brian Piccioni. He’s a tech analyst who was ranked #1 in Canada and globally multiple times and specialised in imaging companies, graphics and smartphone chips; he made a few suggestions to improve the article. I haven’t seen my university friend Dave Kary since 1987 at UBC, but we’re in touch on Facebook. He also helped me sharpen up the article…and Dr. David Kary is an award-winning astronomy professor at Citrus College in California. Finally, Dr. Savvas Chamberlain (Ph.D., M.Sc., D.Eng., FRSC, FIEEE, FCAE, FEIC, C.M. and Member of the Order of Canada) is someone I have known for years, and I had the honour of moderating a panel he was on at a semiconductor conference last year. Savvas is not only the former CEO of Dalsa; he has published more than 150 papers on image sensors, CCDs and other semiconductor devices, and authored or co-authored 20 patents related to image sensing.
I am responsible for the final product, of course. But I did want to share the level of research and review that went into this analysis.
Fewer than 2,500 pages read this month, down 20% from May. I’d blame the pulmonary embolism, but I actually got a TON of reading done while waiting in the emergency room. 🙂 As you’ll notice from the picture, there was even a non-fiction eBook as part of the mix!
I am a long-time Stephenson fan, have read everything else he’s written, and was looking forward to this one immensely. It’s not terrible, but it is not his best work. Too many reviewers have spoiled too much of the plot, but the first 2/3 of the book is a (good) geekfest on orbital mechanics, crisis-handling and comet wrassling. The science is very solid, but the writing is not as sharp as usual: Neal has publicly stated that he ‘kind of had this old idea kicking around’, and that comes across on the page: it’s hurried. The final third is a big shift, and I almost wish it had been made into a second volume. 7/10.
I have also read everything Mitchell has ever written. His early promise in Ghostwritten and number9dream was apparent, and Cloud Atlas was mind-blowingly good: it will almost certainly be regarded as one of the most important novels of the early 21st century. But every book since raises the question of whether Mitchell can match that triumph. I personally liked Black Swan Green (a relatively conventional bildungsroman) but thought the Jacob de Zoet book in Japan was a let-down. Although The Bone Clocks was shortlisted for the Man Booker prize, it is definitely not David at his best. If you think of The Bone Clocks as Mitchell with a bit more Neil Gaiman than usual, you’d be on the right track. To my mind, the environmental allegory/dystopia/screed in the final chapter was too heavy-handed and weakens the book. 8/10
Uff. I threatened to stop reading this series after book two. But various friends said “stick with it, they get better.” Maybe a little. Books three and four aren’t the worst thing I have ever read, but Pratchett’s illness and death have obviously had an impact: it’s pretty much all Baxter by this point, and the man is a charmless writer. For some reason I forgot I had read The Long Mars at the start of the month (which tells you something about how memorable these are), so it isn’t in the photo. 5/10 for both books.
My Facebook friend Mike Klein has joined Ascribe.io, which uses blockchain technology to manage and track digital content. Clever idea, and I wish him luck in Berlin! But as a test of the technology, Mike shared an eBook with me. Not only did the transfer work well, but I really liked the book! I normally prefer fiction to non-fiction, but this book combines a clever heist, French history, and a fascinating exploration of the watch-making industry in the past and today. Not a subject I knew much about, and this was a wonderful introduction. It isn’t perfect writing, but Biggs made this topic into a page turner for me! 9/10
I posted the following on Facebook yesterday, and over 10% of my friends clicked like on it. It seems to have hit a nerve, so I’m posting on this blog as well, just so there’s a more permanent record of it.
Those first weeks, Barbara and I slammed together with an emotional intensity that probably registered on seismographs. The photo above was taken in South Africa only a month later: she already had the trip booked when we went on our date, and within two weeks I was buying my own plane ticket!
In 2004, I didn’t know that one day my job would be about making predictions. But I remember making a long-range forecast to her: I told Barbara that all of the romance and intensity we were experiencing in South Africa was great…but that it would only grow better over time.
I didn’t mean that we would become best friends, or more comfortable with each other. I predicted that for both of us the passion, the cards, the flowers, the spending every minute together, long conversations, dinners alone, the constant physical touch…that all these things would increase and not diminish.
And here it is, eleven years later. And I have never made a more accurate prediction. Happy anniversary my own true love!