What does the Pokémon Go craze really mean?

Here are ten quick thoughts, especially on the augmented reality (AR) angles: does the success of PG mean that AR is at a tipping point? (As always, I have no view on the stock market implications.)

1) It is definitely a hit. But comparing the number of daily active users (DAUs) to Tinder or Twitter doesn’t make sense; we should be measuring it against other successful mobile game titles. Pokémon Go has been downloaded about 7.5 million times, and let’s assume that 6 million are playing it daily, which is likely high, but we are early in the adoption cycle. Our old friend Angry Birds had over 30 million DAUs at one time, Clash of Clans was over 100 million DAUs, and the record holder seems to be Candy Crush Saga, which had 158 million people PER DAY playing it in Q1 2015. PG has yet to launch globally, so its numbers will grow, but it is clearly not yet in the “big leagues” for mobile games.

2) But will it have ‘legs’? Many games come out, peak quickly, and then see their numbers drop almost as quickly. Others have much greater longevity, or ‘retention’ as it is called in the gaming business. Retention is the percentage of people who played your game in Month 1 who are still playing it in Month 2. Losing more than half of your players in the months after launch is normal: only 16% of games even have Day One retention rates higher than 50%! We just don’t know yet what Pokémon Go retention will be – my own guess is that it will likely have slightly lower retention than average, for some of the reasons below.
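For the non-gaming-business folks, the retention calculation is simple enough to sketch in a few lines of code (the player IDs below are made up purely for illustration):

```python
# Month-over-month retention: the share of Month 1 players still active in Month 2.
# New players who arrive in Month 2 do not count toward retention.

def retention(month1_players, month2_players):
    """Fraction of Month 1 players who are also active in Month 2."""
    m1 = set(month1_players)
    if not m1:
        return 0.0
    return len(m1 & set(month2_players)) / len(m1)

month1 = ["ann", "bob", "cho", "dev", "eli", "fay"]
month2 = ["ann", "cho", "eli", "gus"]   # gus is new, so he doesn't count

print(retention(month1, month2))        # 3 of 6 players returned -> 0.5
```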

3) The time of year matters: this is a great summer game. I suspect it will do less well when the weather gets colder or wetter. #CatchACold isn’t nearly as fun as #CatchEmAll!

4) This may be more about how much people love Pokémon than how much they love Augmented Reality. There are a number of other similar AR mobile games, and none of them have seen this kind of success. Ingress (an earlier game from the makers of PG, and without the AR overlays) had about a million monthly active users.

5) It is hard to overestimate the laziness of human beings. Yes, people are running around and trying to “catch ‘em all”, which is great for fitness: one woman said her pedometer measured twice as many steps in a day while she was playing! But, as we know from fitness bands, the drop-off rate for gamifying physical exertion is high. People do it for a few days, then turn back into couch potatoes! 😦

6) The most popular smartphone games tend to be casual time-fillers. You can play for a minute or two while in a line-up, or on a bus. Pokémon Go requires more of a time commitment – early data suggests that the average player is spending over 43 minutes per day in the game. That is amazing engagement, but likely to appeal only to a fairly narrow slice of serious gamers.

7) Some people are saying that this will be big for Augmented Reality in general. I don’t think that there is much evidence for that. Playing with your phone in a limited AR way is good, but how will that translate into AR headsets? Or into non-game AR content? Or into non-Pokémon AR content? All good questions…

8) A lot of smart futurist-type people and forecasters have been saying that AR will be the next big thing since 2010. They have been badly, embarrassingly wrong so far. And I think some of the buzz around Pokémon Go is from AR-boosters pouncing on this first success like a drowning man grabbing a life-saver.

The success of Pokémon Go means that at least some people, for some period of time, will actually use and enjoy using augmented reality for certain kinds of content. We didn’t know that before, so this is definitely meaningful new information.

But whether this is a bellwether for ever-increasing growth in the AR market is unproven in my view. Critically, most of the AR advocates have been pushing AR headsets of late, not the mobile phone overlay version. I don’t think the success of Pokémon Go does anything to suggest that people will also be willing to wear expensive, heavy, obtrusive headsets.

9) It is worth noting that Pokémon Go is unusually VISIBLE. Tens or hundreds of millions of people can use, and are using, their smartphones to hurl birds at pigs, be ninjas with fruit, or drive around Hollywood with Kim Kardashian. But unless you peer at their little screens, their game playing is not thrust into your awareness. In contrast, even a few dozen people gathered in one spot in Central Park make headlines. (Which I find slightly odd: I have been to Central Park, and it has well over 100,000 visitors per day in the summer. Why make a fuss about a few dozen playing Pokémon?) I would argue the impact of Pokémon Go is being exaggerated to some extent by the extremely public nature of the game.

10) Lists of issues are being written up: people getting robbed, walking into traffic, security issues around the app, draining your battery, or even using up data. (The last is not a big deal: PG uses about 10 MB per hour of play.) I think these are all fairly minor points, and will not be significant long-term factors.
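A quick back-of-envelope check, combining the roughly 10 MB per hour figure with the 43 minutes per day of play from point 6, shows why data use is a non-issue:

```python
# Rough monthly data use for Pokémon Go, combining the two figures quoted
# in this post: about 10 MB per hour of play, and about 43 minutes of play
# per day for the average player.
mb_per_hour = 10
minutes_per_day = 43
days_per_month = 30

mb_per_month = mb_per_hour * (minutes_per_day / 60) * days_per_month
print(round(mb_per_month))  # ~215 MB/month: small next to a typical data cap
```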

My conclusion?

I think AR in mobile gaming will be very similar to motion control in console gaming. The Nintendo Wii showed that motion control was a real market, and a profitable market, with tens of millions of people trying it and using it. But it reached a quick peak, and then fell rapidly from that peak: see chart below. Critically, it NEVER became the way that most people played games. It was an alternative, but always a small piece of the pie. I suspect AR in gaming will be the same. And motion control never crossed over from console gaming into how we interacted with our TV sets or computers…despite many companies trying to make that transition.




The first death due to “self driving” cars: who is at fault?

Tesla is now being investigated by the NHTSA following a May accident in which a driver was killed while his car was in Autopilot mode. There are a few articles out there, but this Washington Post article has the best information by far; the picture at top shows the accident site. In addition to the obvious human cost, this is the first time a human has been killed while a car was in autonomous or semi-autonomous self-driving mode. There will be a lot of debate about where the fault lies. I am not interested in the legal aspects, but there are a few parties who share some fault. Sorry for the cursing below, but a guy is dead and I am angry about that.

The Driver: Not a nice thing to say, but the Tesla website and owner’s manual and everywhere else tells you SPECIFICALLY to keep your eyes on the road, your hands on the wheel, and always be ready to take control. That said, attempts to make this all about the (dead) driver are NOT going to fly with popular opinion, politicians, the media, regulators, and so on.

Tesla: Stop calling it fucking Autopilot. It is a very sophisticated and capable advanced cruise control – and calling it Autopilot makes drivers think it is more capable than it is. Yes, Tesla warns people not to trust it too much, but when a pilot cruising along at 35,000 feet turns on the “autopilot”, the plane NEVER smashes into another airplane: collision avoidance systems warn the crew in time, and the crash is averted. Which is exactly what did not happen in this case.

The Media: Stop fucking calling Teslas “self-driving cars.” They are nothing close, at least in 2016. Tesla’s warnings that the Autopilot feature is in beta, needs to be backed up by human drivers, and so on get turned into background noise by a million media mentions that call them self-driving. Nobody reads End User License Agreement disclaimers either, but ignoring those doesn’t usually get people killed.

Tesla Again: Time to get a little technical. There are two broad approaches to making vehicles more autonomous that are related to how the car “sees” the road. One is to put a big, expensive, active sensor on top of the car a la Google. The Google car uses LIDAR, which is like laser radar, to scan the environment with great precision. It costs a lot of money, is pretty ugly, and doesn’t work in snow, but it does have certain advantages. One of them is that it would have detected a truck in the path of Joshua Brown’s car.

The other approach is to have a suite of cameras that look in all directions. This is cheaper, blends in better with the car, and works well under many circumstances. This is what Tesla uses, and it appears to be at least in part responsible for the fatal crash. To quote the Tesla blog post announcing the crash: “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.” In other words, the lighting conditions were such that purely optical systems (whether a human eye or a semi-autonomous car with cameras) were not good enough. Elon Musk stated publicly in October 2015 that fully autonomous vehicles don’t need to use LIDAR, but would need “passive optical and then with maybe one forward RADAR… if you are driving fast into rain or snow or dust.” I think we can now add bright daylight and white trucks to “rain or snow or dust,” and remove the word “maybe.” I will make a prediction here: purely optical solutions are not sufficient, and all autonomous (and maybe even semi-autonomous, see next point) vehicles MUST have at least one active sensing technology at wavelengths different from those the human eye uses. We will not settle for robot cars that are roughly as dangerous as human drivers: they need to be safer, or there’s not much point. [Edited to add: I was unclear above. The Tesla does have a front-facing radar unit, but it only scans the road ahead up to about the level of the hood. The truck body was high enough that the radar didn’t ‘see’ it, was not detected by the cameras, and was still low enough to cause a catastrophic and fatal crash.]

Semi-autonomous Vehicles: There is a fundamental problem here. Developing fully autonomous vehicles is going to take a while, and there are many benefits to getting there incrementally. Rolling out features like automatic emergency braking (which will be standard on most American cars for sale in 2022) will save thousands of lives and billions of dollars, get consumers to trust the technology, and allow the technology to reach economies of scale. But there is an uncanny valley in terms of driving.

The uncanny valley refers to the fact that Elmer Fudd is kind of adorkable, but the characters from The Polar Express were nightmarish! As animation moves from the cartoonish to the almost-human, there is a perverse effect where “superior” animation actually looks worse.

In the same way, nobody became a worse driver because they had an automatic transmission. Even cruise control doesn’t seem to have increased accident rates. But as semi-autonomous technology gets better and better, there is a very real risk that human drivers will be lulled into inattentiveness by the improvements in autonomy.

I am not sure there is an easy fix for that last point, except getting active sensing into cars fast.

Compared to What?: Tesla is spending a lot of time saying that this was the first fatality in over 130 million miles driven, and the average in the US is one fatality every 94 million miles driven. That is true, but beside the point in two ways.

First, I think the public and regulators are going to demand more of semi-autonomous cars. Making the same mistakes a human would have made won’t be good enough, and it is clear that the Tesla camera approach was not good enough in this instance.

Second, this was an expensive car on a divided highway. As part of that 94 million mile statistic, there are many motorcycles (15% of fatalities), old and unsafe vehicles, and collisions in bad weather, at night, or on much more dangerous roads. Given the conditions, I think that most of us would expect our robot cars to do better.

#CalledIt! TV viewing by US 18-24 year olds declines 10% in the last year.

The Nielsen Total Audience Report for Q1 2016 came out this week, and (as always) it is filled with a trove of information for those tracking the traditional television industry, and the habits of viewers, especially the key 18-24 year old demographic. You can download the full report for free.

Those who know me know that my view on the US TV market is “erosion, not implosion.” Across a number of metrics, people are watching only slightly less traditional TV and a few are cancelling cable, but not as many as you think. The only real area of concern for me is what is going on with those 18-24 year old millennials: they may be a bellwether. In my post on the Q4 2015 data, published on March 26, I put up a chart of the year over year changes in live and time shifted TV minutes for the 18-24 demographic, and said:

“…annual declines of 25% feel like they were an exception, and were likely a bit of a one-off. Next, it is possible that annual rates of decline may stabilize at around 10% in the US, or that they may improve even more, and we may see single digit annual decreases in traditional TV viewing. I don’t have enough data yet to know, but my hunch is that a 10% annual decline is the most reasonable assumption. The five year CAGR is exactly -10% since 2010.”

I nailed it: in the same quarter last year this age group watched 155 minutes per day of traditional TV, and in 2016 they watched 140 minutes daily, for a 10% annual decline. (9.8% if you want to be exact!)
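For transparency, here is the arithmetic on the rounded minutes quoted above. With 155 and 140, the decline comes out to about 9.7%; the 9.8% figure presumably reflects the unrounded Nielsen data:

```python
# Year-over-year decline in daily traditional TV minutes for US 18-24 year
# olds, using the rounded figures quoted above (unrounded Nielsen data will
# shift the result slightly).
minutes_2015 = 155
minutes_2016 = 140

decline = (minutes_2015 - minutes_2016) / minutes_2015
print(f"{decline:.1%}")  # about 9.7% with these rounded inputs
```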

It is nice to have your hunches confirmed so quickly, and I am going to stick with that hunch: I am predicting that viewing minutes for 18-24 year olds will not start dropping by crazy amounts, but neither have we hit a floor. Viewing minutes will continue to decline at around 10% per year for this age group, and this is a very real, very serious issue for traditional broadcasters and cable/satellite/telco bundle providers. See below, but the rate of decline in TV minutes for young people is about ten times the rate for the overall US population.

Additional observations from the Nielsen report:

1) Traditional TV for the population as a whole is declining…but SOOOOO slowly. Adult (18+) live and time shifted TV dropped from 5 hours and 7 minutes daily in Q1 2015 to 5 hours and 4 minutes daily in Q1 2016. That is THREE minutes less TV per day, or a 1% decline. Not quite the death of TV, eh?

2) Paying for traditional cable, satellite or telco TV bundles is falling. Cord-cutting is a thing, and is growing: there were 100.77 million homes paying for TV last year, and only 99.22 million this year. That loss of 1.5 million homes is meaningful, but needs to be put in context. The number of US homes paying for traditional TV fell 1.5% in the last year. That’s not good, but neither is it catastrophic. It will likely continue to fall, but may still be around 90 million by 2020.
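As a sanity check on that “may still be around 90 million by 2020” guess, here is what a constant 1.5% annual decline implies when compounded forward from the 2016 figure (the constant rate is an assumption for illustration, not a forecast from the report):

```python
# Rough projection of US pay-TV homes, assuming the current 1.5% annual
# decline simply continues at a constant rate.
homes_2016 = 99.22  # millions, from the Nielsen report

homes = homes_2016
for year in range(2017, 2021):  # compound the decline through 2020
    homes *= 1 - 0.015

print(round(homes, 1))  # about 93.4 million by 2020 at a constant 1.5% decline
```

Compounding the current rate lands a bit above 90 million, so “around 90 million” implicitly assumes the rate of decline accelerates modestly.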

3) In my view, the key source of FUTURE cord-cutters is those who watch the least TV. (Duh!) In Q1 2014, the 20% of Americans with internet access who watched the least live and time shifted TV (48.2 million people) watched 29.2 minutes of TV daily. By Q1 2016 that quintile (now 47.5 million people) watched only 15.4 minutes daily, or 47% less in only two years (see chart below). Although TV viewing for the average American is barely dropping at all, for one in five Americans it is collapsing. These are the cord-cutters, the Netflix-only folks, and they tend to be young, well-educated and highly employed. This matters to advertisers.
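The collapse for that lightest-viewing quintile is easy to verify from the numbers above:

```python
# Two-year decline in daily TV minutes for the lightest-viewing quintile
# of Americans with internet access, from the Nielsen figures quoted above.
minutes_2014 = 29.2
minutes_2016 = 15.4

decline = (minutes_2014 - minutes_2016) / minutes_2014
print(f"{decline:.0%}")  # 47%, matching the figure in the text
```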

[Chart: TV viewing minutes by quintile, Q1 2016]

4) The PC continues to hang in there too. Yes, smartphone usage is up year over year, but time spent on a PC for those 18+ rose by over an hour per week (from 5h36m to 6h43m), and it even rose for 18-24 year olds (4h26m in 2015 to 4h32m in 2016.)


Ad spending has fallen off a cliff forever. Or not.

Advertising as a percentage of GDP has fallen in the US. Is that the new normal, or will it go back up one day?

The chart above fills my little data-geek heart with joy: it has 90 years of data, the data comes from literally thousands of independent sources, and it covers a large and very well-measured market. At a 99% confidence level, I am sure that the chart is showing a genuine and important trend: US advertising spending as a percentage of GDP was highly stable in a 1.1-1.5% range between 1947 and 2007 (60 years), but in the last ten years has fallen sharply to below its historical range. It has never been this low before, except during WW2.

But will this last?

Theory #1: Yes, this is the new normal, and will persist. If you have heard of “trading analog dollars for digital dimes” you will be on the right track. Digital advertising is more targeted, more effective, more measurable, and therefore more efficient. That means advertisers don’t need to spend as much to get the same effect, and therefore don’t spend as much. We can expect ad spending to stay under 1% of GDP for the future, and it may even drop further as more ad dollars move to efficient digital and away from inefficient traditional ads.

Theory #2: No, this will not last. Advertisers are like kids at Christmas playing with a new toy. Digital is novel, and does have some advantages, but the rates of ad fraud, bots, ad skipping and ad blocking mean that advertisers are going to need to spend much more than they are today, on a mix of both digital and traditional advertising. The 1% level is not sustainable, because digital isn’t as effective as its proponents believe.

Theory #3: Digital advertisers (especially Facebook and Google, who share 55% of digital ad spend and 2/3 of the annual growth: see chart at bottom) are doing what all new entrants do: they are coming into a market, and low-balling pricing because that’s how you gain share as a new entrant. Once digital becomes 30-40% of total ad spend, and customers are entrenched in their buying habits, they will raise prices, and we will see ad spend go back into its historical range. It is clear that advertisers are more than capable of paying 1.1-1.5% of GDP for ads over the long term, so why shouldn’t digital players (once they are sufficiently established) charge all the market can bear?

I would be interested in any thoughts on the above. Theory #1 tends to be widely held by new media/digital media types, #2 is widely held by traditional media players, and (so far as I know) Theory #3 is original to me, and hasn’t been discussed elsewhere.

I kind of like #3, but everyone loves their own babies. 🙂



The shocking fault in ‘smartphone by default.’

A recent British study has revealed some important – and worrying – aspects of people relying on smartphones for almost all their Internet needs. Ofcom, the UK regulator, conducted a series of interviews with people who they call “smartphone by default.” This is a surprisingly large group in the UK, and growing fast: 16% of UK adults relied solely on devices such as smartphones and tablets for online access in 2015, up from 6% the year before!

There are two broad groups of people who are smartphone by default: ‘smartphone by choice’ and ‘smartphone by circumstance.’ The critical finding from the study is that those who CHOSE to rely mainly on smartphones are doing fine…but those who use smartphones because they can’t afford other options are experiencing a widening digital divide, having trouble doing certain important tasks, and becoming “de-skilled.”

I know some smart people who choose to use smartphones almost all the time. They tend to be older, have good incomes, be technology early adopters, and generally review the work other people do rather than create a lot of content themselves. Most are businesspeople who like being able to do their job from a single, ultra-portable device. At times, they can even be kind of smug about being able to work without a laptop – they seem to view those who still need a PC as somehow less evolved, like they still live in trees and have a tail or something. 🙂

The UK study confirms something that I have long noticed about ‘smartphone by choice’ people: they almost always have (or have access to) a computer that they can use when they need it. They may be on their smartphones almost all the time, but as Carl in Belfast puts it:

“It’s impossible to talk to my accountant and deal with all the spreadsheets without going onto a laptop computer. It just needs the slightly bigger screen to properly deal with the numbers on the spreadsheet and send something over to him.”

According to the study, “Almost all participants experienced moments when they felt unable to complete a necessary task on their smartphone and needed access to another device – most often a computer or laptop with a bigger screen, keyboard and mouse.”

Those who were ‘smartphone by circumstance’ did not usually have access to their own computer, and had to borrow one, travel for over an hour to use one, or try to find a public access PC, such as at libraries or community centres.

While playing games, messaging, or social media all work well on the smartphone, study participants mentioned that doing their finances, researching health issues, doing schoolwork, dealing with government and (especially) applying for jobs and writing CVs all required access to a PC, and barriers around access, privacy, time limits or travel time were significant problems for them.

But the single biggest problem the survey identified was around digital skills. Those who were smartphone by circumstance had very low typing skills, couldn’t use office productivity software well, and were poor at digital file management. They were either (if young) failing to acquire these skills, or (in the case of some older participants) actually losing skills they once had.

One day, no one will need to know how to type: we will just talk to our devices and they will transcribe accurately. One day, no one will need to know how to use word processing or spreadsheet programs.

But that day isn’t today, and it isn’t going to be 2020 or even 2030. For the next decade or two, not being able to use a keyboard or productivity software properly will be a significant disadvantage in the workplace.

Would YOU hire someone in your office who can’t type, use a mouse, find a file, or use a word processor or spreadsheet program?


The biggest Facebook editorial bias is their user base.

Excellent story from the always-insightful Mathew Ingram on the subject of Facebook and Trending news stories, but in this article (and others I have seen) nobody has mentioned the STRUCTURAL bias Facebook has to deal with around Trending topics.

Right now FB has human editors working on Trending, and that introduces one kind of bias. And their algorithms are written by programmers, which introduces another kind of bias. But those algos and human editors are relying on the data produced by Facebook’s USERS…who are NOT demographically representative of the population as a whole, which introduces yet another bias, and one which is likely to be even more important than the other two.

Facebook users are significantly more likely to be women, under 55, urban, connected to broadband and more highly educated. In US terms, all of those demographics skew Democrat rather than Republican (to varying extents.)

In the United States, about 45% of Americans either identify as Democrats or lean that way, compared to 42% leaning or identifying as Republican. Pretty close to a tie. But if I look at Facebook demographics, (especially heavy users who spend the most time and post/share/like the most news stories) I would expect the split to be much wider.


If the data the algorithms and editors are seeing isn’t at least 60% Democrat and 40% Republican I would be surprised. And even a 65/35 split wouldn’t shock me.

[To be clear, this is nothing new. The people in 1980 who wrote letters to the editor for the print version of the New York Times or clipped articles out to share with their friends were almost certainly politically different from the people who did so from the Wall Street Journal. Audience demographics always have skew and bias, and always will. But algorithms make those biases apparent in real time. And no one is talking about that, which seems weird to me.]


New jaw-dropping energy technology announced! Oh wait, it doesn’t work…

Last year, everyone was talking about a start-up called uBeam, which claimed to be using ultrasound to charge devices through the air. I shared an article in a Facebook post in August of 2015 (after a number of friends asked me for my opinion of the technology) that described uBeam’s solution as “an impossible idea.” According to some recent articles, it looks like my scepticism may be justified.

A few reminders for when you read stories about amazing new technologies:

1) If something sounds too good to be true, it probably is.

2) New products that come out of nowhere, with non-technical founders/CEOs, and supported by no new breakthroughs in the scientific literature are much more likely to fail.

3) Just because smart VCs (like Andreessen Horowitz) have put money in is NOT a reliable sign of probable success. I love a16z, but everyone in VC makes mistakes.

4) If something sounds too good to be true, AND IT IS ABOUT ENERGY? Put on your extra-skeptical hat. Energy tech – whether batteries, charging, power harvesting, or something else – is just riddled with disappointing results.

In 25 years, the failure rate for “new energy breakthrough technologies” that I have seen is well over 99%. Energy is important, complicated and incredibly well funded and researched by existing players. The ability of an outsider to come up with something new and significant at reasonable cost and good reliability is roughly zero.

Speaking as an environmentalist, that is hard to say: I WANT new breakthroughs to come out of start-ups. Many of the world’s problems would be solved or improved materially by better energy technology. But I also have to be a realist, and admit that this stuff is really hard, and tends to be badly covered by the tech press.

To be clear, the uBeam technology may still end up working at some level…but the burden of proof is now on them.



Watching the air come out of the FinTech bubble.


I get a lot of questions about FinTech. With global investments into FinTech exceeding US$19 billion in 2015 (and nearly $14 billion of that went into VC-backed companies) it is obviously a hot space…but where are we in the hype curve? Could we be in a FinTech bubble?

On Thursday May 5, Canadian law firm BLG hosted a FinTech morning session in Toronto, with about 100 attendees, two panels, and an opening speaker (that would be me – photo below.)


It was a good discussion on FinTech (defined as “companies that use technology to make financial services more efficient. Financial technology companies are generally startups founded with the purpose of disrupting incumbent financial systems and corporations that rely less on software”) but I have a few thoughts I wanted to share.

Can we turn the hype meter DOWN to 11, please?

The movie This Is Spinal Tap made the joke about turning the amplifiers up to 11, but the conversation on FinTech is way past that number: the billions of dollars of investment get mentioned every third sentence, and speakers from Toronto-based accelerators/clusters like MaRS and OneEleven are wonderfully optimistic about the space, talking about the hundreds of FinTech startups in Canada, and how Toronto will become a global FinTech hub to rival London or New York.

It isn’t only about the future though, or even the “it worked for Uber, and the same thing will work for finance” kind of argument. The single biggest success story in FinTech so far is alternative lending. This includes peer-to-peer (P2P) lending, aka crowd lending, but also players who raise their capital from large institutions. Loans are being made to consumers and small businesses, and to refinance student loans and credit card debt. The alternative lending business is a genuine monster in the FinTech space, with companies lending out billions of dollars globally and launching enormous IPOs on the stock markets.

This subject is near and dear to my heart, since I co-authored a prediction on crowdfunding in 2013, and correctly predicted that the lending component would be the biggest and fastest growing part of the market. After the BLG FinTech symposium, I was excited to see what was going on in the lending space, and read a few articles as soon as I got home.

Although every FinTech conference I have attended trumpets the alternative lending companies as shining examples for the rest of the industry, the stock market performance isn’t nearly so exciting. As you can see from the chart below, over the last 16 months two of the most prominent lending companies have seen their share value decline by 70-80%, while the NASDAQ is actually UP nearly 20%. To be clear, I don’t follow either company closely, and I have no opinion about them or their prospects. They are just the companies that get mentioned during presentations.

[This was all written over the weekend. On Monday morning May 9, the CEO of Lending Club was forced to step down on disclosure and lending issues, and the company suspended future guidance. The stock closed Monday at $4.62, or 32% lower than the screen grab below.]


But I have seen numerous tech bubbles over the years, and a common signpost is when advocates cite a very small number of companies as success stories, focus on their IPO price, and seem almost unaware of subsequent market performance.

If you want an example, 3D printing ‘experts’ always referred to the two largest printer manufacturers as incredible success stories…even as their share prices declined 70-80% (sound familiar?) from mid-2014 to today (see chart below.) Eventually, the 3D printing evangelists realized that the market was telling them something, and they have dialled back their forecasts as they realise that while 3D printing is important, it is growing more slowly than earlier predictions, the consumer market is virtually non-existent, and significant volumes of 3D printed final part manufacturing are still years away. Once again, I am only showing the performance of these companies in the past — I have no views on their future performance.

3D chart

Of course, there are other FinTech companies that are doing well, but I do think the conversation would be more realistic if we publicly discussed the fact that some of the leading players are going through growing pains. Based on the recent performance of the crowd lending companies, I would say:

“It seems that they are disrupting their shareholders even faster than they are disrupting the banks.”

Speaking of banks…

Part of the reason that the alternative lenders are seeing their share prices fall is a shift in capital: part of how they have succeeded is by offloading some portion of their loan portfolios to other investors. But while they were able to offload 40% last year, that fell to only 26% in the most recent quarter, and at lower margins to boot. It is worth asking to what extent the whole FinTech space might have seen valuations inflated by excess capital. Across the broader tech space, new money coming in is declining, which has caused some to talk about tech unicorns becoming extinct, with obvious knock-on effects for FinTech as well. (In addition to Lending Club and On Deck, which are both public, Prosper, Funding Circle, Avant, SoFi and Kabbage are all alternative lenders with >$1B valuations based on their last funding rounds. That is SEVEN lenders with billion dollar plus valuations!)

Although alternative lending is growing, it is still tiny compared to traditional lending (US consumer credit is around $3 trillion, while all alternative lending combined was under $20 billion in 2015). The whole reason crowd lending and the other forms did as well as they did is that they were exploiting an inefficiency in the market: banks lent money to individuals or small businesses who had a credit score (these numbers are arbitrary, but give you the idea) of 60/100 or higher, and wouldn’t usually lend below that number. Meanwhile, alt lenders have lower costs, no branches, and fancy algorithms.

They don’t lend to just anyone, of course, but they were able to make loans to those whose scores were under the banks’ cut-off (which changes over time) at slightly higher interest rates, and still not have too many non-performing loans. Yahoo! Watch the money roll in and the market cap rise! Boy, those banks are stupid and inflexible dinosaurs for not lending to those with scores under 60, eh?

Don’t get me wrong: banks can be pretty inflexible some of the time. But they aren’t idiots, and what would happen if banks saw billions of dollars of loans start moving away from them? What if they shifted their lending criteria, just a little? If they move their bar down to (for the sake of argument) 58 or higher, they would be able to take back 20% of the addressable market for the alternative lending players, and (most importantly) those reclaimed borrowers would be the most profitable and least risky ones in the alternative lenders’ pool.
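The cut-off arithmetic above can be sketched in a few lines. This is purely illustrative: the scores are the arbitrary ones the post uses, and the assumption that the alternative lenders' addressable borrowers are spread uniformly over the 50-60 score band is mine, added only to make the 20% figure concrete.

```python
# Illustrative sketch of the bank cut-off math from the post.
# Assumption (mine): alt-lender borrowers' scores are spread
# uniformly between 50 and 60, just under the banks' cut-off of 60.

def share_reclaimed(old_cutoff, new_cutoff, alt_low, alt_high):
    """Fraction of the alt-lender market the banks take back
    by lowering their lending cut-off from old_cutoff to new_cutoff."""
    reclaimed = max(0, min(old_cutoff, alt_high) - max(new_cutoff, alt_low))
    addressable = alt_high - alt_low
    return reclaimed / addressable

# Banks drop the bar from 60 to 58:
print(share_reclaimed(60, 58, 50, 60))  # 0.2 -> 20% of the market
```

And, as the post notes, the slice the banks reclaim sits at the top of the score band, so it is the least risky part of the alternative lenders' book.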

I think this is a critical point: as FinTech players exploit weaknesses of the banks, the banks (while not exactly nimble) will be able to respond.

The other thing that occurs to me is that the FinTech industry, and alternative lending especially, have emerged and evolved in the period 2008-2016: the financial crisis, its aftermath, a moderately decent and sustained economic recovery with strong non-government job growth, and ultra-low interest rates throughout. I imagine they have all kinds of wonderful algorithms that tell them exactly how many loan losses they will incur over time.

But what happens if rates start going up, or the economy hits an air pocket? Even banks that have been around for a hundred years sometimes get caught offside when that happens, and they need to slow dividend growth or adjust their capital ratios when loans don’t get paid back the way they expected. The alt lending companies don’t have nearly as much history, or as much capital: they are much more levered to non-performing loans.

I would be willing to predict that the lenders (and perhaps even many other kinds of FinTech companies) are likely to do well in certain kinds of economic environments, and do less well in others. Not that they go away entirely, just that their growth or profitability might be adversely affected.

Everybody hates banks, amirite?

It makes sense that the folks from Uber are pretty negative on the taxi industry, and vice versa. They are in a zero-sum game, and they will never be working on the same side. But it is interesting to look at Netflix: CEO Reed Hastings has repeatedly said that the company is NOT competing with cable, it is a complementary service. We predicted this in our cord stacking prediction in 2014, saying that most Canadians will get cable AND Netflix, not either/or. Netflix’s approach is working well for them: a number of cable and telco TV providers are now bundling Netflix onto the set top box: the traditional TV distribution company and the internet “disruptor” are actually playing nicely together, to the benefit of both.

But every FinTech conference I have attended has a different vibe. It is impossible to overstate the contempt and hatred some of the more zealous FinTech fanboys (word choice deliberate…see below) have for the incumbent financial service providers. Whether banks, brokers, or insurance companies, the traditional players are derided as slow, stupid, inflexible, unresponsive, failing to address millennials, and possessing antique IT infrastructures. Don’t get me wrong – there are grains of truth in most of those charges.

However, if I were a FinTech player, I would tone down the hostility and name calling, at least a little. 1) Don’t poke a sleeping bear. 2) A lot of banks are investing in FinTech companies. 3) A lot of banks are BUYING FinTech companies. 4) Even when it isn’t about ownership, look at Netflix: many FinTech companies have good products that the incumbent financial players would love to bundle, re-sell, white label or otherwise distribute to their existing customers. And, especially in Canada, I think that may be the biggest opportunity.

Because, unlike the folks who attend FinTech conferences, most Canadians actually kind of like and trust their banks for certain services. Take a look at the chart below, from this year’s Global Mobile Consumer Survey data for Canada. 84% of Canadians would prefer to have their banks or financial institutions handle their in-store mobile payments. That’s only one data point, but I do think that FinTech would be better off thinking about banks and other providers as “sell-with” partners rather than the Devil incarnate.


FinTech can address underserved markets!

You will hear the word “millennials” thrown around every few minutes at a FinTech conference, and the BLG event was no exception. It makes sense: many 18-34 year olds are not well-served by the existing financial industry, they love technology, and are willing to take financial advice from robo-advisors and self-provision various banking services on smartphones. But millennials are only about 30% of the population as of 2016, and are NOT the biggest underserved market. Who is?

Women are 51% of the population, are expected to control two-thirds of household spending in the US over the next decade, and in homes with over $2 million in assets are more likely than men to have sole control of making financial decisions, 44% compared to 35%. They are LESS likely to be investing, and frequently feel that financial products are not being marketed well to them.

Globally, everyone knows this, especially in the world of FinTech. Financial products that harness the power of social, the crowd, the sharing economy, and mobile, and that allow people to invest in the products, causes and concerns that matter to them, appeal to both women and millennials. My wife has been doing research on this topic for the last six years, and her white papers cover both the problem and the solutions. In many countries, Barbara has met numerous female leaders of FinTech companies. FinTech is far from achieving gender parity of course, with Bitcoin conferences being notably and egregiously male.

But the situation in Canada seems to be worse than average. Last week’s event had a round dozen speakers and panelists, and nary a woman on stage. (Lots in the audience, which is a good sign!) I go through the various FinTech websites, and the ‘Who We Are’ pages are almost exclusively male. I meet FinTech companies, and they talk about marketing to women, but have no women executives, no women on the board, no female programmers, and the only woman is in marketing or answering the phones!

It shouldn’t be that bad here: the EU average for women in IT roles is 17%, and Canada is well ahead of that with 22% women in tech, second only to the US at 24%. That’s still a long way short of parity of course, but our bench depth in tech makes me think that one of Canada’s core strengths in FinTech could be around understanding and marketing to women.

Women in FinTech


Is the worst over? Young Americans are watching “only” 10% less traditional TV!

Duncan Quarterly 18-24 year old

The Nielsen Total Audience Report for Q4 2015 came out this week, and (as always) it is filled with a trove of information for those tracking the traditional television industry, and the habits of viewers, especially the key 18-24 year old demographic. You can download the full report for free.

Everyone who follows TV knows that younger viewers are watching much less live and time shifted traditional TV (the stuff you get on your cable package, but not Netflix or YouTube) today than in the past: in Q4 2010 American 18-24 year olds watched 244 minutes of TV per day, and that number was down 41% by Q4 2015 to 144 minutes. That is still well over two hours per day on average, but 100 minutes less PER DAY is a big drop. Unprecedented in the media industry, in fact…unless you look at what happened to young people buying CDs, or reading newspapers!

Media analysts are divided into two camps: those who believe that younger viewers are abandoning traditional TV the same way they abandoned print newspapers (“We are never ever getting back together” to quote Taylor Swift) and those who acknowledge that viewing has indeed dropped for this age group, but 1) the decline seems to be moderating, and perhaps we will find a new plateau of viewing at a stable-but-lower level; and 2) as this age group gets older and has children, their traditional TV viewing may start rising again.

As you look at the chart at the top of this post (guess what I was doing all Good Friday?!) you can see the live and time shifted viewing minutes for US 18-24 year olds for each quarter between Q3 2010 and Q4 2015 (blue line and the left hand axis), as well as the annual change in viewing minutes on the right hand axis in red. I don’t think anyone else has ever published this data in exactly this way before, so feel free to share, or ping me for the data file.

You will notice that the blue line is declining over time, but is kind of wiggly: it is always important to make sure you compare viewing across the same quarter, since there are seasonal effects in TV viewing habits. People watch less in summer, for instance. When I look at the red line, which indicates the year-over-year decline, a few things jump out at me.

1) 2012 data saw some pretty consistent declines approaching 10%, but the rate of decrease lessened into 2013 and it looked like a new viewing plateau around 200 minutes daily might be the new normal. In Q3 2013 the annual decline for 18-24 year olds was 0% — time for a party in the TV industry!

2) Oops, not so fast. 2014 was NOT a good year for TV watching for this demographic: annual declines of 24% and 25% in two quarters were the nadir. That kind of year over year change is (so far as I know) without precedent – neither CD sales nor newspaper subscriptions ever fell that steeply! Needless to say, a raft of “TV is dead” articles started being written around this period, and for good reason. If that level of erosion continued…

3) But it didn’t. For every quarter since Q3 2014 the year-over-year change in viewing minutes for this age group has been getting better/less awful. In fact, in the most recent quarter, viewing was down “only” 10% from a year earlier. That isn’t going to cause TV execs to burst into song, but it isn’t nearly as bad as the 25% drop from a year ago.

Enders TV stats

4) A REALLY interesting additional point can be seen in the chart above. Enders Analysis in the UK has the semi-annual viewing data for various age groups in the UK, and the three youngest demographics are lines in various shades of green. An exact match of the US viewing trends is unlikely, but you can clearly see that the UK data has a roughly similar shape to what happened in the US. Moderate declines at first in 2011/2012, maybe a bit of stability in 2013, a terrible collapse in 2014 (although muted compared to the US data – UK viewing was down 12-14% compared to 20+% in the States!) and then some signs that the worst is over and the annual changes moderated across all of the younger demographics in 2015.

What do I think? Well, first off…annual declines of 25% feel like they were an exception, and were likely a bit of a one-off. Next, it is possible that annual rates of decline may stabilize at around 10% in the US, or that they may improve even more, and we may see single digit annual decreases in traditional TV viewing. I don’t have enough data yet to know, but my hunch is that a 10% annual decline is the most reasonable assumption. The five year CAGR is exactly -10% since 2010. Our Deloitte TV Prediction for Q1 2016 was that 18-24 year olds will watch 150 minutes (2.5 hours) daily, which would be a 12% year-over-year drop. We will see! Now, what about that having kids question?
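As a quick back-of-the-envelope check, the decline figures quoted here can be recomputed from just the two Nielsen data points cited earlier in the post (244 minutes/day in Q4 2010, 144 minutes/day in Q4 2015):

```python
# Recomputing the viewing-decline figures from the Nielsen data
# points quoted in the post: US 18-24 year olds watched 244
# minutes/day of traditional TV in Q4 2010 and 144 in Q4 2015.

q4_2010 = 244.0  # minutes per day
q4_2015 = 144.0

total_decline = (q4_2015 - q4_2010) / q4_2010  # cumulative drop
cagr = (q4_2015 / q4_2010) ** (1 / 5) - 1      # compound annual rate

print(f"Total decline over 5 years: {total_decline:.0%}")  # -41%
print(f"5-year CAGR: {cagr:.1%}")                          # -10.0%
```

Which matches the "down 41%" and "exactly -10%" figures above: a steady 10% annual decline, compounded over five years, gets you from 244 minutes to 144.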

Nielsen life stage BB only

Take a look at the chart above: one in six (16%) of US 18-34 year olds who live on their own without kids are broadband only. They have no cable package and no TV antenna, and therefore are getting all of their video Over-the-Top (OTT) through the internet, and services like Netflix, YouTube, Hulu, and so on. (I need to note here: those who don’t watch an average of 3-4 hours of traditional TV per day are still watching 3-4 hours of VIDEO content per day. They just aren’t getting it from the traditional broadcasters and distributors, which is a $170 billion industry in the US.) But once those 18-34 year olds start a family, the percentage of broadband only homes collapses from 16% to 6%. Stage of life does seem to matter.

Nielsen life stage viewing minutes

And if you look at the next figure, it matters not just in terms of video source, but also in terms of traditional viewing minutes: people with kids watch over an hour (62 minutes) a day more of traditional TV than those living on their own! This may be good news for the existing TV industry: some younger viewers are perhaps having a brief fling and enjoying OTT hookups only (why do you think it is called “Netflix and chill?”) but once they settle down with kids, they will return to the traditional TV viewing habits of their parents…perhaps with a little OTT added as spice?

That is certainly what the TV bulls would say. I am a little less sure: the Nielsen data is great, but the problem is that relatively few 18-24 year olds are starting families, so the data for those with kids over-represents 25-34 year olds. It will be interesting to see if that same “once you have kids you return to traditional TV” finding still holds true over the next few years. My gut says it will be partially true, but less so than in the past. Too many people with three year olds keep telling me that Netflix plus YouTube has more than enough content for their children. I have no opinion – my kids are too old for children’s TV, and haven’t started spawning yet themselves.

This post is already way too long, but I should add a few things.

Please download the Nielsen report: it is filled with much more useful information. I would particularly highlight Table 5C on page 25. It divides American homes that have internet into five quintiles, or groups representing 20% of the population each. Although the average American watched over four hours of TV per day in Q4 2015, some watched more and some watched less. The lightest viewing quintile watched only 16.4 minutes per day, which is a record low for that group. If you are wondering where the cord-cutters of 2016 are going to come from, I have to point to that number: it is hard to justify paying $60 per month or more for pay TV when you watch that little.

The other thing I want to add is that our Deloitte TMT 2016 Prediction on US TV is tracking really well. The number of homes paying for a traditional TV bundle (cable, telco or satellite) fell by 1.5 million, and we are predicting 1-2 million for the year. The number of homes that rely on an antenna for TV (broadcast only) rose by 650,000, and we are calling for growth of about a million. The daily live and time shifted viewing time by the population aged 18+ fell by only three minutes compared to 2014, and we are looking for a slightly bigger drop of about ten minutes. Might be the televised election primaries…the Republican debates have been drawing bigger audiences than expected: Trump makes for compelling TV, as we all know. 🙂

Even Zuck thinks mainstream VR ecosystem is “at least 10” years away.

Zuckerberg S7 VR launch

Our 2016 prediction for VR (virtual reality) hardware and software is that it will have a breakout year, with revenues of over $1 billion.

That’s a great start, but I am cautious on the prospects for hypergrowth over the next few years: I think it will do well with hardcore gamers, and some enterprise applications, but I do not think it will become the next ‘platform’ technology for consumers in the next decade. There are billions of TV sets, radios, computers and smartphones in people’s homes around the world, and getting close to a billion tablets. Will VR headsets join that category soon? I have gone on the record as saying no.

A lot of people are publicly arguing with me: they think that although VR might not be in that league in 2016 or 2017, things will change by 2020.

Respectfully, I just can’t agree. Not only does my own research point to much narrower adoption; even some VR fans are being cautious. Mark Zuckerberg and Facebook spent billions on Oculus Rift, and made a big splash talking about VR at Mobile World Congress in Barcelona last week. In an interview, here is Mark’s view on VR and its adoption path:

“We are betting that Virtual Reality is going to be an important technology. I am pretty confident about this. And now is the time to invest….What I honestly don’t know is how long it will take to build this ecosystem. It could be 5 years, it could be 10 years, it could be 15 or 20. My guess is that it will be at least 10.”