I’m a programmer, writer, podcaster, geek, and coffee enthusiast.

Nintendo and the hardware business

MG Siegler, reacting to news that Nintendo’s Wii U isn’t selling well:

I just don’t see how Nintendo stays in the hardware business. … I just wonder how long it will take the very proud Nintendo to license out their games.

I hear this a lot, but it’s not that easy for Nintendo. They’ve been spoiled by the massive success of the first Wii.

The Wii was much more profitable than most game systems. Not only was the system itself sold at a profit from day one, which is very unusual in this business, but Wii owners also bought unusually high numbers of add-ons — additional controllers, nunchucks, steering wheels, tennis rackets, exercise scales… — with very high profit margins.

And this wildly profitable system also sold more than anyone expected, including probably Nintendo.

The Gamecube sold poorly, sustained mostly by enthusiasts of Nintendo’s long-standing game franchises (Mario, Zelda, etc.). The Wii sold well because it had much broader appeal — people would buy it even if they didn’t care about Nintendo’s long-running franchises.

But the Wii was a fad, in retrospect. The Wii U looks like it’s going to have more Gamecube-like appeal: long-time Nintendo fans will like it, but it won’t do well in the rest of the market.

The solution most people suggest is for Nintendo to exit the cut-throat, usually-low-margin hardware business and just make their well-known games for other platforms. But Nintendo’s hardware business isn’t low-margin at all. Suggesting Nintendo leave it would be like suggesting that Apple leave the hardware business.

Unlike Apple, Nintendo’s just not selling a lot of hardware anymore. That’s a big problem. But becoming a software-only business would almost certainly bring in far less revenue, especially if they wanted to sell their games on iOS, where nobody can sell games for $50.

And their franchises might not matter to the general gaming public as much as we think. What percentage of gamers are even old enough to have played more than one or two Mario, Zelda, or Metroid games?

Neither option looks good for Nintendo.

The market has moved on from what they do best. There’s not a lot of room left for game systems that aren’t also media centers and social gaming hubs, both of which Nintendo is still terrible at. And even those systems aren’t very profitable or compelling anymore.

The software side of gaming has also lost most of its middle class. At the high end, there’s room for a small number of huge-budget blockbuster titles that usually involve realistic sports simulations or killing people, none of which Nintendo does well. They compete by pushing the boundaries of cutting-edge graphics hardware, which Nintendo doesn’t produce anymore, and licensing real-life sports teams, which Nintendo doesn’t do. Or, more often on the PC side, they operate massively multiplayer online social fantasy worlds, which Nintendo also doesn’t do well. These successful blockbusters can charge $50.

At the low end is casual gaming, including the entire iOS gaming market, which is rapidly eroding demand for high-end gaming. Modern casual gaming almost always happens on computers or computer-like platforms, not traditional game systems connected to TVs. It relies much more on social features, which Nintendo doesn’t do well. Many of the big hits succeed by taking advantage of psychological tricks or gambling mentalities, which Nintendo is probably too proud to do. Casual games are usually free or nearly free up front, and they get money from frequent in-app purchases or advertising, which Nintendo would probably also hesitate to do.

Nintendo needs the profits of the high end, but they can’t compete there anymore. All of the growth is happening at the low end, which is mostly games that they can’t or won’t make. And even if they succeeded in casual gaming, it probably wouldn’t bring the kind of profit that they need.

I don’t think Nintendo has a bright future. I see them staying in the shrinking hardware business until the bitter end, and then becoming roughly like Sega today: a shell of the former company, probably acquired for relatively little by someone big, endlessly whoring out their old franchises in mostly mediocre games that will leave their old fans longing for the good old days.

The iPhone Plus DPI argument

I’ve read a lot of counterarguments to my theoretical iPhone Plus and its 264 DPI display. I’d like to address the most common:

  1. It won’t qualify as Retina, so Apple won’t ship it. In the iPhone 4’s intro, Steve Jobs said that the threshold for not being able to see individual pixels is approximately 300 DPI. But then they shipped the Retina iPad at 264 DPI, because viewing distance changes the threshold.

    Apple changes its mind and bends its own rules as times, technologies, and market conditions change. Non-adherence to a marketing remark a few years ago won’t stop Apple from shipping a product today.

  2. There’s no point to making a larger screen with the same resolution. To be blunt, those arguing this need to broaden their horizons. A lot of people appreciate or need a larger screen so its contents are easier to see or touch, or more comfortable to read.

    Plus, much iOS usage is scalable: applications have font-size settings, maps and web pages scale, photos zoom. Even without more pixels, people with good vision would get more usable space by being able to zoom out more in scalable content or set applications to use smaller font sizes. (That’s why, in my mockups, I set Instapaper to a smaller font size on the “iPhone Plus”: to demo the usefulness of a larger physical screen even if the toolbars and buttons would all just get larger.)

  3. It would need to be at least 720p (1280 × 720). Who cares? 1136 × 640 is close enough not to matter for most people, and comes with huge advantages in app compatibility and OS maintenance since it’s not a new size.

  4. It couldn’t compete with large-screened Android phones with much higher DPIs. On one tech-spec checklist item, that’s true. But that has never stopped Apple before.

    Apple isn’t shipping higher-DPI screens yet because they’re choosing not to, not because they can’t. Higher-resolution screens have major downsides in battery life and GPU performance. The panels also cost more, but that’s only one part of the carefully balanced tradeoffs in mobile-device design. (Suppliers also may not be able to produce enough higher-resolution panels to satisfy the massive iPhone demand even if Apple could ship a sufficient GPU to give good performance without slaughtering the battery.)

  5. Apple wouldn’t ship multiple screen sizes because it would complicate the supply chain, or There has always just been one iPhone size, so they won’t add another one. Sure, it would be easier to just have one model forever. But Apple has broadened product lines as they’ve matured to gain marketshare, address more needs, and leave less room for competition.

    The four-product lineup is long gone. Today, Apple ships 24 distinct iPhone hardware models in two screen sizes.1 The iPad is even more diverse: 48 distinct hardware models,2 two screen sizes, and two densities of the larger size. They also sell 34 iPods,3 and a surprisingly diverse and conflicted notebook line.4

    Yet it works. People navigate the lineup well enough, and the huge network of well-regarded Apple retail stores helps tremendously.

Apple has left very few holes in their lineup that can be profitably filled. Almost nobody’s refusing to buy a MacBook only because one isn’t offered in their desired screen size. Almost nobody is seeking non-Apple portable music players only because they can’t find an iPod form factor that works for them. And now, since the release of the iPad Mini, almost nobody is saying that the iPad lineup needs another size.

Imagine if they only sold one laptop size. It would almost certainly be the 13” MacBook Pro, because that’s generally rumored to be their best seller. Imagine what a waste it would be for customers not to have great alternatives on both ends like the 11” MacBook Air and the 15” Retina MacBook Pro.

If it makes sense to add another iPhone size, and they’re losing many profitable sales by not addressing that market — both of which I believe are true — then Apple will add another iPhone size. The iPhone is Apple’s most important, most profitable product. They can sell multiple sizes.

I don’t think they need to wait until they can go to “4X” Retina density (in fact, I question whether that will ever be worth the battery cost), I don’t think they’ll care if it’s not exactly 720p, and they sure won’t give a damn that some Android phones shipped with higher-DPI screens and got worse GPU performance and worse battery life as a result.
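To make the viewing-distance point concrete: a common rule of thumb (my framing, not Apple’s published math) is that a display reads as “Retina” once a single pixel subtends less than about one arcminute of your visual field, so the qualifying distance scales inversely with DPI. A quick sketch:

```python
import math

ARCMINUTE = math.radians(1 / 60)  # one arcminute in radians, ~0.000290888

def retina_distance_inches(dpi: float) -> float:
    """Distance at which one pixel subtends one arcminute — the
    distance beyond which the display looks 'retina' to roughly
    20/20 vision (rule-of-thumb threshold, small-angle approximation)."""
    pixel_pitch = 1 / dpi  # inches per pixel
    return pixel_pitch / ARCMINUTE

print(f"300 DPI: {retina_distance_inches(300):.1f} in")  # ~11.5 in, typical phone distance
print(f"264 DPI: {retina_distance_inches(264):.1f} in")  # ~13.0 in, typical tablet distance
```

By that rule, 300 DPI crosses the threshold at typical phone-holding distances and 264 DPI at typical tablet distances — consistent with Apple shipping the Retina iPad at 264 DPI.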

  1. 4 iPhone 4 models (2 colors, 2 radios), 2 iPhone 4S models (2 colors), and 18 iPhone 5 models (2 colors, 3 capacities, 3 radios). This doesn’t count iPhones locked to different carriers, or unlocked, that share the same hardware as other models. ↩︎

  2. 18 iPad Mini models (2 colors, 3 capacities, 3 radios), 6 iPad 2 models (2 colors, 3 radios), and 24 Retina iPad models (2 colors, 4 capacities, 3 radios). ↩︎

  3. 12 iPod Touch fifth-generation models (6 colors, 2 capacities), 4 iPod Touch fourth-generation models (2 colors, 2 capacities), 8 iPod Nano models (8 colors), 8 iPod Shuffle models (8 colors), and 2 iPod Classic models (2 colors). ↩︎

  4. There are currently three very different Apple laptop lines, plus multiple sizes and spec baselines within each one, each of which can then also be custom-configured.

    And the lines overlap. Two lines have 15” models, and all three lines offer a 13” model. The 11” Air has a higher resolution than the 13” Pro. The 13” Air has the same resolution as the 15” Pro.

    It isn’t even clear which is “best”. You can’t get both the most storage and the best screen in the same model. The model that supports the most storage doesn’t support the most RAM. The models with the best screens don’t include Firewire, Ethernet, or a DVD drive. ↩︎

Buttered coffee

Anytime there’s a new artisanal coffee gadget, elaborate $10,000 brewer, or bean fad, everyone always asks me about it because I’m a big coffee geek.

A few months ago, people started asking me about “Bulletproof Coffee”:

Try this just once, with only 2 Tbs of butter, and have nothing else for breakfast. You will experience one of the best mornings of your life, with boundless energy and focus. It’s amazing.

It’s hard to wade through the Bulletproof Exec site network and figure out the whole picture of who they are, what they’re selling, and what evidence they have to back up their claims. As far as I can tell, it’s just coffee with butter added (optionally with MCT oil, which I don’t have), and the site is mostly written and run by Dave Asprey.

Asprey also encourages us to buy his “Upgraded Coffee”, claiming that most other coffee is full of mycotoxins from mold. I’m skeptical of this claim, and there’s a great analysis and dissection of it on StackExchange from someone who appears more qualified to evaluate such claims.1

I’m skeptical of all claims from people who believe in or peddle pseudoscience and quackery,2 but for you, my dear readers, I actually bought a stick of unsalted, happy-cow butter3 and risked this morning’s coffee — a fine Kenya I roasted two days ago, wet-processed to avoid mycotoxins — to test this for you.

Asprey’s recipe is 2 tablespoons of butter for 17 ounces of coffee, which is more caffeine than I can handle in the morning, so I made roughly a third of it: 6 ounces of coffee (AeroPress-brewed from 9 grams of beans), and about two-thirds of a tablespoon of unsalted butter.

Butter does not dissolve into coffee. It melts and floats on the surface in a thick, oily layer. I kept stirring it and swirling it around so I wasn’t just taking sips of butter off the top, but it was a losing battle.

It tastes exactly how you’d expect: like drinking a butter sauce with a light coffee flavor. It was hard to get through, feeling and tasting mostly just like butter.

The butter, even being unsalted, severely masked the flavor of the coffee, even more so than half-and-half would. It blocked all of the freshly-roasted-coffee flavor I love, and Kenya has a very strong flavor even among other single-origin beans. (That’s why I like it so much.)

But how did I feel? It makes Asprey feel great:

It makes for the creamiest, most satisfying cup of coffee you’ve ever had. It will keep you satisfied with level energy for 6 hours if you need it.

Hunger-wise, it kept me satisfied for a couple of hours. Admittedly, I didn’t make his full recipe, so maybe the full 2 tablespoons of butter would have lasted longer. I usually eat a light breakfast, so this was good enough to address hunger.

But it left my lips feeling oily all morning, and it completely ruined my coffee’s flavor. It looked gross, felt gross to drink, and left a gross layer of oily fat to scrub out of the mug. If I want to experiment with eating a low-carb, high-fat breakfast again, I’ll pick a different method.

I don’t recommend trying this.

  1. I’d add one more bit of doubt: Asprey wants to sell us his “upgraded” toxin-free beans, insisting that the roasting process doesn’t eliminate the mold toxins from other beans. But then he says, “If you can’t find good beans, order an Americano because steam helps to break down the toxins.”

    I can’t find a good reference for average steam temperature from an espresso machine, but I imagine it’s not much higher than 100°C, and there’s no way brewing an Americano can bring the coffee up to the steam’s temperature. So Asprey’s claiming that brewing coffee somewhere below water’s boiling point of 100°C breaks down the toxins, but roasting doesn’t.

    But when coffee roasts, its internal temperature is brought to a minimum of 205°C for the lightest drinkable roast (stopped at first crack), and probably closer to 220°C (shortly before second crack) for a dark enough roast to appeal to most coffee drinkers. ↩︎

  2. When I started losing my hair at age 17, my mother and I explored a lot of options, including just about every new-age “remedy” technique in widespread use. Science won. Now I just get a quarter-inch buzzcut every few weeks, which is far cheaper and a far more effective fix than Rogaine, Propecia, homeopathy, eating right for my type, vibrating hairbrushes, special shampoos, and reiki. ↩︎

  3. Asprey insists on grass-fed, unsalted butter. I couldn’t find any at my nearest two fancy grocery stores, so I came as close as I could with some unsalted, not-grass-fed butter. ↩︎

The first read-later service

I’m nervous to post this, but my readers and customers were very appreciative that I clarified the Readability story after some incorrect assumptions in the press, so I’ll take the chance one more time.

Steve Streza was this week’s guest on CMD+SPACE. Steve’s the lead developer at Instapaper’s biggest competitor, Pocket, founded by Nate Weiner and formerly named Read It Later. Weiner has commented numerous times in the press that Read It Later was the first read-later service, and that it predated Instapaper by being started in 2007. On CMD+SPACE, Steve repeated that claim:

Mike: “What sets Pocket apart from other services like Instapaper or Readability?”

Steve: … “It’s worth noting that Read It Later and Pocket, we were the, kind of, first people to develop a save-for-later service, and that term, ‘read it later’, had kind of been adopted by other companies who are building these kind of similar products.” …

Later in the episode, he made the same claim a second time: Read It Later was the first service in this category. Whether that’s true depends on what you consider a read-later service.

The first public mentions I can find of Read It Later are in November 2007. It was just a Firefox extension with two buttons, read later and reading list, that behaved like a bookmarks folder. There was no web service, no sync, and no support for other browsers — just a special bookmarks list stored locally in Firefox. This is what Pocket now claims was the first read-later service.

Instapaper launched in January 2008 as a bookmarklet that worked in any browser and a web service to sync your bookmarks between devices.1 A few months later, I added its text-extraction parser, and in June 2008, I released its iPhone app with customizable text settings and offline saving.

I think most people would consider these elements — multiple browser and device support, sync service, text extraction and reformatting, and offline saving — the essential ingredients that make a read-later service. And in 2007, Read It Later offered none of these.

That’s why I think it’s extremely misleading, or simply false, to say that Read It Later/Pocket was “the first save-for-later service”. (There wasn’t even a “service”.2)

Months after Instapaper launched all of these features and was being very well-received in the tech press, in October 2008, Read It Later added a web service for sync, other-browser bookmarklets, and offline saving. Then an iPhone app in 2009. And while Read It Later has introduced some original features, Weiner systematically copied almost every major Instapaper feature over the first few years of Instapaper’s existence.

This is ancient history, and while it annoyed me at the time, I don’t really care anymore. Nobody does. For the most part, it doesn’t matter.

I don’t care anymore whether people know that Instapaper defined the read-later service and was first to most of its core features. I don’t care anymore whether people know how much Read It Later copied from Instapaper in our early years. You can’t force people to know backstories.

But for Pocket to repeatedly state the opposite — that they were the first service like this, and that Instapaper followed their lead — is over the line, and I won’t sit here quietly and let that go unchallenged.

I like Steve Streza, and I don’t think he’d knowingly mislead people. I assume he’s just repeating the story he was told.

  1. I didn’t even know about Read It Later at the time because I stopped using Firefox in 2006. ↩︎

  2. In fact, Readeroo, from April 2007, was much closer to the idea of a read-later service than Read It Later was. It was also a Firefox extension that also had two buttons — add to reading list, show list — but used Delicious as the back-end, simply creating bookmarks with the “toread” tag. ↩︎

The Magazine: now with full-article sharing, web subscriptions

When I launched The Magazine in October, I didn’t know whether enough people would subscribe to keep it afloat. I didn’t know whether most people would make fun of the idea, argue that articles on the internet should be free, or be disappointed that my first major new product in nearly five years was “just a magazine”. So I launched a minimum viable product: just the iOS app with a barebones CMS behind the scenes. I had already sunk four months of my time (and thousands of dollars worth of other people’s time) into the app and the first few issues, and my lowest priority was building a website.

I hastily built a basic site while I was waiting for the app to be approved. I only needed it to do two things: send people to the App Store, and show something at the sharing URLs for each article. Since The Magazine had no ads, and people could only subscribe in the app, I figured there was no reason to show full article text on the site — it could only lose money and dilute the value of subscribing.

That was the biggest mistake I’ve made with The Magazine to date.

On January 3, we published And Read All Over, a bold piece by Jamelle Bouie about racial access barriers in the tech press. We got a good number of comments from our readers, but nothing out of the ordinary.

To attract the best writers, including people who already have their own sites with strong readerships, we allow authors to republish their articles on their own sites (or anywhere else) just 30 days after we publish them. Bouie did exactly that, as many of our authors have. Only then did his article explode into the huge discussion I suspected might result from it — and The Magazine wasn’t a part of it.

I think the biggest reason the discussion didn’t happen when we published it is that sharing links to The Magazine’s articles hasn’t really been a great proposition. You’d share a link, and everyone would just see the truncated teaser. Some of them would subscribe and see the rest, but most would get turned off by the truncation and just abandon the effort, as we web readers tend to do. Most people with big followings would quickly realize this and, understandably, avoid linking to our articles.

So as Bouie’s piece was starting great discussions everywhere a month after we published it, I reprioritized web subscriptions from “Maybe I’ll do it in 6 months” to “I’m doing that right now.” And now, three weeks later, it’s done: you can now subscribe to The Magazine on the web, and you can sync subscriptions between the app and the website so you don’t need to pay twice. (I even implemented a crazy passwordless login system. Check it out. You can create an account on the site without subscribing if you want to see it.)

And more importantly, full-text shared links can now exist while keeping the business healthy: they’re now free trials for a porous paywall.

I’ll probably be tweaking the parameters to find the combination that works best, but here’s how it works today:

It’s a lot like the New York Times paywall, but with a limit of one free article per month. (We only publish about 11 articles per month, and we don’t run ads like they do, so higher limits would be problematic.)
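This isn’t The Magazine’s actual code — just a minimal sketch of how a metered (porous) paywall like this can work, assuming a per-visitor counter keyed by calendar month:

```python
from datetime import date

FREE_ARTICLES_PER_MONTH = 1  # The Magazine's limit; NYT-style meters use higher values

def can_read_free(views: dict, visitor_id: str, article_id: str) -> bool:
    """Meter free full-text views per visitor per calendar month.
    `views` maps (visitor_id, 'YYYY-MM') -> set of article ids already
    read free that month. Re-reading a counted article stays free."""
    month = date.today().strftime("%Y-%m")
    read = views.setdefault((visitor_id, month), set())
    if article_id in read or len(read) < FREE_ARTICLES_PER_MONTH:
        read.add(article_id)
        return True
    return False  # meter exhausted: show teaser + subscribe prompt
```

A shared-link page would check this before rendering full text; subscribers bypass the meter entirely, and the counter resets naturally when the month rolls over.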

Please help spread the word: you can now freely share links to your favorite stories from The Magazine. Thanks!

App.net’s new direction

Today, App.net launched free accounts. Previously, everyone had to pay — originally $50/year, then reduced to $36/year.

The first half of their announcement is written very defensively, presumably because they anticipate anger from paid members. They’re practically beating us over the head with the idea that they’ve always planned to offer free accounts. (I didn’t get that memo, but I don’t care.)

Free accounts have three main limits: they can only follow up to 40 people, they have lower limits for App.net’s new file-storage APIs, and they require an invitation from a paying member. Fair enough, I guess.

What previously made me doubt App.net’s future was that it wasn’t growing quickly enough, and I assumed that a lot of the paying members would decline to renew their accounts when their time was up.1 I’m still worried about that.

My bigger concern is the shift they’ve taken recently away from being a near-clone of Twitter. They’ve always positioned themselves as a more generic platform that could be used to support many different apps, and the Twitter clone was just one use of it. But they’ve pushed especially hard in this direction recently, especially with the heavy promotion of the new File API.

They seem to want developers to start using App.net as a primary storage and communication platform. When they launched, this was something cool developers could do. Now, it seems like this is what developers should do, and this is how we’re supposed to view App.net.

As a developer, I don’t think I’d go anywhere near that type of integration. App.net is still too small of a platform to even support many Twitter-like clients: the biggest and most compelling use of the service to date, and presumably the reason why most of its members signed up. Writing a different type of app for it would only appeal to a small percentage of the already-small userbase.

If I want to build an app with a social network backing it, I should use Twitter or Facebook. Developers need to support the biggest social networks because their role is to spread our apps. Building an app exclusively for App.net would work against us.

If I want to build an app with server-side file storage, I should use Dropbox. Not only does Dropbox offer far more space than App.net in both their free and paid tiers, but there are already quite a few more Dropbox users who can start using your app without the hassle of signing up for a new service. (And Dropbox is more likely to be around in three years.)

Worse yet, if I build an app that requires App.net, it still effectively requires a paid account for my customers to use it, because the chances that they’ll already have been given a free-account invitation from another member are nearly zero. How are developers supposed to sell an app that requires a $36/year, third-party, confusingly positioned service that most customers have never heard of?2

App.net’s push for developers at this stage is solving the wrong problem. Very few developers need App.net to add features or APIs, and I just don’t see a lot of demand for the new APIs and theoretical use-cases that they’re now pushing.

What developers need is for App.net to add tons of users to the service they already offer. (Conveniently, that’s also what App.net’s users need.)

As long as the invitation requirement is in place, the free tier won’t do this. And when an invitation is no longer required, App.net is going to need to start battling the spam and abuse that all free social services face. In the best-case scenario, they’re going to have scaling challenges. It’s not going to be easy.

But that’s what they really need to be focusing on: getting enough people actively using it to compel everyone — old and new — to stick around.

It may already be too late. This is going to be a challenging year for App.net.

I hope they pull it off. I want App.net to succeed. It just doesn’t appear from the outside that they’re solving the right problem.

  1. Correction: I originally wrote here that the first wave of accounts was up for renewal on April 14, giving just seven weeks to convince them to stick around. I had misread my expiration date: it’s actually April 14, 2014 — next year. My mistake. Obviously, then, there’s less urgency on this factor than I thought. ↩︎

  2. Giving away an app for free and relying on App.net’s Developer Incentive Program won’t bring in enough money. ↩︎

Why don’t MacBooks come with cellular networking?

I’ve shared John Gruber’s theory on this for a while:

I’m not sure why Apple hasn’t offered [4G networking on MacBooks] as an option yet, but my guess is that it’s because Mac OS X isn’t designed to behave differently while on different types of networks. With cellular networking, for example, you wouldn’t want iTunes to download new episodes of TV shows or even podcasts in the background — a single episode could eat up your entire monthly bandwidth allotment.

It’s been long enough since cellular modems in PC laptops became commonplace, and the MacBook line is diverse enough, that the omission of cellular options looks like a deliberate choice now rather than a “haven’t gotten around to it yet” feature.

When I first started using cellular modems, the only option was a $60/month plan with a 5 GB/month limit that ran at about 1 Mbps on Verizon’s new-at-the-time EV-DO network. Hardly anyone ever hit that limit — it would have taken about 11 hours of fully saturating the real-world bandwidth to burn through that, and in practice, it was hard to sustain those speeds for long. It wasn’t the kind of thing you’d accidentally do, and back then in 2005, people’s computers didn’t routinely download 5 GB of data unbeknownst to them.

With LTE, you can burn through a 5 GB data cap in an hour if you’re downloading big video files, and it would be easy to burn through the cap in just a few days if you’re streaming HD video — which, in 2013, is commonplace. And most people’s data plans have far less than 5 GB/month today. (At least they’re cheaper.)
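The arithmetic behind both estimates is simple (the sustained speeds here are my illustrative assumptions: ~1 Mbps for 2005-era EV-DO, ~11 Mbps for LTE):

```python
def hours_to_burn_cap(cap_gb: float, mbps: float) -> float:
    """Hours of fully saturated downloading needed to exhaust a data cap.
    Uses decimal units as carriers bill them: 1 GB = 8,000 megabits."""
    megabits = cap_gb * 8000
    return megabits / mbps / 3600  # seconds -> hours

print(f"{hours_to_burn_cap(5, 1):.1f} h")   # EV-DO at ~1 Mbps: ~11.1 hours
print(f"{hours_to_burn_cap(5, 11):.1f} h")  # LTE at ~11 Mbps: ~1.0 hour
```

At EV-DO speeds, burning 5 GB took an implausible half-day of saturation; at LTE speeds, it takes about an hour of big downloads.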

I was hoping Mountain Lion would add APIs for detecting cellular connections so apps could adjust their data usage accordingly, but it didn’t happen. Maybe 10.9 will.

To start, Apple could just put cellular-connection detection and responsible-usage logic into iTunes and Software Update. That would be sufficient to launch with new 4G MacBook models at WWDC, then they could have a session on the new API and start enforcing responsible practices in the Mac App Store. Along with maybe working something out with Netflix, they’ll have addressed the biggest accidental bandwidth hogs that most people will face.

If Apple wants to offer 4G in MacBooks, they can start whenever they want. Doing it properly will just take a bit more effort than adding a modem.