I’m a programmer, writer, podcaster, geek, and coffee enthusiast.

Google hasn’t yet provided any direction on Android as a tablet platform, which means that the Tab is held back by lagging application support and software that doesn’t fully take advantage of the extra screen real estate.

Part of the conclusion of Engadget’s Samsung Galaxy Tab review. This is going to plague every Android tablet until Google takes a clear, bold stance by making their own tablet versions of Android’s built-in apps — which may never happen.

There’s likely to be a big chicken-and-egg problem: Google probably won’t care to devote any resources to good tablet interfaces for Android software unless one of the hardware tablets takes off, but none of the tablets are likely to take off with half-assed tablet interfaces on their software. It sounds like today’s Galaxy Tab is the Android equivalent of “just a big iPod Touch”, and it’s up to Google — not Samsung — to make it more meaningful.

This is the kind of problem that Apple simply doesn’t face with its products: by keeping everything under the same roof and having a ruthless dictator with high standards and great taste at the top, Apple would never release hardware that begs for software. They’d just hold the hardware back until the software was ready — and not just running, but thoughtfully tailored for the device.

Developers don’t rush to new platforms

A common fallacy is assuming that any new platform in an exciting market — recently, smartphones and tablet computers — will be flooded with developers as soon as it’s released, as if developers are just camped outside the gates, hungrily waiting to storm in.

In two recent cases, that’s exactly what happened: the iPhone and the iPad. (And probably the Mac App Store next.) So important people, including the tech press, consumers, and many hardware manufacturers themselves, assume that every new hardware platform will be greeted with the same rush of high-quality software.

Steve Jobs knows this is wrong, even though his company is the beneficiary of the best cases of the supposed effect. As he said in the recent earnings call:

You’re looking at it wrong. You’re looking at it as a hardware person in a fragmented world. You’re looking at it as a hardware manufacturer that doesn’t really know much about software, who doesn’t think about an integrated product but assumes the software will somehow take care of itself. And you’re sitting around saying, well, how can we make this cheaper? Well, we can put a smaller screen on it, and a slower processor, and less memory, and you assume that the software will somehow just come alive on this product that you’re dreaming up, but it won’t. … Most [developers] will not follow you.

The problem is that hardware manufacturers and tech journalists assume that the hardware just needs to exist, and developers will flock to it because it’s possible to write software for it. But that’s not why we’re making iPhone and iPad software, even though those platforms are the basis for the theory.

We’re making iPhone software primarily for three reasons:

  1. Dogfooding: We use iPhones ourselves.
  2. Installed base: A ton of other people already have iPhones.
  3. Profitability: There’s potentially a lot of money in iPhone apps.

Of course, the last two are related: it’s hard to have one without the other, but subtle implementation or demographic differences can make a huge installed base yield relatively low profitability.1

Without dogfooding, we aren’t motivated to make our software with craftsmanship and pride.

Without an installed base, too few people will be able to buy our software for it to be worth generalizing and releasing.

Without profitability, we can’t afford to make and support it.

For most platforms, all three conditions need to be met for a significant number of developers to justify writing software for them.2

This worked so well for the iPhone because we already had iPhones for a year before we could make apps for them. Thousands of developers already owned and loved their iPhones: they were so good that we all bought them without having any apps at all. And on day one of the App Store, there were already 6 million iPhones in the world. (Three days later, there were 7 million.) And the app-buying process couldn’t be easier: it was, and remains, the easiest purchasing process in the entire software industry. Dogfooding, installed base, and profitability were all very strong.

The iPad was trickier: it started with an installed base of zero. But the product was so good and so compelling that we all knew that it would sell like crazy from day one. And it was so good that we all got iPads, too, solving the dogfooding issue. And since it retained the same easy purchasing process as the iPhone App Store, and we knew it would have a big installed base shortly, the profitability was promising as well. So a bunch of us made great iPad apps.

Now, consider this fall’s tablet computers. Can you say with confidence that any of them will address these three needs well enough, and for enough developers, to ensure a steady supply of quality software?

I can’t.

  1. For instance, both BlackBerry and Android have huge installed bases, but their marketplaces and payment procedures significantly reduce the profitability of maintaining good apps on their platforms. ↩︎

  2. At least one condition being extremely powerful can justify shortcomings in the others. For instance, a lot of developers who don’t particularly care for Windows make Windows software anyway because of its huge installed base. And some developers make software for far less popular devices because they own them and want the software for themselves (like my Kindle support) despite the low likely profitability. But for the most part, a platform needs all three conditions to be strong. ↩︎

My Default.png dilemma

iOS apps display a prerendered, static image called Default.png when they launch. Developers can set it to anything they like. This is Instapaper’s:

This matches the iOS convention of showing the app’s typical interface, but empty, so it looks like it has fully loaded (before it actually has) and then the data “pops” in a second or two later. It’s a trick that the original iPhone employed to make the app launches seem faster and more responsive. The pictured interface is just a decoy: it’s a static image file that doesn’t necessarily have any relation to what the interface will actually look like when it loads.

Apple’s apps have always had an advantage with this approach: they can change their launch images at runtime. Before quitting, they can render an image of how they actually look, so when they launch next and restore their state, it looks seamless.

But third-party apps can’t change theirs dynamically. This poses a visual continuity problem: we need to show the same launch image even if the app is restoring a state that doesn’t look anything like it.

In Instapaper’s case, this is a problem in two major states that it may launch in:

Dark mode is the much bigger problem. In a dark room, it’s jarring and blinding for the bright-white launch image to flash briefly when launching the app into dark mode. I’ve gone to ridiculous lengths in the past to avoid showing white screens or flashes while in dark mode, so this one major exception irritates me.

One way to avoid this issue is to make the launch image dark instead of light. I can’t just make it a picture of the dark-mode interface, because then the (much more common) light-mode interface would look strange when it appeared after the dark launch image.

Another option, fairly popular among developers, is to make a splash screen. I considered this for 2.3, but chickened out. Here’s a quick mockup of what I had in mind:

(That’s the dark-mode background color, #141414.)

This has its own set of problems:

So both launch-image types have significant downsides. It’s a bit like the Settings-go-in-the-Settings-app dilemma, but I think that one has been definitively settled by now (in most cases, they don’t).

In this case, I don’t think either option is good enough. I’m sticking with the Apple convention1 for now, but it’s always going to annoy me until I come up with a better solution.

  1. Rule #1 for non-game iPhone apps: If in doubt, do it Apple’s way. ↩︎

The company will also announce that it has raised $800,000 in venture capital, the first step in moving along the path from building an app to running a profitable business.

The New York Times on Pulse growing and making the app free

I’m happy for the Pulse team. Pulse is great. But this quote, for which the blame lies more with the article’s author, is (probably unintentionally) ridiculous.

Hundreds (if not thousands) of iPhone and iPad apps are made by profitable businesses that didn’t raise any venture capital. In fact, making the app free is going to remove its biggest revenue stream, which is likely to be generating at least $25,000 per month.

I’m sure they went into this responsibly and have a good chance of making more later in other ways, as the article mentions. But in the short term, this move is likely to make them quite unprofitable. That’s why businesses raise venture capital: to cover their costs when they’re unprofitable so they can grow into something that’s hopefully larger and very profitable later.

It’s ridiculous, incorrect, and insulting to those (like me) who have chosen the traditional business model — charge money, spend less than you make — for this author to suggest that giving away your product for free and paying your expenses with VC money is the “first step” toward making your app development “a profitable business”.

Instapaper’s backup method

I mentioned earlier on Twitter that my home computer downloads a 22 GB database dump every three days as part of Instapaper’s backup method, and a lot of people expressed interest in knowing the full setup.

The code and database are backed up separately. The code is part of my regular document backup which uses a combination of Time Machine, SuperDuper, burned optical discs, and Backblaze.

The database backup is based significantly on MySQL replication. For those who aren’t familiar, it works roughly like this:

And the binlogs can be decoded, edited, or replayed against the database however you like. This is incredibly powerful: given a snapshot and every binlog file since it was taken, you can recreate the database as it was at any point in time, or after any query, between the time it was taken and the time your binlogs end.

Suppose your last backup was at binlog position 259. Someone accidentally issued a destructive command, like an unintentional DELETE or UPDATE without a WHERE clause on a big table, at binlog position 1000. The database is now at position 1200 (and you don’t want to lose the changes since the bad query). You can:

And you’ll have a copy of the complete database as if that destructive query had never happened.
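Sketched as shell commands, that recovery might look roughly like this. Everything here is illustrative: the binlog filename, the byte positions, and the database name are hypothetical, and the exact mysqlbinlog flags vary a bit by MySQL version:

```shell
# First, restore the position-259 snapshot into a recovery server. Then:

# Replay events from the snapshot up to (but not including) the bad query.
mysqlbinlog --start-position=259 --stop-position=1000 mysql-bin.000042 \
    | mysql -u root -p recovered_db

# Skip past the destructive statement and replay the rest.
mysqlbinlog --start-position=1001 --stop-position=1200 mysql-bin.000042 \
    | mysql -u root -p recovered_db
```

The two passes bracket the bad statement: the first stops just before it, the second resumes just after it.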

Replication is particularly useful for backups, because you can just stop MySQL on a slave, copy its data directory somewhere, and start MySQL again. The directory is then a perfect snapshot with binlog information (in the file). If you can’t afford the slave’s downtime, or you want to snapshot a running master without locking writes, you can use mysqldump if the database is small or xtrabackup if it’s big, but it’s really best to dedicate a slave to backups if you can.
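The slave-snapshot procedure described above is about three commands. As a minimal sketch (the paths and service name are assumptions, not Instapaper’s actual setup):

```shell
# Stop MySQL on the backup slave so the data directory is consistent on disk.
service mysql stop

# Copy the data directory somewhere safe; the copy is a complete snapshot,
# with the replication/binlog coordinates preserved in its metadata files.
cp -a /var/lib/mysql "/backups/mysql-snapshot-$(date +%F)"

# Start MySQL again; the slave resumes replicating and catches up to the master.
service mysql start
```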

So here’s how I use replication for Instapaper’s backups: (click for larger version)


UPDATED, one day later: Inspired by the emails I’ve gotten in response, I’ve accelerated my S3 plans and did it all today. Now featuring automatic S3 backups as well. I’ve updated the diagram and description to include this.

The slave takes snapshots (“dumps”) of its data directory every day and continuously copies the binlogs from the master to its own disks and to my home computer with rsync. Once a week, it copies that day’s entire data dump to Amazon S3 and my home computer.
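The continuous binlog copy can be a single cron job. A sketch, run every few minutes (the hostname and paths are made up, not Instapaper’s actual configuration):

```shell
# Pull new and growing binlog files from the master; rsync only transfers
# the bytes appended since the last run, so frequent syncs stay cheap.
rsync -az --partial db-master:/var/lib/mysql/binlogs/ /backups/binlogs/
```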

The dumps have the binlog name and position conveniently in their filenames:


These are additionally backed up from my home computer with Time Machine and SuperDuper. I also occasionally burn a Blu-ray disc with the most recent dump, and I take some of the discs offsite.

The slave is usually within 1 second of being up to date. And since the binlogs are synced every few minutes, the copies on my home computer and S3 are almost always within 15 minutes of being current.

Room for improvement

Of course, there’s always room for improvement in a backup scheme:

And you can probably think of an improvement that I haven’t considered. If so, please email me. Thanks.

Supporting older versions of iOS while using new APIs

If you want your iPhone or iPad app to work with older versions of the OS, or if you want to make a universal app that runs on both iPhone and iPad, you need to ensure that the code never tries to call a method or instantiate an object that doesn’t exist on its OS.

For example, the iPhone doesn’t support UIPopoverController, and iOS 4.1 doesn’t support 4.2’s new printing API, so I need to ensure that my universal app never attempts to show a popover on iPhone, and never attempts to print a document on pre-4.2 OSes. If I try to access functionality that doesn’t exist, the app will crash.

The nature of Objective-C requires the linker and runtime to know about any declared types in the app, so merely declaring a pointer to an unsupported class will crash the app on launch. The solution before the iOS 4.2 SDK was to type such references generically as id and use NSClassFromString() to get at the class:

if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {

    // Direct reference: dyld errors on iPhones crash the app on launch
    UIPopoverController *p = [[UIPopoverController alloc] init...];

    // Indirect reference instead: dyld won't crash on launch
    id p = [[NSClassFromString(@"UIPopoverController") alloc] init...];
}

Unfortunately, there are some conditions where that doesn’t work, such as when you want to subclass a new class in your app. I ran into this issue when making Instapaper 2.3.1’s Print feature, which needed a custom subclass of UIPrintPageRenderer for a feature I wanted. If your app contains a subclass of an unsupported class, even if it’s never instantiated, it will crash on launch. You can create a subclass dynamically at runtime, which is the subclass equivalent of using id and NSClassFromString(), but it’s difficult and messy.

The other option to avoid all of that is weak-linking, which makes the runtime manually look up the existence of every symbol before its first use. Weak-linking allows direct subclassing from new classes and doesn’t require the id pointers and NSClassFromString(). But it can dramatically slow performance if a lot of symbols are weak-linked.

Prior to the iOS 4.2 SDK, you could only mark entire frameworks for weak-linking, which usually meant all of UIKit (and potentially Foundation and libSystem, depending on what you needed). This could cause potentially fatal slowdowns on launch, especially on older hardware, which made this a poor option for many apps.

The iOS 4.2 SDK added a significant improvement: automatic weak-linking of only the classes that require it, instead of entire frameworks. (As far as I can tell, it’s an automatic and officially supported version of this workaround.) But it only enables this support under certain conditions:

If you build under these conditions, you can safely subclass or have pointers to whatever you want (as long as you’re careful not to instantiate them when they’re not available), and you can check for a class’ availability at runtime like this:

if ([UIPopoverController class] != nil) {
    // UIPopoverController is available, use it
    UIPopoverController *p = [[UIPopoverController alloc] init...];
}

And your app can be compatible with all iPads, and all iPhones and iPods Touch back to version 3.1.

  1. This is actually a bug that’s fixed for the upcoming LLVM compiler 1.7. ↩︎

Not being That Guy whose music you hear on the subway because his earbuds are cranked so loudly that you might as well be wearing them

Some people buy too many shoes or collectible figurines. I buy headphones.

I currently have four pairs: two full-sized pairs for home and office use, and two portable pairs for riding the train and walking.

From left to right:

I just got the B&W P5 at Apple’s Black Friday sale (for $72 off) to replace the PX 200-II, but when we were in a very quiet room later that night, Tiff could hear my music a bit too clearly from a few feet away.

I was worried that my music might be annoyingly audible to nearby train passengers — after all, one reason to buy closed headphones is to avoid that — and I really don’t want to be That Guy.

So when I got home, I devised a test to compare all of my headphones and see if the P5 leaked too much sound for social comfort. But I don’t have an SPL meter, so I improvised:1

And got these results:

Listen (MP3)

Obviously, the open DT 880 leaks far more sound than the closed models, which wasn’t a surprise.

The other results are pretty good news for the P5: surprisingly, the big 280 Pro that I thought was completely sealed from the outside world leaked just as much sound as the P5 and the PX 200-II, which isn’t much but is audible in very quiet rooms or at very close range.

And none of the closed headphones were as quiet at close range as I thought. Sorry for all of the Phish, Topherchris.

  1. This wasn’t 100% scientific: in addition to whatever nitpicky details I’m not thinking of, since they all take different amounts of current to yield the same output volume, I had to estimate consistent volume levels across all of the headphones. ↩︎