This will play out the same way it always does: big competitor announces something the week before an Apple event to try to take the wind out of Apple’s sails, then Apple releases something similar or better, and everyone forgets about the competitor’s announcement.
This year, with my newfound Zazzle skills, I made it a reality:
I don’t think anyone else will find this funny enough to actually purchase one for themselves. I thought the same thing about the Useless mug, and this is far less funny and will appeal to far fewer people.
But if you really want your own Serious Hat, here it is.
This was a lot of fun to test (Hops enjoyed the bonus walks), and I’ve found it to be extremely useful in practice even with just my house set as a location.
This week’s podcast: Instapaper’s ad on Howard Stern, the Android app launch, Instapaper being the Starbucks App Of The Week, background location updates with geofencing, and conservative predictions about all of the “TBA” sessions at WWDC.
A truly big week.
If you’ll excuse the light posting volume over the last week or so, now you know why. I hope to have a chance to write more of this out later this week.
QuickShot is an iOS application that leverages the power of Dropbox to do amazing things with photos and videos.
Tools like Photo Stream brought cloud photos to the masses, but they only scratch the surface of what it means to have media in the cloud. QuickShot gives you control over almost every aspect of the upload process, including where files go and what they're named once they reach Dropbox. Combining that level of control with Mac OS X folder actions or Dropbox automation services lets you build complex workflows that send captured items to iPhoto, OCR software, Twitter, or other destinations based on how the app is configured to upload. The best part: all of these settings can be changed at once with just a few taps.
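To make that concrete, here's a rough sketch of the kind of workflow the paragraph above describes, written as a simple polling script standing in for a real folder action. Everything in it is hypothetical, not QuickShot's actual mechanism: the folder layout, the routing rules, and the destinations are all made up for illustration.

```python
import shutil
import time
from pathlib import Path

# Hypothetical layout: QuickShot is configured to upload into
# per-purpose subfolders of ~/Dropbox/QuickShot, and this script
# routes each synced file onward to whatever tool watches next.
INBOX = Path.home() / "Dropbox" / "QuickShot"
ROUTES = {
    "receipts": Path.home() / "Documents" / "OCR Queue",
    "photos": Path.home() / "Pictures" / "iPhoto Import",
}

def route_once():
    for subfolder, destination in ROUTES.items():
        source = INBOX / subfolder
        if not source.is_dir():
            continue
        destination.mkdir(parents=True, exist_ok=True)
        for item in source.iterdir():
            if item.is_file():
                shutil.move(str(item), str(destination / item.name))

if __name__ == "__main__":
    while True:
        route_once()
        time.sleep(30)  # crude polling; a real folder action is event-driven
```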
QuickShot is available as a Universal iOS application for $1.99 on the App Store.
Thanks to QuickShot for sponsoring the Marco.org RSS feed this week.
I talked a lot about the rationale behind this on the podcast this week. Here’s the short version.
I always had three reasons not to develop Instapaper for Android myself:
I didn’t think there was enough of a market for paid Android phone apps.
Android tablets were selling very poorly, and because Instapaper is a reading app, more than half of its business comes from the iPad. So working on an app that would mostly be used on phones didn't seem worthwhile.
I didn’t have time to take any effort away from the iOS app to do it myself, and I hadn’t found anyone else (who I could afford) who I’d trust to do it well.
I don’t know whether the first is still true. Time will tell how well Android phone apps really sell.
But it’s less relevant, since the second is no longer true: the Nook Color, Nook Tablet, and Kindle Fire have sold well. These 7” tablets are sold primarily as reading devices, and their tightly integrated payment systems seem to have created a viable market for paid apps.
Instapaper has had a very popular feature for the e-ink Kindles that’s used by more than 75,000 people. That was definitely worth doing. I managed to develop it myself, but only because it was easy, it has very few features (limited by the device’s capabilities), and it doesn’t require much ongoing maintenance time.
But e-ink readers are being largely replaced by 7” tablets, especially the Nook Color, Nook Tablet, and Kindle Fire. Neither Barnes & Noble nor Amazon publishes sales figures, but industry estimates suggest that these products, combined, have sold over 10 million units. Even though I like e-ink readers better than 7” tablets, the industry is clearly moving away from e-ink and toward tablets. E-ink isn’t going away, but it’s being marginalized.
Simply put, Instapaper needs to be on popular reading devices. That category now includes at least three 7” Android tablets, probably with more to come. I realized last winter that I needed to address this demand, but I couldn’t do it myself.
I asked my friends at Mobelux if they were interested in developing the official Instapaper Android app under a revenue-sharing agreement instead of a traditional hourly model, which I couldn’t afford for the quality and amount of work that this would require. We discussed the risks on both sides, and we both agreed that we were willing to accept them for the potential of what could become a great new business for both of us.
Now, six months later, 1.0 is done. Mobelux made a great app, and we hope it does well.
And I can highly recommend Mobelux for other developers’ needs: I’ve known them for much longer than this project, and their work is top-notch.
By Richard Dunlop-Walters, Instapaper’s editor for The Feature:
Once you go down just one single level in the chain, the hits drop off and nobody gives a damn where it came from. Via links can’t be adapted and molded into something that creates what Popova wants - a giant web of links pointing back to her - the attention isn’t there.
It’s clear to me (with a few years under my belt of posting to The Feature) that the simple act of passing along a link or nugget of information really isn’t particularly valuable. Someone that’s good at it can gain a reputation and a substantial following, as Popova has, but the discrete acts that contribute to that reputation aren’t that valuable on their own.
Agreed. Link to great stuff often, and you’ll build an audience. But subsequent linkers don’t owe credit to every intermediate linker. It’s a nice courtesy in some circumstances, but it’s never necessary. And even when a “via” link is included, most readers don’t care and don’t click.
If I were to speculate about what Apple’s big WWDC TBA session is (some have guessed television), I would guess Apple is going to teach its multitude of developers the basics of natural language processing and how exactly it plans to let them integrate with Siri.
The complexity of this is why I don’t think there will be a Siri API of this sort for quite some time, if ever.
I’d expect third-party-accessible Siri APIs, if they arrive, to be much more simplistic and “dumb” in supported sentence structures and phrasing — something that reads more like AppleScript, such as “Tell Instapaper to update articles”.
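To illustrate what I mean by “dumb” phrasing, here's a toy sketch of that kind of rigid, AppleScript-style grammar. None of it reflects any actual Apple API; the sentence template, the command registry, and the handler are all invented for the example.

```python
import re

# One fixed "Tell <app> to <command>" template: no natural-language
# understanding, just a single rigid sentence structure.
PATTERN = re.compile(r"^tell (?P<app>\w+) to (?P<command>.+)$", re.IGNORECASE)

# Hypothetical registry: apps would presumably declare the exact
# phrases they support ahead of time.
HANDLERS = {
    ("instapaper", "update articles"): lambda: print("Updating articles..."),
}

def handle(utterance):
    match = PATTERN.match(utterance.strip())
    if match is None:
        print("Sorry, I didn't understand that.")
        return
    handler = HANDLERS.get((match["app"].lower(), match["command"].lower()))
    if handler is None:
        print("That app doesn't support that command.")
        return
    handler()

handle("Tell Instapaper to update articles")  # -> Updating articles...
```

Anything outside the template simply fails, which is the tradeoff: a grammar like this is far easier to support reliably than free-form language.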
My biggest gripe with geofencing is that most apps that implement it do not allow you to set the radius for the geofence. For OmniFocus that means that even driving by certain places will set off the reminders. This is not only not helpful, but I find it downright annoying.
Sometimes a wide fence is good, but most of the time you need the geofence as tight as can be — say 10 yards. Even at that it’s just not accurate enough most of the time to be a feature I find useful in day to day situations.
With perfect input, I’d agree, but that’s not realistic for one critical reason: geolocation isn’t accurate enough, especially indoors, and especially for geofencing.
In my house, my iPhone can rarely get a GPS fix that’s more accurate than about 25 meters. This is fairly common for indoor use. And geofencing doesn’t even use full GPS, since that would use too much battery power. From what I understand, it uses a low-power technique based on cellular triangulation, much like the location services on the GPS-less original iPhone. This saves your battery, but it’s not very accurate.
So even if an app creates a geofence with a precise location, it doesn’t really matter since the triggering mechanism is based on a far less accurate location service. And if the geofence radius is too small, these inaccuracies might often make it fail to fire.
Instapaper’s geofence radius is 50 meters. (A typical Manhattan block is about 80 meters on the short side.) I didn’t want it to fire every time you walked to the bathroom, but I also didn’t want it not to fire as you walked two blocks to the subway. It can’t be much smaller without losing reliability, and it can’t be much larger without losing usefulness.
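To put rough numbers on that tradeoff, here's a toy simulation. It's not Instapaper's actual code, and the error model is a crude assumption (fixes scattered uniformly within 25 meters of your true position), but it shows why a tiny radius fails:

```python
import math
import random

def fix_with_error(error_m=25.0):
    """A simulated location fix taken at the fence's exact center."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    distance = random.uniform(0.0, error_m)
    return distance * math.cos(angle), distance * math.sin(angle)

def fire_rate(fence_radius_m, trials=10_000):
    """Fraction of simulated fixes that land inside the geofence."""
    hits = sum(
        1 for _ in range(trials)
        if math.hypot(*fix_with_error()) <= fence_radius_m
    )
    return hits / trials

for radius in (10, 25, 50):
    print(f"{radius:>2} m fence: fires on ~{fire_rate(radius):.0%} of fixes")
```

Even standing at the exact center of a 10-meter fence, a fix with 25 meters of error misses it most of the time; at 50 meters, essentially every fix lands inside. Real error distributions have longer tails than this model, which only strengthens the case for a generous radius.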
A reasonable overview of what we’re likely to see by Chris Foresman at Ars Technica.
On page 2, because this article is so long that it needs to be split into two pages, Foresman notes the problem of what to do about sending DisplayPort over Thunderbolt:
The complicating issue is that Thunderbolt not only carries high-speed PCIe data, but must also carry DisplayPort video as well. On all other Macs, GPUs—whether integrated or discrete—are fixed. This makes it easy to pipe the DisplayPort output to the Thunderbolt port, which serves as both a high-speed interconnect as well as the connection for an external monitor. The Mac Pro, on the other hand, has removable PCIe-based graphics cards. How will Apple get the output of these cards into the Thunderbolt controller?
Foresman continues:
The most likely solution is a Mini DisplayPort passthrough cable. ASUS is using an external DisplayPort cable to add Thunderbolt to its latest motherboard designs, but that seems decidedly “un-Apple-like.” There may be a more elegant solution in the works, such as directing the card’s output over the PCIe bus directly to the Thunderbolt controller, but according to our sources, no current graphics cards work that way. Given that reality, we think Apple will use an internal cable combined with GPUs featuring an internal mini-DP connector.
That’s possible, but also inelegant.
Apple doesn’t care about enabling third-party video cards, so they can do something custom here. My theory: Thunderbolt will be sent through ports on the video card’s backplate by putting a Thunderbolt controller on each video card available for this Mac Pro.
After two years, the Mac Pro was “updated” today, sort of: now we can choose slightly faster two-year-old CPUs at the top end, and the other two-year-old CPU options are cheaper now.
That’s about it.
No Xeon E5 CPUs, no USB 3, no Thunderbolt. They’re even shipping the same two-year-old graphics cards. Same motherboard, slightly different CPU options from 2010. That’s it.
The message is clear: Apple doesn’t give a shit about the Mac Pro.
Since the Mac Pros were just “updated”, they probably won’t be updated again for a while. So what should Mac Pro buyers do? The 6-core 3.33 GHz model is now “only” $3000, a $700 reduction from its release two years ago. But it’s also $3000 for two-year-old technology. (The 12-core 2.66 GHz model is still $5000, just like two years ago.)
I bet this is the last Mac Pro. If you wanted to kill a product line, an “update” like today’s would be a good way to clear out parts and keep selling to a few desperate buyers for a bit longer without any real investment.
The authenticity of Cook’s email has now been confirmed with Apple PR via multiple sources. If you read his email like a lawyer, it doesn’t guarantee that this is a Mac Pro, exactly — it could be iMacs that somehow better satisfy “pro” users.
But then Forbes got a spokesperson to seemingly confirm that this is indeed about a Mac Pro:
Apple corrects to say desktop updates for 2013 applies only to Mac Pro and not iMac.
The “Why It Didn’t Work” section reads a lot like the first segment of Build and Analyze 73. The big-picture reasons they cite are correct, although they’re not the entire story:
It’s hard to miss two big events: what happened in November 2011 and April 2012?
In November 2011, Readability stopped requiring all users to pay and significantly downplayed the payment system, making the entire service free to boost the userbase. In March 2012, the (free) Readability iOS app launched with significant PR, but in April, major competitor Pocket (also free) launched with a lot more features and even bigger PR.
The lack of publisher sign-ups wasn’t the only reason this didn’t work. Readability removed the need for people to pay, significantly eroding the income “for publishers”. And they’re no longer the only free option in this market, reducing their visibility and potentially reducing their userbase.
In this week’s special episode from WWDC: the Mac Pro “update”, the updated MacBook Airs, reservations about and impressions of the new Retina MacBook Pro, why Apple hasn’t put cellular radios in their laptops, and my reactions to what we did and didn’t get in iOS 6.
Since the Retina 15” MacBook Pro still uses high-wattage mobile CPUs and a high-powered discrete GPU, I had some concerns about heat and fan noise. I had hoped that it would use lower-wattage CPUs or drop the high-powered GPU to reduce heat and noise, but Apple didn’t do either, opting for maximum-performance components instead.
The asymmetrical fan blades are an interesting trick to attempt to address this problem in a different way: rather than reducing the cooling load, they made the fan noise more pleasant.
Jason Snell graciously indulged my extreme nerdiness and let me test the fan noise on Macworld’s review unit last night. I couldn’t monitor temperatures or RPMs, but I could hear the noise and feel the heat. The results were promising but unsurprising:
At idle speed, the fan is very quiet and will be inaudible in most rooms, just like the previous model.
With all cores maxed out, the fan slowly ramps up to full speed over a few minutes, similar to the previous model.
At full speed, the fan noise is about the same volume as the previous model, but it does sound different. The asymmetrical-blade design works as described, making it sound more like white noise or whooshing air.
When the heavy CPU load stops, the fan ramps back down to low speed more quickly than the previous model.
Most of the heat is concentrated near the middle of the screen hinge, since that’s roughly where the CPU is. It didn’t seem to get as hot as the previous model, but the difference didn’t feel very significant.
Effectively, heat and fan volume are the same as the previous model, but the fan noise has a less irritating tone. If the previous 15” MacBook Pro design was too hot or loud for your preferences, the Retina model probably will be, too.
Even at the non-integer scaled 1680 x 1050 setting, the Retina Display looks a lot better than last year’s high-res panel. It looks like Apple actually renders the screen at twice the selected resolution before scaling it to fit the 2880 x 1800 panel (in other words, at 1920 x 1200 Apple is rendering everything at 3840 x 2400 (!) before scaling - this is likely where the perf impact is seen, but I’m trying to find a way to quantify that now). Everything just looks better. I also appreciate how quick it is to switch between resolutions on OS X. When I’m doing a lot of work I prefer the 1920 x 1200 setting, but if I’m in content consumption mode I find myself happier at 1440 x 900 or 1680 x 1050.
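Here's a quick sketch of the arithmetic Anand describes. The actual rendering pipeline is Apple's and undocumented; this just reproduces his numbers:

```python
PANEL = (2880, 1800)  # physical pixels of the Retina 15" display

def scaled_mode(selected):
    """Render at 2x the selected resolution, then downscale to the panel."""
    backing = (selected[0] * 2, selected[1] * 2)
    return backing, backing[0] / PANEL[0]

for mode in [(1440, 900), (1680, 1050), (1920, 1200)]:
    backing, factor = scaled_mode(mode)
    print(f"{mode[0]}x{mode[1]}: rendered at {backing[0]}x{backing[1]}, "
          f"downscaled {factor:.2f}x to fit the panel")
```

The 1440 x 900 mode is the true 2X case (a 1.00x “downscale”); the larger modes render more pixels than the panel has and filter down, which is presumably where the performance impact Anand mentions comes from.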
I’m curious to learn more about how those non-integer scaling modes work. Apple may be able to use this trick to offer “Retina” screens on other Mac models before true 2X mode is economical or practical, possibly offering smaller multipliers such as 1.5X if they look good enough.
If so, I’d expect such partial-Retina screens to appear first (or only) at the edges of practicality and economics: the 11” MacBook Air and the 27” iMac.
People familiar with Apple’s plans tell me that when its new iOS 6 software becomes widely available this fall, podcasts will have their own app, where users will be able to discover, download and play them on mobile devices. Users who access iTunes via laptop and desktop machines will still find them in that version of iTunes, though.
I’m hoping that Apple will take this opportunity to add useful features, such as automatic downloading of new episodes, that third-party podcast clients have had for years.
This will probably trigger a handful of “[podcast app] is getting Sherlocked!” stories from the tech press. Believe me, good podcast apps will be fine. (My preferred client is Downcast.) Apple will never add as many features as the better third-party clients, and large segments of the market will always want things done differently than Apple’s choices.
Almost a year ago, I decided to sell my Mac Pro and MacBook Air, consolidating instead into one decked-out 15” MacBook Pro. At the time, having been annoyed by inconveniences and impracticalities of having multiple computers, I wrote:
But multi-computer usage still sucks for so many reasons, because most files, applications, and settings don’t sync. And if your laptop has a smaller capacity or lower performance than your desktop (as it probably does), it’s going to be less useful than you need, more often than you think.
It hasn’t gone as well as I would have liked. For the most part, it has worked, but with a few big drawbacks.
While it has a decent amount of CPU power, I frequently need more. And when I’m stressing the CPU, the fans speed up so much that it’s annoyingly loud. The noise becomes worse than an annoyance if it happens when I’m trying to record audio.
I already need more disk space than this laptop can hold, and with a new baby, I’m taking more pictures and videos than ever. External hard drives are slow, loud, unreliable, and ugly, and I hate the desk clutter they cause. I tried alleviating this somewhat with a Mac Mini server for archive storage and Time Machine, but it’s also slow and finicky.
The dual-GPU setup in the 15” has big drawbacks as well. When the discrete GPU is active, battery life is much worse and the laptop runs much hotter, and Lion is very bad at managing the switching: in practice, the discrete GPU is almost always active.
The discrete GPU always runs when an external monitor is connected. (Not that it does a very good job: I frequently see annoying glitches and choppy video performance in dual-monitor mode.) And clamshell mode is impractical: with the lid closed, ventilation is reduced while the GPU pumps out heat, so the fans spin up faster and more often, and it gets hot enough that running it closed all the time would be unwise.
But clamshell mode is exactly what I need. I hardly ever move the laptop from my desk. I even said as much when justifying this laptop last year:
I work at home now, and when I go on quick day trips, I hardly ever bring a laptop, preferring an iPhone and optional iPad instead. When I do bring my laptop somewhere, such as when traveling, I want more power and screen space than the Air offers so I can get work done. …
I need more time to form a concrete opinion, but it seems so far that this CPU’s awesome performance definitely comes at a cost of increased heat and reduced battery life. For my intended usage as a desktop most of the time, that’s an acceptable tradeoff.
What I failed to predict at the time was that this would result in the worst of both worlds. I had confined myself to the limitations of a laptop, but I really used it as a desktop the vast majority of the time. Even when I could take it away from my desk for a little while, I wouldn’t, because I didn’t want to disconnect all of the cables and rearrange all of my windows.
I went from having an awesome desktop and an awesome laptop to having one machine that served both roles poorly.
A few months ago, I decided to go back to the Mac Pro as soon as an updated model was available, and either keep the 15” for travel or trade it for an inexpensive 13” Air.
This week, the WWDC hardware updates happened, and the choice became more difficult.
The not-new Mac Pro
The Mac Pro was “updated” this week, sort of: it’s still the same two-year-old hardware, but with slightly tweaked CPU options (within the same two-year-old Xeon family) to adapt to Intel’s dwindling production of these old chips.
I was disappointed, to say the least. All indications suggest that we won’t see another new Mac Pro for over a year.
The new 15” Retina MacBook Pro
This looks like a great machine, but of all of the problems I have with my current 15”, it would only really solve one: it’s faster.
Almost nothing else would be meaningfully improved. Its single SSD that maxes out (very expensively) at 768 GB would actually make my storage problem worse. And it would be a complete waste for me to get a beautiful Retina screen only to keep it closed or ignored during the vast majority of my use.
I’d love to have a Retina-screened Mac, but not this Retina-screened Mac. I’d be better served by waiting for a Retina MacBook Air and a Retina Cinema Display.
(MacBook Air owners tempted by this Retina 15”: If you really want to buy one, make sure you’re not going to regret it when we presumably get Retina MacBook Airs within a year.)
John Siracusa is definitely making fun of me now
Neither option is ideal, but neither is my current setup, so I did something crazy.
I ordered a new-old Mac Pro.
It’s the 3.33 GHz 6-core model, which got a $700 price reduction in Monday’s “update”. It’s now “only” $3000, which sounds like a lot unless you’ve ever bought a Mac Pro with a fast CPU configuration, in which case you’ll probably recognize this as a low-midrange price.
I know first-hand that this is a very powerful machine: my wife bought this exact model for photography work when it was released two years ago. Even though this is two-year-old technology, $3000 is a decent price for this much power. I’d have to spend $3750 on a Retina MacBook Pro to get its highest CPU configuration — still slower than this Mac Pro — with a 768 GB SSD.
This new-old Mac Pro will make me very happy for the next 12–18 months until the next model comes out, and then I’ll decide what to do. If the next one sucks for some reason, I can skip it. And if it’s good enough to buy, I’ll sell this one, probably losing about $1000 on it. To me, it makes sense to buy 12–18 months of high-end computing happiness for about $1000.
While I agree that this would be a smart move for Apple, I believe that the real winning combination would be a full, end-to-end solution, and this post is my pitch.
It could revolutionize the podcast world if Apple’s really working on something like what he envisions, but I doubt it — I think podcasts just aren’t mass-market enough for Apple to put this much effort into them.
David Smith’s helpful reference of which iOS 6 changes are public and can probably be discussed safely by developers under Apple’s WWDC NDA.
Does “Bluetooth MAP support” mean that the now-ancient BluePhoneElite SMS-via-computer feature might finally work on iPhones, just as iMessage hopefully makes that unnecessary?
Harvest is a painless time-tracking tool for creative professionals, available anywhere you find yourself working. Track time via the web, desktop, or your mobile device. Within seconds, turn your billable hours into an invoice. Get started with a free 30-day trial today.
Thanks to Harvest for sponsoring the Marco.org RSS feed this week.
But Apple is showing how they want paid upgrades to function. They removed the previous version of Bento from the App Store and replaced it with Bento 4 as a new purchase.
I think this is pretty much correct. But it also highlights the biggest problem with this “Tweetie 2” approach: how are customers who just bought the previous version supposed to feel about being asked to pay again for the latest one? There’s no infrastructure in place to, for example, give the latest version for free to anyone who bought the previous one within the last few months.
The other problem with this approach is that it makes it impossible to issue bugfixes or other minor updates to the previous version without making it available for sale publicly, which would lead to some new customers inadvertently purchasing the old version and being quite unhappy about it.
The real message from Apple is clear: “Design or adjust your business model such that it doesn’t need paid upgrades. Look, here’s a great in-app purchase system. Find a way to use it.”
David Smith says that it’s possible to issue updates to old, no-longer-for-sale versions of apps in the “Tweetie 2” scenario without making the old app publicly visible to new buyers again.
Intel’s roadmap is generally a very strong predictor of when new corresponding Macs will be released.1 But after the complete update of Apple’s laptop lineup at WWDC, the Mac Pro and iMac were untouched, despite suitable new Intel CPU families being available for both.
Then an email from Tim Cook confirmed that new Mac Pros are coming “later next year”, and Apple PR strongly implied that new iMacs were coming later this year.
Why did Apple just release new MacBook Airs, MacBook Pros, and a Retina MacBook Pro, but no new iMacs or Mac Pros? And why are the iMacs probably being updated this year while the Mac Pro update won’t happen for 12–18 months?
As usual, I have some guesses.
My core theory: Apple believes that Retina displays are the only way to go from this point forward, and they’re waiting to update each family until it can be Retina-equipped.
Now, an obvious question: why were the MacBook Airs and Pros just updated without Retina displays? My best guesses:
The vast majority of computers Apple sells today are laptops, and summer is a very lucrative, competitive back-to-school buying season that they couldn’t afford to sit out with stale hardware.
The Airs and 13” Pro are priced too low to support Retina displays yet without raising prices, which Apple won’t do for such important products.
The laptops sell too well, and Mac-sized Retina screens simply aren’t available yet in high enough volume to be used in anything but a single, high-end model.
But the iMac has higher margins, sells in far lower quantities, and is under less competitive pressure to keep an aggressive update schedule. It’s plausible that Apple can start selling Retina iMacs in a few more months. And since the iMac is often bought for shared family use, it’s a great product to release right before the holiday shopping season.
I’m guessing, therefore, that we’ll see new iMacs with Ivy Bridge CPUs, and probably with Retina displays, in October or early November.
But the Mac Pro is very different. It’s sold separately from its display, its customers will pay whatever Apple wants to charge for it, and it can be updated whenever Apple feels like it because it’s not targeted at mainstream consumers. So why is it delayed until “later next year” despite perfectly good Xeon E5 CPUs being available today?
Interestingly, that’s probably far enough in the future to skip the Xeon E5 entirely and use Haswell Xeons. But I bet that just decides which month they get released in, not which year.

UPDATE: Responders have pointed out that the Xeons usually trail the mainstream CPUs by one process generation, and since the E5 series is Sandy Bridge-based, next year’s Xeons are likely to be Ivy Bridge-based, with Haswell Xeons probably not appearing in 2013.
I bet the Mac Pro update is being held up until “later next year” because a standalone (27-inch?) Retina Display can’t be released until then, and Apple wants to release them simultaneously to capture a lot of buzz and profit in the pro market.
Why a standalone Retina Display can’t be released until then is also worth asking. My guesses help solidify the theory:
Large Retina panels will be in short supply for a while, and Apple needs them for the iMac first. They had a similar delay, probably for the same reason, between the release of the 27” iMac and the 27” Cinema Display using the same panel.
If a 27” Retina Display is a “2X” version of the current panel, that’s a 5120x2880 panel. Running it at 60 Hz requires more bandwidth (over 21 Gbps for 24-bit color) than Thunderbolt offers today (up to two 10 Gbps channels); the arithmetic is sketched after this list.
Thunderbolt is probably going to get its first speed upgrade “in late 2013”. That’s pretty convenient timing.
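Here's the back-of-the-envelope math behind that bandwidth figure, counting raw pixel data only and ignoring blanking intervals and protocol overhead:

```python
width, height = 5120, 2880   # a "2X" version of the current 27" panel
bits_per_pixel = 24          # 24-bit color
refresh_hz = 60

gbps = width * height * bits_per_pixel * refresh_hz / 1e9
print(f"{gbps:.1f} Gbps")    # ~21.2 Gbps, vs. two 10 Gbps Thunderbolt channels
```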
So I’m guessing we’ll see a new Mac Pro in late 2013 with Ivy Bridge Xeons (not Haswell, per the update above), faster Thunderbolt, and available standalone Retina Displays.2
UPDATE 2: I’ve now heard from multiple sources that while an iMac update is indeed coming this fall, it will not have Retina displays. Oops. Can’t win ‘em all.
The big exception is the Mac Mini, which Apple doesn’t update regularly or with any strong correlation to Intel’s roadmap. I don’t even try to predict Mac Mini updates. ↩︎
Such Retina Displays probably couldn’t even work with Macs with the “old” Thunderbolt ports. Presumably, the other Mac models would see corresponding faster-Thunderbolt updates around the same time, probably as they’re all updated to Haswell CPUs.
It also wouldn’t surprise me if the Haswell MacBook Airs are the first Airs to have Retina displays. ↩︎
One of the biggest problems Microsoft will face with the Windows 8 platforms is that they’re effectively starting from zero apps. What can Microsoft do to encourage developers to create great Windows 8 and Windows Phone 8 apps?
In Developers don’t rush to new platforms, I suggested that developers are heavily swayed by three factors when deciding which platforms to develop for:
Which platforms do we use ourselves?
Which platforms have large installed bases?
Which platforms will be profitable to develop for?
Platforms that can satisfy all three will usually have very strong developer support, which is why the iPhone and iPad have had such incredible developer momentum and such amazing apps. But poor performance on one or two of these factors can make good developers stay away from a platform even if it ranks well in another factor.
It’s safe to assume that Windows 8 for PC-like hardware will have a large installed base. If the ARM-class Surface sells well, Windows RT will succeed, too.1
But will Windows 8 app development be very profitable? A large installed base alone doesn’t guarantee that. How easy will payment be for customers? How many Windows 8 PCs and tablets will have payment accounts already configured, ready to buy apps with almost no effort, from many countries? Because that, more than anything else, is why paid apps can exist reasonably profitably on iOS and why they usually suffer by comparison on Android and BlackBerry.
The even bigger problem, I think, will be the lack of dogfooding: most developers of the kind of apps Windows 8 needs don’t use Windows.
The term “developers” includes quite a lot of fairly different professions, but the kind of developers that Microsoft needs to consistently build killer Windows 8 and Windows Phone apps are generally developers who enjoy working individually, in smaller companies, or in startups, building consumer-facing apps or services.
By 2005 or so, most of those developers were working on web apps. The web was the platform for that kind of work for most of that decade.2
And during that decade, almost every such developer I knew switched to the Mac if they weren’t already there, partly because it was better for developing web apps.3
That’s one of the biggest reasons there was so much pent-up developer interest in the iPhone before the App Store opened: these consumer-product developers were all using Macs already. As the dominant consumer platform shifted from the web to apps over the last four years, most talented consumer-product developers built products for their app platform of choice during that time: the Apple ecosystem.
Many Windows developers were upset that iOS development had to be done on a Mac, but it didn’t hurt Apple: the most important developers for iOS apps were already using Macs.
But the success of Windows 8 and Windows Phone in the consumer space requires many of those consumer-product developers, now entrenched in the Apple ecosystem, to care so much about Windows development that they want to use Windows to develop for it.
How likely is that?
Anything’s possible, but that’s going to be an uphill battle.
Windows Phone 8 probably won’t do much better than Windows Phone 7, unfortunately, because Microsoft hasn’t meaningfully changed any of the conditions that made WP7 fail. And they’ve burned bridges with the few WP7 adopters by having no upgrade path to WP8. ↩︎
I still don’t know what to call the decade from 2000–2009. ↩︎
Myself included. I was a die-hard Windows PC user (and builder) until 2004, which may surprise anyone who’s new to my site. ↩︎
Phil Getzen, in February, describing a Microsoft developer event he attended:
Then I started looking at apps (both stock and 3rd party), and I realized something. Every app looked the same. Every. Single. App.
While he was talking about the now-abandoned Windows Phone 7, this issue is core to the design principles of Windows 8’s Metro environment as well.
On some level, the same is true for iOS for apps that use UIKit with the standard control appearances. But UIKit adapts to rich, custom designs well when it’s asked to, and customers have come to expect non-standard designs from high-profile apps.
Metro’s design principle in this area seems to be to let “the content” be the design, but that doesn’t work for a lot of app types. And even for apps that incorporate appropriate “content”, customers expect more from the design of the controls and non-content screens. (Believe me.)
If designers create beautiful, rich, iOS-style Metro interfaces, they’ll look garish or out of place. And if they follow Metro’s lead instead, there’s a good chance that everything will look stark, bland, sterile, and undifferentiated.
Assuming neither approach can produce great, desirable designs that fit well on the platform and give designers the creative freedom and differentiation that they need, can Metro’s rigid design language accommodate a middle ground?
This week’s podcast: cable TV, HBO, and piracy, challenges that Windows 8 and Microsoft Surface will face with developer and enterprise adoption, why other Retina Mac models may take longer than we expect, the case for Instapaper offering a “mobilizer”, and my diversification and innovation strategy.
The standard American way to heat water is to take a pot of water out to our pickup truck, open the hood (what the Brits call a “spanner”), and lock the pot onto the engine block using a set of latches readily available at any Wal-Mart. Then we drive around at high speed, reciting the Gospels and firing our shotguns out the window. After reading the Gospel of John for three minutes and sixteen seconds, the water is ready. I hope this puts to rest any confusion.
CocoaConf is an exciting multi-track conference series for iPhone, iPad, and Mac developers. We start by bringing together some of the best developers, authors, and trainers in the community, giving them the freedom to cover the technologies that they are most passionate about. Next, we sell a limited number of tickets, which provides for an incredibly low attendee-to-speaker ratio. Then we throw in some interesting keynotes and fun, informative panels. And we do it all in a manner that is designed to maximize your learning and networking experience.
Our next CocoaConf event will be held on August 9–11, 2012, in Columbus, Ohio. Registration is now open, and the Early Bird rate is good through July 6th. We will have 18 great speakers, including Daniel Steinberg, Bill Dudney, Chris Adamson, Mark Dalrymple, Josh Abernathy, and more. Get all of the details at CocoaConf.com. When you register, use the coupon code MARCO for a 20% discount.
To hear about future CocoaConf events and other interesting information, follow us on Twitter at @cocoaconf.
Thanks to CocoaConf for sponsoring the Marco.org RSS feed again this week. (And I’m from Columbus. Great city.)
I ordered a Nexus 7. This may come as a surprise, but I think it’s an interesting device.
First, I could use it to broaden my Android test-device pool, which currently contains only a relatively ancient Nexus One, a Nook Tablet, and a Kindle Fire, none of which have fast hardware or a remotely modern version of Android.1 It’s also nice for me to have a couple of Android devices for browser testing when coding Instapaper’s website and Marco.org.
But I also wanted to try it because I’m legitimately curious about what Google’s doing over there.
Most of my Android experience is on the Kindle Fire, which paired shitty hardware with shitty custom software to reach a bargain-basement price. The Nexus 7 seems to have combined mid-grade hardware with much better (and much newer) software for the same $200 price. It’s clearly a showcase of the best software experience and features that Google has to offer in Android today.
Finally, Android has something comparable to an iPod Touch: a non-phone device that runs a modern version of Android, contains decent (I think) hardware, and is very inexpensive with no cellular contract. What’s more interesting is that this role is being filled as a 7” tablet, not a 3.5” iPhone-like pocket device.
This will probably be the go-to device for people who want to try Android without much commitment or investment, such as iOS developers considering going cross-platform2 and geeky writers wanting to see what it’s like on the dark (or bright green) side.
I don’t expect this to replace my iPad, but I bet it will be a lot more interesting to geeks like me than the Kindle Fire.3
Even though Mobelux is handling the development and support of Instapaper’s Android app, I still want to be able to use it periodically so I know what’s going on over there. ↩︎
Responsible developers can’t, unfortunately, have the Nexus 7 as their only Android testing device: the vast majority of Android devices in use today are running much older software, and there’s a huge variety of hardware capabilities in the wild. ↩︎
I’m sure Amazon’s releasing a new Kindle Fire soon. I don’t care, honestly. The first one was so bad that I don’t trust them to ever make a good one, especially since most of the flaws are in software. Maybe the new one will run Amazon’s shitty software faster. ↩︎
Your success has given others a blueprint for what success looks like, and while, yes, the devil’s in the details, you have performed a lot of initial legwork for your competition in the process of becoming successful.
This entire article is extremely quotable, but this part in particular hit home for me.
I wrote Tumblr, Instapaper, and Second Crack in PHP. I continue to use it because I know it extremely well, it’s very easy to use and deploy, and it’s nearly maintenance-free on servers. When you’re a programmer forced to also be your own sysadmin, that’s very attractive.
But I hate it. It’s limited, often clunky, outdated, and deeply flawed.
I’m starting a new open source web project with the goal of making the code as freely and easily runnable to the world as possible. Despite the serious problems with PHP, I was forced to consider it. If you want to produce free-as-in-whatever code that runs on virtually every server in the world with zero friction or configuration hassles, PHP is damn near your only option. …
The best way to fix the PHP problem at this point is to make the alternatives so outstanding that the choice of the better hammer becomes obvious.
Instapaper has too large of a codebase to make a new-language rewrite practical, so I decided a while ago that I at least wouldn’t start any new projects in PHP. But when I wanted to write Second Crack, I went right back — it was a quick little project that I only wanted to spend a couple of days on, so it didn’t justify the overhead of learning a new language.1
I’m addicted to PHP.
When I do finally break this addiction (I can stop whenever I want!), I’ve always thought that the next language choice was clear: Python, which seems to fit my style better than Ruby. But I’m really not qualified to know for sure.
Whichever language I choose to replace PHP will have its own problems that will take me years to master, but by the time I know whether I chose the “right” language, I’ll have invested far too much time in it to practically switch any meaningfully sized project to another language: exactly the situation I’m in now with PHP.
The fear of making the “wrong” choice actually makes the familiar, mastered PHP more attractive. That’s the problem Jeff’s identifying, and it’s very real. If you can get PHP programmers to agree that they need to stop using it, the first question that comes up is what to use instead, and they’re met with a barrage of difficult choices and wildly different opinions and recommendations.
The same problem plagues anyone interested in switching to Linux (Which distro? Which desktop environment? Which package manager?), and the paralysis of choice-overload usually leads people to abandon the choice and just stick with Windows or OS X. But the switching costs of choosing the “wrong” programming language for a project are much larger.
Such a small-but-useful project is exactly what I should have learned a new language with, since making the “wrong” choice would have much lower stakes than, say, the start of a potentially major web app.
But that would have turned this minor project into a much more time-consuming one at a time in my life that I couldn’t afford any more projects and distractions. I shouldn’t have written it at all, but I’m glad I did, because I really like using it. ↩︎
Back in March of 2011, my colleague Ryan Sarver said that developers should not “build client apps that mimic or reproduce the mainstream Twitter consumer client experience.” That guidance continues to apply as much as ever today. Related to that, we’ve already begun to more thoroughly enforce our Developer Rules of the Road with partners, for example with branding, and in the coming weeks, we will be introducing stricter guidelines around how the Twitter API is used.
We’re building tools for publishers and investing more and more in our own apps to ensure that you have a great experience everywhere you experience Twitter, no matter what device you’re using. You need to be able to see expanded Tweets and other features that make Twitter more engaging and easier to use. These are the features that bring people closer to the things they care about. These are the features that make Twitter Twitter. We’re looking forward to working with you to make Twitter even better.
Menacing.
It’s been obvious for a while that Twitter has had a lot of internal cultural battles. We’ve been shielded from most of it so far, but I think the party’s about to end.
While I applaud Twitter-client developers for making Twitter awesome for me and a lot of others, I’m glad I’m not in that business right now.
I mostly agree with Matt Gemmell: in practice, this wouldn’t be good.
Also, keep in mind that any App Store policy or ability applies to all apps, and will be ruthlessly abused by shady, scammy, unfriendly, careless, and clueless developers.