Overcast has offered anonymous sync accounts since 2014. They’re fully functional, but they lack email addresses or passwords, so they can’t log into the website. A login token is stored in iCloud so the account can be accessed after a restore or upgrade, or from other devices you own.
Previously, the login screen pushed email logins. But with four years of perspective, feedback, and usage data, I now think that’s the wrong move. Only a single-digit percentage of customers use the website, and the iCloud token-sync method solves cross-device logins for almost everyone.
Your personal data isn’t my business — it’s a liability. I want as little as possible. I don’t even log IP addresses anymore.
If I don’t need your email address, I really don’t want it.
68% of Overcast accounts have email addresses today. To reduce that as much as possible, I’ve made major changes to account handling:
The previous login screen (left) and the new one.
In Overcast 4.2, the login screen now prominently encourages anonymous accounts by default.
If you already have an account in iCloud, it’ll pop up a dialog box over this screen asking if you want to use it.
And the first time they launch 4.2, people with email-based accounts will be encouraged to migrate them to anonymous accounts:
The migration prompt that shows on the first run.
Finally, you can now change your account between email-based and anonymous whenever you want.
Blocking ad-tracking images
In most podcast apps, podcasts are downloaded automatically in the background. The only data sent to a podcast’s publisher about you or your behavior is your IP address and the app’s name. The IP address lets them derive your approximate region, but not much else.
They don’t know exactly who you are, whether you listened, when you listened, how far you listened, or whether you skipped certain parts.
Some large podcast producers are trying very hard to change that.
Big data ruined the web, and I’m not going to help bring it to podcasts. Publishers already get enough from Apple to inform ad rates and make content decisions — they don’t need more data from my customers. Podcasting has thrived, grown, and made tons of money for tons of people under the current model for over a decade. We already have all the data we need.
One of the ways publishers try to get around the limitations of the current model is by embedding remote images or invisible “tracking pixels” in each episode’s HTML show notes. When displayed in most apps, the images are automatically loaded from an analytics server, which can then record and track more information about you.
In Overcast 4.2, much like Mail (and for the same reason), remote images don’t load by default. A tappable placeholder shows you where each image will load from, and you can decide whether to load it or not.
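One way image-blocking like this can be implemented is to rewrite the show-notes HTML before rendering, moving each remote `src` out of the way and surfacing the image's host. This is a hypothetical sketch of the general technique, not Overcast's actual code:

```swift
import Foundation

// Hypothetical sketch (not Overcast's implementation): neutralize remote
// images in show-notes HTML by stashing each src in a data attribute and
// showing the image's host, so the user can decide whether to load it.
func blockRemoteImages(in html: String) -> String {
    let pattern = "<img[^>]*src=\"(https?://([^/\"]+)[^\"]*)\"[^>]*>"
    let regex = try! NSRegularExpression(pattern: pattern, options: [.caseInsensitive])
    let range = NSRange(html.startIndex..., in: html)
    return regex.stringByReplacingMatches(
        in: html,
        options: [],
        range: range,
        withTemplate: "<span class=\"blocked-image\" data-src=\"$1\">Image from $2 (tap to load)</span>"
    )
}
```

A tap handler on the placeholder can then restore the original `src` from the data attribute, loading that one image on demand.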
Overcast 4.2 also includes a bunch of minor fixes, and two big ones:
Fixed the major slowdowns and high battery usage that resulted from extremely large podcast artwork.
Episodes on password-protected feeds are now supported.
Smart Resume: It jumps back by up to a few seconds after having been paused to help remind you of the conversation.
It slightly adjusts resumes and seeks to fall in the silences between spoken words when reasonably possible.
Both are subtle but noticeable benefits (my favorite kind), especially when you’re being interrupted a lot, such as while following turn-by-turn navigation directions.
Smart Resume is on by default, and can be turned off in Nitpicky Details.
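The seek-into-silence behavior described above can be sketched as a snapping function: given a set of known silence intervals, a seek target within a small tolerance of a silence gets snapped to that silence's midpoint. This is a hypothetical illustration of the idea, not Overcast's actual algorithm, and the 1.5-second tolerance is an assumed value:

```swift
import Foundation

// Hypothetical sketch of "seek into silence" (not Overcast's code):
// snap a seek target to the midpoint of a nearby silence interval,
// so playback resumes between spoken words instead of mid-word.
struct Silence {
    let start: TimeInterval
    let end: TimeInterval
}

func snappedSeekTime(target: TimeInterval,
                     silences: [Silence],
                     tolerance: TimeInterval = 1.5) -> TimeInterval {
    let midpoints = silences.map { ($0.start + $0.end) / 2 }
    let best = midpoints
        .filter { abs($0 - target) <= tolerance }   // only consider nearby silences
        .min { abs($0 - target) < abs($1 - target) } // pick the closest one
    return best ?? target  // no silence nearby: keep the original target
}
```

The same function works for both resumes and manual seeks, which keeps the behavior consistent across the two cases.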
Delete episodes 24 hours after completion: Before, episodes could either be auto-deleted immediately upon completion, or not at all. There’s now a third option, auto-deleting 24 hours after completion, which will soon be the default for new accounts.
The 24-hour threshold is only enforced after a successful sync, so it won’t auto-delete anything in the middle of an extended offline period, such as a long flight.
Auto-deletion, either immediate or after 24 hours, also no longer applies to Premium subscribers’ Uploads.
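The 24-hour rule's interaction with sync reduces to a simple predicate: an episode is only eligible for deletion once a successful sync has occurred after its completion and a full day has elapsed. A hypothetical sketch of that check (not Overcast's actual code):

```swift
import Foundation

// Hypothetical sketch of the 24-hour auto-delete eligibility check
// (not Overcast's code). Requiring a post-completion sync means nothing
// disappears during an extended offline period, such as a long flight.
func shouldAutoDelete(completedAt: Date,
                      lastSuccessfulSync: Date,
                      now: Date = Date()) -> Bool {
    let syncedSinceCompletion = lastSuccessfulSync > completedAt
    let dayHasPassed = now.timeIntervalSince(completedAt) >= 24 * 60 * 60
    return syncedSinceCompletion && dayHasPassed
}
```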
Password-protected podcasts: Some private podcast feeds, including many paid and members-only podcasts, require a username and password via HTTP Basic Auth. You can now add these in the Add URL screen.
Password-protected podcasts, and other private feeds such as Patreon bonus feeds and anything using the <itunes:block> tag, do not show up in search or recommendations.
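Fetching a feed behind HTTP Basic Auth is standard `URLRequest` usage: the credentials are base64-encoded into an `Authorization` header. A minimal sketch (generic Foundation code, not Overcast's crawler):

```swift
import Foundation

// Minimal sketch of requesting an HTTP Basic Auth feed (standard
// Foundation usage, not Overcast's actual code). The username and
// password are whatever the publisher issued for the private feed.
func authenticatedFeedRequest(url: URL,
                              username: String,
                              password: String) -> URLRequest {
    var request = URLRequest(url: url)
    // Basic auth: base64("username:password") in the Authorization header.
    let credentials = Data("\(username):\(password)".utf8).base64EncodedString()
    request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")
    return request
}
```

The resulting request can be handed to `URLSession` like any other; alternatively, the same credentials can be supplied through a `URLSessionDelegate` authentication-challenge handler.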
Noteworthy bug fixes:
Resuming playback after quitting in the background, especially on very long podcasts and/or when using AirPods, no longer occasionally results in glitchy noises and incorrect durations.
Playback under certain conditions no longer stalls, requiring pausing and playing again.
Downloads now fail less often.
Playback controls no longer disappear occasionally.
Smart Speed total savings now appear at the bottom of the Settings screen for locales that use commas as their decimal separator.1
Extremely large playlists now only show the most/least-recent 500 episodes to improve app performance for users with very high subscription counts.
There’s also one removal: rotation support on iPhone. (The iPad app still rotates.) iPhone rotation has always been disabled by default, and had been buried in Nitpicky Details for a long time, so very few people have ever used it. Meanwhile, it has become increasingly difficult to support and maintain, especially with the modern complexities of rotation and the dramatically increased workload of supporting the iPhone X in landscape.
iPhone rotation has simply proven far too costly to maintain for its extremely low usage, and it had to go to free up more of my time for more highly demanded features. I apologize to the few people who did use it, and I hope this isn’t too disruptive for you.
In the next update, I’ll be addressing the biggest design failure of Overcast 4: the non-discoverability of the Effects and Notes pages in the Now Playing screen. Expect the return of an ancient user-interface tool known as “buttons”.
This was Overcast’s oldest known bug, which had been there since 1.0: some users, mostly in central Europe, wouldn’t see Smart Speed totals. It turned out to be one of the most interesting and obscure bugs I’ve seen.
The total-time-saved value is stored on-device as it accumulates, then gets sent to the server to be combined with any listening you do on other devices. The overall total is read back from the server, and the local total is reset, on each sync.
I was using an NSNumberFormatter to read the total value from Overcast’s server as a double. My server always sends values with U.S.-style number formatting, using a period as the decimal separator (e.g. “1234.5”). But by default, NSNumberFormatter uses the device’s locale, so in countries that use a comma as the decimal separator (e.g. “1234,5”), it was interpreting the server’s numbers with periods as invalid and returning zero. So the Settings screen thought they hadn’t saved any time, and hid the time-saved label.
Fortunately, it was an easy fix: setting that NSNumberFormatter locale to en_US to match what the server was sending. And since the accumulated local totals were still being sent and added properly, the correct historical data is there — it just wasn’t being displayed correctly. ↩︎
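In modern Swift terms, the fix described here looks like this. The class is now called `NumberFormatter`; the key line is pinning the locale so parsing matches the server's wire format rather than the device's settings:

```swift
import Foundation

// The fix described above: pin the formatter's locale to en_US so it
// matches the server's U.S.-style number format ("1234.5"), regardless
// of the device's locale.
let formatter = NumberFormatter()
formatter.locale = Locale(identifier: "en_US")
formatter.numberStyle = .decimal

// In a comma-decimal locale (e.g. de_DE), an unpinned formatter returns
// nil for "1234.5"; with the pinned locale, it parses correctly.
let totalSecondsSaved = formatter.number(from: "1234.5")?.doubleValue ?? 0
```

The general lesson: locale-aware parsers belong on user input, never on machine-generated wire formats.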
In the original 2007 iPhone introduction, Steve Jobs famously derided other smartphones at the time for running “baby” software and the “baby” internet. He was right.
Developers weren’t given access to make native apps until the iPhone’s second year. Before the native development kit was ready, Apple tried to pass off web apps as a “sweet solution” for third-party apps, but nobody was fooled.
Apple wasn’t using web apps for their own built-in iPhone apps — they were using native code and frameworks to make real apps.1 Developers like me did our best with web apps, but they sucked. We simply couldn’t make great apps without access to the real frameworks.
Apps were terrible, and didn’t take off, until we had access to the same native tools that Apple used.
* * *
Developing Apple Watch apps is extremely frustrating and limited for one big reason: unlike on iOS, Apple doesn’t give app developers access to the same watchOS frameworks that they use on Apple Watch.
Instead, we’re only allowed to use WatchKit, a baby UI framework that would’ve seemed rudimentary to developers even in the 1990s. But unlike the iPhone’s web apps, WatchKit doesn’t appear to be a stopgap — it seems to be Apple’s long-term solution to third-party app development on the Apple Watch.
The separation of Apple’s internally-used frameworks from WatchKit has two huge problems:
Apple doesn’t feel WatchKit’s limitations. Since they’re not using it, it’s too easy for Apple’s developers and evangelists to forget or never know what’s possible, what isn’t, what’s easy, and what’s hard. The bugs and limitations I report to them are usually met with shock and surprise — they have no idea.
WatchKit is buggy as hell. Since Apple doesn’t use it and there are relatively few third-party Watch apps of value, WatchKit is far more buggy, and seems far less tested, than any other Apple API I’ve ever worked with.2
Apple will never have a very good idea of where WatchKit needs to improve if they’re not using it. But this sweet solution is the only choice anyone else has to make Apple Watch apps.
WatchKit only lets us create “baby” apps. That’s all it will ever let us create.
WatchKit needs to be discontinued and replaced.
No focus on quality or expansion of WatchKit will fix this. There are only two ways to meaningfully improve Watch apps, spur third-party innovation, and unlock the true potential of the Apple Watch.
One solution is for Apple to reimplement all of its own Watch apps with WatchKit instead of their internal frameworks, which will force them to fix WatchKit’s many bugs and dramatically expand it.
The much better solution, and the one I hope they take, is for Apple to expose its real watchOS UI and media frameworks to third-party developers, as it has done on iOS.
Defensive web developers who object to my “real” classification, remember this was 2007 hardware with 2007 cellular speeds and 2007 browser capabilities. You don’t realize how good you have it today. (But most apps should still be native.) ↩︎
Here’s one from today for WKAudioFilePlayer, an abysmal, demoralizing API that I lost weeks of my life battling that feels like it had literally zero testing at Apple. I’d bet money that nobody at Apple has ever tried to build an app with it. ↩︎
App developers sometimes ask me what they should do when their features, designs, or entire apps are copied by competitors.
Legally, there’s not a lot you can do about it:
Copyright protects your icon, images, other creative resources, and source code. You automatically have copyright protection, but it’s easy to evade with minor variations.1 App stores don’t enforce it easily unless resources have been copied exactly.
Trademarks protect names, logos, and slogans. They cover minor variations as well, and app stores enforce trademarks more easily, but they’re costly to register and only apply in narrow areas.2
Only assholes get patents. They can be a huge PR mistake, and they’re a fool’s errand: even if you get one ($20,000+ later), you can’t afford to use it against any adversary big enough to matter.
Don’t be an asshole or a fool. Don’t get software patents.
If someone literally copied your assets or got too close to your trademarked name, you need to file takedowns or legal complaints, but that’s rarely done by anyone big enough to matter. If a competitor just adds a feature or design similar to one of yours, you usually can’t do anything.
You can publicly call out a copy, but you won’t come out of it looking good:
If they’re more successful than you, it’s envy and sour grapes.
If they’re less successful than you, it’s jealousy and punching down (and giving them attention!).
The public may have a very different opinion on whether you’re really being copied, whether you were really the first to do it, or whether you deserve to “own” it.
These disputes are best kept private, or not fought at all.
Nobody else will care as much as you do. Nobody cares who was first, and nobody cares who copied who. The public won’t defend you.
The instant someone else has the same feature or design as you, the public and press see it as a collective checkbox feature, or a “standard” or “obvious” design, that apps in this category just have. It’s no longer yours.
You can try to “educate” the public, but you’ll lose.
This feels unfair when it happens to you, but it’s just how it goes, and the entire ecosystem benefits. Every app — even yours — includes countless “standard” and “obvious” features and designs that, at one time, weren’t. Everything is a remix.
A great design or feature can give you a competitive advantage for a little while, but it’s always temporary. Compete on marketing, quality, and what you can do next, not the assumption that nobody can copy what you made.
Setting the roadmap for competitors is a satisfying accomplishment, but only a personal one. You know you’ve done it. That has to be enough.
For instance, nobody else can make an app that uses Overcast’s exact icon, or anything clearly derivative of it, but I can’t stop anyone from making a different-looking, originally-drawn orange icon with a radio tower in it. ↩︎
For instance, nobody else can make an audio player named Overcast or anything similar to it, but I can’t stop anyone from making a weather app with that name. And other podcast players can’t make features named Smart Speed or Voice Boost, but I can’t stop them from making similar features with different-enough names. ↩︎
Having attended (and sometimes spoken at) many of these conferences over the years, I can’t deny the feeling I’ve had in the last couple of years that the era of the small Apple-ish developer-ish conference is mostly or entirely behind us.
I don’t think that’s a bad thing. This style of conference had a great run, but it always had major and inherent limitations, challenges, and inefficiencies:
Cost: With flights, lodging, and the ticket adding up to thousands of dollars per conference, most people are priced out. The vast majority of attendees’ money isn’t even going to the conference organizers or speakers — it’s going to venues, hotels, and airlines.
Size: There’s no good size for a conference. Small conferences exclude too many people; big conferences impede socialization and logistics.
Logistics: Planning and executing a conference takes such a toll on the organizers that few of them have ever lasted more than a few years.
Format: Preparing formal talks with slide decks is a massively inefficient use of the speakers’ time compared to other modern methods of communicating ideas, and sitting there listening to blocks of talks for long stretches while you’re trying to stay awake after lunch is a pretty inefficient way to hear ideas.
It’s getting increasingly difficult for organizers to sell tickets, in part because it’s hard to get big-name speakers without the budget to pay them much (which would significantly drive up ticket costs, which exacerbates other problems), but also because conferences now have much bigger competition in connecting people to their colleagues or audiences.
There’s no single factor that has made it so difficult, but the explosion of podcasts and YouTube over the last few years must have contributed significantly. Podcasts are a vastly more time-efficient way for people to communicate ideas than writing conference talks, and people who prefer crafting their message as a produced piece or with multimedia can do the same thing (and more) on YouTube. Both are much easier and more versatile for people to consume than conference talks, and they can reach and benefit far more people.
Ten years ago, you had to go to conferences to hear most prominent people in our industry speak in their own voice, or to get more content than an occasional blog post. Today, anyone who could headline a conference probably has a podcast or YouTube channel with hours of their thoughts and ideas available to anyone, anywhere in the world, anytime, for free.
But all of that media can’t really replace the socializing, networking, and simply fun that happened as part of (or sometimes despite) the conference formula.
I don’t know how to fix conferences, but the first place I’d start on that whiteboard is by getting rid of all of the talks, then trying to find different ways to bring people together — and far more of them than before.
Or maybe we’ve already solved these problems with social networks, Slack groups, podcasts, and YouTube, and we just haven’t fully realized it yet.
There’s a lot to like about the new MacBook Pros, but they need some changes to be truly great and up to Apple’s standards.
Here’s what I’m hoping to see in the next MacBook Pro that I believe is technically possible, reasonable, widely agreeable, and likely for Apple to actually do, in descending order of importance:
Butterfly keyswitches are a design failure that should be abandoned. They’ve been controversial, fatally unreliable, and expensive to repair since their introduction on the first 12” MacBook in early 2015. Their flaws were evident immediately, yet Apple brought them to the entire MacBook Pro lineup in late 2016.
After three significant revisions, Apple’s butterfly keyswitches remain as controversial and unreliable as ever. At best, they’re a compromise acceptable only on the ultra-thin 12” MacBook, and only if nothing else fits. They have no place in Apple’s mainstream or pro computers.
The MacBook Pro must return to scissor keyswitches. If Apple only changes one thing about the next MacBook Pro, it should be this. It’s far more important than anything else on this list.
Fans of the butterfly keyboard’s feel need not worry — this doesn’t mean we need the old MacBook Pro keyboard, exactly.
The Magic Keyboard’s scissor switches feel similar, but with a bit more travel, and all of the reliability and resilience of previous keyboard generations. They’re a much better, more reliable, and more repairable balance of thinness and typing feel likely to appeal to far more people — even those who like the butterfly keyboards.
The Magic Keyboard only needs one change to be perfect for the MacBook Pro: returning to the “inverted-T” arrow-key arrangement by making the left- and right-arrow keys half-height again. This arrangement is much more natural and less error-prone because we can align our fingers by feeling the “T” shape, a crucial affordance for such frequently used keys that are so far from the home row.
Great first-party USB-C hubs
The MacBook Pro bet heavily on the USB-C ecosystem, but it hasn’t developed enough on its own.
When people can’t get what they need from Apple at all, or at a remotely competitive price, they’ll go to cheap third-party products, which are often unreliable or cause other problems. When these critical accessories aren’t flawless, it reflects poorly on Apple, as it harms the overall real-world experience of using these computers.
If a third-party hub or dongle is flaky, the owner doesn’t blame it — they blame their expensive new Apple computer for needing it.
Apple needs to step up with its own solid offerings to offer more ports for people who need them.
Apple’s most full-featured USB-C accessory is downright punitive in its unnecessary minimalism: one USB-C passthrough, one USB-A (a.k.a. regular/old USB), and an HDMI port that doesn’t even do 4K at 60 Hz — all for the shameless price of $80.
Instead of giving us the least that we might possibly need, this type of product should give us the most that can fit within reasonable size, cost, and bandwidth constraints. I’d like to see at least two USB-C ports, at least two USB-A ports, and HDMI that can do 4K60. An SD-card reader would be a nice bonus.
To make it easier to go all-USB-C on our peripherals and cables, I’d also like to see a true USB-C hub: one USB-C in and at least three USB-C out, with power passthrough on one.
And just as we learned that the need for pro displays shouldn’t be outsourced to LG, Apple should stop outsourcing critical adapters and hubs to Belkin. They’re not as good as Apple’s would be, and they never will be.
USB-C is great, but being limited to 2 or 4 total ports (including power) simply isn’t enough. Even if you adopt the USB-C ecosystem, these MacBook Pros are more limited than their predecessors:
The 13” MacBook Air can connect to power, two USB devices, Thunderbolt, and an SD card simultaneously. Its replacement, the 13” MacBook “Escape” (without Touch Bar), can only connect to two total devices on battery, or one when powered.
The 2015 13” and 15” MacBook Pros can connect to power, two USB devices, two Thunderbolt devices, HDMI output, and an SD card simultaneously. Their replacements can only connect to four devices on battery, or three when powered.
If there’s not enough Thunderbolt or PCIe bandwidth to have more USB-C ports, that’s fine — not every port needs to be USB-C with Thunderbolt. All of that cost and bandwidth is unnecessary for most common real-world uses of laptop ports (power in, charging iPhones, external keyboards, etc.).
Dongles should be the exception, not the norm, in real-world use — most owners should need zero. But HDMI and USB-A are still far too widely used to have been removed completely, and neither is likely to fade away anytime soon regardless of how Apple configures their laptops. Re-adding HDMI and at least one USB-A port would reduce or eliminate many people’s dongle needs, which I bet would dramatically improve their satisfaction.
Finally, Apple should give serious consideration to bringing back the SD-card slot. SD cards are more widely used than ever in photography, video, audio, and other specialized equipment, and they provide excellent options for fast, reliable storage expansion and data transfer. And they’re going to be around for a while — Wi-Fi and cables don’t or can’t replace most current uses in practice.
Back away from the Touch Bar
Sorry, it’s a flop. It was a solid try at something new, but it didn’t work out. There’s no shame in that — Apple should just recognize this, learn from it, and move on.
The Touch Bar should either be discontinued or made optional for all MacBook Pro sizes and configurations.
Touch ID is the only part of the Touch Bar worth saving, but the future is clearly Face ID. If we can’t have that yet, the ideal setup is Touch ID without the Touch Bar. We’d retain the Secure Enclave’s protection for the camera and microphones, and hopefully get the iMac Pro’s boot protection, too.
USB-C PD charging and replaceable charging cables are great advances that should be kept. USB-C PD is the reason I didn’t include battery life in this list — occasional needs for extended battery life can be achieved with inexpensive USB-C PD batteries.
But Apple could make their chargers and cables so much nicer — and they only need to look to their own recent past.
I’d like to see them bring back the charging LED on the end of the cable, and the cable-management arms on the brick. These weren’t superfluous — they served important, useful functions, and their removal made real-world usability worse for small, unnecessary gains.
MagSafe would be nice, but I don’t think it’s essential. MagSafe 2 wasn’t universally loved because it detached with too little vertical pressure when used on laps, couches, or beds, but maybe it could be moved to a splitting module along the cable, a few inches from the laptop end, like the original Xbox’s controller cables?
The move to a detachable, “standard” USB-C cable doesn’t preclude any of this. It’s already a specialized, dedicated power-only cable in practice (high-wattage USB PD support, but no Thunderbolt, and limited to USB 2.0 speeds). Third-party cables could still work — Apple’s could just be nicer.
Keeping what’s great
There’s a lot about the current MacBook Pro that’s great — fast internals, quieter fans, Touch ID, P3 screens, Thunderbolt 3, USB-C PD charging, and space gray, to name a few.
We shouldn’t have to choose between what’s better about the previous generation — connectivity, reliability, and versatility — and what’s great about this one.
Apple has made many great laptops, but the 15-inch Retina MacBook Pro (2012–2015) is the epitome of usefulness, elegance, practicality, and power for an overall package that still hasn’t been (and may never be) surpassed.
It was introduced in 2012, less than a year after Steve Jobs died, and I see it as the peak of Jobs’ vision for the Mac.
It was the debut of high-DPI Macs, starting down the long road (which we still haven’t finished) to an all-Retina lineup. And with all-SSD storage, quad-core i7 processors, and a healthy amount of RAM all standard, every configuration was fast, capable, and pleasant to use.
At its introduction, it was criticized only for ditching the optical drive and Ethernet port, but these were defensible, well-timed removals: neither could’ve even come close to physically fitting in the new design, very few MacBook Pro users were still using either on a regular basis, and almost none of us needed to buy external optical drives or Ethernet adapters to fit the new laptop into our lives. In exchange for those removals, we got substantial reductions in thickness and weight, and a huge new battery.
There were no other downsides. Everything else about this machine was an upgrade: thinner, lighter, faster, better battery life, quieter fans, better speakers, better microphones, a second Thunderbolt port, and a convenient new HDMI port.
The MagSafe 2 power adapter breaks away safely if it’s tripped over, and the LED on the connector quickly, clearly, and silently indicates whether it’s charging and when the battery is fully charged.
The pair of Thunderbolt (later Thunderbolt 2) ports gave us high-end, high-speed connectivity when we needed it, and the pair of standard USB 3 ports — one on each side — let us connect or charge our world of standard USB devices.
The headphone jack was thoughtfully located on the left side, because nearly all headphones run their cables down from the left earcup. (External-mouse users also appreciate this frequently-used cable not intruding in their right-side mousing area.)
The keyboard was completely unremarkable, in the best possible way. The crowd-pleasing design was neither fanatically loved nor widely despised. It quietly and reliably did its job, as all great tools should, and nobody ever really had to think about it.
The trackpad struck a great balance between size and usability. It provided ample room for multitouch gestures, but without being too large or close to the keyboard, so people’s fingers wouldn’t inadvertently brush against it while typing.
Not every owner needed the SD-card slot or HDMI port, but both were provided for times when we might. This greatly increased the versatility and convenience of this MacBook Pro, as many pro customers use A/V gear that records to SD cards or occasionally need to plug into a TV or projector. The SD-card slot could also serve as inexpensive storage expansion.
The power adapter’s built-in cable management keeps bags tidy. And if you need a longer cable, the extension comes in the box at no additional charge.
Versatile USB-A ports allow travelers to standardize on just one type of charging cable that can charge their iPhones and iPads from the laptop itself, multi-port wall or car chargers, portable batteries, airplanes, many outlets, and nearly all other chargers likely to be found in the world around them.
The 2015 revision brought the modern Force Touch trackpad and used the space savings to increase the battery to 99.5 Wh, just under the 100 Wh carry-on limit for most commercial airlines. When paired with the integrated-only GPU base configuration, this offered an unparalleled option for great battery life without giving up the large Retina screen.
And I like the backlit Apple logo on the lid. Maybe I’m old-fashioned, or maybe I just miss Steve, but it — along with the MagSafe LED and the startup chime — reminds me of a time when Mac designs celebrated personality, humanity, and whimsy.
* * *
I recently returned to the 2015 15-inch MacBook Pro after a year away.
Apple still sells this model, brand new, just limited to the integrated-only GPU option (which I prefer as a non-gamer for its battery, heat, and longevity advantages), but I got mine lightly used for over $1000 less.
I thought it would feel like a downgrade, or like going back in time. I feared that it would feel thick, heavy, and cumbersome. I expected it to just look impossibly old.
It feels as delightful as when I first got one in 2012. It’s fast, capable, and reliable. It gracefully does what I need it to do. It’s barely heavier or thicker, and I got to remove so many accessories from my travel bag that I think I’m actually coming out ahead.
It feels like a professional tool, made by people who love and need computers, at the top of their game.
It’s designed for us, rather than asking us to adapt ourselves to it.
It helps us perform our work, rather than adding to our workload.
This is the peak. This is the best laptop that has ever existed.
I hope it’s not the best laptop that will ever exist.
I love the idea of USB-C: one port and one cable that can replace all other ports and cables. It sounds so simple, straightforward, and unified.
In practice, it’s not even close.
USB-C normally transfers data by the USB protocol, but it also supports Thunderbolt… sometimes. The 12-inch MacBook has a USB-C port, but it doesn’t support Thunderbolt at all. All other modern MacBook models support Thunderbolt over their USB-C ports… but if you have a 13-inch model, and it has a Touch Bar, then the right-side ports don’t have full Thunderbolt bandwidth.
If you bought a USB-C cable, it might support Thunderbolt, or it might not. There’s no way to tell by looking at it. There’s usually no way to tell whether a given USB-C device requires Thunderbolt, either — you just need to plug it in and see if it works.
Much of USB-C’s awesome capability comes from Thunderbolt and other Alternate Modes. But due to their potential bandwidth demands, computers can’t have very many USB-C ports, making it especially wasteful to lose one to a laptop’s own power cable. The severe port shortage, along with the need to connect to non-USB-C devices, inevitably leads many people to need annoying, inelegant, and expensive dongles and hubs.
While a wide variety of USB-C dongles are available, most use the same handful of unreliable, mediocre chips inside. Some USB-A dongles make Wi-Fi drop on MacBook Pros. Some USB-A devices don’t work properly when adapted to USB-C, or only work in certain ports. Some devices only work when plugged directly into a laptop’s precious few USB-C ports, rather than any hubs or dongles. And reliable HDMI output seems nearly impossible in practice.
Very few hubs exist to add more USB-C ports, so if you have more than a few peripherals, you can’t just replace all of their cables with USB-C versions. You’ll need a hub that provides multiple USB-A ports instead, and you’ll need to keep your USB-A cables for when you’re plugged into the hub — but also keep USB-C cables or dongles around for everything you might ever need to plug directly into the computer’s ports.
Hubs with additional USB-C ports might pass Thunderbolt through to them, but usually don’t. Sometimes, they add a USB-C port that can only be used for power passthrough. Many hubs with power passthrough have lower wattage limits than a 13-inch or 15-inch laptop needs.
Fortunately, USB-C is a great charging standard. Well, it’s more of a collection of standards. USB-C devices can charge via the slow old USB rates, but for higher-powered devices or faster charging, that’s not enough current.
Many Android phones support Qualcomm’s Quick Charge over USB-C, which is different — usually — from the official, better, newer USB-C Power Delivery (PD) standard. Apple products, some Android phones, and the Nintendo Switch use USB-C PD. Quick Charge devices don’t get any benefit — usually — from PD chargers, and vice versa.
Your charger, cable, and any standalone batteries you want to use all must support the same charging standard for it to work at full speed.
Some cables don’t support USB-C PD at all, and most don’t support laptop wattages. Apple’s cable supports USB-C PD charging at high wattages… unless you bought the earlier version that doesn’t. Most standalone batteries sold to date don’t support USB-C PD — there are only a handful on the market so far, and most of them can’t charge a laptop at full speed, unless it’s the 12-inch MacBook.
You can use USB-C PD to fast-charge an iPhone 8 or iPad Pro with a USB-C to Lightning cable. But it doesn’t work with every USB-PD battery or charger, or every USB-C to Lightning cable, or every iPad Pro.
And, of course, there’s usually no way to tell at a glance whether a given cable, charger, battery, or device supports USB-C PD or at what wattages.
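The weakest-link behavior described above is easy to state precisely. Here’s a small illustrative sketch — a hypothetical function, not real PD negotiation code, using nominal maximum wattages rather than the spec’s exact voltage/current pairs (e.g. 5 V/3 A, 9 V/3 A, 20 V/5 A):

```python
# Illustrative sketch of USB-C PD power matching (not real negotiation code).
# Wattages below are nominal maximums, chosen for illustration.

def max_negotiated_watts(charger_w, cable_w, device_w):
    """Delivered power is capped by the weakest link in the chain."""
    return min(charger_w, cable_w, device_w)

# A 15-inch MacBook Pro wants ~87 W, but a cable rated for only 60 W
# (common for non-"full-featured" USB-C cables) caps the whole chain:
print(max_negotiated_watts(87, 60, 87))   # 60

# A device that doesn't speak PD at all gets no benefit from a big PD
# charger, falling back to basic USB power (~15 W here, as an assumption):
print(max_negotiated_watts(87, 100, 15))  # 15
```

The point of the sketch: upgrading any one link (a bigger charger, a better battery) buys nothing unless every link in the chain supports the same standard at the same wattage.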
It’s comforting to think that over time, this will all settle down and we’ll finally achieve the dream of a single cable and port for everything. But that’s not how technology really works.
Before today’s USB-C can become ubiquitous and homogeneous, the next protocol or port will come out. We’ll have new, faster USB 4.0 and Thunderbolt 4 standards over the same-looking USB-C ports. We’ll want to move to an even thinner USB-D port. The press will call it “the future” and Apple will celebrate its new laptops that only have a USB-D port — two, if we’re lucky.
And we’ll have to start over again, buying all new cables, dongles, hubs, chargers, batteries, and displays to adapt it to what we really need.
Maybe next time, we’ll get it right. But probably not.
The Apple Watch desperately needs standalone podcast playback, especially with the LTE-equipped Series 3, which was designed specifically for exercising without an iPhone.
Believe me, I’ve tried. But limitations in watchOS 4 make it impossible to deliver standalone podcast playback with the basic functionality and quality that people expect.
Deal-breaker: Progress sync
Unlike on iOS, Watch apps aren’t allowed to play audio in the background and continue running.1
The only way to continue playback in the background on watchOS 4 is to use the WKAudioFilePlayer API. This is unsuitable for podcast players in its current state for one big reason:
It takes audio playback out-of-process, suspending the host app indefinitely during playback, and does not wake up the host app to notify it of playback progress or other relevant events like pausing, seeking, or reaching the end of an episode.
Since most podcast listeners also listen on their iPhone, the Watch needs to sync each episode’s listening progress with its iPhone app or a web service. But if you reach the end of a workout and pause playback without launching the podcast app, such as with a headphone “pause” button, turning off your headphones, removing your AirPods, or using the Now Playing app on the Watch, there’s no guarantee that the podcast app will be notified of your progress anytime soon. So when the user later goes to the iPhone app to continue playback, progress will be lost, or episodes’ state (played, deleted, recommended, etc.) will be wrong.
Minimum fix: During WKAudioFilePlayer playback, wake the host app to let it record progress periodically (ideally at least every minute) and on state-change events, such as pausing, seeking, and reaching the end of a file.
Better fix: Let apps continue running in the background indefinitely while they’re playing audio with WKAudioFilePlayer, just as workout apps can, so they can track their own state.2
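To see why the suspended-host behavior loses data, consider a generic last-write-wins progress merge — a common approach for this kind of sync, sketched here with hypothetical names, not Overcast’s actual sync logic:

```python
from dataclasses import dataclass

@dataclass
class EpisodeProgress:
    episode_id: str
    position_sec: float   # how far the user has listened
    updated_at: float     # Unix timestamp of the last recorded state change

def merge(phone: EpisodeProgress, watch: EpisodeProgress) -> EpisodeProgress:
    """Last-write-wins: keep whichever device recorded progress most recently."""
    return phone if phone.updated_at >= watch.updated_at else watch

# If the Watch app is never woken to record progress during playback,
# its timestamp stays stale, and the phone's older position silently
# "wins" - exactly the lost-progress bug described above.
phone = EpisodeProgress("ep1", position_sec=300.0, updated_at=1000.0)
watch = EpisodeProgress("ep1", position_sec=2400.0, updated_at=900.0)  # stale!
print(merge(phone, watch).position_sec)  # 300.0 - the Watch listening is lost
```

No merge strategy can recover progress that was never recorded, which is why waking the host app (or letting it keep running) is the only real fix.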
To achieve the minimum experience-quality level that Apple and Overcast customers expect, watchOS and WKAudioFilePlayer also need a few more changes:
Volume control API or widget
Overcast’s top request for the Watch app by far is volume control via the Digital Crown, both for iPhone and Watch playback (when I offered it). It’s especially necessary for standalone Watch playback, as there’s very little access anywhere in watchOS to the concept of the system volume level.
Fix: An API to set the volume level, or a system UI widget for volume control like iOS’ MPVolumeView.
The lack of volume control is especially damaging with watchOS 4’s great new “Auto-launch Audio Apps” setting. If you’re playing audio from an iPhone app and have its corresponding Watch app installed, that Watch app shows by default when your Watch screen activates. If you don’t have its Watch app installed, the built-in Now Playing app shows instead.
In the few days since watchOS 4’s launch, dozens of users have told me that they’ve uninstalled Overcast’s Watch app to force the built-in Now Playing screen to auto-launch instead, just so they could have quick access to volume control.
Output-device presence, switching
WKAudioFilePlayer will not play to the Watch’s built-in speaker — it only plays to headphones connected to the Watch via Bluetooth. If playback is attempted without headphones connected, WKAudioFilePlayer just silently fails to play, without returning any errors.
Making matters worse, there appears to be no way to reliably detect the presence or absence of a suitable Bluetooth output device to avoid this.
Another common user request for Watch audio apps is an AirPlay output control. The rise of AirPods has increased this need and exacerbated the output-device-presence issue, as their multi-device sharing behavior leads to less certainty by the user about which device they’re currently connected to, and a more frequent need to manually select them on the current device.
Minimum fix: Fix the error-reporting behavior of WKAudioFilePlayer when suitable play conditions aren’t met, and provide a way to detect the presence or absence of a suitable audio-playback device so our apps can relay this information to the user.
Better fix: Also provide a standard AirPlay UI control, possibly integrated with the volume control, again like MPVolumeView on iOS.
Previous/next-track commands

Podcast listeners expect previous- and next-track buttons on headphones, cars, and the Now Playing app to perform 30-second seek operations instead of changing episodes completely. On iOS, MPRemoteCommandCenter lets apps respond to these commands however they want.
WKAudioFileQueuePlayer provides no such customization, ignores previous-track commands, and forces next-track commands to skip to the next audio file, losing any progress in the current one — and if there isn’t a next file in the queue, it stops playback completely. And all of this happens without waking the host app.
Minimum fix: Wake the host app whenever the current queue item changes in WKAudioFileQueuePlayer.
Better fix: Bring MPRemoteCommandCenter or an equivalent to watchOS, and if the host app adds its own command handlers, disable the built-in WKAudioFileQueuePlayer behavior for previous-/next-track commands.
Data-transfer progress, speed
While some apps may be able to perform direct downloads of podcast episodes from the internet to the Watch, many will rely on transferring audio files from the iPhone to the Watch to ensure compatible formats, consistent timestamps, small files, or audio-processing features.
Transferring a podcast file to the Watch is a long-running task, often taking at least a few minutes per episode (and sometimes much longer), but the WCSessionFileTransfer class provides no progress information. So there’s no way for apps to inform users how long the transfers may take, or if they’re currently moving at all.
Minimum fix: Add a progress API on WCSessionFileTransfer.
Better fix: Add a progress API on WCSessionFileTransfer and provide actionable information if the transfer is currently paused or waiting for something, such as having other transfers ahead of it in a queue, or needing a different power or connectivity state.3
Additionally: Wi-Fi and LTE transfers to the Watch are currently much faster than Bluetooth, often by an order of magnitude, but the Watch seems to send all WCSessionFileTransfer data over Bluetooth even when connected to a power source. File transfers should use Wi-Fi when power allows.
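Some back-of-the-envelope arithmetic shows why the transport choice matters so much. The throughput figures here are rough assumptions for illustration, not measurements:

```python
# Rough transfer-time estimates for a typical podcast episode.
# Assumed throughputs: Bluetooth to the Watch is commonly well under
# 1 Mbps in practice; Wi-Fi is roughly an order of magnitude faster.

def transfer_minutes(file_mb, mbps):
    """Minutes to move `file_mb` megabytes at `mbps` megabits per second."""
    return (file_mb * 8) / mbps / 60

episode_mb = 25  # ~1 hour at 56 kbps, a plausible Watch-optimized encode
print(round(transfer_minutes(episode_mb, 0.5), 1))  # Bluetooth-ish: 6.7 min
print(round(transfer_minutes(episode_mb, 5.0), 1))  # Wi-Fi-ish: 0.7 min
```

At Bluetooth-class speeds, even one episode is a multi-minute wait — which is exactly why progress reporting matters, and why falling back to Wi-Fi when power allows would help so much.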
Nice-to-have: Streaming audio playback
One of the most compelling features of the Series 3 Watch with LTE is streaming Apple Music playback, but there’s no streaming audio-playback API available to developers on watchOS today. Every method requires locally downloaded files.
I recognize this is a big task. That’s why it’s a “nice-to-have”.
Enhancement: Bring a relevant subset of AVPlayer to watchOS, or expose streaming-audio playback via some other method.
There’s one elaborate exception that we discussed in Under The Radar #98: workout apps, which are allowed to run in the background and play audio. So this all becomes possible if you combine a standalone podcast player with a workout app, and only allow podcast playback while a workout is active that was started from that app. But this forces the combination of two completely different app types, and users would find the workout-during-playback requirement confusing, inexplicable, and limiting.
Requiring podcast apps to also be workout apps is a user- and developer-hostile hack that Apple probably doesn’t intend. ↩︎
This could also allow the use of the more capable in-process APIs such as AVAudioPlayer, which appeared to work well in watchOS 3. ↩︎
While I have you, please also add this reporting for paused or waiting background NSURLSessionDownloadTask transfers. ↩︎
I’m sorry that I missed the iOS 11 launch for my fancy drag-and-drop update. It’s coming soon, but it’s not ready yet.
In Hello Internet episode 4, CGP Grey introduced a metaphor for work-life balance as four light bulbs — work, friends, family, and health — between which one can allocate 100 watts, total. So it’s possible to shine brightly in one area at significant cost to the others, or to try to have a mediocre spread between all of them.
Usually, I prioritize family and work, in that order, and my friends are pretty tolerant of my general neglect, but health gets left as a pretty low priority.
This summer, I decided I finally needed to devote significant time to health, and since my family always comes first, that mostly came at the expense of reduced work time.1 And iOS 11 has ended up requiring more changes and fixes than I expected.
So I have a great Overcast update for iOS 11 and iPhone X in progress, but it’s not done yet. I appreciate your patience, and I hope you’ll find it worth the wait.
I’m fine. I just finally realized that the correct amount of exercise for a 35-year-old was probably not zero. ↩︎
Many Apple fans were amused when Phil Schiller explained the removal of the headphone jack on last year’s iPhone as “courage”. But that was nothing compared to what happened last week.
For ten years, the iPhone looked like this:
A rounded rectangle, bars on top and bottom, and a circle centered in the bottom bar. If you want to get fancy, put a line in the top bar, too.
It’s so simple that even when children draw it, it’s instantly recognizable as an iPhone to most of the world’s population. Not just any phone — an iPhone, specifically.
As an app developer and fan, I was hoping that the long-rumored edge-to-edge iPhone screen would still be a rectangle, possibly even with room for a Home button on a narrower bezel, so I wouldn’t have to change my habits (or my app’s layout):
But that’s the unmistakable image of an Android phone. (Or a generic smartphone — same thing.)
Apple would’ve lost what the iPhone has had since its introduction: a unique, recognizable shape that distinguishes itself from all of the other boring rectangles out there.
Many are now speculating that Apple will find a way to get rid of the iPhone X’s top “notch” (officially called the “sensor housing”) as soon as they can, and that this is a temporary design meant only for a few years. But I don’t think so.
Many app developers are planning to hide the notch in the UI with black bars. But Apple explicitly says not to.
This is the new shape of the iPhone. As long as the notch is clearly present and of approximately these proportions, it’s unique, simple, and recognizable.
It’s probably not going to significantly change for a long time, and Apple needs to make sure that the entire world recognizes it as well as we could recognize previous iPhones.
That’s why Apple has made no effort to hide the notch in software, and why app developers are being told to embrace it in our designs.
That’s why the HomePod software leak depicted the iPhone X like this: it’s the new basic, recognizable form of the iPhone.
Apple just completely changed the fundamental shape of the most important, most successful, and most recognizable tech product that the world has ever seen.
I’ve spent many months of development on Overcast’s Apple Watch app, especially implementing standalone “Send to Watch” playback. Unfortunately, I now need to remove the “Send to Watch” feature.
I’m sorry to the people who used it. While there weren’t many of you (about 0.1% of active users), I’ve heard from some to whom it meant quite a bit.
In retrospect, “Send to Watch” wasn’t good enough to ship. I’ve paid for that in a constant stream of negative App Store reviews that have reduced the entire app’s average, but that alone wouldn’t sink the feature. Here’s what happened.
Many people had been asking for standalone playback since the first Apple Watch was released. I spent months of development on it last winter, but I simply wasn’t able to get a minimum acceptable quality level out of the limited watchOS audio APIs.
I shelved the feature until other Apple Watch podcast apps revealed a workaround that made background audio much more usable on watchOS, so I decided to use the same technique and ship the feature anyway, despite its other shortcomings. That was a mistake.
That workaround doesn’t work anymore in watchOS 4. Rewriting “Send to Watch” playback to use the only supported alternative would likely take at least another month of development and testing that I currently can’t spare, and due to its limitations, the resulting usability and experience wouldn’t be good enough for me to confidently ship.1
Therefore, I’ve decided to remove “Send to Watch” in the latest update today, a bit ahead of watchOS 4’s expected release, before anyone else gets accustomed to it.
I intend to continue supporting and updating the Watch app as a convenient, lightweight, fast remote control for iPhone playback. But it’s not possible to ship a good standalone podcast player on watchOS today, and it’ll probably take a few more years of hardware and software evolution before that changes.
There isn’t much time left before iOS 11’s release, and I need to spend the rest of the summer working on highly requested features and updates in the main app.
I’m sorry again for the loss of this feature to the people who used it. If it becomes possible in the future to do it right, I’ll do my best to bring it back.
watchOS normally doesn’t let apps run continuously in the background, even if they’re playing audio, unlike iOS. The WKAudioFilePlayer API lets your app hand off a file or playlist to the system for playback, and then try to figure out what happened with it next time your app launches (to know whether the user listened to a certain time, reached the end, skipped to the next episode, etc.). Unfortunately, it has major shortcomings, bugs, and non-ideal behaviors for making a usable podcast app.
The loophole was to declare the app as a workout-monitoring app, which let it run continuously in the background on watchOS 3 if a workout was active — or, for some reason, if it was playing audio. This let me use the much better AVAudioPlayer to build a usable podcast player on the Watch.
In watchOS 4, that loophole no longer enables background audio playback, so any audio APIs other than WKAudioFilePlayer pause as soon as the foreground app changes or the Watch display sleeps. ↩︎
If you read the news, you may think the MP3 file format was recently, officially “killed” somehow, and any remaining MP3 holdouts should all move to AAC now. These stories are all simple rewrites of Fraunhofer IIS’ announcement that they’re terminating the MP3 patent-licensing program.
If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3 technology became patent-free in the United States on 16 April 2017 when U.S. Patent 6,009,399, held by and administered by Technicolor, expired.
MP3 is no less alive now than it was last month or will be next year — the last known MP3 patents have simply expired.
So while there’s a debate to be had — in a moment — about whether MP3 should still be used today, Fraunhofer’s announcement has nothing to do with that, and is simply the ending of its patent-licensing program (because the patents have all expired) and a suggestion that we move to a newer, still-patented format.
Why still use MP3 when newer, better formats exist?
MP3 is very old, but it’s the same age as JPEG, which has also long since been surpassed in quality by newer formats. JPEG is still ubiquitous not because Engadget forgot to declare its death, but because it’s good enough and supported everywhere, making it the most pragmatic choice most of the time.1
AAC and other newer audio codecs can produce better quality than MP3, but the difference is only significant at low bitrates. At about 128 kbps or greater, the differences between MP3 and other codecs are very unlikely to be noticed, so it isn’t meaningfully better for personal music collections. For new music, get AAC if you want, but it’s not worth spending any time replacing MP3s you already have.
AAC makes a lot of sense for low- and medium-quality applications where bandwidth is extremely limited or expensive, like phone calls and music-streaming services, or as sound for video, for which it’s the most widely supported format.
It may seem to make sense for podcasts, but it doesn’t. Podcasters need to distribute a single file type that’s playable on the most players and devices possible, and though AAC is widely supported today, it’s still not as widely supported as MP3. So podcasters overwhelmingly choose MP3: among the 50 million podcast episodes in Overcast’s database, 92% are MP3, and within the most popular 500 podcasts, 99% are MP3.
And AAC is also still patent-encumbered, which prevents innovation, hinders support, restricts potential uses, and imposes burdensome taxes on anything that goes near it.
So while AAC does offer some benefits, it also brings additional downsides and costs, and the benefits aren’t necessary or noticeable in some major common uses. Even the file-size argument for lower bitrates is less important than ever in a world of ever-increasing bandwidth and ever-higher relative uses of it.2
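The file-size argument is easy to quantify. A quick sanity check of the arithmetic (constant bitrate assumed, 1 MB = 1000 kB for simplicity):

```python
# How much does dropping the bitrate actually save per hour of audio?

def mb_per_hour(kbps):
    """Megabytes per hour of audio at a constant bitrate."""
    return kbps * 3600 / 8 / 1000

print(mb_per_hour(64))                       # 28.8 MB/hour at 64 kbps MP3
print(round(mb_per_hour(64) - mb_per_hour(48), 1))  # ~7.2 MB/hour saved at 48 kbps
```

Roughly 7 MB per hour saved — less data than reading a couple of media-heavy web articles, which is why the savings rarely justify the compatibility cost.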
Ogg Vorbis and Opus offer quality advantages similar to AAC’s with (probably) no patent issues, and were necessary to provide audio options for free, open-source software and other contexts that aren’t compatible with patent licensing. But they’re not widely supported, limiting their useful applications.
Until a few weeks ago, there had never been an audio format that was small enough to be practical, widely supported by hardware and software, and unrestricted by patents. That absence forced difficult choices and needless friction upon the computing world. Now, at least for audio, the friction has officially ended: there’s finally a great choice without asterisks.
MP3 is supported by everything, everywhere, and is now patent-free. There has never been another audio format as widely supported as MP3, it’s good enough for almost anything, and now, over twenty years since it took the world by storm, it’s finally free.
For photos and other image types poorly suited to PNG, of course. ↩︎
Suppose a podcast debates switching from 64 kbps MP3 to 48 kbps AAC. That would only save about 7 MB per hour of content, which isn’t a meaningful amount of data for most people anymore (especially for podcasts, which are typically background-downloaded on Wi-Fi). Read the Engadget and Gizmodo articles, at 3.6 and 5.2 MB, respectively, and you’ve already spent more than that difference. Watch a 5-minute YouTube video at default quality, and you’ll blow through about three times as much. ↩︎
Much of modern Apple’s design philosophy is to relentlessly strip down most products to the bare minimum in certain areas as technological progress allows, following (and largely defining) the modern design fashion of nearly-unquestioned devotion to minimalism.
Saying “no” is the easiest path to what’s considered good design today: if something cannot be easily accommodated, or most people won’t complain too badly in its absence, just omit it.1
While minimalism is one aspect of one view of good design, it’s often overused, underconsidered, and misunderstood, resulting in products with surface-level appeal that don’t actually work very well because they were optimized for visual design and minimalism rather than overall real-world usefulness.2
As with every design decision, removals and omissions have major trade-offs that need to be very carefully considered. Today’s design culture of “no” often goes unquestioned or is assumed to be the best outcome, but minimalism should not be axiomatic in all areas. Often, less is more. But sometimes, it’s just less.
* * *
A lot went wrong with the 2013 Mac Pro. Some of the leading factors that led to its failure:
It was more expensive than its predecessor, while also removing major features that many of its customers still needed. (The 2016 MacBook Pro has the same problem.)
It was designed to accommodate exactly two GPUs with relatively low heat output each, but CPU-heavy users didn’t need the second GPU, and GPU-heavy users needed hotter-running GPUs (and often just one really hot one).3 So the only configuration it was offered in was either overspecced (and overpriced) or underpowered for most Mac Pro customers.
Less than a year after its release, it missed the desktop Retina revolution started by the 5K iMac, and it was beaten handily in single-threaded performance by a CPU generation that Apple never updated it to use.
The last factor was poor timing that could’ve been fixed with regular updates, but the first two are simply major design flaws by making the wrong choices for this product.
I understand why Apple went down that path, and it’s all in the context that led to my favorite line ever spoken in an Apple keynote, right after the unveiling of this Mac Pro during the 2013 WWDC keynote:
“Can’t innovate anymore, my ass!” —Phil Schiller
Back then, Jobs hadn’t been gone for long, Cook had a lot to prove, and the overwhelming press and analyst narrative had been that Apple couldn’t innovate anymore and Samsung was the king of innovation, whatever that meant. (It turned out to just mean big phones.)
This clearly bothered Apple, and it almost certainly influenced their overly aggressive decisions with the design of the 2013 Mac Pro.
But the Mac Pro is the worst place in the entire Apple product lineup to drastically remove capabilities and versatility (and then to not update it for over four years).4
Overly aggressive minimalism fails most spectacularly when there’s no clear consensus among customers on what can be removed. And if you ask Mac Pro customers what they need and want, there’s very little overlap:
Video creators need as many CPU cores as possible, one or two very fast GPUs with support for cutting-edge video output resolutions (like 8K today), PCIe capture, massive amounts of storage, and the most external peripheral bandwidth possible.
Audio creators need fast single-core CPU performance, low-latency PCIe/Thunderbolt interfaces, rock-solid USB buses, ECC RAM for stability, and reliable silence regardless of load. (Many also use the optical audio inputs and outputs, and would appreciate the return of the line-in jack.)
Photographers need tons of CPU cores, tons of storage, a lot of RAM, and the biggest and best single displays.
Software developers, which Federighi called out in the briefing this month as possibly the largest part of Apple’s “pro” audience, need tons of CPU cores, the fastest storage possible, tons of RAM, tons of USB ports, and multiple big displays, but hardly any GPU power — unless they’re developing games or VR, in which case, they need the most GPU power possible.
Mac gamers need a high-speed/low-core-count CPU, the best single gaming GPU possible, and VR hardware support.
Budget-conscious PC builders need as many PC-standard components and interfaces as possible to maximize potential for upgrades, repairs, and expansion down the road.
The requirements are all over the map, but most pro users seem to agree on the core principles of an ideal Mac Pro, none of which include size or minimalism:
More internal capacity is better.
Each component should have a reasonably priced base option, but offer the ability to configure up to the best technology on the market.
It needs to accommodate a wide variety of needs, some of which Apple won’t offer, and some of which may require future upgrades.
Or, to distill the requirements down to a single word: versatility.
Just as macOS’ versatility allows iOS to remain lightweight, the ability of the rest of the Mac lineup to be more aggressive, minimalist, and forward-looking depends on the Mac Pro to cover everyone whose needs don’t fit into them.5 The Mac Pro must be the catch-all at the high end: anytime someone says the iMac or MacBook Pro isn’t something enough for them, the solution should be the Mac Pro.
Try to narrow the Mac Pro’s focus any further than a big, versatile, modular box, and it stops serving the needs of big slices of its market, forcing valuable and influential pro users to either sacrifice major areas of their needs or leave the Mac platform entirely. That’s why so many previous-generation Mac Pro towers are still in use today (and highly demanded on eBay), which all have components that are at least seven years old: Apple hasn’t made a Mac Pro since then that has addressed their owners’ needs.
The more needs you try to accommodate, the closer you get to the previous Mac Pro tower: modernized components, outdated options like optical-drive bays removed, and a design that Apple can easily update every 1–2 years as new components become available.
The 2013 Mac Pro went in completely the wrong direction: satisfying only a narrow subset of pro users, with such tight tolerances that it couldn’t be updated.
There is no single design, no single set of trade-offs, that addresses a large set of pro users: they all want different things, and the only way to serve that with one product line is to have it be extremely versatile and offer a wide variety of configuration options. You can’t do that with a minimalist industrial-design indulgence like the 2013 Mac Pro.
I hope the Apple of 2017 (and beyond) has learned this, and is confident enough in its own abilities and innovation to stare down a minimalist design culture of “no” and ship a maximum viable product at the top of its lineup that says “yes” to everything we can throw at it.
Like open-plan offices, minimalism is also usually cheaper, making it an easy sell to the finance people. And like open-plan offices, its profitability contributes to its ubiquity, making it seem like a better or more universally applicable idea than it really is. ↩︎
I’m trying to make it through this entire post without any Steve Jobs quotes. It’s not easy. ↩︎
I’m not well-versed in the specifics of this, but I believe they also lost the huge bet they placed on AMD GPUs and OpenCL. High-end GPU computing has largely gone to NVIDIA and CUDA, and Apple seems reluctant to offer NVIDIA GPUs in their products for unknown reasons (likely cost and old beefs, which I suggest they find a way to get past). ↩︎
It hasn’t been four years yet, but by the time the next Mac Pro ships, it’ll likely be about 4.5. ↩︎
I suspect that reactions to the 2016 MacBook Pro would’ve been significantly more favorable if more pros were accustomed to the Mac Pro addressing their needs. Instead, the message from Apple last fall was clearly, “Wedge all of your high-end needs into these laptops because the desktop is dead,” putting a significantly higher burden on them that they couldn’t meet. ↩︎
Overcast 3 is now available, and it’s a huge update, mostly in the design and flow of the interface. I’ve been working on it since last summer, informed by over two years of testing, usage, and customer feedback.
I designed Overcast 1.0 in 2014 for iOS 7, and it was a product of its time: it used ultra-thin text and lines against stark, sharp-edged, full-screen white sheets and translucent blur panes, with much of the basic functionality behind hidden gestures. That fundamental design carried through every update until today.
My design goals for 3.0 were:
Update the style from iOS 7 to today: More affordances, more curves, thicker fonts, less translucency, more tactility. App-design fashion doesn’t stand still, and many iOS 7-era designs now look dated.
Bring all functionality into the open: Add visible controls and affordances to anything that was previously hard to find or behind a hidden gesture, such as table-cell swipe actions and actions that first require tapping corner “Edit” buttons.
You wouldn’t believe how many customers have asked me to add features that were already there, or couldn’t find basic functions like deleting episodes, because they weren’t apparent enough in the design.
Adapt to larger phones: Enlarge touch targets and make one-handed use faster and easier, even when only part of the screen is within easy reach. I also wanted to reduce the potential for (and effects of) mis-tapping, especially around the lower left and right screen edges, which I believe will become increasingly important as future iPhones presumably get thinner side bezels.
Overcast 1.0 was designed for the iPhone 5S. Some fundamentals needed to be revisited now that the vast majority of my customers are on 4.7- and 5.5-inch screens.
Now Playing screen, card metaphor
I began by revamping the fundamental structure between the rest of the app and the Now Playing screen with a new card metaphor, which slides up from the bottom instead of pushing in from the right:
Most popular music and podcast apps have adopted slide-up methods for their Now Playing screens (including the iOS 10 Music app), so this matches what people are already accustomed to elsewhere.
It can be smoothly pulled up from the miniplayer (or just tap it), and can be smoothly dismissed by swiping down anywhere on the Now Playing screen (or tapping the “down” chevron).1
This card metaphor is carried throughout all other modal screens in the app, and they all work the same way, speeding up common tasks and greatly enhancing one-handed use.
I also redesigned the Now Playing screen itself. The old one revealed episode notes in a hidden scroll zone — you’d need to swipe up on the artwork to reveal them, which relatively few people ever discovered.
The new Now Playing screen can be swiped horizontally to reveal effects on the left or episode notes on the right, and — critically — this is indicated by a standard “page dots” indicator below the artwork.2
The Effects and Playback popovers have been consolidated into a single effects pane:3
Along with a tightening of the seek-back/forward tap zones, this moved critical controls away from the lower-left and lower-right screen edges, which are often mis-tapped when handling large phones.
Playlists, episode info, and podcast screens
Playlists have been manually reorderable since 1.0, but many iOS users never tap “Edit” buttons in navigation bars, so many people never even knew they could do it. Even for those who knew they could reorder episodes, the two-step process was cumbersome.
The new playlist screen has full-time reordering handles for faster access and better discoverability:
The old popover lacked contrast from its surroundings, had limited space, and required carefully tapping outside its bounds to dismiss, which was often clumsy when one-handed.
The new episode-info card behaves like all other Overcast 3 cards: slides up quickly, then easily dismissed by swiping down anywhere (or inward from the left edge). It can also be previewed with 3D Touch and swiped up for quick actions.
Playing, deleting, queueing
Previously, tapping an episode in the list would immediately begin playback. That’s nice when it’s what you want, but accidental playback was always an issue: it was too easy to start playing something I was only trying to rearrange, delete, or see info about.
A lot of people also never swipe table cells (or tap Edit buttons), and therefore never find the Delete button. I’ve gotten literally hundreds of emails since Overcast 1.0’s launch asking how to delete episodes without playing them.
To address these, I’ve switched to a two-stage method: tap an episode to select it, which shows various action buttons, and tap the newly revealed Play button to play it.
I expect this to be the most controversial change in Overcast 3, since it does slow down playback. But I’ve found that it works far better and more consistently: most people accustomed to the old way adjust within a couple of days, and the app becomes far more reliable and discoverable for everyone.
It also gave me a place to put a new button: Queue.
Some kind of “Up Next”-style fast queue management has been one of Overcast’s most-requested features since day one. It took me a long time to come around to the idea because I thought my playlists served the same role. And they mostly did, but they needed two big changes:
Easy access from around the interface to quickly add episodes to the queue.
Overcast 3’s new option for manual playlists, instead of just “smart” playlists, matching iTunes’ definitions: manual playlists only ever contain things you add explicitly to them, while “smart” playlists (previously the only kind in Overcast) are a set of rules that automatically include or exclude episodes. Many people want their queue/up-next to be a manual playlist.
The new queue features are simply Overcast playlists with special placement in the interface. If you already have a playlist named “Queue” or the default “All Episodes”, that’s used, and if not, it’s created as necessary. These show up everywhere and have full functionality just like every other playlist.
The podcast screen always had a huge design flaw. Quick: in the old screen, how do you reverse the sort order of the episodes so it plays oldest to newest?
There’s no standard for this on iOS, so I copied the desktop/web standard of a triangle indicator on the header that can be tapped to reverse the direction. Nobody ever found this, so I’ve added a clearly labeled option under each podcast’s Settings as well.
The old podcast-directory screen was filled with annoyances: podcasts you’d already subscribed to would be dimmed out and show an annoying alert if tapped, you could only add one podcast at a time, etc.
Now, everything’s visible from everywhere, the same actions show up wherever an episode is listed, and you can add multiple podcasts without having to go back into the directory for each one. (Finally.) And, of course, it’s a card, so it’s easy to dismiss by just dragging down.
Some other new stuff:
An all-new, much faster Watch app, finally natively running on watchOS 3! (The old one was watchOS 1. Really.)
And even some Swift! (This is why the app has grown from 7 MB to about 30 MB: since Swift is still young, all Swift apps still come with their own custom copy of the Swift libraries.)
Much nicer ads
When my patronage-only model effectively failed and I added Google ads last September, I had to swallow two bitter pills:
Bad ads: I had little control over the advertisers or the ad content, which could be offensive or reflect badly on my app without my knowledge. I thought I could set adequate limits, but in practice, it wasn’t good enough.
Google provides an extensive control panel that lets you block certain ad categories. Most are clearly placed in Sensitive Categories and were easily disabled before launch, like gambling, drugs, etc., but I kept hearing from customers who’d seen other ads that offended both of us. For instance, at least one listener was shown an ad for a gun, which I never even considered would be allowed with all of the “sensitive” categories turned off. But Guns & Firearms isn’t in Sensitive Categories next to drugs and gambling — it’s in Business & Industrial > Security Equipment & Services.
So I kept blocking more categories, but it was never enough to result in ads that were consistently acceptable to me.
Other ad networks exist, but they tend to be even worse, or they don’t make enough money, or both.
Mystery code in my app: I had to embed the closed-source Google ad library into my app, and accept all of its uncomfortable requirements (Advertising Identifier, permission dialogs to use things like Bluetooth or Contacts if an advertiser wanted it, etc.).
This made me a little uneasy in September, but then November happened, and by late January, I was no longer comfortable embedding unnecessary closed-source code from a U.S. advertising company in my app.
I decided to do whatever it took to drop both the Google ads and Fabric, the crash-reporting and analytics service that was recently acquired by Google.
No closed-source code will be embedded in Overcast anymore,4 and I won’t use any more third-party analytics services. I’m fairly confident that Apple has my back if a government pressures them to violate their customers’ rights and privacy, but it’s wise to minimize the number of companies that I’m making that assumption about.
Fortunately, the Google ads made relatively little — about 90% of Overcast’s revenue still comes from paid subscriptions, which are doing better now. The presence of ads for non-subscribers is currently more important than the ads themselves, so I can replace them with pretty much anything. So I rolled my own tasteful in-house ads with class-leading privacy, which show in the Now Playing and Add Podcast screens:
Now Playing can show ads for websites, podcasts, apps, or Overcast Premium, while the Add Podcast screen will only ever show ads for podcasts. (Want to buy an ad? Get in touch.)
That’s right, ads for podcasts. What better place to advertise a podcast successfully than in a podcast player? Tap one, and you get the standard Overcast subscription screen with a complete episode list and one-touch subscribing.
I hope I’ve succeeded in my design goals, and I hope you enjoy it.
Don’t worry, edge-swipers: you can also dismiss it by dragging in from the left screen edge, just like the old way, to fit your established muscle memory. ↩︎
Well, almost standard. I made my own so I could improve some of the built-in one’s behaviors and make my own custom little dot icons for Effects and Info. ↩︎
The continuous-play option previously in Playback, labeled “When Episode Ends: Play Next/Stop”, was frequently missed, misunderstood, or invoked accidentally. Many people have asked where the continuous-play option was, and many more asked why their app was suddenly broken and wouldn’t automatically advance between episodes. It’s now just a “Continuous Play” switch in Settings. ↩︎
Unfortunately, this precludes Chromecast support. I’d gladly reconsider if Google documents a way for apps to send audio to Chromecast devices without embedding their closed-source library. ↩︎
One of the lessons from yesterday’s GitLab.com “database incident” is that most of their database backups weren’t being tested, and when they needed a restore, they discovered that most of their backup methods hadn’t been working.
Untested backup methods that turn out to be missing or broken are extremely common. I can’t fault them much because it’s a very easy mistake to make: most backups, by nature, never need to be restored from, so you never realize if something changes and they stop working… until it’s too late.
The solution is to frequently and automatically test backups by:
Regularly downloading the latest backup from S3 (or wherever) and performing a full restore onto a clean server.
Testing its validity in a way that a human is sure to notice if it stops working properly.
The first part sounds hard, but isn’t. For Overcast, I run an inexpensive Linode server devoted to automatically fetching, installing, and testing the latest backup every day and emailing me a report.1
The emailed report contains query results from multiple database tables that change regularly and are easy for me to mentally verify as I read it every day, such as the number of users and Premium subscribers, how long ago the latest user signed up, and the most recent episode titles of my own podcasts and other popular shows I listen to.
Automated backup testing isn’t difficult — it’s one simple shell script, called by cron every night, piping its results to mail. If you can run a server, you can do this.
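A rough sketch of such a nightly script, under assumed names: the S3 bucket, database, table, and two-day freshness window below are illustrative, not Overcast’s actual setup. The fetch and restore steps are stubbed out as comments; the runnable part is the freshness guard that turns a silently stale backup into a loud failure.

```shell
#!/bin/sh
# Nightly backup test, meant to be run from cron with output piped to mail,
# e.g. (hypothetical paths/addresses):
#   0 4 * * * /usr/local/bin/backup-test.sh 2>&1 | mail -s "Backup test" you@example.com

# 1. Fetch the latest dump and restore it onto a clean database (stubbed):
#    aws s3 cp s3://example-backups/latest.sql.gz /tmp/latest.sql.gz
#    gunzip -c /tmp/latest.sql.gz | mysql restore_test
# 2. Emit human-verifiable numbers for the emailed report:
#    mysql -N restore_test -e 'SELECT COUNT(*), MAX(created_at) FROM users'

# A backup that restores cleanly can still be stale. This guard fails if the
# newest row is older than two days, since a healthy daily backup should
# always contain yesterday's signups.
check_fresh() {
    newest_epoch="$1"                       # unix time of the newest user row
    cutoff=$(( $(date +%s) - 2 * 86400 ))   # two days ago
    [ "$newest_epoch" -ge "$cutoff" ]
}

check_fresh "$(date +%s)" && echo "backup looks fresh"
check_fresh 0 || echo "ALERT: backup is stale"
```

In a real script, a `check_fresh` failure would trigger the alarming-subject email rather than a plain echo, so the problem surfaces even if the routine daily report goes unread.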
The second part is the trick, though: it’d be too easy to start paying less attention to those daily emails over time, and if they stopped arriving, I might not notice for a while.
My solution is to tie backup tests to a task I do every week: stats collection.
I keep a running spreadsheet with pretty graphs to monitor the health and growth of my business. I update it once a week, and — critically — I pull almost all of the stats I need from the backup emails.
So if the backup ever stops working in a way that the script doesn’t detect or I fail to notice from the daily reports, I’ll still find out pretty quickly, because it’ll impact this other thing I always do that’s a high priority for me and involves important business and money things.
If the script detects any failures itself, it emails me with a very alarming subject line that a Mail.app rule highlights in red and shows an alert for. ↩︎
The quarterly results are in and Apple’s doing fine overall, but the iPad really isn’t, with another year-over-year decrease in sales.
Apple and commentators can keep saying the iPad is “the future of computing,” and it might still be. But we’re starting its seventh year in a few months, and sales peaked three years ago.
What if the iPad isn’t the future of computing?
What if, like so much in technology, it’s mostly just additive, rather than largely replacing PCs and Macs? And what if a cooling-fad effect set in as initial enthusiasm wore off and customers came to the same conclusion?
The moving-average unit sales graph would look something like this, right?
We appreciate the opportunity to work with Consumer Reports over the holidays to understand their battery test results. We learned that when testing battery life on Mac notebooks, Consumer Reports uses a hidden Safari setting for developing web sites which turns off the browser cache. This is not a setting used by customers and does not reflect real-world usage. Their use of this developer setting also triggered an obscure and intermittent bug reloading icons which created inconsistent results in their lab. After we asked Consumer Reports to run the same test using normal user settings, they told us their MacBook Pro systems consistently delivered the expected battery life. We have also fixed the bug uncovered in this test. This is the best pro notebook we’ve ever made, we respect Consumer Reports and we’re glad they decided to revisit their findings on the MacBook Pro.
Apple’s tone and framing here, and in most recent PR statements where they’re on the defensive, rubs me the wrong way.
Consumer Reports has a spotty history of calling Apple out on product flaws. Their reports are usually written overly sensationally, and they often overstate the importance of minor issues.
But almost every time, the problem they’re reporting is real — especially in retrospect, after everyone’s defensiveness has passed and we’ve lived with the products for a while. It’s just debatable how big of a deal it is in practice.
The iPhone 4 antenna design really was flawed. The iPad 3 really did get uncomfortably warm. And the 2016 MacBook Pro really did have poor, inconsistent battery life in their test.
Apple’s framing here is almost Trumpian, evading responsibility for the real problem — Apple’s bug — by attempting to insult the test (“does not reflect real-world usage”), discredit and imply malice by Consumer Reports (“a hidden setting”), and disregard the bug as irrelevant (“obscure and intermittent bug”).
It reframes the story to be about Consumer Reports’ own failings and Apple helping them see the right way forward.
But disabling the browser cache during a battery test to make results more consistent is reasonable, Apple’s browser offers that feature, and it’s neither very well hidden nor unused.1
Nothing about a battery-life test truly reflects “real-world usage.” Battery tests are approximations, designed to mimic the most common tasks but in an artificial, automated, repetitive way for hours on end to get reproducible results.
Real-world usage is so varied, with such wildly different software and usage patterns, that nobody’s battery test is relevant to much of anything except relative comparisons of its own results. A test that lasts 5 hours on a 13-inch and 6 hours on a 15-inch tells you that the 15-inch probably has better battery life than the 13-inch, but that’s about it — it certainly doesn’t mean that your 15-inch will get 6 hours the way you use it.
Reloading web pages without a browser cache is no more or less valid than whatever Apple uses to tell me that my laptop will get 10 hours of battery life when my actual workload usually gets about half of that.
The real story here is that Consumer Reports did get very poor and inconsistent results from their battery test, which was a reasonable and valid test, due to a real bug in Apple’s web browser.
With the bug now fixed (in beta, at least), the MacBook Pros deserve a retest, and Consumer Reports is conducting one.
But their previous results were real, and Apple’s bug was to blame.
There are a lot of web developers out there, and I bet a lot of them use MacBook Pros. Power users, geeks, and developers are Apple’s customers, too. ↩︎
It was a cold Tuesday afternoon in New York. My hair was still brown. We were supposed to be working, but even David had to suspend his usual workaholism for a few minutes in awe of what we were seeing.
Everything about the iPhone seemed impossible to the technology world of early 2007.
“You can’t make a good phone without buttons.” “You can’t fit a desktop-class OS on a phone.” “There’s no way that’s a full-blown web browser.” “That has to cost a thousand dollars.”
Yet over the course of an hour, Steve destroyed every rule we thought we knew.
Not only was it truly mind-blowing at the time, but in retrospect, so much of modern computing was invented for the first iPhone and revealed to the world in that hour. Just watch the software demos: most modern UI mechanics and behaviors, large and small, began that day.
When it shipped six months later, it was possibly the best 1.0 in tech history, followed by a decade of relentless hardware and software improvements with the highest success rate and fastest advancement of any product line I’ve ever seen.
I’ve seen a lot of major product launches and technology changes in my life and career so far, but nothing else I’ve seen has ever come close to the surprise, magic, and magnitude of the first iPhone, and I don’t expect it to be surpassed in my lifetime.
This was before Apple events were streamed, so “watching it live” really meant watching liveblog transcripts with occasional photos from people who were there. ↩︎