From Justin’s more general post about shipping incomplete platforms:
When Microsoft released Windows Phone 7 last November, the reviews generally said it was a good first effort but many features were missing that smartphone users have come to expect.
Initially, Windows Phone 7 looked surprisingly capable. It might have become the next major smartphone competitor to iOS and Android.
But, as Justin notes, it was incomplete. It was a very good 1.0 by 2008’s standards, but it was released in 2010, so it needed regular updates to be competitive.
This depended on WP7’s initial quality not being just a fluke, but an indication of a truly effective Microsoft team and product channel, like the Xbox division. I thought time would tell which of those was the case.
And unfortunately, time has told us so far that WP7’s initial promise was probably just a fluke. For whatever reasons (it doesn’t really matter why), I doubt we’ll be seeing any more success or noteworthy innovation from the Windows Phone 7 platform.
RIM is very good at making the same phone over and over again. That worked for a while, since it happened to fit a large and profitable market very well.
But then the iPhone changed the game. Everyone in the business scrambled to overhaul their platforms and has since dramatically improved their products. Everyone, that is, except RIM.
Many current iPhone and Android-phone users previously owned BlackBerry phones. RIM has no trouble losing customers. But what are they doing to make any of them return, or to attract new customers as they choose their first smartphone?
What’s a normal dose of chocolate covered espresso beans? 30-40?
Actually, chocolate-covered espresso1 beans aren’t as highly caffeinated, relative to most coffee, as people think.
Disclaimer: I am not a doctor or a nutritionist and I have absolutely no qualifications to make any of these claims. The only research I’ve done on this topic is from the internet and my kitchen scale. Do not base any decisions on this blog post, ever. Also, these numbers are very imprecise, since they all vary a lot depending on how much coffee you use and how you make it. But this is a good ballpark.
A typical 8-ounce mug of coffee, if made properly (which most people would consider too strong), is brewed from about 12 grams of beans. That’s about 100 beans, and it will contain about 120 mg of caffeine, yielding approximately 1.2 mg of caffeine per coffee bean.2
Dark chocolate is usually about 0.8 mg of caffeine per gram.
Chocolate-covered espresso beans usually contain 1-3 grams of dark chocolate and one coffee bean. So they’re likely to contain approximately 2-4 mg of caffeine each. It’s worth noting here that, depending on its thickness, the chocolate might contain more caffeine than the coffee bean.
Since the typical mug3 of coffee is about 120 mg of caffeine, it’s roughly equivalent to the caffeine levels of 30 of these theoretical chocolate-covered beans. (The buzz you get from eating a bunch of them is also partially attributable to the sugar in the chocolate.)
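If you want to check that arithmetic yourself, here’s the whole back-of-the-envelope calculation as a few lines of Python. Every number in it is one of the rough estimates above, not a measurement:

```python
# Back-of-the-envelope caffeine math, using the rough estimates above.
# All of these figures vary a lot with the beans, the chocolate, and the brew.

mug_caffeine_mg = 120            # typical properly-brewed 8-ounce mug
beans_per_mug = 100              # roughly 12 grams of beans
bean_caffeine_mg = mug_caffeine_mg / beans_per_mug    # ~1.2 mg per coffee bean

chocolate_mg_per_g = 0.8         # dark chocolate
chocolate_grams = 3              # a fairly thick coating (thin coatings are ~1 g)

covered_bean_mg = bean_caffeine_mg + chocolate_grams * chocolate_mg_per_g
print(f"~{covered_bean_mg:.1f} mg of caffeine per covered bean")          # ~3.6 mg, i.e. the "2-4 mg" range
print(f"~{mug_caffeine_mg / covered_bean_mg:.0f} covered beans per mug")  # ~33, call it 30
```

Thinner chocolate coatings push the per-bean number down and the beans-per-mug number up, which is why these are ballpark figures at best.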
If you can handle the caffeine from that mug of coffee, you should expect a similar-strength buzz — albeit with a lot more sugar, which I certainly wouldn’t advise — from about 30 chocolate-covered beans.
So Josh actually, and probably inadvertently, nailed it.
There’s no such thing as an “espresso bean” — beans labeled as such are usually just coffee beans, of unspecified origin, with a dark roast. ↩︎
Different brewing methods can have higher or lower caffeine-extraction ratios — how much of the caffeine is actually released into the coffee — but, in practice, since caffeine is highly water-soluble, most drip brews extract nearly all of it fairly quickly. So it’s probably safe to assume that drinking drip-brewed coffee will yield approximately the same amount of caffeine as if you had eaten the amount of beans that it was made from. ↩︎
If you get your coffee from Starbucks, your typical cup might contain much more coffee. A “tall” (12-ounce) regular coffee there, for instance, is usually at least 260 mg of caffeine, partially because the cups are so large and partially because they use a very high beans-to-water ratio (and very dark roasts) to achieve (mediocre) flavor from stale beans. I can’t drink Starbucks’ coffee — even if I could get past the taste — because it’s far too much caffeine for me to handle. ↩︎
In this week’s podcast, we discussed work-life balance and time management for self-employed people, APIs and sync-services for apps, long-term economics of one-time purchases with ongoing costs, and choosing a new Mac for production work.
The new iMacs released yesterday have some extremely impressive CPUs at the high end. The Sandy Bridge architecture in these iMacs, and the recent MacBook Pros, is so good that they’re competitive with the Mac Pro in some benchmarks.
So why buy a Mac Pro instead of an iMac or MacBook Pro?
I use a 2008 Mac Pro that’s still doing quite well, although now that its AppleCare has expired, I’ve started to think about what I’d get if an expensive component died and I needed to replace it.
Every Mac Pro revision after its introduction in 2006 has raised the prices of the midrange configurations. Mac Pros are now so expensive that almost nobody like me — geeks who like big, fast, expandable desktops but don’t do many long-running CPU-bound tasks, like video processing, for a living — can afford or justify them. Sure, I’ve gotten three solid years of use (so far) out of this one and it’s still doing fine, but it was also only $2800 for the mid-speed dual-socket model. (The similarly positioned model in today’s lineup is $5000 and is approximately 2.5 times as fast, which, while impressive, isn’t as far ahead as I’d like it to be for that price.)
In 2008, and for a long time before that, three major factors severely inhibited the performance and long-term usefulness of laptops (and iMacs, since they used many laptop-class components):
Laptop hard drives were extremely slow relative to desktop drives, and most laptops could only hold one drive.
Laptop CPUs usually had significantly lower performance than desktop CPUs.
No laptop components, except RAM and the hard drive, could be upgraded after purchase. And RAM often maxed out too low for power users.
Some things have changed since then. For one thing, laptop CPUs are now awesome. But the biggest change, by far, is something that 2008-me never thought would be economical and practical: SSDs.
The hard drive — usually the biggest bottleneck in personal computers, and formerly the biggest performance gap between laptops and desktops — can now be replaced at sane prices with an SSD that’s hundreds of times faster.1 The SSD is the most important performance increase to happen to personal computing in a very long time. And, notably, desktops and laptops use the same SSDs.
So the performance gap between desktops and laptops, and between Mac Pros and iMacs, has noticeably narrowed.
I have a Mac Pro and Tiff has a 24” iMac. … Now that both of our computers are nearly three years old, mine’s still doing fine for the foreseeable future, but we’re ready to throw Tiff’s out the window. …
While the Mac Pro costs a lot more up front, high-performance users also get a lot more value and versatility over its lifespan, which is likely to be much longer and end much more gracefully.
Some of these points have already become less relevant, but most are still true. The iMac still has very limited internal expansion compared to the Mac Pro. It’s still one integrated unit, so the display comes and goes with the computer. Most people still can’t (or won’t) upgrade or replace its hard drive or SSD because they’re so difficult and dangerous to access.
But I realized today that I’ve been thinking about the iMac the wrong way.
Since I’ve also kept buying laptops alongside the Mac Pro, the actual “cost” of the Mac Pro should include those. And I might not be doing so well there. Most of my “good value over time” arguments for the Mac Pro only apply if it’s your only computer, but for me (and many geeks like me), it’s not.2
Laptops have all of the same limitations as iMacs.3 We’re fine with that, because laptops provide a lot of value in other ways, and they cost a lot less than Mac Pros.
I should really think of the iMac, therefore, more like a laptop — but one that trades its portability and small size for a much larger screen and slightly better hardware. And that doesn’t sound like a very appealing tradeoff, since you can plug laptops into external monitors, and since many iMac owners will still want to own a laptop for portability.
It’s worth thinking about this if you’re considering an iMac purchase: will you still want a laptop? If so, will you be better served by just buying a fast laptop and connecting an external keyboard, mouse, and monitor to it at your desk?
My answer would be “absolutely”, so an iMac definitely isn’t right for me. The question for me, therefore, isn’t “Mac Pro or iMac?”, but rather, “Mac Pro and a laptop, or just a MacBook Pro?”
Two factors complicate this decision:
The most desirable laptops for many people — the MacBook Airs — are too slow and low-capacity for many of the geeks who are still reading this post to use as their only computers.4
The iPad might remove the need for a laptop, at least in some circumstances, for many people.
So, right as the MacBook Pro has become good enough to replace a Mac Pro for almost all of us and let us consolidate into one computer, Apple gives us good reasons to want either two or zero laptops.
I’m glad all of my computers are working, because trying to decide what to get right now would drive me crazy.
SSDs really are hundreds of times faster than hard drives in random small reads and writes, the most real-world-representative usage pattern for a personal computer that’s being slowed down by its hard drive. ↩︎
Having multiple computers has a few other significant costs, such as the complexity and aggravation of synchronization. Multi-computer use does have a major advantage, though: fault tolerance. If one needs to be serviced for a few days, you can usually get a lot of your work done on the other one. ↩︎
Actually, laptops are usually a bit more expandable over time than iMacs for one key reason: every Apple laptop except the MacBook Air has a user-serviceable hard drive. ↩︎
Many of you have probably already composed an email to me about how you need a very small, low-powered laptop (like an Air) because you travel all the time, but it’s too slow to be your only computer. That’s perfectly valid.
In my case, I hardly ever use my laptop on planes (even though I always think I will), so I don’t really need to worry about getting one small enough to fit on a tray table with the seat in front of me reclined. (Only the 11” Air really works there.) So I’m perfectly fine with a 13” or 15” laptop. And I think it’s safe to say that this applies to most people. ↩︎
Watts Martin on TextMate 2’s increasingly apparent abandonment:
We first started hearing about TextMate 2 in early 2006, and as people will always respond if you point out that it’s now 2011, the author never gave an ETA other than “after Leopard.” All well and good, but if your dad walks out one Thanksgiving saying he’ll be back “sometime after Christmas” and it’s now five years later, when your little sister tells you “he didn’t say how long after Christmas” she’s maybe not facing reality. If your dad pops up to make a blog post once a year saying he’s still working on it, he is just possibly not facing reality, either.
He goes on to intelligently survey the old and new alternatives that TextMate users have, concluding:
First off: if you are a Mac user and compatibility with TextMate is an absolute must-have, let me ask you two questions. Is TextMate 1.5 still working for you? Can you keep living with its limitations? If you answered both those questions “yes,” our work is done here. Go in peace.
Otherwise, I’m going to make a radical prescription. … Your decision is between three editors: BBEdit, MacVim and Cocoa Emacs.
I’m still a TextMate user, but I know it’s only a matter of time before it’s effectively abandoned, it loses compatibility (or gains a major annoyance) with a future Mac OS update, and I’m forced to switch to something else.
If I had to switch today, knowing I’d be facing a steep learning curve regardless of where I went, I’d switch to BBEdit — not because of any particular features, but because I’m very confident based on what I see from Bare Bones and their customers that it’s already awesome and it’s likely to be regularly updated for a very long time.
Far too many apps seek your public praise with in-app alerts asking you to take a moment to review the app on the App Store. Of course, doing so interrupts your flow and would require that you exit the app completely. If I like an app enough—or dislike one enough—I’ll write a review without further prompting.
Actually, you probably won’t, but it doesn’t matter. More reviews and higher ratings1 can drive sales, but a highly satisfied customer base drives a lot more.
When someone has spent $4.99 for my app, they’re entitled to a hassle-free experience. I wouldn’t feel right shoving a dialog box in their face a few days later asking for a time-consuming favor when they’re trying to read.
To me, once you’ve paid that $4.99, you get a first-class, luxury experience. I want you to feel great about having bought the app. And every time an update comes out that adds a bunch of features at no additional charge, I want you to feel like you can’t believe how much more value I’m giving you.
People who feel that great about having bought the app are the ones who tell their friends, or the internet public, to go buy it for themselves. And that’s far better for my sales than any App Store review will ever be. If you’re searching for the app by name because you heard it was great, you’re probably already going to buy it, and it doesn’t really matter what someone says below the screenshots.2
Creating more of those devoted customers by giving them a great product is a far better investment in your app’s future than annoying and interrupting them with a dialog that makes you appear cheap and desperate.
I have to wonder how good the reviews tend to be when they’re prompted by an annoyance. Do a lot of people who otherwise wouldn’t have bothered really go leave positive reviews when they see these dialogs? ↩︎
On the iPad 2’s launch day, due to an iTunes Connect glitch, there were no screenshots in Instapaper’s listing in the App Store, and the top few reviews were horrible because of a minor bug in the previous version. Yet it was one of my highest sales days ever, because even a $4.99 app with no screenshots and bad reviews is appealing if your iDevice-owning friend has been raving about it to you for months. ↩︎
The right laptop to get is the one that will be able to serve most of your needs, most of the time, with the fewest compromises on factors that matter to you.
One of the core tenets of happy computing is to have a holistic view of your overall intended usage that can help you distinguish between “needs” and simply “nice to haves”.
The iPod Classic still exists for people who “need” to bring their entire music collections with them everywhere. (Some people really need that, but most Classic buyers simply “need” it.) They can do that, but it comes with huge tradeoffs: an outdated, limited design, an often-sluggish interface, and missing out on the much broader usefulness of the iPod Touch. And many Classic buyers would actually be much happier with a 32 GB Touch if they were willing to budge on their all-music-all-the-time “need”.
Many people have a similar issue identifying their true needs when choosing cars. They often choose based on remote “what-if” scenarios that they’ll almost never need — e.g. “I might need to haul furniture in here someday” — and get a big, unwieldy, expensive vehicle that grossly mismatches the way they actually use it the vast majority of the time. Or they go in the other direction and get an impractical, limited car like a two-seater — “I’ll just only ever have one person with me” — and then need to buy a second car because they so frequently exceed the limits of that one.1
Almost everyone can point to a handful of situations in which a given Apple laptop is impossible, impractical, or frustrating to use for a particular task. Some popular examples:
The lack of dedicated GPUs on the Airs and 13” MacBook Pro makes them run games and some pro media-processing apps very poorly, if they can run them at all.
The Air’s CPU is uncomfortably slow for heavy media processing.
None of them except the 11” Air can reliably be used on an airplane tray table if the person in front of you reclines their seat.
The Air’s lack of Firewire makes many media-production scenarios difficult or impossible. Some such scenarios even require ExpressCard, which is now only available on the 17” MacBook Pro.
It’s important to distinguish which of these types of needs, for you, are really “needs” or are just “I might want to do that someday, although realistically I probably won’t want to do it regularly within the lifespan of this laptop.” For instance, my current laptop needs are mostly satisfied by an Air because I have a Mac Pro at home for anything computationally intensive, and I know that the Air is mostly for lightweight tasks like email, web browsing, and writing. (But I hate having multiple systems, because sync sucks.)
Most people give far too much consideration to size and weight. There are situations in which this matters, such as the tray-table example, but evaluate your own situation before deciding based on that: How often do you travel on planes, how much time during the flight would you realistically be working on your laptop, and how bad would it be if you couldn’t?
Consider how “portable” you really need your laptop to be. Are you going to be carrying it significant distances every day? Or is it going to be sitting on one or two desks most of the time?
The laptops have huge differences in footprints and thicknesses. If you truly have a size restriction, that’s generally pretty inflexible. But it’s also rare.
Weight is another matter, since most people don’t carry the bare laptop — they carry it in a bag with other items. Consider how you carry it: how heavy is the bag? (Pack the bag normally and weigh it. You’ll be surprised how heavy it is.2) I once found that my everyday backpack was about 15 pounds, so whether I chose the 13” Air (3.0 lbs.) or the 15” matte MacBook Pro (5.2 lbs.) didn’t really matter. And when I started carrying a lighter bag with almost nothing in it, I found that I couldn’t really tell the difference between the 15” and the Air, since the entire bag weighed very little compared to the old one regardless of which laptop was in it.
Carry weight can be reduced with a conscious effort. Do you really need to bring the power brick back and forth every day, or can you just buy a second power adapter and keep it at work? Do you really need to carry that large paper notebook all the time, or would a smaller one suit your needs?
Perceived weight reductions are also powerful. Do you currently use a messenger bag or briefcase? (That’s probably horrible for your back if it weighs more than a few pounds.) Are you willing to try a backpack? Nice ones do exist, and if you’re carrying your laptop in one, you almost definitely won’t notice small weight differences.
Realistic evaluation like this can lead you to conclude that you don’t need a big, fast laptop because you don’t need its power, and you’d be happier overall with an ultralight like the 11” Air. Or it can make you realize that the larger3 laptops like the 15” aren’t that much less portable in your life, and you need their advantages often enough that the smaller ones would frustrate you.
I’ve been able to evaluate my needs (and “needs”) over time and decide that my next computer setup probably shouldn’t be a Mac Pro and an Air. I’d be served better most of the time by a decked-out 15” MacBook Pro. (Alex Payne was right.) And if an airplane passenger reclines the seat in front of me far enough that I can’t open the laptop’s lid fully, I’ll just use my iPad.
Sorry for the car analogy. For whatever it’s worth, I disagree with Ben’s classification, because I don’t think it’s possible to span a good car analogy across laptops and desktops. Sticking within the realm of laptops, I’d say the 11” Air is the Mazda Miata, the 13” Air is the Mini Cooper, the 13” Pro is the Audi A3, the 15” is the BMW 3-series, the 17” is the X5, and the 13” plastibook is the Nissan Cube.
(Most of those being luxury or premium models isn’t an accident.) ↩︎
Want to be even more sad about your bag’s total weight? Weigh it empty. Most bags, themselves, are much heavier than they need to be. ↩︎
We Mac geeks often forget how well-off we are. Ask a PC user how thin and light their high-specced 15” laptop is. The 15” MacBook Pro is thin and light relative to most laptops in use today. ↩︎
Firewire and USB operate separately from the computer’s internal I/O bus (PCI in older computers, PCI-Express today). To communicate with the system, Firewire and USB devices need to shove their data through “bridge” controllers (effectively translators) that add latency and reduce performance.
Thunderbolt has great promise: unlike Firewire and USB, it’s effectively extending the PCI-Express bus over the cable directly to external devices. It should be much simpler and much faster. It’s like ExpressCard slots, but over a cable.
The simplicity and raw speed of the communication path means that it can support Thunderbolt-to-Gigabit or Thunderbolt-to-Firewire adapters. That’s enough to overcome one of the major hurdles to adopting the MacBook Air’s wedge shape in a future 15” MacBook Pro, as I speculated about in January:
It would be very difficult to fit Ethernet and Firewire ports into the sides of a wedge-shaped case. … I don’t think Apple would make proprietary tiny ports with dongles for Ethernet or Firewire, and neither can be operated well through USB, so to adopt the wedge, they’d probably need to drop both.
Now we have a better solution: the next 15” MacBook Pro can drop its Ethernet and Firewire ports and just offer optional Thunderbolt adapters.1 These probably won’t be cheap for a while, but even if they’re $50 or $75 each, Apple would be able to safely drop the big ports from the 15” without much complaining from high-end buyers.
And since it’s effectively impossible for pre-Nehalem CPUs to support Thunderbolt safely, we can deduce that Apple probably won’t bring Thunderbolt to any product lines that currently ship with Core 2 Duo CPUs until they can update them with the Core i3/i5/i7. It only makes sense to ship a Thunderbolt-equipped MacBook Air (or Mac Mini, or plastic MacBook) when a Sandy Bridge CPU upgrade is ready to ship with it.
So the next MacBook Air, presumably with both Thunderbolt and much faster CPUs, is going to be significantly more awesome. And that’s saying a lot, because the current Air is pretty great. And I can’t wait to see what happens to the 15” line next.
I’ve recently written a lot about laptops closing the gaps with desktops, and that’s saying a lot, because I love the Mac Pro. But Thunderbolt can bring even more of the Mac Pro’s former advantages into the laptop world.
We just need to wait and see how good the Thunderbolt devices end up being, once they’re eventually available. But I bet 2012 will be a better-than-usual year to be in the market for a new laptop.
Apple could similarly drop USB in favor of Thunderbolt-to-USB adapters, but it’s probably not worth doing so for a long time, since most customers still use a handful of USB devices but very few use Firewire or Ethernet. ↩︎
On this week’s podcast: “good” and “bad” programming languages (related to Hypercritical #15), code-writing styles, intellectual-property infringement, and third-party RAM in Macs.
Standard hard drives, without Apple’s custom firmware, don’t provide temperature monitoring in the way that the new iMacs require:
For the main 3.5″ SATA hard drive bay in the new 2011 [iMacs], Apple has altered the SATA power connector itself […]. Hard drive temperature control is regulated by a combination of this cable and Apple proprietary firmware on the hard drive itself. From our testing, we’ve found that removing this drive from the system, or even from that bay itself, causes the machine’s hard drive fans to spin at maximum speed and replacing the drive with any non-Apple original drive will result in the iMac failing the Apple Hardware Test (AHT).
In examining the 2011 27″ iMac’s viability for our Turnkey Upgrade Service, every workaround we’ve tried thus far to allow us to upgrade the main bay factory hard drive still resulted in spinning fans and an Apple Hardware Test failure. We swapped the main drive out (in this case a Western Digital Black WD1001FALS) with the exact same model drive from our inventory which resulted in a failure. We’ve installed our Mercury Pro 6G SSD in that bay, it too results in ludicrous speed engaged fans and an AHT failure. In short, the Apple-branded main hard drive cannot be moved, removed or replaced.
(Then the piece turns into a weak political argument that’s probably better suited somewhere else.)
I don’t think this is a big deal: iMac hard drives are not considered user-serviceable at all, since accessing the bay requires the extremely risky removal of the screen (and the dust-free reattachment afterward). This is something that Apple only wants authorized service centers to do, understandably, because otherwise they’d have a bunch of people botching the repair and then complaining to unfortunate Genius Bar workers that their now-much-more-broken iMac should be replaced at Apple’s expense.
Since Apple doesn’t need to support non-Apple-branded hard drives in this machine, they can take shortcuts to slim down the design, improve reliability, reduce parts, and reduce cost. And it looks like one of those shortcuts is that they no longer need a separate cable and sensor to monitor the hard drive’s temperature — they can now read the drive’s internal sensor directly through the SATA power cable’s unused pins, as long as the drive has their custom firmware to send the data to those pins.
The iMac is a very clear, known tradeoff for the types of geeks like us who would even think about replacing its internal hard drive ourselves (or having an unauthorized place do it to save money or add unsupported parts):
You get a beautiful, slim, all-in-one, high-end Mac, with one of the best LCD panels on the market built in, for a very good price relative to PCs and an excellent price relative to the other Macs. For these benefits, you give up all after-purchase internal customization, expansion, and self-repairs, except RAM. If you want a more customizable desktop, you can either get a Windows PC (which, if you want a Mac, isn’t an alternative), you can spend a lot more money for a Mac Pro, or you can just deal with the iMac’s limitations.
Michael from OWC is upset1 that they can’t continue to offer one part of a service that Apple has never permitted: installing unsupported hardware in a non-user-serviceable bay. He believes this is the result of Apple being unnecessarily evil rather than of the more pragmatic explanations above. It doesn’t seem like a very strong argument.
He’s also predisposed to discredit the new iMacs because he recently bought the prior version and wants to assure himself that he made the right purchase. ↩︎
Asked today about the possibility of Amazon launching a multipurpose tablet device, the company’s president and CEO Jeff Bezos said to “stay tuned” on the company’s plans. In an interview at Consumer Reports’ offices, Bezos also signaled that any such device, should it come, is more likely to supplement than to supplant the Kindle, which he calls Amazon’s “purpose-built e-reading device.” […]
“We will always be very mindful that we will want a dedicated reading device,” he said. “In terms of any other product introductions, I shouldn’t answer.”
That’s about as clear a confirmation as you can get from a company that doesn’t preannounce products.
The threats accuse devs of patent infringement regarding Apple’s in-app purchase mechanism, but the patent holder appears to be targeting independent developers individually instead of going after Apple itself.
The threats are from Lodsys, a “patent holding firm” (a company, offering no products or services, whose sole purpose is to accumulate patents and extort fees from people who accidentally infringe them), citing patent 7222078 with this abstract:
In an exemplary system, information is received at a central location from different units of a commodity. The information is generated from two-way local interactions between users of the different units of the commodity and a user interface in the different units of the commodity. The interactions elicit from respective users their perceptions of the commodity.
My brain melted when I tried to mentally summarize this patent and figure out if it applied to in-app purchase, but in practice, it doesn’t really matter — like nearly any intellectual-property infringement threat in the U.S., even if it’s inapplicable or invalid, very few independent developers have the money, time, and willpower to fight back.
Either Apple needs to help or indemnify all developers somehow (I’m not sure how that works, legally, but some people have mentioned it as an option), or only the largest and richest corporations will be able to use in-app purchase in their apps.
Need a powerful calculator? Buy PCalc and help make James Thomson fabulously wealthy so he can stop worrying about in-app purchase on his free version.
Ben Brooks, citing Steve Ballmer’s failures as Microsoft’s CEO:
Microsoft should be searching for a new CEO right now.
I completely agree: Ballmer is not just the wrong person for the job, but he’s the wrong kind of person for the job.
As Ben writes, the CEO of Microsoft should be a lot more product-focused and passionate about technology. Ballmer is more of a plug-and-play CEO: the kind of general “business” executive who other “business” people think can effectively manage any corporation. That’s why he’s able to do a mediocre job of keeping everything going and copying what everyone else is doing, but it’s extremely unlikely that he’ll ever substantially innovate or increase shareholder value.
Unfortunately for Microsoft and its shareholders, I’m not confident that they’ll make a change anytime soon. As John Gruber says often, including in The Talk Show this week, it’s baffling why Ballmer hasn’t been fired yet. There’s been sufficient justification to fire him for years, yet he’s still there, implying a bunch of behind-the-scenes political reasons for his continued role as CEO. And if that’s the case, it’s also unlikely to change anytime soon.
Last night for some reason I wanted to try to redesign Instapaper, just for fun. […]
Note: This is not me criticizing the current interface for Instapaper. Marco did a terrific job with it, and it is hands down the best app on the iPad. This was just me being bored for an hour or 2.
It looks great. (And I’m honored that the basic interface actually isn’t very different from mine.)
Here’s Instapaper’s current design, with pagination off (left) and on (right):
(Click for big. Note that pagination removes the top divider line and adds a bottom margin.)
Tim’s design:
(Click for big.)
With the same good-natured spirit in which Tim created this, I’d like to explain some of my design decisions, and why I can’t or shouldn’t adopt most of his big changes:
Two-column text
A lot of people have made good arguments about whether iPad text should be single- or multicolumn, and I prefer single-column. But there are a few technical reasons why I can’t do multicolumn text.
Instapaper’s pagination is optional and can be toggled at any time while reading, coexisting with scrolling. This is especially helpful when selecting text for copying or sharing: even when pagination is on, it temporarily turns itself off if text is selected so you can select across the page break. If pagination is permanently enabled, like in iBooks and the Kindle app, there’s no way to span a selection across two pages.
Multicolumn text makes this even worse: there’s no good way to span a selection across a column break. This is one reason why, as far as I know, no iPad app with multicolumn text allows selection at all (Correction: The Kindle app now does. Oops.). And multicolumn text would also need to give up vertical scrollability, so either pagination would need to be the only viewing mode, or scrolling users would need to scroll horizontally.
Predefined pages
Instapaper’s pagination is completely dynamic: you can scroll to any point in a document, turn pagination on, and it’ll make a page out of it with proper margins and be able to navigate to the next or previous page on demand.
It’s done this way for the hybrid pagination/scrolling reasons above, plus a major technical reason: Instapaper allows so much variability in HTML, images, and styling that there’s no way for me to know how many pages there are without rendering them all. And the devices (especially the older iPhones) can’t do this quickly enough once an article is longer than a few pages — a lot of people read very long articles (or even single-page versions of entire novels) in Instapaper.
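To make the distinction concrete, here’s a rough conceptual sketch in Python (definitely not Instapaper’s actual code): dynamic pagination only ever computes the page around wherever the reader currently is, so it never needs a total page count, which is exactly the number that can’t be known without rendering the whole article.

```python
# A conceptual sketch of on-demand pagination (not Instapaper's actual code).
# The important property: no total page count is ever computed, because that
# would require laying out the entire article up front. A real implementation
# also has to snap page breaks to line boundaries, images, and so on.

def current_page(scroll_offset, viewport_height, content_height, margin=20):
    """Return (top, bottom) content offsets of the page the reader is on now."""
    usable = viewport_height - 2 * margin
    top = max(0, min(scroll_offset, content_height - usable))
    bottom = min(top + usable, content_height)
    return top, bottom

def next_page_offset(scroll_offset, viewport_height, content_height, margin=20):
    """Scroll target for the next page, clamped to the end of the article."""
    _, bottom = current_page(scroll_offset, viewport_height, content_height, margin)
    return bottom
```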
The new bottom bar
Does it stay while pagination is off and people are just scrolling? If it goes away, where does the origin information go, and how does it transition? If it stays, I’ll get complaints every day that the interface is too “cluttered” and there’s not enough room for the text. (Yes, even on iPad. You wouldn’t believe how many people want a full-screen mode.)
The bordered image in the article
Many images, like logos and diagrams, look much better borderless, and it’s hard to tell the difference algorithmically and quickly.
The removal of the text adjustment (“ᴀA”) button
Where should this feature go? If I remove it, I lose a major selling point, and my existing customers set me on fire. If I move it, to where? It doesn’t really make sense under the actions menu, and if I bury it in Settings, it’s inconvenient to access and many customers won’t find it.
I’ve often cut entire features during development because there was nowhere good to put them in the interface.
The removal of the pagination/tilt-scrolling button
Tim’s idea was probably to have pagination permanently enabled, so this would make sense. But a lot of people prefer manual- or tilt-scrolling, and even many pagination fans (myself included) don’t want to be locked into it constantly.
The word “Archive” in the Back button
I assume this was using the name of the folder that Tim was browsing at the time — the Archive — to indicate that he’s going back to it when he’s done. Normally in a navigation controller, this is the correct labeling method.
But folders can be user-defined, and many of the names might be too long to look reasonable in the Back button. (Truncating the name could help, but not much, and it would need to be a very small width limit.)
And “Read Later”, “Liked”, and “Archive” are all names of folders and actions. Many people would forget which folder they were browsing, want to go back to the list after reading, and hesitate because they think “Read Later”, “Liked”, or “Archive” mean their respective actions rather than those navigational locations.
Moving all toolbar buttons to the right side
They’re too close together in this mockup, and people would often accidentally hit the wrong one. Comfortably spacing them out would significantly intrude on the title’s width, which is probably why Tim pushed them closer together. This is one reason I split them up between the top corners.
Another big reason I split them up that’s probably worth keeping: I put the two most common post-reading actions, Like and Archive/Delete, next to the “back” button in the upper-left. Customers reach for that area, not the upper-right, to go back to the list after reading an article, so it made sense to put the common “I’m done with this” actions there. (And while Archive/Delete is more commonly used than Like, I put the Like button closer to the Back button to make it less likely to accidentally hit the potentially destructive Archive/Delete.)
That said…
If I were creating a new reading app from scratch that didn’t need some of my existing features that get in the way of this design, this has some really great elements that I’d gladly steal. Or, more likely, I’d just hire Tim to design it.
On this week’s podcast, Dan and I discussed why PHP really is bad, Safari’s Reading List competing with Instapaper, the Lodsys patent, the rules of the Apple ecosystem, and getting decent coffee at work.
Twitter announced yesterday that third-party apps will have their access to direct messages (DMs) revoked at the end of the month, and the apps that need DM access — all full-featured Twitter clients except Twitter’s own — need to start requesting a new type of OAuth token if they still want to use DMs.
And these tokens will only be issued in OAuth web-browser flows, not xAuth, so apps need to pop up little web-browser windows or kick you through Safari for you to log in, rather than the common xAuth practice of just showing a simple username-and-password form in the app.
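For context, here’s roughly what the soon-to-be-restricted xAuth token exchange looked like. This is a sketch using modern Python libraries rather than any particular client’s code, with the xAuth parameter names Twitter documented at the time; the whole point of xAuth was that the app could collect your username and password in its own native form and trade them directly for an access token, with no browser anywhere in the flow:

```python
# Rough sketch of an xAuth-style token exchange, the flow Twitter is phasing
# out for DM access. Not any particular app's code; requests/requests-oauthlib
# are used purely for illustration, and the key/secret are placeholders.
import requests
from requests_oauthlib import OAuth1

CONSUMER_KEY = "your-app-key"        # placeholder
CONSUMER_SECRET = "your-app-secret"  # placeholder

def xauth_access_token(username, password):
    """Trade credentials collected in a native login form for an access token."""
    signer = OAuth1(CONSUMER_KEY, client_secret=CONSUMER_SECRET)
    resp = requests.post(
        "https://api.twitter.com/oauth/access_token",
        data={
            "x_auth_mode": "client_auth",
            "x_auth_username": username,
            "x_auth_password": password,
        },
        auth=signer,
    )
    resp.raise_for_status()
    return resp.text  # URL-encoded oauth_token and oauth_token_secret
```

Under the new rules, an app that wants DM access can’t do this anymore: it has to send you to a Twitter-hosted authorization page in a browser and collect its token from the callback instead.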
Oh, and one more thing: formerly-xAuth apps that need DM access have only 12 days to build this completely new login interface, test it, and release a new version — and, for iOS and Mac App Store apps, get it approved — before their existing apps start being denied access to DMs and probably display confusing and incorrect error messages, since the developers could never have foreseen this condition. Such aggressive timing is definitely a dick move.1
The Twitter ecosystem contains hundreds of thousands of interesting third-party applications designed to enhance your Twitter experience. Third-party apps let you do things like automatically share your Tweets on other networks, connect to other players on gaming platforms, or instantly tweet whenever you update your blog.
Translation: “Add-on or piggy-back apps are what we consider ‘third-party applications’ now. We will not acknowledge any other application types. There is no such thing as a complete Twitter client by someone other than us, as far as we’re concerned. If you choose to develop one, we will not make it easy for you, and we can and probably will kill it in the future at our convenience.”
Full-featured clients are completely dead to Twitter. Gone. Invisible. Like most web companies, they prefer a clean break. Down the memory hole. These apps never even existed. Doesn’t matter if they helped make Twitter popular.
It’s easy to look at this DM policy change as a sleazy way for Twitter to make third-party clients worse, as John Gruber speculates:
I can’t think of any reason why Twitter would force native apps through OAuth other than to create a hurdle that steers users toward Twitter’s own official native clients. Because Twitter’s official clients aren’t going to force users to jump through OAuth to authenticate — they’re still going to simply ask for your username and password in a simple native dialog box.
There’s actually a very good, pragmatic, non-evil reason for them to do this: they want to make sure that people know what permissions they’re granting the app before they click that big green OAuth “Allow” button, and the xAuth flows used so far in most clients don’t give Twitter a chance to explain to users what level of access is being granted. In other words, Twitter wants to control the messaging. And that’s understandable, although misguided.2
Twitter couldn’t possibly care less about the inconvenience this causes for third-party client developers.
And that’s also understandable, for three big reasons:
1. It’s not that big of a deal.
This isn’t a huge problem for most Twitter-app developers. (Most Twitter-integrated apps aren’t full clients and don’t even need DM access, and therefore don’t need to do anything. This is only a pain for full-client developers.)
The 12-day deadline sucks, and there’s no reason for Twitter to be so aggressive with it. But in a few months, everyone will forget about any problems that result from it, and we’ll all have time to flame Twitter for whatever changes or requirements they force next.
2. Twitter is not ours.
Twitter can do whatever they want.
It’s the simple, brutal truth. Twitter must do what’s best for Twitter. They owe us nothing.
It’s not a public good. It’s not a right. It’s a private, entirely centralized service with no meaningful competition and a massive network-effect barrier to competitive entry. Twitter has all of the power in its relationship with users and developers.
It doesn’t matter whether third-party clients helped make it popular. Twitter has reciprocated for years by giving such apps a compelling platform for which to sell software. Successful Twitter-client developers have made a ton of money in exchange for the help they provided in making Twitter popular.
It was a fair deal to both parties, but Twitter believes that they no longer need this help and they can reap many benefits from controlling the full client experience, so developers of other full clients are being cut out of the deal.
3. Twitter is unstable and constantly changing.
Twitter is a huge service with correspondingly huge operating costs, a staff of hundreds of people, major problems and constant shuffles among the founders and leaders, and just barely enough revenue to be profitable (as far as I know) after raising $360 million in funding.
It’s a very different company today than the Twitter we knew a year ago. This wasn’t a devious change — Twitter was forced to transform by its sheer scale. But many of the people who made Twitter that developer-friendly company have since left or become burdened by an influx of other employees above and below them.
The old Twitter is gone. The new Twitter is faster, bigger, much more stable, full of JavaScript and dysfunctional hash-bang URLs, and much more interested in owning the clients that most people use. And next year’s Twitter might be radically different from today’s.
You can’t count on anything about Twitter to remain constant.
The entire company — and, by extension, the product and the API — is in constant flux. What’s there today might not be there tomorrow.
And because of point number 2 above, they don’t need to get our permission or give us much warning before changing or taking away something that we like or depend on.
These are the risks that you take when you base your personal happiness or your business on a single, irreplaceable, young, evolving third-party service.
This terminology is not a reference to Twitter CEO Dick Costolo. ↩︎
The OAuth web-browser flow is no more “secure” than xAuth, as Daniel Jalkut illustrates — nothing’s stopping a native app from stealing your password in all sorts of ways as you type it into the embedded browser window. This isn’t being done in the name of technical security, but rather, in a plausible attempt to more accurately tell users what they’re granting access to.
But even that is… optimistic. People don’t read security warnings, and they’ll type their password into pretty much anything that asks for it. Just ask Microsoft and PayPal. ↩︎
UPDATE: Shortly after I published this post, Apple came to the rescue, which will hopefully render most of the Lodsys problem moot. Glad to be proven wrong so quickly on this one.
On last week’s Build and Analyze, prefaced by a big disclaimer that I’m not a lawyer and this should be taken with a grain of salt, I said that developers threatened by the Lodsys patent troll should probably just pay the small extortion fee that they’re demanding because any sort of legal fight would be extraordinarily expensive and would likely bankrupt any small developer.
I thought this was pragmatic, but it was a surprisingly controversial stance.
Patrick Igoe, a patent lawyer who has previously written well about why developers probably aren’t infringing on the Lodsys ’078 patent, dedicated an entire post today to why I’m wrong:
Marco Arment’s advice to simply roll over and pay Lodsys could be damaging not only to the developers taking his advice, but to the independent development community as a whole.
After a wall of irrelevant personal attacks on my knowledge of the patent system (the lack of which I guess I didn’t adequately disclaim in my casual remarks during a podcast about software development), and criticizing me for not reading and understanding the actual patent (try it and get back to me when you fully understand it), he clarifies:
Arment is correct that standing up will cost money. I take serious issue with his assertion to developers, however, that “you’re going to lose,” especially when it appears he has not even read the patent claims.
With the claim limitations described above, I have been unable to come up with a plausible explanation as to how the targeted developers could be, especially without Apple as a joint infringer, directly infringing the patent. Patent litigation is unpredictable, and yes, developers could lose nonetheless, but based on my read, this is far from a slam dunk for Lodsys.
I didn’t mean to imply that winning a patent infringement suit against Lodsys would be impossible. Rather, my stance was that any attempts to fight this patent are going to be so expensive that no small developer will be able to afford to finish the fight, and will therefore likely “lose” in every way — but instead of paying a small tax until the patent expires, they’ll probably lose their entire business (and maybe more).
Igoe and I are both biased. A patent attorney working for large companies wants to be right that there’s a valid case against this alleged infringement, and that the ideal move is to fight it by giving vast amounts of money from somewhere (?) to patent attorneys. A software developer wants to be right that the ideal move is to keep developing software and forget about this as quickly and inexpensively as possible.
…[If] you’re just a little app developer and if Apple doesn’t give you blanket coverage for whatever the consequences of a legal fight would be (also including the risk of a devastating damage award), your paramount consideration must be to avoid that Lodsys files a U.S. patent infringement lawsuit against you.
…
Being sued by Lodsys can ruin your little business. In case you don’t have a company that comes with limited liability, it can ruin you personally, possibly for the rest of your life. In a situation like this, there’s no way that you can afford the luxury of defending a principle, or depend on anyone’s solidarity.
Dr. Drang said I’m wrong about patents in many ways (which I want to write about separately), but does make a very good point if you choose to license the patent from Lodsys:
You can’t “just pay it and get on with life” because it isn’t well defined. How long will the payments last? How will the base of the 0.575% be determined? How long will that rate be applicable? … The answer, in a nutshell, is “As long as we (Lodsys) say it is.” You cannot make a business decision with such an open-ended liability in front of you.
So if you settle, it’s a good idea to hire a lawyer (which I hope you’ve done already if you’re being threatened by Lodsys) and see if you can negotiate a fixed royalty until the patent’s expiration as part of the agreement. (Or whatever the lawyer says you need. I’m just a software developer with “a lack of understanding of the patent system.”)
I don’t agree with Igoe at all that the right move for small developers is to avoid cooperating with Lodsys, risk being sued, and spend what it takes to attempt to win.
You can’t depend on Apple to step in. They probably won’t. (UPDATE: They did. Very happy to be wrong about this.) Apple’s behavior toward developers has repeatedly shown that we’re on our own. Apple’s not going to go out of their way for us unless our benefit is an accidental side effect to a much bigger upside for Apple that they wanted for other reasons.
You can’t depend on other developers taking any legal stand against Lodsys, because they probably can’t and won’t. Even if someone fights, you can’t afford to wait for the result, and it might not even help you.
The right move is to consult with a lawyer and take the advice you’re given, and a sensible lawyer will almost certainly tell you to avoid a lawsuit and just settle with Lodsys as cheaply and contractually safely as possible.
The crux of Apple’s letter, from senior vice president and general counsel Bruce Sewell, is right in its opening paragraph, which reads in part: “Apple is undisputedly licensed to these patents and the Apple App Makers are protected by that license. There is no basis for Lodsys’ infringement allegations against Apple’s App Makers.” In addition to stating that Apple would share the letter with developers—which it has—the company also says that it “is fully prepared to defend Apple’s license rights.”
Brian Christian, in an in-depth article on the Turing Test and trying to convince judges on the other end that he’s a human:
Humphrys’s twist on the Eliza paradigm was to abandon the therapist persona for that of an abusive jerk; when it lacked any clear cue for what to say, MGonz fell back not on therapy clichés like “How does that make you feel?” but on things like “You are obviously an asshole,” or “Ah type something interesting or shut up.” It’s a stroke of genius because, as becomes painfully clear from reading the MGonz transcripts, argument is stateless—that is, unanchored from all context, a kind of Markov chain of riposte, meta-riposte, meta-meta-riposte. Each remark after the first is only about the previous remark. If a program can induce us to sink to this level, of course it can pass the Turing Test.
Once again, the question of what types of human behavior computers can imitate shines light on how we conduct our own, human lives. Verbal abuse is simply less complex than other forms of conversation. In fact, since reading the papers on MGonz, and transcripts of its conversations, I find myself much more able to constructively manage heated conversations. Aware of the stateless, knee-jerk character of the terse remark I want to blurt out, I recognize that that remark has far more to do with a reflex reaction to the very last sentence of the conversation than with either the issue at hand or the person I’m talking to. All of a sudden, the absurdity and ridiculousness of this kind of escalation become quantitatively clear, and, contemptuously unwilling to act like a bot, I steer myself toward a more “stateful” response: better living through science.
Indeed, while the new Nook is quite Kindle-like in appearance and functionality, the company went to great lengths to tell the audience how inferior Amazon’s e-reader is to the Nook. “Kindle 3 has 38 buttons. That’s 37 more than the all-new Nook,” Lynch said, adding that so many buttons “assault the user.”
Barnes & Noble abandoned the problematic dual-screen approach of the previous e-ink Nook and adopted an infrared touch screen, allowing them to cut off the entire lower section of the device — a huge advantage, at least aesthetically, over the Kindle.
Touch e-ink screens aren’t new — Sony has offered them for years — but before the most recent models, they required touch-sensitive films over the e-ink screens, causing problems with contrast, sharpness, and glare. But the current generation of infrared touch screens, as implemented by Sony, Kobo, and now Barnes & Noble, supposedly works well.
This Is My Next has great pictures and a video of the new Nook. Points worth noting so far:
They’re saying it has only one button, but there are actually five: the left and right sides, like the Kindle and prior Nook, have page-turning buttons that are apparently disabled by default and can be enabled in the Nook’s settings. (That’s weird.)
The e-ink “blink” is almost completely gone when turning pages in text. It’s still present when entering or leaving most other screens, so they’re probably doing a lot of tricks to get it to work as it does. But this looks like a huge accomplishment that will be a major selling point to people new to e-ink.
They have improved but not solved the case-clip problem. The new case (photo) clips around the entire top and bottom edges of the Nook. This will be more secure than spanning fabric bands diagonally across the corners, but not as secure (or compact) as Kindle cases, which clip into dedicated holes in the Kindle’s left side.
The screen is inset very deeply into the bezel (photo), presumably to accommodate the optical sensors for the touch screen. This could be annoying when tapping edge targets or wiping dust and fingerprints off of the screen.
I’ve preordered it so I can test it for Instapaper use, and I’ll post a review if it’s noteworthy. I’m also curious to see how Amazon responds with the next e-ink Kindle, since the Kindle 3 is relatively old. But today, the Nook looks like the better-designed device.
I have 27 GB of Phish music, all legal, that would take 10 straight days to play through. This is moderate, as Phish collections go. They’re by far my favorite and most-listened-to band, and I’m not the only one: they’ve had a massive and devoted fanbase for most of my lifetime.
But Phish is surprisingly inaccessible to casual music listeners, so a lot of people who would otherwise like Phish either have never heard their music or have an inaccurate preconceived judgment about it.
I can’t blame them. Most people reading this, upon hearing about a band that sounds interesting, would go to iTunes and preview some of the band’s top singles or albums, maybe buying a few if they sound good. But this doesn’t work well for this band, or any band like it.
Phish is a jam band, and jam bands excel at live shows. Much of a jam band’s appeal is the improvisational, extended jams in and around their songs that often vary significantly between performances. But the studio albums only contain a single, well-polished, usually shortened version of each song that loses the variety, the energy, and much of the personality of the live performances.
And jam-band songs evolve over time. Usually, the studio-album version of a song (if it exists) is an early version, before a lot of its live performances. 1994’s studio version of “Down With Disease”, for example, is much shorter, slower, and more bland than the modern version.
If you’ve only ever heard a jam band’s studio albums, you’re missing out on the majority of their music, talent, and appeal.
So play this video (switch it to 720p for better sound quality), turn up the volume, and read on to see if this might be for you.
Why you might like Phish
If many of these are true, there’s a good chance you’ll like Phish:
You love music, you listen to a lot of music, and you feel noticeably happy and energized whenever you get into a new album or band.
You love musical complexity, and you’re not crazy about the lack of it in today’s pop music.
You consider most modern pop music “overproduced”.
You don’t easily tire of the songs you love and hear often.
You’re a sucker for a good electric-guitar solo. Your favorite part of one of your favorite rock songs is the quick little guitar riff toward the end, and you wish it was longer.
You prefer to buy and listen to full albums rather than singles. (Bonus points if you rarely use shuffle, preferring to listen to songs in album order.)
You need a lot of music to fill long periods, such as listening in headphones while working at a computer all day.
Why you might not like Phish
It’s not for everyone. You probably won’t like Phish if any of these are true:
You don’t like long songs, or songs with repetitive elements that build up over a long time. The well-known songs are often at least 8 minutes, and a typical show is nearly 3 hours.
You won’t tolerate flaws such as misplayed notes or missed lyrics. These are live, improvisational recordings where the fans expect a lot of variety, so mistakes are inevitable, and you’ll hear them in every show.
You care a lot about meaningful lyrics. If most Phish lyrics are meaningful, the meaning is way over my head.
You want your music to help you feel angry or angsty. Sorry, you won’t find anybody raging against the machine or mourning a failed relationship around here. Phish usually makes people feel energized and happy (which is why so many college kids smoke pot at the concerts). And the band shows none of the rockstar “screw you” ego: they get on stage, share the spotlight among the four members, play like crazy, have a lot of fun, and humbly thank the fans for our support.
You want your music to be available in the iTunes Store, possibly because you’re heavily invested in the Ping social network (someone must be, right?). While some live Phish is available in the iTunes Store, most of the shows aren’t, and they’re always cheaper and available sooner on LivePhish.
If these don’t concern you, you’re lucky, because being a Phish fan is pretty great.
Why it’s awesome to be a Phish fan
If this music resonates with you in the right way, you’re likely to get really into it, and all other music will seem simplistic and shallow for a long time. It’s like discovering great black coffee or a fine wine for the first time, after only ever having mass-produced mediocrity: whoa, there’s a lot going on there.
And if you end up loving it, you’re really in luck.
There’s a huge library of live show recordings, and most of them are sufficiently distinctive that there’s value in listening to a lot of them. This applies to individual songs, too: every time the band plays one of your favorites, it’s likely to be different enough from the others that you’ll enjoy it and appreciate the differences. (Once you’re a true geek, you’ll even rank your preferred performances of your favorite songs.)
You can download official, high-quality, legal, DRM-free MP3 recordings of every concert within hours after it ends from LivePhish.
You can play an “album” (a show) and not need to touch your music for hours. No skipping around, no strange back-to-back songs from shuffle. This is why I listen while I’m working: I can play hours of music I know I’ll like without getting distracted by bad track selections every three minutes.
Being a Phish fan is nothing like being a fan of traditional rock bands. I love the Foo Fighters, but in the last five years, they’ve only released two albums, with a combined length of less than two hours. I don’t think there’s much reason to see them in concert more than once (although this live play-through of their new album is great for the impressive display of stamina), and their studio albums really are the best representation of their music. I can be an active Foo Fighters fan for about one week per year, because there’s just not enough new stuff to keep me engaged more often, but Phish is cranking out many hours of new material every few months.
How to be a Phish fan
Telling a potential fan to skip the studio albums and listen to a band’s live shows isn’t very helpful. It’s easy to follow up on a recommendation for a traditional band’s newest album, but a touring jam band can produce many live shows each year, and they all look very similar.
Every fan will have a different idea of which one you should listen to first. My pick is December 30, 2010 at Madison Square Garden: it’s a great show that I think gives a representative overview of Phish’s style.1
So if you think you might like Phish, give it a try. Buy the entire show in MP3 format. It’s not a big risk at $9.95.
Then put on headphones, turn the volume up to at least medium, and listen all the way through while you’re doing whatever you usually do when listening to music. Then, ideally, play it again.
Even if you fit my guidelines above, you still might not like it. That’s fine. At least you tried, and now you can accurately say that you don’t like Phish when people like me try to tell you how great it is.
But you might like it.
And you might really like it.
In which case, you might have a hard time listening to anything else for a while.
Phish’s 33-show summer tour starts tomorrow night and runs through Labor Day weekend. I’ve preordered the entire thing.
Generally, I like the newest shows best. Many Phish purists will tell you that their performances from the ’90s are the best, but they’ve really come a long way since then, in my opinion for the better, and they’re almost a different band today. The sound quality is also much better for recent shows, especially those from 2010 onward.
If you’re looking to get more shows, my favorites tend to come from the end of a tour, when the band makes fewer mistakes and really rocks out in the jams.
Generally, two-day shows have more energy; in three-day shows, the band pulls out some obscure songs that are sometimes great but often dull. But I imagine every Phish fan, including future-you, will have a different opinion on this.
And if you get the rest of the 2010 New Year’s run at Madison Square Garden, which was pretty great, it’s worth seeing why “Meatstick” was 18 minutes long. ↩︎
The Google Translate API has been officially deprecated as of May 26, 2011. Due to the substantial economic burden caused by extensive abuse, the number of requests you may make per day will be limited and the API will be shut off completely on December 1, 2011.
I hope you didn’t build a business depending on this free API that was completely out of your control.
Nearly every free or inexpensive iPhone translation app is just an interface to this. You can tell whether an app uses this by whether it works offline: Google Translate API apps, obviously, can’t. (They’re not very useful for international travelers who don’t use expensive data-roaming plans.)
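To give a sense of how thin that interface layer is, here’s a rough sketch (not taken from any particular app) of the single request such an app makes against what I understand to be the v2 REST endpoint; the API key, languages, and sample text are placeholders:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical sketch: one HTTPS request to Google's hosted Translate API (v2)
# and a bit of JSON parsing. The key and languages below are placeholders.
API_KEY = "YOUR_API_KEY"

def translate(text, source="en", target="fr"):
    params = urllib.parse.urlencode({
        "key": API_KEY,
        "source": source,
        "target": target,
        "q": text,
    })
    url = "https://www.googleapis.com/language/translate/v2?" + params
    # No network connection, no translation -- which is why these apps
    # can't work offline.
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    return payload["data"]["translations"][0]["translatedText"]

print(translate("Where is the train station?"))
```

That’s essentially the whole product for many of these apps, which is why the deprecation matters so much to them.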
I wonder how Google felt about all of these iPhones hitting their free translator constantly. Seems like the kind of service that they might want to be exclusive to Android. Maybe iPhone apps are the “abuse” they’re talking about.
These apps will probably just switch to the Bing Translation API. Microsoft loves losing money with online services, so this should stay free forever… unless they get a new CEO who isn’t crazy about pouring billions into a hole. And if that happens, these apps are going to be in trouble, because I don’t know of any other free translation APIs.
The receptionist handed me a clipboard with forms to fill out. After the usual patient information form, there was a “mutual privacy agreement” that asked me to transfer ownership of any public commentary I might write in the future to Dr. Cirka.
The copyright claims may not stand up to scrutiny, but as with all intellectual-property matters in the U.S., most people can’t afford to fight any threats made against them.
Guy English’s response to John Siracusa’s “Copland 2010” argument from Hypercritical #14. Both Guy’s post and John’s podcast are excellent and well worth a programmer’s time.
For the app developers who have been sued, this is now a very critical situation. As I explained in my Lodsys FAQ, patent litigation in the United States is extremely costly. The most important thing for those app developers is to clarify with Apple — and to the extent that Android apps are involved, with Google — whether they will be held harmless and receive blanket coverage including possible damage awards.
Notably, the targets include Iconfactory and Quickoffice.
I stand by my position that the best thing to do when threatened with patent litigation is to consult with a lawyer and be prepared to just pay the troll. A lawsuit is far more expensive.
Now these developers need to spend significant time, money, and stress on a lawsuit that they’ll probably “lose” by settling as quickly as possible — the longer it takes, the more they’re likely to lose — rather than making their products better and creating value in the world.
I’ll leave it up to smarter people in these areas to discuss whether Apple can and will cover anyone’s legal costs and possible damages. My best guess is that they won’t. If they do, great. But it would be foolish to make any legal decisions based on the assumption that they will.