According to two sources, Apple’s new Dock Connector features only 8 pins, seemingly contradicting claims of “16-pin” and “19-pin” connectors. Although the original Dock Connector contained 30 pins, reports of 16- or 19-pin connectors seemed hard to square with the port’s small size and Apple’s actual need for additional pins beyond what USB/Micro-USB offer.
Sounds a lot like the next Dock interface may just be USB 3.0 with a custom connector, abandoning all of the 30-pin connector’s analog A/V outputs in favor of AirPlay, Bluetooth, and full USB implementations in peripherals.
But amid the confusion of the past month, nearly all have overlooked the section of Sippey’s post which holds the key to Twitter’s future: Cards. Twitter’s new Cards technology allows third-party developers to create richer, more compelling — and, above all, visually consistent — content inside of Twitter itself.
Therein lies Twitter’s goal: A rich, consistent Twitter experience for every user. When the hammer drops and Twitter changes its guidelines, those apps that can’t deliver this consistency will no longer be able to integrate with Twitter.
A big question is whether Twitter will even give third-party apps the chance to display their “consistent experience” before cutting them off. I’m guessing they won’t.
Maybe the reason promoted tweets still don’t show up in the API, and therefore still aren’t displayed by third-party clients, is that Twitter never had any intention of monetizing the timeline outside of their official clients, because there won’t be any more third-party clients soon.
I see the current Twitter Card offerings and I’m extremely underwhelmed. I don’t get the point of the news excerpts. Inline pictures are fine, but I don’t think I want to play micro-games inside of Twitter. I certainly don’t want to watch long-form video content there. And so I’m worried once again that all of these “extras” are going to overwhelm and distract from what Twitter has always really been about: information.
Over the last few years, Facebook has tried in a number of big ways to be more like Twitter because too much social activity was happening on Twitter and Facebook was threatened.
Now, Twitter may be about to turn their community upside down and make a lot of enemies to be more like Facebook because Facebook’s type of platform has more monetization potential and more platform lock-in effects.
The mutual envy is palpable. No wonder the two companies don’t like each other. They’re fighting for the same ground.
Facebook was already mostly there. For Facebook to add major Twitter-like features, it didn’t need to change very much, as far as users could tell. But for Twitter to add major Facebook-like features, they need to upend much of their service, causing a lot of destruction in their wake and breaking a lot of what people loved about Twitter.
Or, to use Twitter’s language, they’re about to break a lot of “the features that make Twitter Twitter.”
But I don’t think Twitter’s current leadership knows what makes Twitter Twitter anymore, if they ever did.
“This is a case of principle,” EA’s Lucy Bradshaw said in a press release that accompanied the filing. “Maxis isn’t the first studio to claim that Zynga copied its creative product. But we are the studio that has the financial and corporate resources to stand up and do something about it. Infringing a developer’s copyright is not an acceptable practice in game development.”
This could be interesting, but it probably won’t go anywhere.
Zynga’s response at the bottom of the article is hilarious. Loosely translated, they said, “We are committed to copying every popular game, and we know we can get away with it.” Sadly, they’re probably right.
Most coffee nerds will tell you that paper filters block flavorful oils from getting into the brewed cup, while metal-mesh and perforated filters let these oils through for a superior taste.
The AeroPress is designed to use its paper filters, which the extreme coffee nerds have always held against it. A few years ago, Coava (now Able Brewing Equipment) made a stainless-steel perforated “Disk” replacement filter, which I really didn’t like, mostly because its holes were too large for the fine AeroPress grind and it complicated cleanup.
Kaffeologie says that the final version should be nearly identical, but with more precise welds attaching the outer ring to the mesh.
S Filter in front of Disk.
Note how much finer the S Filter’s holes are compared to the Disk filter behind it. Here’s a closeup, with circles to indicate where the Disk filter’s holes are because they’re hard to see at this scale:
From the Kickstarter page, Kaffeologie claims:
It’s the finest reusable filter in the world. The S Filter is designed with smaller openings than any other reusable filter. By far. With tens of thousands of holes per square inch and hole openings smaller than a human hair, the S Filter will brew a much cleaner cup than other reusable filters.
I can’t verify their finest-in-the-world claim, but based on what I’ve seen, it’s certainly plausible.
The S Filter lets water through noticeably more quickly than the paper filters. Therefore, when brewing with the S Filter, you should use the AeroPress inverted (upside-down) method if you’re not already using it.
Like the Disk, the S Filter’s thickness prevents the AeroPress’ cap from fully closing into its retaining teeth:
I was afraid that this might lead to grounds leaking around the filter, like they sometimes do when using paper filters without a tight fit, but this didn’t happen in practice.
The S Filter produced very good coffee. Like the paper filters, and unlike the Disk, it left no sediment in the cup.
Unlike the Disk, it also didn’t clog on my fine grind (“6” on a Virtuoso 586). Because it didn’t clog, it was much easier to clean: the grounds fell right off of it with a quick rinse, just as Kaffeologie promised.
Of course, cleanup is even easier with the paper filters: just pop the puck of grounds, filter and all, into the trash and get another filter next time. The S Filter really isn’t a hassle, though: it’s still far easier than cleaning a French press or an auto-drip coffeemaker.
Paper-filtered cup on left, S-Filtered cup on right. Click for larger version.
I couldn’t detect any difference in taste or oils. As far as I know, the tiny, shiny dots on the surface of a fresh cup are the oils, and both of my test cups — one made with paper, one made with the S Filter — had approximately the same amount of visible oil. I also couldn’t taste any meaningful differences in overall flavor.
I cannot, therefore, recommend the S Filter for taste alone — I suspect the common paper-blocks-flavor-oils wisdom is a myth,1 or the difference is just too small to notice beyond the placebo effect.2
But the S Filter is a well-designed, very effective reusable AeroPress filter. I’m glad I have it in case I run out of filters or need to minimize paper waste in certain situations.
If you’d like one, too, they’re $10 on Kickstarter for the next 25 days, and they ship in September.
I also can’t taste the difference between a rinsed and unrinsed AeroPress filter, so I don’t rinse them. Rinsing a paper filter might have a larger effect when there’s much more (and much thicker) paper being used, such as a thick pour-over or Chemex filter. ↩︎
French presses, often lauded for their superior oil-filled coffee, have a lot of other reasons why their coffee could be so good. ↩︎
I still can’t get into Gmail. My phone and iPads are down (but are restoring). Apple tells me that the remote wipe is likely irrecoverable without serious forensics. Because I’m a jerk who doesn’t back up data, I’ve lost more than a year’s worth of photos, emails, documents, and more. And, really, who knows what else.
There are some big lessons we can all learn from this. First, it’s very, very important that your email password is extremely secure because so many other accounts use email for password resets. If someone else can receive your email, they can become you on the internet very quickly.
But more importantly, it’s completely inexcusable for people who know better not to have even one backup. And ideally, you should have more than one: between Time Machine, scheduled disk clones with SuperDuper, and continuous online backup, pick at least two. For the extremely lazy, Time Machine and online backup are automatic and very affordable to anyone who can afford a Mac. (And especially anyone who can afford a Mac, an iPad, and an iPhone.)
If you still won’t back up, you should probably disable remote wipe.
But really, back up, for goodness sake. It’s not hard. You can get a huge external hard drive for very little money, relatively speaking. Then you can set up online backup for a few dollars per month for unlimited space. Really.
If one edits a document and then does Save As, then BOTH the edited original document and the copy are saved, thus not only creating a new copy, but also silently saving the original with the same changes and overwriting it.
What a terrible implementation by Apple. Why add “Save As” back to Mountain Lion if it doesn’t work the way people expect, and instead behaves destructively?
And why didn’t John Siracusa note this in his review? ★☆☆☆☆ Useless.
In this week’s podcast, we follow up on ergonomic keyboards and text editors, then discuss the appeal of the command line, AeroPress grind size, the Metro Style renaming, Hulu Plus, why Instapaper needs rate limiting and the challenge of finding an appropriate limit, and which apps give the best bang for the buck.
Adobe Revel is a photo app that gives you one place for all your photos that you can access from your Mac, iPad, and iPhone. Revel combines a set of easy-to-use organizing and editing tools with a cloud service designed specifically for photos, so everything you do in Revel is automatically synced across all your devices. Organize your photos using event tags. Crop and apply photo filters to get professional-quality results without all the work. And post photos to your favorite social network to share with your friends.
With Revel, you always have access to all your photos no matter what device you are using, and you have everything you need to make them look great.
Thanks to Adobe Revel for sponsoring the Marco.org RSS feed for this five-week span. Here’s the last tip, for now: Revel is much faster than the built-in iOS Photos app when browsing a large collection. I like to keep my last two years of photos in Revel (published there by Lightroom) and it doesn’t slow down at all.
Apple tech support gave the hackers access to my iCloud account. Amazon tech support gave them the ability to see a piece of information — a partial credit card number — that Apple used to release information. In short, the very four digits that Amazon considers unimportant enough to display in the clear on the web are precisely the same ones that Apple considers secure enough to perform identity verification.
The scariest part of his hacking was that it didn’t rely on a single password being guessed, brute-forced, phished, or stolen. It wouldn’t have mattered whether his password was “password” or “XEyOI^5FyC6gE!1BokW;uPpv2ick+lBo”.
Amazon’s system is partially at fault, but the weakest link by far is Apple. It’s appalling that they will give control of your iCloud account to anyone who knows your name and address, which are very easy for anyone to find, and the last four digits of your credit card, which are usually considered safe to display on websites and receipts.
At the bare minimum, for this level of recovery that bypasses security questions, they should require confirmation of the entire credit-card number and verification code, no matter what they need to do to remain PCI-compliant and pull that off.
And ideally, before resetting a password by phone, they’d send a forced “Find My”-style push alert to all registered devices on the account saying something like, “Apple Customer Service has received a request to reset your iCloud password. Please call 1-800-WHATEVER within 24 hours if this is unauthorized.”
Then make the person call back the next day. If you forget your password and the answers to your security questions, it’s not unreasonable to expect a bit of inconvenience.
Neither do I, honestly. I don’t think it’s possible.
First, $50 per year is far too much, even for people who pay for things. Twitter probably couldn’t sell a lot of paid subscriptions at that rate.
But the bigger problem is that I just don’t see a social platform growing quickly enough to overcome the network-effect barrier when it’s not free to join, especially when the goal is effectively to replace an existing, free, extremely successful network.
The nature of the network effect probably means that general-purpose social communication services must be free in order to ever grow to a significant size. This is even more true when there’s an established free competitor.
I’d love to be proven wrong on this. That’s why I backed App.net: if they reach their goal, that’s a pretty good sign that I’m wrong, and I want to be a part of it.
The problem is, we can’t all be Daring Fireball - we can’t get away with posting a witty headline and a blockquote 5-10 times a day. We’ve adopted John’s concept of linking, but not the idea that we need to tell a bigger story on our sites.
There’s no reason to link to something unless it’s something readers probably haven’t come across already or you can provide a unique perspective on it. Only link to something when you’re adding some value.
We do have a surplus of bad copies of Daring Fireball, but the link-blog format isn’t the reason. I think the real reasons are environmental:
Many modern blog engines support easy creation of link posts.
The huge growth in the Apple-and-nearby-topics genre over the last few years, with Gruber’s own success, in particular, has attracted many copycats.
Boutique ad networks with low barriers to entry, including Fusion for Deck-like ads and Marcelo’s Syndicate for RSS link ads (which Gruber also pioneered as part of the modern link-blog format), made the link-blog format far more profitable than cheap Google ads ever could.
Blaming the format itself for link-blog overload is like blaming Canon for the deluge of mediocre SLR photography over the last decade. The tools are now available to everyone, which is great. Most people won’t become world-class users of these tools, but the surplus of mediocre output doesn’t mean that there isn’t room for more people who can be truly great at it — it just means that most people’s link blogs aren’t worth following.
We don’t need more Daring Fireballs. We have Daring Fireball already. People who read it have little reason to read anyone else’s minimally differentiated clone.
Rather than letting my links tell a story arc with minimal commentary, I use link posts as a formatting convenience when I have a paragraph or three in response, but not enough material or time for an article. If I don’t have anything meaningful to say about a link, or if Gruber already did a better job of commenting on it, I’ll usually pass on it.
I’ve taken the link-blog format Gruber popularized and found my own way with it, and hopefully, that provides value and differentiation for readers.
And I highly suggest to other writers that they find their own way.
Chris Foresman got some sentences out of Allan Odgaard, but not many answers.
It sounds like Odgaard has little interest in making TextMate 2 shippable and is now hoping that other people will finish it for him, but he doesn’t care very much because the features he uses work acceptably.
Suspecting this was going to happen last fall, I tried a few alternatives: BBEdit, Sublime Text 1, and an alpha of Chocolat. At the time, none of them won me over. Once the TextMate 2 alpha was released, I switched to it full-time.
But a few weeks ago, after concluding that TextMate 2 was probably going to be abandoned, I started trying the alternatives again with their significant updates: Sublime Text 2, and Chocolat 1.2.
I’ve now chosen my TextMate replacement, but before I reveal it, let me give a huge disclaimer: You will have your own opinion. It’s probably safer to talk about Jesus, gun control, Israel, global warming, parenting techniques, regional pizza styles, Linux distributions, why I don’t like cats, or my favorite PHP features.
Yet here I am, comparing text editors and giving my subjective and arbitrary opinions on them. What could possibly go wrong?
The big three
Like most software, text editors are best when they’re under active development and have large user communities. These usually ensure the fewest bugs and the strongest ecosystems of plugins and themes. In the world of general-purpose Mac text editors, that leaves three choices, in decreasing order of popularity:
BBEdit: Very long history, very active development, and top-notch developers. Unfortunately, it’s not my style in many big and small ways. I think its long history will continue to endear it to its userbase and long-time Mac users, but it doesn’t feel like the younger apps at all. It also has a much simpler syntax-parsing engine, which I think keeps it very fast but reduces the usefulness of syntax highlighting and scope-related editing features relative to the other editors. But I’m also pretty sure it will outlive them all. I wish I liked it more. (Used full-time for 2 weeks.)
Sublime Text 2: Cross-platform, fairly young, active development. In many ways, it’s “not Mac-like”, possibly because of the cross-platform implementation. It’s a bit ugly in places, and some common operations are unintuitive. But it has a huge fanbase and tons of plugins, and the engine seems solid: it’s extremely fast, seems stable, and supports every modern feature I tried. (Used full-time for 2 weeks.)
Chocolat: Very young, active development. It has the most modern Mac interface, but it also bears a creepy, uncomfortable, Samsung-like resemblance to TextMate: it’s effectively a TextMate clone with a few new features added. It’s very pleasant to use, but its youth is obvious: it’s still noticeably incomplete, and it frequently suffers from serious performance problems. The performance issues scare me, and I’m not sure it will be able to mature into a fast, full-featured, rock-solid editor. (Used full-time for 2 weeks.)
Vim and Emacs: Not native, very hard to learn, ugly, lack many modern features (flame suit: on). But they’re very powerful, they’re available on all modern OSes, and they work in remote terminals. Bonus feature: you can throw away your mouse. These apps are good for many things, but not the type of app I’m looking for. (Used Vim full-time for 2 years, and still use it on remote servers. Cursed at Emacs a few times in college.)
MacVim: Slightly native, but otherwise the same drawbacks as Vim.
Coda 2: A great web-development IDE by Panic. You couldn’t ask for better developers. But it’s a complete IDE, not a general-purpose text editor, so I can’t really include it here: if you want the type of app that Coda is, you should definitely try Coda, but if you’re looking for a text editor, it probably isn’t a good fit.
SubEthaEdit: A very good editor, but it was dramatically outclassed by TextMate 1. Coda 1’s editor was based on SubEthaEdit’s engine. Recent development seems minimal. I bet that if you’re reading this and you have SubEthaEdit, the last time you launched it was at WWDC, where its collaborative-editing feature is still very popular for publicly shared, collaborative notes. And before that, the last time you launched it was probably WWDC 2011. (Used full-time for a few months in 2006.)
TextMate 1: You could just use TextMate 1 until it stops working. But it has many small flaws, a few big ones, and some performance issues, and it lacks many modern features. It will probably never get another significant update, and it might not even get any future bugfix or compatibility updates. (Used full-time for 5 years.)
TextMate 2 alpha: Development has just been open-sourced after 7 months of going almost nowhere, so it’s probably safe to assume that it’s effectively abandoned. It supports many modern features and is quite good in many ways, but there are huge bugs, shortcomings, and performance issues that will probably never be fixed. If someone else took this over and worked on it full-time for a year or two, it might become a great editor, but today, it isn’t, and there’s nobody at the wheel. (Used full-time for 7 months.)
So what’s next?
I almost picked Chocolat, but its performance problems gave me pause.
So I picked Sublime Text. So far, I don’t love it, but I like it. The more I used Chocolat, the less I liked it, but as I continue to use Sublime Text 2, I like it more.
And I think it has a more promising future of stable, high-quality, long-term development than anything else on this list except BBEdit, which I still wish were more my style.
Your X-ray glasses for App Store sales: Tapstream helps iOS, Android, and Mac developers see which sites, tweets, emails or web ad units are getting you more users instead of just web visits. It allows you to finally put Google AdWords or Facebook Ads to work with an eye on real user acquisition. See how social engagement translates into app activations and which referring sites best drive converting traffic.
Tapstream’s web-to-app analytics is completely free. Larger publishers will appreciate Google Analytics integration and our unique customer lifetime value (LTV) metrics that help you identify the most valuable web traffic sources.
Thanks to Tapstream for sponsoring the Marco.org RSS feed this week.
The console, manufactured by Blaze under license from SNK, will arrive with 20 games preloaded and an “expandable game card slot” (previously reported as taking standard SD cards) for additional titles. With four face buttons, an analog stick, and a 4.3-inch display of unknown resolution, the Neo Geo X should be able to handle native Neo Geo games…
When I was a kid, the Neo Geo was the legendary phantom console that nobody had but everyone knew about. We heard that it cost six hundred dollars, the games cost a hundred dollars each, and its cartridges were the size of VHS tapes. And every argument about whose Genesis was better than whose Super Nintendo would immediately end with “Well, the Neo Geo is better than both, and my cousin’s friend has one, and I’ve played it and it was awesome!”
Because we knew that even though that kid was full of shit and none of us had ever seen one, he was right — it kicked our consoles’ asses.
Now a company nobody has ever heard of can cram a Neo Geo with 20 (presumably emulated) games, a controller, a battery, and a screen far better than any TV sold in 1990 into a tiny iPhone-ripoff enclosure and sell it for $200. We live in amazing times.
This week’s podcast: Sublime Text 2, TextMate 2’s open-sourcing, App.net’s funding and why Kickstarter-like campaigns can get a huge boost of funding at the end, the difficulty of having good presences on multiple social services, the effects of cross-posting, and how to allocate time for unpopular apps.
They raised $528,144, an impressive amount that just barely achieved the removal of all ads on their front page for one year. Most of the original-content goals were not reached, and they’ll have ads on other pages on the site, just not the front page.
They deserve congratulations for raising so much money, but I still think this was a weird move.
We all know that the RTS genre has taken a hit. There just aren’t that many quality original RTS games coming out. If you love to play games like Total Annihilation your options are pretty limited. This is where Planetary Annihilation comes in. It is meant to be a truly innovative spin on what RTS games can and should be.
Having spent almost a decade lugging around desktops and CRTs to my friends’ houses to play Total Annihilation LAN games, and even making my own units and later making a custom rebalancing mod for our use, the idea of a(nother) “spiritual successor” to TA made by some of its original creators is a no-brainer.
More smart analysis from industrial designer Don Lehman, this time on the leaked photos of the purported replacement for the Dock connector:
I can’t say for sure that this is the new iOS dock connector, but my gut feeling says that it probably is. Even if it’s not, we can at least agree that this design has some pretty smart features baked in.
As soon as I saw these photos a couple of weeks ago, I was convinced of four things:
It’s definitely the new Dock connector.
It’s reversible, which is awesome.
It’s basically USB 3, but with a proprietary Apple chip so they can control (and profit from) licensing and provide a few extra features.
We’re going to love it.
I’m confident that it’s real because the design is so small, simple, friendly, and smart. That’s Apple at its best.
This has the stench of a man looking to make a name for himself, not someone that’s doing what’s best for Apple or more importantly, its customers. To take one of the most heralded retail experiences in the world and gut it, stripping it of everything that makes an Apple store what it is, just doesn’t make sense.
Nobody has ever walked into an Apple store and said, “I wish this was more like other retail stores.”
Anyone who bought this should be annoyed, but buyers should also blame themselves for casting insufficient skepticism on the deal in the first place. Nothing in this business is ever truly “unlimited”, and “lifetime” never means your lifetime.
One of the key things we’ve learned over the past few years is that when developers begin to demand an increasingly high volume of API calls, we can guide them toward areas of value for users and their businesses. To that end, and similar to some other companies, we will require you to work with us directly if you believe your application will need more than one million individual user tokens.
How, exactly, will Twitter “guide” developers who are required to “work with them directly”? What exactly are “areas of value for users and [our] businesses”?
Translation: “Once you get big enough for us to notice, we’re going to require you to adhere to more strict, unpublished rules to make sure you don’t compete with us or take too much value from our network.”
And “big enough” might not be as big as you think:
Additionally, if you are building a Twitter client application that is accessing the home timeline, account settings or direct messages API endpoints (typically used by traditional client applications) or are using our User Streams product, you will need our permission if your application will require more than 100,000 individual user tokens.
Instapaper’s “Liked By Friends” feature reads timelines and will need more than 100,000 tokens. And that’s a relatively minor feature in a small web service run by one guy.
We will not be shutting down client applications that use those endpoints and are currently over those token limits. If your application already has more than 100,000 individual user tokens, you’ll be able to maintain and add new users to your application until you reach 200% of your current user token count (as of today) — as long as you comply with our Rules of the Road. Once you reach 200% of your current user token count, you’ll be able to maintain your application to serve your users, but you will not be able to add additional users without our permission.
Got a successful Twitter app or Twitter-integrated service already? Either “work with” Twitter quickly and make whatever changes they require before you get too many more users, or shut down.
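The grandfathering arithmetic in Twitter’s announcement is easy to get wrong on a first read, so here’s a minimal sketch of the cap rules as stated (a hypothetical illustration of the announced policy, not Twitter’s actual enforcement code; `token_cap` and its parameter names are my own):

```python
def token_cap(tokens_at_announcement: int, new_app_limit: int = 100_000) -> int:
    """Illustrative user-token cap under the announced API 1.1 rules.

    Client apps already over the 100,000-token limit are grandfathered
    up to 200% of their token count as of the announcement date; new or
    smaller apps get the flat 100,000-token cap. Growth beyond the cap
    requires Twitter's permission.
    """
    if tokens_at_announcement > new_app_limit:
        return 2 * tokens_at_announcement
    return new_app_limit

# A client with 150,000 users at announcement may grow to 300,000.
print(token_cap(150_000))  # 300000
# A new or small client is capped at the flat 100,000.
print(token_cap(40_000))   # 100000
```

Note how little headroom this gives a popular client: doubling from today’s user count sounds generous until you remember that a successful free app can double in months.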
Finally, there may also be additional changes to the Rules of the Road to reflect the functional changes in version 1.1 of the Twitter API that we’ve outlined here.
There will definitely be more rules that we’re not ready to discuss yet, possibly because we haven’t decided what they are yet, or possibly because we know you’re not going to like them.
For instance, I bet this is finally how clients will be required to display tweet ads. That requirement, probably worded roughly as “you must display every tweet in a timeline, and display them all consistently”, will also kill any clients’ filter and mute features.
Twitter for Mac and iPad
Twitter for iPhone has been thoroughly gutted of any traces of its Tweetie origins, and it’s clearly Twitter’s premier client. (It probably gets more usage than their website.)
But Twitter’s own Mac and iPad apps, both also acquired as versions of Tweetie, haven’t been meaningfully updated in many months. Both lack significant features and have glaring bugs, and neither of them complies with the Display “Guidelines”.
Twitter’s inaction on these apps suggests that they’re probably going to be either discontinued entirely (most likely for Mac) or gutted and replaced with an interface more like their iPhone app (most likely for iPad).
Subjectivity and uncertainty
Twitter has left themselves a lot of wiggle-room with the rules. Effectively, Twitter can decide your app is breaking a (potentially vague) rule at any time, or they can add a new rule that your app inadvertently breaks, and revoke your API access at any time.
Of course, they’ve always had this power. But now we know that they’ll use it in ways that we really don’t agree with.
Anil Dash wants us to compare this to Apple’s App Store review process (while not using App.net if we’re white geeks, or something like that). The amount of power Twitter has over developers is similar to the App Store setup, but the incentives are completely different.
Many uses of Twitter’s platform compete with Twitter on some level. Twitter doesn’t need a lot of its nontrivial apps, and in fact, they’d be happier if most of them disappeared. Twitter’s rules continue to tighten to permit developers to add value to Twitter (mostly “Share on Twitter” features) but not get nearly as much out of it (e.g. piggyback on the social graph, display timelines, analyze aggregate data).
By comparison, Apple needs its apps much more than Twitter does, and Apple’s interests conflict much less with its developers’. Even its famous anticompetitive rules, such as the prohibition against “duplicating existing functionality”, have been minimally enforced and have actually diminished over time.
Furthermore, we know pretty well how Apple will behave and what sort of rules we’ll need to follow in the future. They’ve been consistent since the App Store’s launch. But Twitter has proven to be unstable and unpredictable, and any assurances they give about whether something will be permitted in the future have zero credibility.
I sure as hell wouldn’t build a business on Twitter, and I don’t think I’ll even build any nontrivial features on it anymore.
And if I were in the Twitter-client business, I’d start working on another product.
We’ll be working with Twitter over the next 6 months to make sure we comply with these new requirements as much as possible.
I’m sure Tweetbot will have no trouble complying with the new rules, as they’ve been stated so far.
The bigger question is what the rules will be in a few months or years, and whether the clients we like will be arbitrarily cut off when Twitter’s next big initiative starts, when yet another founder or CEO leaves or rejoins the company, or when their suits1 gather in a conference room with a spiderphone on the table and redefine their quadrants to better synergize their consistent experiences for their integrated revenue strategy with their real customers.
California-web-company suits need not actually wear suits to be suits. But trust me, they’re there. ↩︎
My Dropbox password was itself a 1Password-generated litany of nonsense. Without access to Dropbox, I couldn’t get my [1Password] keychain. Without my keychain, I couldn’t get into Dropbox.
My email and Dropbox passwords are both unmemorized 1Password gibberish. To prevent this scenario when I started using 1Password, I printed these two passwords onto two different pieces of paper, unlabeled and inconspicuous, and hid them in safe places.
From a clean install, with access to my email and Dropbox, I can get 1Password up and running to unlock everything else I need. (In theory, I could just do this for the Dropbox password, but it makes me feel more comfortable to have emergency email access, too.)
If you use 1Password or similar password generators, evaluate your contingency plan: if all of your computers and devices were stolen, destroyed, or rendered inoperable suddenly, and you had to start fresh from a completely clean setup, can you get through your own security measures?
This is especially important for data-encryption passwords and keys, since there usually isn’t a customer-service department you can call to reset those.
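For the curious, the kind of "unmemorized gibberish" password described above is easy to generate yourself. This is a minimal sketch (not 1Password's actual generator, just the same idea) using Python's cryptographically secure `secrets` module; the alphabet and length are my own arbitrary choices:

```python
import secrets
import string

def gibberish_password(length=24):
    """Generate a random password from letters, digits, and symbols,
    using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Print once, write it on paper, and never store it digitally.
print(gibberish_password())
```

The point of the contingency plan still applies: a password like this is unmemorizable by design, so the paper copy in a safe place is what saves you.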
On this week’s podcast: Twitter’s controversial API-policy changes, why they want (and need) to enforce them, their incentives and motivations, why this is different from Apple’s rule over the App Store, the challenges of a decentralized Twitter replacement, and worms.
As with Instagram, the change does not seem to have affected sharing outwards, but will impede users’ ability to follow friends that they may know from Twitter. This is part of Twitter turning the screws on sharing information about the users of its network out to other services like Instagram (and by extension, Facebook) and Tumblr.
First, a huge disclaimer: I worked at Tumblr for its first four years, and I’m still a shareholder. But I had no knowledge of this issue before reading this article, and I haven’t asked anyone at Tumblr about it.
That said, purely from the outside, this looks like some serious bullshit on Twitter’s part. When they cut Instagram off, the between-the-lines explanation was because of Twitter’s rocky history with Facebook, with Facebook’s previous blocking of Twitter. It was a dick move to everyone using Instagram, but I guess Twitter’s rationale was “two dick moves make a right.”
I’m not aware of any such history with Tumblr, so I can’t think of any reasonable explanation for Twitter’s motives here other than the obvious one: Twitter will now only permit large services to add value to Twitter, not get any value from it.
And Tumblr was (is?) even working with Twitter to be a major Cards partner.
So when Twitter says something like “[developers] will need to work with us” to “identify areas of value” when they get big enough for Twitter to notice, I don’t have high hopes for what “working with us” might actually entail.
The problem with this solution is that Twitter was built on the backs of the very developers it is now blocking. It now expects those developers to continue supporting Twitter by syndicating content into its platform, but it no longer wants to provide any value to developers in return. This is an extremely dangerous position because it creates resentment in the minds of the people most likely to influence the future. When the disruptive competitor comes along – when, not if – who are the developers going to side with?
Twitter’s current wager is that they’re too big for developers’ feelings about them to matter.
John Gruber suspects that the upcoming iPhone and iPad launches will be separate events:
I don’t think Apple would want reviews of both a new iPhone and new-size iPad appearing at the same time. Why share the spotlight? Why have another Apple product battling with the iPhone for the top spots in news coverage?
Harry Marks rethinks his blog’s role after tiring of low-quality, bulk-posting tech sites:
These websites perpetuate a myth that they are well-informed, knowledgeable news outlets that tell the world what it needs to know. What I’ve learned, however, is just the opposite: they’re ad-driven FUD machines that run on pageviews stolen from attention-deficient readers who would rather digest a shocking headline on a digital tabloid than read thoughtful commentary provided on an actual news site.
The biggest losers here are consumers. If the verdict stands, then the costs of the judgment will be reflected in the cost of mobile devices. Furthermore, other manufacturers will feel the need to buy Apple’s official permission to build useful phones, passing down the possible $20-per-handset fee.
I disagree that “useful” phones need to be so close to the iPhone that they run into Apple’s patents and trade-dress claims in the Samsung case.
I also don’t buy the “we’ll have to pass the costs along” argument. Businesses always say that to scare people, usually government regulators via their voters, into maintaining the status quo and avoiding additional regulatory, safety, or environmental costs that are usually better for consumers.
Smartphone and “tablet” manufacturers will keep doing what they always do: sell us their products at the highest prices they can possibly charge for them to maximize total revenue.
Maybe we’ll pay this theoretical “extra $20” in patent-license fees for our smartphone up front, a surcharge less than any carrier in the U.S. will charge to “activate” it, because it’s a drop in the bucket relative to the $2,000-over-two-years contract. In that case, this discussion is moot.
Or that extra $20 is significant, we won’t pay it, and the manufacturers will find a way to save $20 somewhere else to remain competitive and continue selling us their products that are so close to the iPhone that they run into these patents.
And it’s possible that the next great phone, the one that shames the iPhone the same way that the iPhone buried the Blackberry, will never make it to market. Designing and selling an advanced smartphone just became a dangerous business.
Apple’s claims from this case aren’t very far-reaching. What they won, effectively, is a weapon to use against anyone who copies a narrow set of behaviors, appearances, and packaging designs.
If Samsung wasn’t so blatantly idiotic about copying so much from the iPhone, Apple wouldn’t have won so many of their claims. In fact, Apple lost most of their more generic, less-blatantly-copied iPad claims.
Google has already sidestepped most of Apple’s interface-behavior patents with the newest versions of Android, which might eventually be used by more than a handful of customers. And Android is much more of an iPhone-ripoff “iOS-inspired platform” than Windows 8, which has avoided almost all relevant Apple patents.
What’s really going to disrupt the iPhone is going to be something completely different, not something that tries so hard to clone the iPhone that it hits Apple’s patents.
Unoriginal manufacturers will need to pay for their unoriginality. The most reasonable course of action, therefore, is to truly innovate and design products that aren’t such close copies.
I just returned from a five-day trip in which I worked a lot, doing significant amounts of writing, web development, and especially iOS development. And I did it all on my base-model Retina MacBook Pro: the $2199, 2.3 GHz model with “only” 8 GB of RAM and a 256 GB SSD.
This is the best computer I’ve ever used. And I can say that with no hesitation, qualification, or equivocation.
It still can’t be my primary computer, and in that sense, I can’t say it’s the best computer for me, necessarily. But that’s mostly only because I’m a picky asshole who doesn’t like a cable-covered desk, clamshell mode, dual-monitor annoyances, or external hard drives, yet I don’t mind the cost and inconvenience of having both a desktop and a laptop.
For most reasonable people’s needs, this can easily be their only computer. And if some crackpot legislators passed a law tomorrow that everyone could only have one computer, I’d definitely pick the Retina.
To be quite honest, the hardware in the rMBP isn’t enough to deliver a consistently smooth experience across all applications. At 2880 × 1800 most interactions are smooth but things like zooming windows or scrolling on certain web pages are clearly sub-30fps. At the higher scaled resolutions, since the GPU has to render as much as 9.2MP, even UI performance can be sluggish. There’s simply nothing that can be done at this point - Apple is pushing the limits of the hardware we have available today, far beyond what any other OEM has done.
I used it at the scaled “1680 × 1050” resolution the entire time, since the native “1440 × 900” resolution isn’t enough space for me to comfortably do iPad development in Xcode. I definitely felt the sluggish UI performance.
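Anand’s 9.2MP figure is straightforward arithmetic: in the scaled modes, OS X renders the desktop at double the selected resolution and then downsamples to the panel’s native 2880 × 1800. A quick calculation (my arithmetic, based on the publicly known display modes):

```python
# Pixels the GPU must render for each Retina MacBook Pro display mode.
# In the scaled modes, OS X draws at 2x the selected resolution and
# then downsamples to the panel's native 2880 x 1800.
modes = {
    "native (1440 x 900)":  (2880, 1800),  # 5.2 MP: the panel itself
    "scaled (1680 x 1050)": (3360, 2100),  # 7.1 MP
    "scaled (1920 x 1200)": (3840, 2400),  # 9.2 MP: Anand's worst case
}

for name, (w, h) in modes.items():
    print(f"{name}: renders {w * h / 1e6:.1f} MP")
```

So the “1680 × 1050” mode I used makes the GPU push about 36% more pixels than the native mode, which is consistent with the sluggishness.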
But that’s the only negative point I can make, and it’s really not that bad.
With mostly CPU-bound tasks, the Retina is not technically as fast as my “new” 2010 3.33 GHz 6-core Mac Pro, but in real-world use, it’s close enough to not notice the difference most of the time. And I have the base model. (The three available CPUs are all within about 10% of each other’s real-world performance, so I opted for the lowest-end one to keep the cost reasonable, minimize heat, and maximize battery life.) The long-standing performance gap between the Mac Pro and the MacBook Pro is now effectively moot, except for that UI performance issue.
In addition to the value of the high performance and potentially available screen space, iOS development is especially great on the Retina because the Simulator runs natively at Retina density. So you see your app in the Simulator exactly as it will appear on any modern iOS device. (Except maybe that rumored non-Retina iPad Mini in October.)
While people accustomed to MacBook Airs probably think the 4.5-pound Retina is big and heavy, I’ve found it to be noticeably thinner and lighter than the previous 15” MacBook Pro. It no longer feels like a “big” laptop — few would argue if Apple just called it the 15” MacBook Air. Given this size reduction and the huge increase in power and usefulness over the MacBook Airs, I no longer wish that I was carrying a 13” Air instead: the Retina is small and light enough to alleviate that desire, especially knowing what I’m getting for the additional mass.
And the screen.
I thought, having previously used Retina screens on my iPhone and iPad, that I had a pretty good idea of how good a Retina screen would be on a laptop.
I was wrong. It’s far nicer than I expected. And after five days of only seeing Retina screens, the 30” HP ZR30w on my desk really looks like garbage. Huge, spacious garbage.
My only regret about the Retina Display is that I can’t buy a standalone one for my desk, and this one’s not big enough to just prop up the laptop on a stand and use it as the only monitor in a desktop setup.
We’re obviously in the middle of two awkward transitions: toward all-Retina screens, and toward all-SSD storage. The difficult computer choices that many power users will struggle with will probably be much easier in 2–3 years, when even the most die-hard desktop users can probably get a MacBook Something with a priced-within-reach 2 TB SSD and an external 27” Retina Display for desktop use.
Today, that’s just a fantasy. So while this Retina MacBook Pro is the best computer I’ve ever used, I’m also impatiently waiting for the day when I can comfortably make one of these my only computer, and it’s not there yet.
In the meantime, I can definitely see why so many people are migrating from Mac Pros to the Retina MacBook Pro. When the next Mac Pro comes out, I might not want it.
A good one this week: using abandoned software, Twitter’s 100,000-token limit in practice, how Twitter and Dropbox would need to change to become $40 billion companies, revealing development plans in advance, Instapaper’s operating-cost breakdown, and home network wiring.
We’ve been working with Twitter over the last few days to try to work around this limit for the duration of the beta but have been unable to come up with a solution that was acceptable to them. Because of this, we’ve decided it’s best for us to pull the alpha. …
Just to be perfectly clear, Tweetbot for Mac will still be available for sale in the near future, we are just stopping the public part of the alpha/beta testing.
The language in this is so businessy that I barely even understand what they’re saying. Here’s my parody edit, containing only the mumbo-jumbo:
innovative products and services to businesses and organizations the Twitter ecosystem, brands, publishers, nonprofits, governments help them engage with their audience we’ve identified areas where we see huge demand for innovation platform program launching with three verticals based on needs we see from partners: Engagement Products, which help brands, Analytics Products, which help businesses, Data Reseller Products, which serve as platforms for innovation.We’ve summarized the program’s verticals along with example functionality to indicate features that businesses and Twitter find interesting.As new opportunities surface, we’re launching with partners unique product that fits the product verticals, solves a market need and has the potential for a large impact, please verify that your product meets all of the Program Requirements, and then head to the Program Application to let us know your interest may be a fit, we are very interested to learn more about your products and will get back to each application as soon as we can.
The bold sentence in the middle is unmodified.
It’s clear that, at every possible level in the company, Twitter is definitely “open for business”. The business culture has completely taken over.
It’s not nearly as friendly to users, of course. Twitter couldn’t give any less of a damn about us anymore. We’re the product now for their real customers who they’re so busy identifying value for. In the words of the great George Carlin, “It’s a big club, and you ain’t in it!”
In this new photo, an “A6” designation can be seen on the main chip, suggesting that Apple may indeed be rolling out a brand-new chip family with the next-generation iPhone.
Of course they’re using a new chip and calling it the A6. This is probably the easiest prediction to make about the new iPhone.
The A5X is too big and hot to run in the iPhone. Even a die shrink wouldn’t make it ideal.
And the A5X doesn’t have any CPU-power advantage over the A5 — the primary difference is much more parallel GPU power to drive the iPad 3’s Retina display. That’s why CPU-bound operations on screen images, such as visual effects using renderInContext:, are actually much slower on the iPad 3: four times as many pixels are being processed by the same CPU power as the iPad 2.
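The pixel math makes that slowdown concrete. The screen resolutions are public specs; the 4× figure is just arithmetic:

```python
ipad2 = 1024 * 768    # iPad 2 screen, in pixels
ipad3 = 2048 * 1536   # iPad 3 Retina screen, in pixels

# renderInContext: is CPU-bound, so its cost scales with pixel count:
# the iPad 3 pushes four times the pixels through the same CPU power.
print(ipad3 / ipad2)  # -> 4.0
```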
There’s not much need for a big, hot, power-sucking, GPU-only boost to the iPhone.
But there is demand for more CPU and GPU power together, in a small and efficient package suitable for the phone’s size and battery capacity. Of course it will be a new chip, not the A5X, and of course they’d name it the A6.
Reader Adam Lacy sent this via email, reprinted here with permission:
I was reading an article about the new 84” Toshiba 4K TV and it got me thinking about how 4K relates to Retina Display PPIs.
I wanted to speculate on the possibilities for the future iMac/Cinema Displays. In doing so I came across some interesting math.
4K = 3840 × 2160
If you work out the PPI for 4K at 27 inches it conveniently comes out to 163 PPI.
That’s pretty close to the 165 PPI for the iPhone 3GS and the speculated iPad Mini.
Apple’s been having this display PPI manufactured for a long time now and I bet they’ve been testing cuts at 26.7 inches (almost 27).
This would be exactly 4K (3840 × 2160), it would be 165 PPI, same as iPhone 3GS/iPad mini, and would also be exactly 1.5 times the size of the current iMac’s resolution of 2560 × 1440. Not doubled, but I think that matters less for OS X.
I did some measurements myself: I sit about 24” away from my 30” desktop monitor. If I sat approximately as close to a 26.7” Retina Display, the effective pixel size is similar to the iPad 3 at my viewing distance from it, too. It would barely qualify as “Retina”, but if it had a scaling mode to give me the same space as 2560 × 1440 (much like the Retina MacBook Pro’s simulated 1680 × 1050 and 1920 × 1200 modes), I’d take it.
By the time Thunderbolt is fast enough to send 4K video (some have speculated that this will come with Thunderbolt 1.2 in 2014) and GPUs are fast enough to render the interpolated modes without significant lag, I bet it’ll be surprisingly affordable to manufacture such a panel.
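Adam’s numbers check out. A quick sketch of the PPI arithmetic (diagonal pixel count divided by diagonal size):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a panel of the given resolution and diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(3840, 2160, 27.0), 1))  # 4K at 27":   163.2
print(round(ppi(3840, 2160, 26.7), 1))  # 4K at 26.7": 165.0
print(round(ppi(480, 320, 3.5), 1))     # iPhone 3GS:  164.8
```

And 3840 × 2160 is exactly 1.5 × the current iMac’s 2560 × 1440 in each dimension, as he says.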
I saw two curious entries in Instapaper’s device stats today: one iPad2,5 and one iPad2,6.
(There were also a few iPhone5,1 devices, but that’s not a surprise — that’s almost certainly next month’s new GSM iPhone.1)
These device models, as reported by the OS, could be faked by a jailbreaker with enough free time.2 But I’ve never had a device show up there that didn’t end up being a real, about-to-be-released Apple device.
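These identifiers follow a simple family-name-plus-major,minor scheme, which is why stats like mine can spot new hardware before it’s announced. A minimal parsing sketch (my own code, with the known iPad 2 variants filled in; an identifier that parses but isn’t in the table is the interesting case):

```python
import re

def parse_model(identifier):
    """Split an Apple model identifier like 'iPad2,5' into
    (family, major, minor). Raises ValueError on malformed input."""
    m = re.fullmatch(r"([A-Za-z]+)(\d+),(\d+)", identifier)
    if not m:
        raise ValueError(f"unrecognized identifier: {identifier}")
    family, major, minor = m.groups()
    return family, int(major), int(minor)

# Known iPad 2 variants:
known = {
    ("iPad", 2, 1): "iPad 2 Wi-Fi",
    ("iPad", 2, 2): "iPad 2 GSM",
    ("iPad", 2, 3): "iPad 2 CDMA",
    ("iPad", 2, 4): "iPad 2 Wi-Fi (32nm)",
}

print(known.get(parse_model("iPad2,5"), "unknown -- a new device?"))
```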
iPad2,1 through iPad2,3 are the iPad 2’s original Wi-Fi, GSM, and CDMA models, respectively.
iPad2,4 is the 32nm die-shrunk update that quietly replaced the 16 GB Wi-Fi iPad 2 when the iPad 3 was released, yielding better battery life and lower cost, and probably partly responsible for the iPad 2’s price drop to $399. From AnandTech’s detailed review, this remark now seems prescient:
It’s rare these days that we actually see a pure die shrink anymore. With Intel’s tick-tock model we almost always see increases in functionality to accompany each process node shift. … With Apple’s 32nm A5 however, we truly end up with a die shrunk version of the 45nm A5 SoC. About the only part of the computing world where we see these pure shrinks is in the console space where performance doesn’t have to go up within a generation, but cost must go down.
As far as I know, this was the first time Apple invested in a die shrink mid-cycle for any of the iOS devices. They haven’t even done it for the still-sold iPhone 3GS or iPhone 4. The decision to revise the iPad 2 internals, therefore, seemed a bit odd at the time, but makes a lot more sense now.
The iPad2,5 and iPad2,6 could be boring: GSM and CDMA versions of the die-shrunk iPad 2 so the shrink isn’t only available on the Wi-Fi model, bringing lower costs to the other iPad 2 configurations that are still for sale. But this late in its lifecycle, that would be a pretty strange move.3
The much more likely explanation is that iPad2,5 and iPad2,6 are the new “iPad Mini” in Wi-Fi and GSM, and I haven’t recorded the likely iPad2,7 CDMA version yet.4
If so, this suggests that the iPad Mini is, effectively, an iPad 2: an A5 with 512 MB of RAM and enough GPU power to drive the Gruber Display, but not a Retina Display.
It’s a textbook Tim Cook supply-chain move: selling the last generation’s hardware at a lower price point to expand marketshare.
But this time, it’s more dramatic. Rather than just sell the original iPad 2 with a price cut, they’ve made a new product designed to be far less expensive from day one by combining old and new parts: the 32nm iPad 2’s guts, larger-cut iPhone 3GS screens, a smaller case and battery, and the new iPhone’s low-power LTE chip for $100 more.
This is all speculation, of course, but I’m convinced: like the leaked Dock connector, this move is so ingenious that it’s most likely to be what Apple has really done.
I bet they could sell that for $249, and that would be a steal. The iPad 2 is still great by today’s standards, and in some ways,5 it’s actually better than the iPad 3.
This is going to be an interesting two months.
The sixth iPhone bearing an iPhone5,1 model identifier isn’t going to help the “it’s not the iPhone 5” cause. (The numbering is off because the iPhone 3G was model iPhone1,2.) ↩︎
But who would fake an oddly numbered iPad 2? A faker would almost certainly choose iPad4,1. ↩︎
Update: Some readers have suggested that these iPad2,5 and iPad2,6 model identifiers could just be another standard iPad 2 revision with the new Dock connector. That’s also possible, but I think it would be just as strange and unlikely: that’s a major revision for a product that might only be sold for another 6 months. And I’d probably also see an iPad3,4, iPad3,5, and iPad3,6 representing the similarly revised iPad 3 models.
I think the more likely explanation is that the iPad 2 and iPad 3 won’t be updated to the new Dock connector — the next 10” iPad will get it, and the iPad 2 never will. The iPad 2 might even be quietly discontinued after the Mini is released and existing stock is depleted. ↩︎
Update: As many have pointed out, the iPhone 4S has a universal radio and doesn’t have separate model identifiers for GSM and CDMA. But the iPad 3 does. If the iPad Mini has a universal radio, there may not be an iPad2,7. ↩︎
It’s thinner, lighter, cooler-running, faster at some graphics operations, and much faster to recharge. And with the 32nm shrink, it has a noticeably longer battery life. ↩︎