The Fast Pace of Getting Left Behind

In hardware and software, fragmentation is inevitable. Eventually newer software demands too much of older hardware, and the older hardware has to go unsupported. Sure, this can happen artificially, when the software or hardware maker doesn't want to put forth the effort to support the past. The decision can also be made to protect the experience on a device. No one wants to run software that performs poorly because the hardware can't keep up.

Normally this retirement process takes years. But as technology moves forward at an ever-increasing pace, the span between hardware debut and retirement is shrinking faster than ever. Sometimes retirement is done out of necessity, and other times artificially.

Let's take the Mac. Apple tends to support Mac hardware with new software for about five years. In this day and age, that is plenty reasonable in my book. Apple's approach is to support the hardware until it becomes a technological burden to the advancement of the software. The chief exhibit is the current version of Mac OS X — Snow Leopard. Snow Leopard cut off support for the old PowerPC architecture. Folks with PowerPC Macs are stuck on Leopard until they buy a modern Mac.

Why did Apple need to do this? Because supporting older hardware was eating up too many development resources for newer software. Eventually you have to stop supporting things you no longer make. When Apple cut off PowerPC support, Mac OS X went from an installed hard drive footprint of around 13 GB to 6 or 7 GB. The result was a faster, leaner operating system.

In the upcoming Mac OS X revision — Lion — Apple will be dropping support for 32-bit Intel processors, which powered the first Intel Macs. Again, these Macs are five years old. And the reason this time is to avoid maintaining support for both 32-bit and 64-bit processors, especially at the kernel level. The goal is to be faster and leaner.

Now, let's look at iOS. This is a whole different ballgame, as mobile development is moving so much faster than desktop and notebook development. There have been four iPhone and iPod touch generations. The current generation of these devices is leaps and bounds faster and more efficient than the first-generation models. Yet Apple supported first-generation devices through the third OS revision. With iOS 4, Apple finally pulled the plug on those first-generation devices, because the software had truly outstripped the hardware.

Here is where Apple made a bit of a misstep, though. They were still selling the second-generation devices at discounted, entry-level prices just before iOS 4 shipped, so they felt obligated to support them. That didn't work out so well, because the second generation of handheld iOS devices shared much of the same hardware as the first. These devices performed poorly on iOS 4, and Apple scrambled to optimize performance for them in 4.1 and 4.2. But it really wasn't enough. So with iOS 4.3, Apple pulled the plug on support for second-generation hardware, which I am sure they didn't want to do until iOS 5.

What I've described above for iOS is only one side of the coin. Those were necessary hardware retirements. That isn't to say that Apple hasn't artificially retired feature improvements along the way. For instance, iOS 4 brought along Game Center integration. It was included on the second-generation iPod touch, but not the second-generation iPhone. I can't imagine that was truly a hardware limitation. Or how about this: iOS 4.3 brought Personal Hotspot to the iPhone 4, but not to the iPhone 3GS, despite the fact that jailbreakers can run Personal Hotspot on the iPhone 3GS. Are artificial limitations a jerk move? Yeah, they are. And everyone can be a jerk at times.

Finally, let's look at Android. Android has been the prime target of the fragmentation blame game. And it often seems like it has been earned. But who is really to blame? Google? Or the carriers? I say a little of both. Vlad Savov wrote on Engadget over the weekend:

Where the trouble arises is in the fact that not all Androids are born equal. The quality of user experience on Android fluctuates wildly from device to device, sometimes even within a single phone manufacturer's product portfolio, resulting in a frustratingly inconsistent landscape for the willing consumer. […]

The point is not that carrier or manufacturer customizations should be abandoned entirely (we know how much those guys hate standardization), it's that some of them are so poor that they actually detract from the Android experience. Going forward, it's entirely in Google's best interest to nix the pernicious effects of these contaminant devices and software builds. The average smartphone buyer is, ironically enough, quickly becoming a less savvy and geeky individual and he (or she) is not going to tolerate an inconsistent delivery on the promise contained in the word "Android."

And this is exactly how things are in the Android world. There isn't a uniform experience standard. Perhaps this is why, according to Bloomberg Businessweek, Google has started handpicking partners to showcase Android, and delaying the source code to everyone else:

Over the past few months, according to several people familiar with the matter, Google has been demanding that Android licensees abide by "non-fragmentation clauses" that give Google the final say on how they can tweak the Android code—to make new interfaces and add services—and in some cases whom they can partner with. […]

Google has also started delaying the release of Android code to the public, putting smaller device makers and developers at a disadvantage. On Mar. 24, Bloomberg Businessweek reported Google won't widely release Honeycomb's source code for the foreseeable future.

The company's moves are hardly unprecedented in such a fast-moving industry. Google owes it to its partners and consumers to prevent Android from running amok.

Android has been running amok. It is saddening when I hear some friends — who are normal, non-geeky people — lament about how the phone they bought three months ago isn't getting the new features of so-and-so's phone from last week.

As I stated at the beginning, every platform will experience fragmentation. Apple does a pretty good job at mitigating that effect because they control the platform from top to bottom. Google let the main Android experience get out of hand because they have been controlling very little in the grand scheme of things. Why have they been controlling so little? Marco Arment writes:

Nobody “opens” the parts of their business that make them money, maintain barriers to competitive entry, or otherwise provides significant competitive advantages. That’s why Android’s basic infrastructure is “open”, but all of Google’s important applications and services for it aren’t — Google doesn’t care about the platform and doesn’t want it to matter. Google’s effectively asserting that the basic parts of a modern OS — the parts that are open in Android — are all good enough, relatively similar, and no longer competitively meaningful. Nobody’s going to steal marketshare from Google by making a better kernel or windowing API on their competing smartphone platform, regardless of whether they borrowed any of Android’s “open” components or ideas derived from them. But Google’s applications and services are locked down, because those are vulnerable to competition, do provide competitive advantages, and are nowhere near being commoditized.

Unfortunately, Google spent the last few years letting Android's core experience go unchecked, allowing the carriers to decide whether or not to use Google's applications and services, and whether a given phone gets an update at all. Google hasn't given Android a chief place in its own bottom line; it has let carriers use Android to pump up theirs, while sticking it to customers.

It all comes down to this: let the end-user be your customer, and use the carrier as the channel; or let the carrier be your customer, and the end-user is the channel.

Just Like Ripping Off a Band-Aid

It should come as no surprise that many of the apps in the Mac App Store are existing Mac apps — many of them paid apps.

Unfortunately, transitioning to the Mac App Store isn’t exactly a cakewalk for either developers or users. Developers already have users on the old business model of issuing serial numbers and such, and users already own that software. There isn’t a way for developers to easily move customers over to the Mac App Store, so there are a few avenues to travel down:

  1. Continue down the old road — selling and maintaining software through a web store.
  2. Support the web store and the Mac App Store, essentially having two different variations of the same app out in the wild. Or,
  3. Dive head first into the Mac App Store exclusively, and have customers re-buy apps they already own.

I don’t foresee option 1 lasting for very long. Many users, especially customers who don’t fully understand computers, are going to embrace the simplicity of the Mac App Store very quickly. Especially new Mac owners, since the Mac App Store will be their first and primary way of installing apps. One exception is apps that don’t meet the requirements for admittance into the Mac App Store. Those apps will have a lot of extra work ahead of them to remain visible.

Option 2 is great for the short term. This is where many developers will sit for the time being, waiting to see which business model is more successful. Some developers may stick with this method until their next major release, requiring users on the old model to move to the Mac App Store at that time, when they are more likely to be understanding.

And then there are the developers who are bold and take option 3 right away. These are the developers who like to get things over with and rip off the band-aid quickly. It stings, it’s kind of ugly, but in the long run, the agony fades rapidly.

As I was listening to Episode 7 of Build & Analyze, Marco Arment said something profound about the Mac App Store [quoted to the best of my ability]:

“I’ve always heard from developers that payment processing, serial number issuing and recovering, and installation support were always the three biggest support needs. This has solved all three of those. That’s awesome!” -Marco Arment

The developers behind Pixelmator are doing just that — diving head first into the Mac App Store in order to focus on their product. They explain their transition plan for existing customers very well, and sweeten the pot quite a bit. It is advantageous in the long run for existing customers to re-buy Pixelmator, which is now exclusive to the Mac App Store. They aren’t abandoning existing customers, though; those customers will receive free updates right up to version 2. But there will no longer be new customers under the old model. This allows the developers to focus less on supporting a store and more on development.

I mentioned the other day that Realmac Software has moved one of their products exclusively to the Mac App Store. I think they are handling this very well, too, in that they are refunding existing customers, who can then re-buy at a lower price. It has to sting in the short term, but I imagine it will pay off for Realmac in spades in the long term.

Not all developers are transitioning as gracefully, in my opinion. Existing users of CoverSutra are getting left out in the cold at version 2.2.2, with no future updates, while version 2.5 goes exclusively to the Mac App Store. This doesn’t sit well with me, as buyers of CoverSutra 2 were promised free updates up to version 3.0. Instead, users have to re-buy, or sit there knowing they won’t have support. It also doesn’t thrill me that the developer isn’t even apologetic in the slightest. Granted, everything would probably be peachy if the developer had just slapped 3.0 on the app instead of 2.5. I understand her wanting to make a clean cut, but more respect should have been shown for those who bought CoverSutra. Hence, I am voting with my wallet, and not re-buying.

This transitional period of business models is interesting to say the least. It is especially intriguing to see how different developers handle the experience. I expect it won’t be too long until the dust settles.

Kindle's 14-Day Lending Limit

So the other day Amazon introduced the ability to lend Kindle books. Unfortunately, you can’t (yet) do this from an actual Kindle device or app; you must go to the area of Amazon’s site where you manage your Kindle, and do it from there.

The terms of lending, well, suck. First, the book must be eligible for lending (the publisher sets this). If it is eligible, it can only be lent out once. Ever. Also, the lending period is 14 days.

Ben Brooks makes an observation about the 14-day period: it’s hard for an average person to finish a book in 14 days. (Aside: my wife is an exception. She can read a rather large book in a couple of days. I don’t know many folks who can do this, though.)

I have to echo Ben in that it takes me quite a while to finish a book. Usually a month or two.

So, this got me thinking about the 14-day limit and the reasoning behind it. Scenario: you recommend a book to a friend. They could grab a free sample from Amazon, but if the book turns out to be interesting, the sample locks them into buying it. Instead, you “lend” it to them, and that seems ideal because they might get to read the whole book for free. But 14 days is enough for them to get well into the book, and likely not finish it. By that time, they’re hooked, and they end up buying the book anyway. Rinse. Repeat.

In a way, it’s sort of ingenious.