Economy of the Cloud

Ed Felten at Freedom to Tinker recently posted an article about the economics of cloud computing (partially in response to last week’s New York Times op-ed about the cloud). While I agree with Felten’s main contention – namely, that the cost of resource management is a driving force in the movement towards the cloud – I would argue that other, more important factors are also at play.

Within the last few years, startups and small companies have focused increasingly on web- and phone-based applications, from which customers expect much higher uptime, performance, and reliability. Even beyond the cost factor, there’s no real reason for a small company to grow its own servers* when alternatives exist with much better guarantees on all three of those expectations. This is particularly true near the start of an application’s life cycle, when the infrastructure required to run effectively is often disproportionate to the size of the actual user base – if a small company were to run its own servers, it would have to endure remarkably low resource utilization (see the rough sketch after the list below). But even with established applications, it makes much more sense from a practical standpoint not to waste CPU cycles during low-traffic times of the day or year. So ultimately I see the movement to the cloud as being driven mostly by four separate factors:

  • The desire for greater reliability
  • The fact that most companies would prefer to focus on their primary objective (e.g. developing software) rather than working on supportive infrastructure
  • The fact that servers shared across multiple applications with different usage patterns will always be used more efficiently
  • The fact that the cloud has become both trendy and easy to use
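To put a rough number on the utilization point above, here is a minimal back-of-the-envelope sketch; the traffic profile and capacity figures are entirely made up for illustration, not drawn from any real application.

    # A rough, made-up illustration: utilization when a small company
    # provisions its own servers for peak load. None of these numbers
    # come from a real application.

    peak_capacity = 200  # requests/sec the hardware must handle at peak

    # Hypothetical requests/sec for each hour of a day
    hourly_load = [5, 3, 2, 2, 3, 8, 20, 60, 120, 150, 170, 180,
                   190, 200, 180, 160, 140, 120, 90, 60, 40, 25, 15, 8]

    average_load = sum(hourly_load) / len(hourly_load)
    utilization = average_load / peak_capacity

    print(f"average load: {average_load:.0f} req/s")
    print(f"utilization of peak-provisioned hardware: {utilization:.0%}")
    # ~41% with these numbers -- and far lower early in an application's
    # life, when the hardware baseline is fixed but the user base is tiny.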

From the consumer standpoint, of course, there’s also the convenience factor – most of us have multiple devices (laptops, PDAs, workstations, etc.) from which we regularly need to access central data, and it’s much easier to store that data somewhere we don’t have to think about. True, cloud computing does put a slightly greater load on client-side resources (most notably bandwidth), but those resources are generally cheap and available enough that, with a bit of ingenuity on the part of cloud application developers, the load is rarely noticeable. The reliance on network connectivity will generally be a greater problem.

It’s also not really true that server-side infrastructure is cheaper than client-side; cloud providers just have a lot of unused infrastructure sitting around for most of the year, and so are able to lease it out relatively cheaply (or exchange it for advertising). Given the cost of infrastructure, it would be very difficult to actually turn a profit starting out as a cloud provider. Most estimates I’ve seen say that doing so would require such a high utilization level that your servers would be pretty much guaranteed to be overloaded during high-traffic times.
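For a sense of why that is, here’s an equally hedged sketch of a would-be provider’s break-even math; the cost and price figures are invented purely to show the shape of the calculation.

    # Made-up break-even calculation for a hypothetical new cloud provider.
    # Both figures below are assumptions for illustration only.

    cost_per_server_hour = 0.30   # amortized hardware, power, space, staff ($)
    price_per_leased_hour = 0.40  # what a customer pays per instance-hour ($)

    # The provider pays for every hour the server exists but is only paid
    # for the hours it actually leases out, so it breaks even when:
    break_even_utilization = cost_per_server_hour / price_per_leased_hour

    print(f"break-even utilization: {break_even_utilization:.0%}")
    # 75% here. Sustaining an average that high over a year leaves little
    # headroom for traffic spikes, which is exactly the overload risk above.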

On another note, I’ve always been a bit surprised that more development isn’t done in the cloud. If the driving force towards cloud computing were the cost of resource management, development would seem a natural candidate, since setting up proper and consistent dev environments is often a pain. I was talking about this with a developer at Aprigo during an unconference a few weeks ago, and he mentioned that one of the main reasons they don’t develop in the cloud is that it doesn’t allow for reliable performance benchmarks. That certainly makes sense from a testing standpoint, but it still seems to me that there are plenty of areas that could benefit from cloud migration.

* Servers are like plants. You should water them every day so they’ll grow healthy and strong.
