I’ve been following Sean Park’s blog for about a year now, and I find his insights consistently interesting. His latest post, on Amazon Web Services, is typical in that he connects several different innovations to speculate about the future. What follows is my response to the ideas and issues he raises.
The latest development is Amazon’s launch of ‘spot’ pricing for EC2 instances. In short, this means that the price for computing time can vary over time and customers can bid for that time. If their bid is equal to or higher than the current spot price, they get their time. If it’s lower, they don’t - their currently running instances are shut down until they increase their bid or the spot price falls below their current bid. The customer’s bid is only a maximum that they would be willing to pay, so if the spot price is lower than the bid then the customer only pays the spot price. This should enable cost savings for AWS customers who don’t care when their processing happens, possibly at the cost of those who need it now.
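The clearing rule, as I understand it, is simple enough to sketch in a few lines. This is only an illustration with invented customer names and prices, not Amazon’s actual mechanism:

```python
def clear_spot_market(spot_price, bids):
    """Apply the EC2-style spot rule to a set of maximum bids.

    A bid at or above the current spot price keeps running and pays
    the spot price (not the bid); anything lower is shut down (None).
    """
    results = {}
    for customer, max_bid in bids.items():
        if max_bid >= spot_price:
            results[customer] = spot_price  # runs, pays spot, not the bid
        else:
            results[customer] = None        # instance is shut down
    return results

# Hypothetical bids against a spot price of $0.04/hour:
outcome = clear_spot_market(0.04, {"alice": 0.05, "bob": 0.03, "carol": 0.04})
print(outcome)  # alice and carol run at 0.04; bob's instance stops
```

Note that alice pays 0.04 despite bidding 0.05 - the bid is a ceiling, and the spot price is what everyone who clears actually pays.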
As Sean points out, this is only the first baby-step towards something that we might recognise as a market. The spot price is apparently set unilaterally by Amazon, presumably by their own algorithms monitoring the capacity available on EC2. I would argue that there is a competitive market in cloud computing capacity at present, but it is a considerably less fluid market than it will be in years to come. What I think Sean and others have in mind for the future is a fluid market in which prices shift constantly in response to supply and demand across many different cloud providers, with real-time decisions being made - probably automatically - to re-allocate computing tasks in response.
The current market in computing power is impeded by technological costs - it’s not always easy to move a computing task from one cloud provider to another. The different cloud providers have sufficiently different products - in terms of SLAs, connectivity, bandwidth, processing power and other available resources - that it’s not trivial to determine which is offering the best value. For example, some computing tasks depend on having low-latency network access, which makes geographical and network location important. This makes commodification non-trivial (and can mean that sometimes, “a compute cycle is a compute cycle is a compute cycle” is not true).
I suspect that the market in computing power in the future will not move towards the idea of a single commodity or even a small number of commodity products. There will be many products, differentiated by a range of technical factors. As cloud computing matures, the needs of customers will become more varied, and this will mean that some products may be unsuitable for certain needs. It’s not hard to imagine how some customers may place a premium on security, others on network latency, others on available system RAM and others on energy efficiency - and, of course, on price. Although any of the available products could ‘get the job done’, there will be big differences in suitability.
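To make the idea of differing premiums concrete, here is a toy sketch in which two customers weight the same attributes differently and so rank the same products differently. All of the product names, attribute scores and weights are invented for illustration:

```python
# Hypothetical products, each scored 0-1 on a few attributes
# ('price' here means cheapness: 1.0 would be the cheapest option).
products = {
    "low-latency": {"latency": 0.9, "security": 0.5, "efficiency": 0.4, "price": 0.3},
    "hardened":    {"latency": 0.4, "security": 0.9, "efficiency": 0.5, "price": 0.5},
    "green":       {"latency": 0.5, "security": 0.6, "efficiency": 0.9, "price": 0.7},
}

def rank(weights):
    """Order products by a customer's weighted preferences, best first."""
    score = lambda attrs: sum(weights[k] * attrs[k] for k in weights)
    return sorted(products, key=lambda name: score(products[name]), reverse=True)

# A latency-sensitive customer and a security-sensitive one:
trader = {"latency": 0.7, "security": 0.1, "efficiency": 0.0, "price": 0.2}
bank   = {"latency": 0.1, "security": 0.7, "efficiency": 0.0, "price": 0.2}
print(rank(trader)[0], rank(bank)[0])  # low-latency hardened
```

Every product here could ‘get the job done’, but the two customers’ top choices differ - which is exactly why a single commodity price would throw away information the market needs.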
So, that’s my guess. There are, I suppose, some reasons why I might be wrong. Perhaps the infrastructure required to discover and access the most suitable computing resource will be too expensive. We might be better off with a simple market with less choice, where computing time is packaged in standard units: at present, Amazon offers 21 products - seven sizes of EC2 instance multiplied by three available locations. We might imagine that competitors will have to mirror this structure and will offer a similar geographical breakdown in order to compete on a like-for-like basis with EC2. Perhaps customer needs will be less varied than I imagine, and the majority of demand can be met by the supply of a small number of commodity products delivered with high efficiency.
But if I’m right, then we’re going to have a very complex market. Each customer will have unique requirements and will need to be able to explore the available products based on their suitability according to multiple criteria. Provisioning across a wide variety of platforms will be a considerable technological challenge (though CohesiveFT seem to be doing good work here) and there is definitely scope for the development of a complex exchange (or multiple exchanges?) where bidding for computing time can take place, with appropriate agencies in place to handle provisioning once bids are accepted.
The ultimate aim should be to make the workings of this system entirely transparent to the people using it. Say I’m looking at pictures I’ve taken on an iPhone and I want to stitch these together with pictures taken at similar locations by other people to generate a 3D walkthrough of a particular locale - a fairly intensive, though technically feasible, task. For this I will need a certain amount of processing time and RAM, and a certain amount of bandwidth and data transfer. Assuming (with some magic involved, I admit) that my software can create an accurate estimate of my needs, it should be able to automatically go to the exchange and locate the cheapest provider that satisfies the requirements (a VRM system!) and automatically provision the required computing time. A short while later, I’m browsing a 3D landscape on my phone.
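The lookup step in that scenario - take an estimate of the job’s needs, filter the exchange’s listings down to those that satisfy it, and pick the cheapest - can be sketched directly. The listings, provider names and resource figures below are all invented:

```python
# Hypothetical exchange listings; providers and prices are invented.
listings = [
    {"provider": "cloud-a", "cpu_hours": 2.0, "ram_gb": 4,  "bandwidth_mbps": 50,  "price": 0.40},
    {"provider": "cloud-b", "cpu_hours": 2.0, "ram_gb": 8,  "bandwidth_mbps": 100, "price": 0.55},
    {"provider": "cloud-c", "cpu_hours": 4.0, "ram_gb": 16, "bandwidth_mbps": 200, "price": 1.20},
]

def cheapest_match(needs):
    """Return the cheapest listing meeting every estimated requirement, or None."""
    ok = [l for l in listings
          if l["cpu_hours"] >= needs["cpu_hours"]
          and l["ram_gb"] >= needs["ram_gb"]
          and l["bandwidth_mbps"] >= needs["bandwidth_mbps"]]
    return min(ok, key=lambda l: l["price"]) if ok else None

# The photo-stitching job's (software-estimated) needs:
job = {"cpu_hours": 1.5, "ram_gb": 8, "bandwidth_mbps": 80}
choice = cheapest_match(job)
print(choice["provider"])  # cloud-b: cheapest listing that meets every need
```

The hard parts, of course, are the two things this sketch assumes away: producing the estimate of the job’s needs in the first place, and the provisioning step once a match is found.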
The uses for corporate customers are probably where the real value lies, but when individual consumers have seamless access to computing power in this way then we will know that we have a fully functioning and fluid market.