Monday, 4 January 2010

2010: The year the Netbook turns into the Web-book

2010 is set to be a bumper year for Consumer Electronics. With people spending less outside the home they are focusing more on what's inside it, and since just about everyone now has some monstrous TV, it's the little things that count.

2009 was arguably the year of the Netbook. After Asus launched the Eee PC in late 2007, a land-rush occurred last year with virtually every Notebook manufacturer providing an offering. HP's Mini and Acer's Aspire One ranges both did very well, as did Asus's own Eee PC line.

By the end of 2009, however, it had become virtually impossible to buy a Netbook that was truly still a Netbook. Acer, Dell, Asus and HP all fell back into the same tired old routine - bigger, faster, more capacity! And the Netbook experience suffered.

I've made this point before but the power of the Netbook is in the Network - not in how big a hard drive it has. Why do I need a 160GB hard drive when I have Terabytes of NAS and Gigabytes of cloud storage? I don't need 5 USB ports and I certainly don't need Windows.

Having said all of this, the Netbook category has gone ballistic, doubling from 16 to 33 million units sold in 2009 - sales worth about US$11Bn globally (DisplaySearch research for 2009).

My money is on the next generation - so-called Web Books, Slates or Tablets. These devices are attracting serious investment and represent a merging of several types of computing behaviour.

Architecturally, most have a small form factor (10 inches or less); they are either a pure tablet or have a hinge range far beyond a normal Notebook's (the screen can fold back entirely flat on itself, so the device becomes just a screen); they are generally touch capable, often multi-touch; and - the big one - most are not running Windows, instead running flavours of Linux.

Behaviourally, the Web Book is designed to be a piece of Consumer Electronics. It's not a desktop replacement, it's not an office machine. It's a general purpose computer built to be used in the home, and as such it plays on the following:

  • It's relatively small and definitely light.
  • The processor is powerful but not an energy guzzler (Intel Atoms do brilliantly here, as do ARMs).
  • The display is gorgeous with high viewing angles so multiple people can see it.
  • It uses WiFi and may not even have Ethernet connectivity.
  • Solid State Disks are a must but low capacity (you don't need more than 16GB in a machine that is connected to a network), saving on energy.
  • Battery life is a must - the longer the better - so every component is energy efficient.
  • Ideally the screen is touch capable, and ideally multi-touch (eliminating the need for a keyboard).
The device is permanently connected to the network and thus the Internet. It's there to connect with people, view photos, play your tunes, watch movies and read web pages. It's not there to write documents, do full scale design or programming (though people will use it to do this in a limited, fast fashion).

I've been excited about Internet Tablets, Slates, Web Books - call them what you will - since Nokia released the 770, and with Apple, HTC (Google), Litl and others all about to play in this space in a big way over the next few months, there will be a lot of people asking for a Web Book or Internet Slate in their Christmas stocking next year.

Expect to see masses of innovation in this space as companies that have not been too caught up in the Netbook scene enter the fray for the first time and start showing off some new ideas. Litl is already doing this with their awesome easel-style Web Book, and both HTC and Apple will do some great stuff on the user experience end of things too.

2010 will definitely be the year that the Internet goes increasingly mobile both inside and outside the house, and the experience of it becomes more tactile and less bound to the keyboard.

Tuesday, 22 December 2009

Why I'm interested in AWS Spot Prices for EC2

There's been a lot of chatter going on around the intertubes over the last couple of weeks since Amazon Web Services released their Spot Instances pricing model for EC2.

In a nutshell - AWS have created a compute market. Instead of charging every customer the same price for the same product, they have created a market where you bid the maximum price you are willing to pay for compute time, and the actual price paid floats with current demand. Bid 10c/hour when the spot price is 4c and you pay 4c; if the spot price climbs above your bid, your instances are terminated.

There's been some conversation suggesting everyone should just put the current on-demand price in as their maximum and thereby game the system (the comments here for example), however this misses the point slightly. The Clouderati often talk about Utility Computing or Commodification as one aspect of Cloud Computing, and what AWS have done is the logical conclusion of that - they have created a true market for the provision of computing time based on supply and demand.

Now what's interesting about the gaming ideas some commentators have come up with is that they assume everyone is working in the interest of everyone else. That isn't the case. Yes, I know I could get some resource cheaply if I keep my bid low and am willing to wait for a period of time, however I have clients and they have deadlines. That big compute job crunching all the marketing data needs sorting out this afternoon - so I'm going to put a high bid in for 50 nodes NOW! The market will accommodate that and those with low bids will be knocked off. Thus the market constantly corrects to the requirements of demand.

But it's the flipside of this which makes me really interested in EC2 Spot Instances. I can have a battery of servers doing work at little to no cost if I build my system correctly.

The critical element to this is I need to address availability correctly - that is I need to ensure that my entire system doesn't go down because I've been priced out of the market.

This is a really rough idea at this point but I'd love feedback around it - it's obviously based around some kind of online application that requires multiple nodes.

  • I have an instance which is the master. All parts of the stack can fall back to this server if needed.
  • I have CloudWatch or some other monitoring system assessing the performance of my nodes so I can see when I have spare capacity or when I'm over-stretched.
  • The master server has a series of heuristics looking at current workloads and the cost each server is incurring versus the work it is carrying out. Low utilisation at low cost is okay, but low utilisation at high cost would cause alarms to go off.
  • The heuristic set-up makes reference to the on-demand pricing level and strives to always keep each instance below that price.
  • As spot prices go up and over the on-demand price I immediately terminate expensive spot instances and start replacing them with on-demand ones. As the price comes back under, I replace on-demand instances with spot instances.
  • The master server then creates instances as required to fulfil the work units and links them into the system.
  • Each node can be switched off mid-unit, so the entire network is self-healing.
So the only things required to get this up and running now are a reliable system for creating nodes and getting them working in the network as quickly as possible, and a heuristic system to monitor, create and destroy instances based on rules that encode some intelligence around pricing - sketched roughly below.
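To make that concrete, here's a minimal sketch of the control loop in Python, assuming boto (the Python AWS library) and its spot-instance calls. The AMI ID, instance type, prices and worker counts are all placeholders - a real system would also track instance IDs, node health and the work queue.

```python
# Rough sketch of the master's rebalancing heuristic using boto.
# AMI ID, instance type and prices are placeholders, not advice.
import boto

ON_DEMAND_PRICE = 0.085          # $/hr; check current on-demand pricing
AMI_ID = 'ami-12345678'          # hypothetical worker image

conn = boto.connect_ec2()

def current_spot_price(instance_type='m1.small'):
    """Most recent spot price for the given instance type."""
    history = conn.get_spot_price_history(
        instance_type=instance_type,
        product_description='Linux/UNIX')
    return history[0].price if history else None

def rebalance(spot_instance_ids, needed):
    """Keep `needed` workers running at the best available price."""
    spot = current_spot_price()
    if spot is None or spot >= ON_DEMAND_PRICE:
        # Priced out of the spot market: drop the spot workers and
        # fall back to on-demand instances so the system stays up.
        if spot_instance_ids:
            conn.terminate_instances(instance_ids=spot_instance_ids)
        conn.run_instances(AMI_ID, min_count=needed, max_count=needed,
                           instance_type='m1.small')
    else:
        # Spot is cheap: bid just under the on-demand price so we
        # never pay more than we would for on-demand capacity.
        conn.request_spot_instances(
            price=str(round(ON_DEMAND_PRICE * 0.95, 3)),
            image_id=AMI_ID, count=needed,
            instance_type='m1.small')
```

The bid sits just under the on-demand price deliberately: you only ever pay the going spot rate, which is usually far lower, and the bid is simply the ceiling beyond which on-demand capacity becomes better value.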

Not least, the system would need to determine whether a mix of different instance types would be appropriate if there are large distinctions between their current spot prices for given work units. For example:

If we were serving a bunch of web pages using some heavy-duty memcached setup, then RAM is the most important commodity. Say I have an instance with 1.7GB RAM at 3c/hr and another with 7.5GB RAM at 15c/hr: my intelligence system needs to understand that the component (memcached) just needs buckets of RAM, and that five instances at 3c/hr is better value than one at 15c/hr.
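A quick sanity check of that arithmetic (the instance labels here are invented for illustration):

```python
# RAM per cent per hour for the two hypothetical instances above.
instances = {
    'small (1.7GB @ 3c/hr)': (1.7, 3.0),
    'large (7.5GB @ 15c/hr)': (7.5, 15.0),
}
for name, (ram_gb, cents_per_hr) in instances.items():
    print('%s: %.2f GB per cent' % (name, ram_gb / cents_per_hr))

# Five small instances cost the same 15c/hr as one large,
# but deliver 8.5GB of RAM against the large one's 7.5GB.
```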

Importantly, it can then ramp up towards that number based on what is actually required rather than provisioning the whole lot and then under-utilising it.

So I think we're quite a way away from this type of system but my opinion is that this isn't out of the realms of possibility and importantly the market Amazon has created has allowed (I could almost say "is going to force") these types of architectural considerations to start being made.

Interestingly, all of a sudden the decisions I make around infrastructure are going to be much more value based. It's not about ROI - it's about value: am I getting the best value from my infrastructure? IT teams that get this are going to make an absolute killing with the type of services they can offer and the prices they'll be able to offer them at.

Am I off my rocker? I'd love to explore this idea further.

Monday, 21 December 2009

Prediction: 2010 will be the year Apple and Google have a cage fight

The pre-match slanging is pretty much over and the location of the fight has been chosen. 2010 is going to be the year Apple and Google finally stop dancing around and actually get in the ring. Unlike a nice clean refereed boxing match (Apple v Microsoft), this is going to be a dirty underground cage fight complete with barbed-wire-wrapped gloves - expect to see a lot of blood on the floor - and fanbois rucking in the concourses.

The ground is, of course, Mobile and the massive dominance both organisations have built in this space over the last 12 months. Mobile is still a fast growing area of communications but smartphones are where it's at. There's no question Apple ignited the world's imagination about what is possible in the mobile space, capitalising on the fact that the fashionability of a phone matters in a way that RIM and Microsoft just didn't get.

Google have taken that to a whole different level with Android which just "gets" what it is to be a data capable and Internet connected phone. Couple this with some fashionability and the stage is set for an almighty fight.

Looking through the AdMob report for November, it's astonishing to see how fast Android has grown in the last 2 months (traffic requests through their network have doubled), but more important was the launch of the Motorola Droid and the whole "Droid Does" campaign. The Droid is one of the fastest selling phones of all time, almost hitting iPhone 3GS sales levels (and the 3GS was working from an installed base upgrading), and now accounts for about a quarter of Android device share - behind only the G1, which has been out since late 2008 - expect to see that change over December.

Now Motorola have entered the fray, and with Samsung and Sony Ericsson both scheduling major launches for Q1 2010 the mobile landscape is going to get increasingly messy, as the iPhone isn't the only great phone out there. Indeed I think Sony Ericsson is going to do a Motorola with the Xperia X10 - it is simply stunning, and Sony Ericsson is a big name in the mobile space, especially in Europe. HTC have had a great lead but 2010 will see Motorola and Sony Ericsson return to some dominance here - and they can fight Apple in the fashionability stakes.

The biggest challenge for Apple is how to combat Google on the phone itself. Outside of iTunes, Apple has little in the way of first party apps for the iPhone, and whilst it has a huge developer network it is definitely alienating them through its App Store management nightmares. Many developers are building for both iPhone and Android - especially those using Web technologies and tools like PhoneGap to cross-package.

A lot of what makes the iPhone really useful are Google applications (native Gmail, Maps and most importantly Search!) - Apple has no way to combat this. Are they going to deny Gmail or Search like they did with Google Voice?

Apps that are available on both platforms and services that are available "in the cloud" (Maps, comparison shopping, etc.) dilute Apple's position, as its only point of differentiation becomes fashionability - and both Sony Ericsson and Motorola have competed against Nokia for over a decade by building highly fashionable phones.

I'm not sure this fight will be a death match but all the signs are there for a battle of epic proportions. Both are likely to be extremely battered by the time they come out the other side and would be wise to hold a little bit in reserve in case Nokia's Maemo platform takes off the way they are expecting it to - at that point things could get really messy.

Friday, 13 November 2009

SPDY could gain acceptance very quickly - with some product innovation

Google have announced some early findings from their research into a faster protocol to reduce the latency caused by good old fashioned HTTP. HTTP was designed as a really simple protocol to deliver (primarily) text content over the Internet, and thus was born the Web.

One of the problems with HTTP is that it only really allows a single request to be serviced at any one time on a connection. The reason this doesn't APPEAR to be the case is that modern browsers create multiple connection threads that connect independently to the server, giving the appearance of things downloading in parallel. It's a neat hack and works because we have good network speeds and fast processors to manage all this multi-tasking. Go back to a Pentium II with Netscape 2 and you'll watch the glacial procession of elements loading in from the top of the page downwards.
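You can see the effect of the browser's workaround with a toy experiment - fetching the same handful of resources serially versus with a thread per request (the URL here is just a stand-in for a page's assets):

```python
# Serial vs. parallel fetching: the trick browsers use to hide
# HTTP's one-request-at-a-time limitation.
import time
import urllib.request
from threading import Thread

URLS = ['http://example.com/'] * 6   # stand-ins for page assets

def fetch(url):
    urllib.request.urlopen(url).read()

start = time.time()
for url in URLS:                     # one at a time, like raw HTTP
    fetch(url)
print('serial:   %.2fs' % (time.time() - start))

start = time.time()
threads = [Thread(target=fetch, args=(u,)) for u in URLS]
for t in threads:
    t.start()
for t in threads:                    # the browser-style workaround
    t.join()
print('parallel: %.2fs' % (time.time() - start))
```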

The Google project page talks a lot about why HTTP pipelining doesn't work and some of the technical architecture behind SPDY which I won't cover here other than to say that it's great we are seeing this type of innovation at the protocol level. What's most interesting for me however is how we get it in production.

There is a lot of nay-saying going on around this, suggesting that because of the size of the Web you'll never get people to shift to a new protocol - HTTP:// won, let's all leave it at that, because there are too many web servers and web browsers to convert. This is what I want to address in this post.

Yes - there are far too many legacy browsers to deal with to make this transition happen quickly. Look how many IE6 browsers are still in use; and we'd also have to shift all the Mozilla users, Chrome users (easy, thanks to forced updates) and Safari users as well. Not to mention all those pesky mobile devices that are springing up.

Dealing with the web servers is a much more straightforward issue. There really aren't that many in the scheme of things. Indeed much of our existing infrastructure already runs multiple servers - Apache alongside a lightweight server like nginx - and this is increasingly common.

As such there's nothing stopping me dropping in a SPDY server alongside my existing infrastructure for those users that can directly access it (Chrome 4, Firefox 5, Safari 6 and IE 10 for example).

But let's not stop there. A network admin could create a software appliance at the firewall or Internet gateway level of the corporate network that takes HTTP requests, turns them into SPDY requests, and proxies the responses back. Now I have doubly fast Internet connectivity without upgrading my connection. For the price of a box, that is well worth it.
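In miniature, that gateway is just a forward proxy that accepts ordinary HTTP from legacy clients and fetches upstream on their behalf. Here's a sketch of the shape of it in Python - the upstream fetch is plain HTTP as a placeholder, which is exactly where a SPDY client implementation would slot in:

```python
# Skeleton of an HTTP-to-SPDY gateway. Legacy clients speak plain
# HTTP to this proxy; the upstream fetch (marked below) is where
# the appliance would speak SPDY to capable servers instead.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class GatewayProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # When used as a proxy, self.path is the full request URL.
        upstream = urlopen(self.path)   # <-- SPDY client would go here
        body = upstream.read()
        self.send_response(upstream.status)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    # Point a browser's HTTP proxy setting at localhost:8080 to try it.
    HTTPServer(('', 8080), GatewayProxy).serve_forever()
```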

For home users we could do the same thing. This protocol is software - it runs on TOP of TCP - so a firmware upgrade to your average Netgear or Linksys home router could get you the same benefits as above. ISPs could push this remotely on certain systems (cable, for example) or provide instructions on how to do it through web, phone or in-person support.

So for all the nay-sayers out there - this is a MASSIVE opportunity to speed up the web and people need to think outside the browser sometimes. QoS was delivered at the router level based on intelligent packet analysis - that speeds up network traffic massively but it's a software change not a hardware one.

I don't think it will be long until we see Netgear and Linksys start promoting this like they did with the WiFi standards, forcing adoption because it makes a great marketing case to do so.

I'll be trying this out in its rawest state to see if we can make it work, and if I can, watch how fast our servers and network gateway get upgraded before I embark on upgrading client machines.

Tuesday, 10 November 2009

AdMob purchase by Google paves way for interesting developer funding

It's just been announced that Google is set to buy AdMob for $750M in an all-stock deal. This is the third biggest purchase Google has ever made (the only two bigger are YouTube and DoubleClick).

AdMob started in 2006, so they have capitalised very well for a 3 year old business. Indeed they've been cash positive for a while now, so this is a great acquisition by Google. The full gory details of the deal can be found here and a press site by Google here.

We know this is all aligned to Google's interest and in particular their big appetite presently for anything Mobile. However this also opens up some enormous opportunities for developers.

This acquisition brings with it some great opportunities for in-application display advertising that is delivered contextually and priced through Google's AdWords auctioning technology. Alongside this I can use the same advertising account to drive ads on the mobile website that complements my application, and standard ads on my main website that provides additional information, community support and so on.

All of a sudden a revenue opportunity opens up that was kind of there previously but wasn't very smart. Over the last 18 months in particular we've watched the rise of free ad-supported applications alongside paid no-ad versions of the same application. I would expect to see a lot more of the ad-supported apps once this deal goes through.

The reason for this is twofold:

1. As a developer I can manage all of my advertising spaces with one vendor. I don't really want to have to deal with all these businesses; I just want to get some beer money for the app I'm spending my non-work hours producing.

2. With contextual ad serving, I can make certain elements of data within the application available and use them to generate calls to the ad server - much the same way AdWords works with a web page or in Gmail. This means the ads served will be more relevant to the content, which should lead to higher click-through, which in turn leads to potentially more revenue for me (see note above about beer money). A sketch of the idea follows.
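Here's what that might look like from the app's side - with the endpoint, publisher ID and parameters all invented for illustration, not the real AdMob API:

```python
# Hypothetical contextual in-app ad request. The endpoint and
# parameters below are made up for illustration only.
from urllib.parse import urlencode
from urllib.request import urlopen

AD_ENDPOINT = 'http://ads.example.com/serve'   # placeholder URL
PUBLISHER_ID = 'pub-0000000000'                # placeholder account

def fetch_ad(content_keywords):
    """Request an ad relevant to what the user is currently viewing."""
    params = urlencode({
        'publisher': PUBLISHER_ID,
        'keywords': ','.join(content_keywords),  # the app's context
        'format': 'banner-320x50',
    })
    return urlopen(AD_ENDPOINT + '?' + params).read()

# e.g. keywords pulled from the tweet the user is reading:
# ad = fetch_ad(['coffee', 'melbourne', 'espresso'])
```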

This makes a lot of sense for advertisers as well. Certain applications have huge amounts of uptake - Twitterrific on iPhone or Twidroid on Android, for example. Imagine having contextual ads served based on the content of your Twitter stream. Twitter might resist it but it could make some serious cash for the app developers.

Overall I think this will really blow the top off mobile advertising. Advertisers who have been a little shy of the mobile space will be comforted by the fact it's Google doing it, app and mobile site developers stand to gain some good funding from it while staying relevant to their audiences, and as the world goes increasingly smartphone mad over the next 18 months this will be worth serious billions within 5 years or so.

Cross posted to Citrus Agency Blog

Thursday, 5 November 2009

Crown Oaks Day Racing Challenge

Last night I was writing code to play around with an idea I had rather than studying the form guide. See, today I am off to the races (Crown Oaks Day at Flemington) with some clients - hence why I should have been studying the form guide and not playing around with Erlang.

So I've decided to try an experiment:

Can the wisdom of the Twitter crowd outperform both blind luck and the bookies' favourites with regard to return on bets during the day's racing?

Now we all know blind luck should lose - betting at random is probably not going to pick a single winner - but the crowd against the favourite should prove interesting.

What interests me most is that betting on a race is actually contributing to an information market. Theoretically, information is held by all the various agents (the people betting); no one has the full picture, but together the market becomes efficient, pushing down the odds on the likely winners and leaving the rest as long shots.

In smaller races, where not so many people are betting, this works: either the favourite or a horse with relatively low odds will win. At large race meetings it breaks down, because a lot of people bet randomly (based on a name, a birthday number, etc.) and that creates a lot of noise in the market.

So here's the challenge.

I'll start this from Race 3 at Crown Oaks Day today.

Each of the races is displayed below with a link to the field list. I'll be making one bet based on complete randomness (a random horse number) and one following the bookies' favourite. I'll then take the majority from messages to my Twitter account (@ajfisher) for that race and place a bet following that. Simply send me a tweet "@ajfisher Race: NUMBER Horse: Name or Number". A sketch of how I'll tally the votes is below.
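For the curious, tallying the crowd is trivial - a rough sketch, assuming the tweets have already been collected (e.g. via Twitter search):

```python
# Tally votes from tweets of the form
# "@ajfisher Race: NUMBER Horse: Name or Number".
import re
from collections import Counter

VOTE = re.compile(r'Race:\s*(\d+)\s+Horse:\s*(.+)', re.IGNORECASE)

def crowd_pick(tweets, race_number):
    """Return the most-tipped horse for the given race, or None."""
    votes = Counter()
    for tweet in tweets:
        match = VOTE.search(tweet)
        if match and int(match.group(1)) == race_number:
            votes[match.group(2).strip().lower()] += 1
    return votes.most_common(1)[0][0] if votes else None

# crowd_pick(['@ajfisher Race: 3 Horse: Faint Perfume'], 3)
```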

Race 3
Race 4
Race 5
Race 6
Race 7

Also you can use the tag #crowd-oaks if you want.

So, can a smaller crowd provide more wisdom and outperform the bookies and complete blind luck on a big race day? Let's find out. It'll be fun either way.

Sunday, 11 October 2009

The only reason why Linux isn't ready for prime time desktop

Okay, so this title's probably a bit misleading as there are a few reasons, but as far as I'm concerned there's only one thing stopping my final transition to desktop Linux for complete, everyday usage.

Presenting

In my job I do a lot of presenting. I give major milestone presentations on projects, I present to the business on things that are going on, I present in pitches where we are attempting to win new business and recently I've started presenting at conferences.

I would not use my Linux desktop (and I have combinations of Ubuntu 9.04, Kubuntu 9.04 and CentOS) to present with at all - even if someone paid me.

Before I say why, I'll lay out my Linux credentials. I use RHEL, Ubuntu and CentOS EVERY day. All of my home computers are Linux based, I have a Linux PDA, I prefer my Ubuntu desktop for work and I administer numerous Linux (CentOS and RHEL) servers - via the command line - all of the time. I've used it for over a decade, am more than happy with it, and am more than happy to hack on it to get stuff working.

However, there comes a point where I am not going to entrust a complete presentation that our business or my reputation relies upon to Linux's extremely flaky graphics system.

Yes, I know laptop Linux is problematic (but if the rest of the desktop is stable why not my second video out?)

Yes, I know that graphics card support (particularly from ATI) is very closed so there's lots of reverse engineering going on (but again if I can have one video out working why not two?).

I'm not sure why this is the case - I think it's a combination of X.org configuration and poor tools for configuring multiple screens with different resolutions - but it definitely needs a lot of work to be ready for prime time.
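For contrast, this is the sort of thing that ought to Just Work with a single call: mirroring the laptop panel onto a projector via xrandr. A sketch wrapped in Python - the output names (LVDS1, VGA1) and the mode are assumptions that vary by driver and hardware:

```python
# Mirror the internal panel onto the projector at a mode both
# outputs support, using xrandr. Output names and mode are
# assumptions -- run `xrandr` alone to list yours.
import subprocess

def mirror_to_projector(internal='LVDS1', external='VGA1',
                        mode='1024x768'):
    """Clone the internal display onto the external output."""
    subprocess.check_call(['xrandr',
                           '--output', external,
                           '--mode', mode,
                           '--same-as', internal])

if __name__ == '__main__':
    mirror_to_projector()
```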

I was at a conference this week and had built my entire presentation on my Gnome desktop using FLOSS tools like Open Office Impress. I had a great looking presentation and was genuinely keen to present using either my Ubuntu or CentOS desktop. After hours of mucking around, however, I didn't feel supremely confident that I could just walk up to the podium, plug in my laptop and have it "Just Work"(TM). It's just too hit and miss.

I don't generally experience problems with Linux in general and Ubuntu specifically, although I am aware of other people reporting them. For me, 99% of the time it does actually just work.

So I defaulted back to my dual-boot Windows partition and presented from that instead - the partition I had considered nuking because I hadn't used it in about 6 months. In this instance though I didn't have any other choice - and sure enough it did just plug in and go.

I still presented from Open Office Impress though (which is a fantastic bit of software I might add!) and I think I was the only one at WDS09 that presented with it (and I'm sure no one could tell I wasn't using PowerPoint or Keynote).

Desktop experience is exactly that - an experience - and our experience, particularly when we are doing something social with a computer, can affect our mental state quite substantially.

If I'd taken the decision to present using Ubuntu, I would have been worried about whether my laptop would work; I would have been nervous and probably would have delivered a terrible presentation. In contrast, because I knew I wasn't going to have any support issues, I felt confident and in control, and delivered what I hope was a good presentation to the audience.

Ubuntu are trying to address many of these issues with the Paper Cuts project, but that's really aimed at business. Apple have addressed similar issues (hardware compatibility) with a presenter's kit (which you buy) providing all kinds of adapters to go from a Mac to just about every video input type. Microsoft addressed this years ago: from Windows 2000 onwards there was a great set of dual-head tools that made it simple, plus a standardised way for vendors to incorporate them, and it is extremely rare for it to fail.

Business use is one of the areas where Linux (and especially Ubuntu) has a real opportunity to shift users across, as there are so many other business benefits. But users want a single, consistent desktop: they aren't going to build on one desktop and present on another - it's too inconsistent.

For me this issue with presenting and graphics support isn't so much a paper cut as a gaping flesh wound, and it really needs to be addressed.