Friday, October 31, 2008

Watching Google watching the world

If there were a prize for the best presentation at this week's Informa "Handsets USA" forum in San Diego, it would have to go to Sumit Agarwal, Product Manager for Mobile from Google. Although there were several other very good talks there, Sumit's was in a class of its own.

In the first place, Sumit had the chutzpah to run his slides directly on a mobile device - an iPhone - with a camera relaying the contents of the mobile screen to the video projector. Second, the presentation included a number of real-time demos - which worked well, and even the ways in which they failed to work perfectly became a source of more insight for the audience (I'll come back to this point later). The demos were spread among a number of different mobile devices: an Android G1, the iPhone, and a BlackBerry Bold. (Sumit rather cheekily said that the main reason he carried the Bold was for circumstances in which the G1 and the iPhone run out of battery power.)

One reason the talk oozed authority was that Sumit could dig into actual statistics, collected on Google's servers.

For example, the presentation included a graph showing the rate of Google search enquiries from mobile phones on different (anonymised) North American network operators. In September 2007, one of the lines started showing an astonishing rhythm, with rapid fluctuations in which the rate of mobile search enquiries jumped up sevenfold - before dropping down again a few days later. The pattern kept repeating, on a weekly basis. Google investigated, and found that the network operator in question had started an experiment with "free data weekends": data usage would be free of charge on Saturday and Sunday. As Sumit pointed out:
  • The sharp usage spikes showed the latent demand of mobile users for carrying out search enquiries - a demand that was previously being inhibited by fear of high data charges;
  • Even more interesting, this line on the graph, whilst continuing to fluctuate drastically at weekends, also showed a gradual overall upwards curve, finishing up with data usage significantly higher than the national average, even away from weekends;
  • The takeaway message here is that "users get hooked on mobile data": once they discover how valuable it can be to them, they use it more and more - provided (and this is the kicker) the user experience is good enough.
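As an aside, the signature Sumit described - weekend spikes plus a rising weekday baseline - is straightforward to test for in a stream of daily request counts. Here's a minimal sketch in Python, using invented numbers rather than Google's actual data:

```python
# Detect the "free data weekend" signature in daily request counts:
# (1) weekend usage spikes relative to weekdays, and
# (2) the weekday baseline itself drifts upwards over time.
# All numbers below are invented for illustration.

def weekend_spike_ratio(daily_counts):
    """daily_counts: list of daily totals, day 0 = Monday."""
    weekday = [c for i, c in enumerate(daily_counts) if i % 7 < 5]
    weekend = [c for i, c in enumerate(daily_counts) if i % 7 >= 5]
    return (sum(weekend) / len(weekend)) / (sum(weekday) / len(weekday))

def weekday_baseline_trend(daily_counts):
    """Average weekday volume in the last full week minus the first."""
    first_week = daily_counts[0:5]
    last_start = (len(daily_counts) // 7 - 1) * 7
    last_week = daily_counts[last_start:last_start + 5]
    return sum(last_week) / 5 - sum(first_week) / 5

# Four weeks of invented data: weekday usage creeping up, 7x weekend spikes.
counts = []
for week in range(4):
    base = 100 + 15 * week           # weekday baseline drifts upwards
    counts += [base] * 5 + [base * 7] * 2

print(weekend_spike_ratio(counts))    # ~7.0: the weekend spike
print(weekday_baseline_trend(counts)) # positive: users "got hooked"
```

A real analysis would of course work on per-operator time series and control for seasonal effects, but the two statistics above capture both observations Sumit drew from the graph.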

Another interesting statistic involved the requests received by Google's servers for new "map tiles" to provide to Google maps applications. Sumit said that, every weekend, the demand from mobile devices for map tiles reaches the same level as the demand from fixed devices. Again, this is evidence of strong user interest in mobile services.

As regards the types of textual search queries received: Google classifies all incoming search queries into categories such as sports, entertainment, news, and so on. Sumit showed spider graphs for the breakdown of search queries into categories. The surprising thing is that the spider graph for mobile-originated search queries had a very similar general shape to that for search queries from fixed devices. In other words, people seem to want to search for the same sorts of things - in the same proportions - regardless of whether they are using fixed devices or mobile ones.
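For readers who like to make the "similar shape" claim precise: once each spider graph is normalised into a distribution of proportions, the two shapes can be compared with a standard similarity measure. A small illustrative sketch (the category names and counts are invented, not Google's figures):

```python
import math

def normalise(counts):
    """Turn raw per-category query counts into proportions."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def cosine_similarity(p, q):
    """1.0 means identically-shaped distributions."""
    cats = sorted(set(p) | set(q))
    dot = sum(p.get(c, 0) * q.get(c, 0) for c in cats)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm

# Invented query counts per category, fixed vs mobile: mobile volume is
# far smaller, but the proportions across categories are much the same.
fixed  = {"sports": 900, "entertainment": 1500, "news": 1100, "local": 500}
mobile = {"sports": 95,  "entertainment": 150,  "news": 105,  "local": 60}

similarity = cosine_similarity(normalise(fixed), normalise(mobile))
print(round(similarity, 3))  # close to 1.0: same "spider graph" shape
```

The normalisation step matters: it strips out the difference in absolute volumes, so what remains is exactly the "shape" that the spider graphs display.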

It is by monitoring changes in server traffic that Google can determine the impacts of various changes in their applications - and decide where to prioritise their next efforts. For example, when the My Location feature was added to Google's Mobile Maps application, it had a "stunning impact" on the usage of mobile maps. Apparently (though this could not be known in advance), many users are fascinated to track how their location updates in near real-time on map displays on their mobile devices. And this leads to greater usage of the Google Maps product.

Interspersed among the demos and the statistics, Sumit described elements of Google's underlying philosophy for success with mobile services:

  • "Ignore the limitations of today": don't allow your thinking to be constrained by the shortcomings of present-day devices and networks;
  • "Navigate to where the puck will be": have the confidence to prepare services that will flourish once the devices and networks improve;
  • "Arm users with the data to make decisions": instead of limiting what users are allowed to do on their devices, provide them with information about what various applications and services will do, and leave it to the users to decide whether they will install and use individual applications;
  • "Dare to delight" the user, rather than always seeking to ensure order and predictability at all times;
  • "Accept downside", when experiments occasionally go wrong.

As an example of this last point, there was an amusing moment during one of the (many) demos in the presentation, when two music-playing applications each played music at the same time. Sumit had just finished demoing the remarkable TuneWiki, which allows users to collaborate in supplying, sharing, and correcting lyrics to songs, for a Karaoke-like mobile experience without users having to endure the pain of incorrect lyrics. He next showed an application that searched on YouTube for videos matching a particular piece of music. But TuneWiki continued to play music through the phone speakers whilst the second application was also playing music. Result: audio overlap. Sumit commented that an alternative design philosophy by Google might have ensured that no such audio overlap could occur. But such a constraint would surely impede the wider flow of innovation in mobile applications.

And there was a piece of advice for application developers: "emphasise simplicity". Sumit demoed the "AroundMe" application by TweakerSoft, as an illustration of how a single simple idea, well executed, can result in large numbers of downloads. (Sumit commented: "this app was written by a single developer ... who has probably quintupled his annual income by doing this".)

Google clearly have a lot going for them. Part of their success is no doubt down to the technical brilliance of their systems. The "emphasise simplicity" message has helped a great deal too. Perhaps their greatest asset is how they have been able to leverage all the statistics their enormous server farms have collected - not just statistics about links between websites, but also statistics about changes in user activity. By watching the world so closely, and by organising and analysing the information they find in it, Google are perhaps in a unique position to identify and improve new mobile services.

Just as Google has benefited from watching the world, the rest of the industry can benefit from watching Google. Happily, there's already a great deal of information available about how Google operates. Anyone concerned about whether Google might eat their lunch can become considerably wiser from taking the time to read some of the fascinating books that have been written about both the successes and (yes) the failures of this company - such as Randall Stross's recent "Planet Google".

I finished reading the Stross book a couple of weeks ago. I found it an engrossing easy-to-read account of many up-to-date developments at Google. It confirms that Google remains an utterly intriguing company:

  • For example, one thought-provoking discussion was the one near the beginning of the book, about Google, Facebook, and open vs. closed;
  • I also valued the recurring theme of "algorithm-driven search" vs. "human-improved search".

It was particularly interesting to read what Stross had to say about some of Google's failures - eg Google Answers and Google Video (and arguably even YouTube), as a balance to its better-known string of successes. It's a reminder that no company is infallible.

Throughout most of the first ten years of Symbian's history, commentators kept suggesting that it was only a matter of time before the mightiest software company of that era - Microsoft - would sweep past Symbian in the mobile phone operating system space (and, indeed, would succeed - perhaps at the third attempt - in every area they targeted). Nowadays, commentators often suggest the same thing about Google's Android solution.

Let's wait and see. And in any case, I personally prefer to explore the collaboration route rather than the head-on compete route. Just as Microsoft's services increasingly run well on Symbian phones, Google's services can likewise flourish there.

Wednesday, October 29, 2008

A market for different degrees of openness

To encourage participants to speak candidly, the proceedings at the Rutberg "Wireless Influencers" conferences are held away from the prying eyes of journalists. A few interesting ideas popped up during the discussions at the 2008 event over the last two days - but because of the confidentiality rules, I'm not able to name the people who raised these ideas (so I can't give credit where credit is due).

The common theme of these ideas is the clash of openness and regulation - and (in some cases) the attempt to find creative solutions to this clash.

The first example arose during a talk by a representative from a major operator. The talk described the runaway success one of their products was experiencing in a third world country. This product involves the use of mobile phones to transfer money. The speaker said that the main reason this product could not be deployed in more developed countries (to address use cases like simplifying the payment of money to a teenage babysitter, or transferring cash to your children) is the dead hand of financial regulations: banks aren't keen to allow operators to take over some of the functions that have traditionally been restricted to banks, so operators are legally barred from deploying these applications.

I found this ironic. Normally operators are the companies that are criticised for setting up regulatory systems that have the effect of maintaining their control over various important business processes (and thereby preserving their profits). But in this case, it was an operator who was criticising another part of industry for self-interestedly sheltering behind regulations.

Later in the day, one of the streams at the event discussed whether operators could ever allow users to install whatever applications they want, on their phones. The analogy was made with the world of the PC: the providers of network services for PCs generally have no veto over the applications which users choose to install. On the other hand, in some enterprise situations, a corporate IS department may well wish to impose that kind of control. In other words, for PCs, there is a range of different degrees of openness, depending on the environment. So, could a similar range of different degrees of openness be set up for mobile phones?

The idea here is that several different networks could form. In some, the network operator would impose restrictions on the applications that can be installed on the phones. In others, the network operators would be more permissive. In the second kind of network, users would be told that it was their own responsibility to deal with any unintended consequences from applications they installed.

Ideally, a kind of market would be formed, for networks that had different degrees of openness. Then we could let normal market dynamics determine which sort of network would flourish.

Could such a market actually be formed? Could closed networks and open networks co-exist? It seems worth thinking about.

And here's one more twist - from a keynote discussion on the second day of the event. Rather than a network operator (or some other central certification authority) deciding which applications are suitable for installation on users' phones, how about using the power of community ratings to push bad applications way down the list of available applications?

That's an intriguing Web 2.0 kind of idea. On a network operating with this principle, most users would only see apps that had already received positive reviews. Apps that had bad consequences would instead receive bad reviews - and would therefore disappear off the bottom of the list of apps displayed in response to search queries. "Just like on YouTube".
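In implementation terms, the idea needs little more than sorting search results by community rating, with unproven apps held back until they have gathered enough reviews. A minimal sketch, with entirely hypothetical app names, ratings, and thresholds:

```python
def rank_apps(apps, min_ratings=5):
    """Order apps by average community rating. Apps with too few
    ratings sink below any well-reviewed app, rather than being
    vetoed by a central certification authority."""
    def score(app):
        n = len(app["ratings"])
        avg = sum(app["ratings"]) / n if n else 0.0
        # (has_enough_evidence, average): Python compares tuples
        # element by element, and True sorts above False.
        return (n >= min_ratings, avg)
    return sorted(apps, key=score, reverse=True)

# Hypothetical apps with community ratings out of 5.
apps = [
    {"name": "PhotoShare", "ratings": [5, 5, 4, 5, 4, 5]},
    {"name": "QuickNotes", "ratings": [4, 5, 4, 4, 5]},
    {"name": "SpamToy",    "ratings": [1, 2, 1, 1, 1, 2]},
    {"name": "Newcomer",   "ratings": [5]},  # too few ratings yet
]

for app in rank_apps(apps):
    print(app["name"])
# Well-reviewed apps surface first; badly-reviewed ones drop down
# the list - "just like on YouTube".
```

A production system would want protection against rating spam and a smarter prior for new apps (so "Newcomer" isn't buried forever), but the core mechanism really is this simple.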

Sunday, October 26, 2008

The Singularity will go mainstream

The concept of the coming technological singularity is going to enter mainstream discourse, and won't go away. It will stop being something that can be dismissed as freaky or outlandish - something that is of interest only to marginal types and radical thinkers. Instead, it's going to become something that every serious discussion of the future is going to have to contemplate. Writing a long-term business plan - or a long-term political agenda - without covering the singularity as one of the key topics, is increasingly going to become a sign of incompetence. We can imagine the responses, just a few years from now: "Your plan lacks a section on how the onset of the singularity is going to affect the take-up of your product. So I can't take this proposal seriously". And: "You've analysed five trends that will impact the future of our company, but you haven't included the singularity - so everything else you say is suspect."

In short, that's the main realisation I reached by attending the Singularity Summit 2008 yesterday, in the Montgomery Theater in San Jose. As the day progressed, the evidence mounted up that the arguments in favour of the singularity will be increasingly persuasive, to wider and wider groups of people. Whether or not the singularity will actually happen is a slightly different question, but it's no longer going to be possible to dismiss the concept of the singularity as irrelevant or implausible.

To back up my assertion, here are some of the highlights of what was a very full day:

Intel's CTO and Corporate VP Justin Rattner spoke about "Countdown to Singularity: accelerating the pace of technological innovation at Intel". He described a series of technological breakthroughs that would be likely to keep Moore's Law operational until at least 2020, and he listed ideas for how it could be extended even beyond that. Rattner clearly has a deep understanding of the technology of semiconductors.

Dharmendra Modha, the manager of IBM's cognitive computing lab at Almaden, explained how his lab had already utilised IBM super-computers to simulate an entire rat brain, with the simulation running at one tenth of real-time speed. He explained his reasons for expecting that his lab should be able to simulate an entire human brain, running at full speed, by 2018. This was possible as a result of the confluence of "three hard disruptive trends":
  1. Neuroscience has matured
  2. Supercomputing meets the brain
  3. Nanotechnology meets the brain.

Cynthia Breazeal, Associate Professor of Media Arts and Sciences, MIT, drew spontaneous applause from the audience part-way through her talk, by showing a video of one of her socially responsive robots, Leonardo. The video showed Leonardo acting on beliefs about what various humans themselves believed (including beliefs that Leonardo could deduce were false). As Breazeal explained:

  • Up till recently, robotics has been about robots interacting with things (such as helping to manufacture cars)
  • In her work, robotics is about robots interacting with people in order to do things. Because humans are profoundly social, these robots will also have to be profoundly social - they are being designed to relate to humans in psychological terms. Hence the expressions of emotion on Leonardo's face (and the other body language).

Marshall Brain, founder of "How Stuff Works", also spoke about robots, and the trend for them to take over work tasks previously done by humans: McDonald's waitresses, Wal-Mart shop assistants, vehicle drivers, construction workers, teachers...

James Miller, Associate Professor of Economics, Smith College, explicitly addressed the topic of how increasing belief in the likelihood of an oncoming singularity would change people's investment decisions. Once people realise that, within (say) 20-30 years, the world could be transformed into something akin to paradise, with much greater lifespans and with abundant opportunities for extremely rich experiences, many will take much greater care than before to seek to live to reach that event. Interest in cryonics is likely to boom - since people can reason their bodies will only need to be vitrified for a short period of time, rather than having to trust their descendants to look after them for unknown hundreds of years. People will shun dangerous activities. They'll also avoid locking money into long-term investments. And they'll abstain from lengthy training courses (for example, to master a foreign language) if they believe that technology will shortly render as irrelevant all the sweat of that arduous learning.

Not every speaker was optimistic. Well-known author and science journalist John Horgan gave examples of where the progress of science and technology has been, not exponential, but flat:

  • nuclear fusion
  • ending infectious diseases
  • Richard Nixon's "war on cancer"
  • gene therapy treatments
  • treating mental illness.

Horgan chided advocates of the singularity for their use of "rhetoric that is more appropriate to religion than science" - thereby risking damaging the standing of science at a time when science needs as much public support as it can get.

Ray Kurzweil, author of "The Singularity is Near", responded to this by agreeing that not every technology progresses exponentially. However, those that become information sciences do experience significant growth. As medicine and health increasingly become digital information sciences, they are experiencing the same effect. Although in the past I've thought that Kurzweil sometimes overstates his case, on this occasion I thought he spoke with clarity and restraint, and with good evidence to back up his claims. He also presented updated versions of the graphs from his book. In the book, these graphs tended to stop around 2002. The slides Kurzweil showed at the summit continued up to 2007. It does appear that the rate of progress with information sciences is continuing to accelerate.

Earlier in the day, science fiction author and former maths and computing science professor Vernor Vinge gave his own explanation for this continuing progress:

Around the world, in many fields of industry, there are hundreds of thousands of people who are bringing the singularity closer, through the improvements they're bringing about in their own fields of research - such as enhanced human-computer interfaces. They mainly don't realise they are advancing the singularity - they're not working to an agreed overriding vision for their work. Instead, they're doing what they're doing because of the enormous incremental economic value of their work.

Under questioning by CNBC editor and reporter Bob Pisani, Vinge said that he sticks with the forecast he made many years ago, that the singularity would ("barring major human disasters") happen by 2030. Vinge also noted that rapidly improving technology made the future very hard to predict with any certainty. "Classic trendline analysis is seriously doomed." Planning should therefore focus on scenario evaluation rather than trend lines. Perhaps unsurprisingly, Vinge suggested that more forecasters should read science fiction, where scenarios can be developed and explored. (Since I'm midway through reading and enjoying Vinge's own most recent novel, "Rainbows End" - set in 2025 - I agree!)

Director of Research at the Singularity Institute, Ben Goertzel, described a staircase of potential applications for the "OpenCog" system of "Artificial General Intelligence" he has been developing with co-workers (partially funded by Google, via the Google Summer of Code):

  • Teaching virtual dogs to dance
  • Teaching virtual parrots to talk
  • Nurturing virtual babies
  • Training virtual scientists that can read vast swathes of academic papers on your behalf
  • And more...

Founder and CSO of Innerspace Foundation, Pete Estep, gave perhaps one of the most thought-provoking presentations. The goal of Innerspace is, in short, to improve brain functioning. In more detail, "To establish bi-directional communication between the mind and external storage devices." Quoting from the FAQ on the Innerspace site:

The IF [Innerspace Foundation] is dedicated to the improvement of human mind and memory. Even when the brain operates at peak performance learning is slow and arduous, and memory is limited and faulty. Unfortunately, other of the brain's important functions are similarly challenged in our complex modern world. As we age, these already limited abilities and faculties erode and fail. The IF supports and accelerates basic and applied research and development for improvements in these areas. The long-term goal of the foundation is to establish relatively seamless two-way communication between people and external devices possessing clear data storage and computational advantages over the human brain.

Estep explained that he was a singularity agnostic: "it's beyond my intellectual powers to decide if a singularity within 20 years is feasible". However, he emphasised that it is evident to him that "the singularity might be near". And this changes everything. Throughout history, and extending round the world even today, "there have been too many baseless fantasies and unreasonable rationalisations about the desirability of death". The probable imminence of the singularity will help people to "escape" from these mind-binds - and to take a more vigorous and proactive stance towards planning and actually building desirable new technology. The singularity that Estep desires is one, not of super-powerful machine intelligence, but one of "AI+BCI: AI combined with a brain-computer interface". This echoed words from robotics pioneer Hans Moravec that Vernor Vinge had reported earlier in the day:

"It's not a singularity if you are riding the curve. And I intend to ride the curve."

On the question of how to proactively improve the chances for beneficial technological development, Peter Diamandis spoke outstandingly well. He's the founder of the X-Prize Foundation. I confess I hadn't previously realised anything like the scale and the accomplishment of this Foundation. It was an eye-opener - as, indeed, was the whole day.

Saturday, October 25, 2008

"Symbian too old" - a mountain worth climbing

In case I had forgotten how little mindshare Symbian has in many parts of North America, the recent Robert X. Cringely piece "Why Windows Mobile will die" contained yet another stark reminder.

As usual with Cringely, the piece mixes potential insight with a lot of conjecture and then some fancy. Most of the article discusses Windows Mobile, iPhone, and Android. But it squeezes in a dismissive paragraph about Symbian:

...donning flameproof clothing: Symbian is simply too old. The OS is getting slower and slower with each release. The GUIs are getting uglier and are not user-friendly. The development environment is particularly bad, which wouldn't hurt if there weren't others that are so much better. Symbian C++, for example, is not a standard C++. There is little momentum in the Symbian developer community, maybe because coding for Symbian is a pain. Yes, there are way more Symbian phones in circulation, but those phones will be gone 18 months from now, probably replaced by phones with a different OS. Lately, Symbian's success has been primarily based on the high quality of Nokia hardware, on the loyalty of NTT DoCoMo, and now on the lure of being recently made open source and therefore free. But if open source developers don't flock now to Symbian (they aren't as far as I can see -- at least not yet) then the OS is doomed.
And if that weren't a sufficiently strong reminder of Symbian's lack of mindshare, I found scant encouragement in the 65 comments posted (so far) to Cringely's piece.

Allow me a few moments to respond to individual points in this paragraph, before I return to the bigger picture.

"Symbian is simply too old" - but it has been undergoing a constant internal renewal, with parts of the architecture and code being refactored and replaced with each new point release. Just a few examples: we introduced a new kernel in v8.1b, a new security architecture in v9.0, new database (SQL) architecture in v9.3, new Bluetooth in v9.4, substantially revised graphics architecture and networking architecture in v9.5, and so forth.

"The OS is getting slower and slower with each release" - on the contrary, many parts of the operating system are humming along much quicker in the newer releases, as a result of a specific and pervasive focus on performance across the whole system. Deliverables include speed-ups from smart incorporation of demand paging, file system caching, data scalability improvements, and wider adoption of separation of activity into three planes (data plane, control plane, and management plane).

"The GUIs are getting uglier and are not user-friendly" - but the UI system is increasingly flexible, which allows customers to experiment with many different solutions (whilst retaining API compatibility). New developments such as the S60 Fifth Edition touch interface, and the recently announced support for Qt on Symbian OS, take things further in the user-friendly direction.

"The development environment is particularly bad" - but documentation and tools for Symbian OS have markedly improved over the last two years.

"Symbian C++, for example, is not a standard C++" - but watch out for our forthcoming announcements about EUserHL that go a long way to address this particular gripe.

"There is little momentum in the Symbian developer community" - but that's not the impression given by the media reports from people who attended the Symbian Smartphone Show last week.

"Yes, there are way more Symbian phones in circulation, but those phones will be gone 18 months from now, probably replaced by phones with a different OS" - but I beg to differ, based on my knowledge of development projects underway at phone manufacturers across the world. For just one example, consider the recent remarks from Li Jilin, Huawei Communications Vice President (note: Huawei has previously not been a user of Symbian OS):

"Huawei is excited by the plans for the Symbian Foundation. We look forward to participating in the work of the Symbian Foundation and using the foundation's platform to deliver a portfolio of devices for mobile network operators around the world. We believe that the Symbian Foundation ecosystem will enable innovation which will benefit users and drive increased customer satisfaction."
"If open source developers don't flock now to Symbian (they aren't as far as I can see -- at least not yet) then the OS is doomed" - but this is far too impatient. It's too early to make this judgement. You can't expect the open source developers to flock to us before more plans are published for the roadmap to put our source code into open source.

As for the bigger picture: despite the above individual points of fact, I don't expect significant changes in mindset (except among the far-sighted) until there are more Symbian devices in the hands of North Americans.

It was the amazing array of devices at the partner showcase stands at the Smartphone Show last week that caused the biggest buzz of all - bigger than the announcements from the keynote hall next door. Thankfully, AT&T have publicly mentioned their "plan to introduce more Symbian phones". North American users shouldn't have too long to wait. And there are encouraging signs of independently-minded North American writers actually (shock horror) liking the latest Symbian phones. For example, the renowned software essayist Joel Spolsky called the Nokia E71 "the best phone I’ve ever had - I’m loving it".

In the meantime, the Symbian Foundation has a big mountain to climb, in public perception. But it's a mountain well worth climbing!

Friday, October 24, 2008

Smartphones and the recession

"Symbian Smartphone Show - Recession? what recession?" That was the title of the characteristically perceptive summary of this week's Smartphone Show prepared by analyst Richard Windsor of Nomura Securities.

In his summary, Richard made a series of positive comments about the show:
  • The vision of the Symbian foundation was put forward by all its members to an audience that has certainly grown in numbers compared to last year.
  • By putting Symbian and S60 together with the elimination of UIQ and MOAP(S), it is hoped that licensees will have one system upon which to develop phones and applications.
  • At the same time the new structure means that the access to the software will be much more even, giving everyone a better chance at effectively competing.
However, he also noted:
  • The floor was abuzz with the prospect of the growing opportunity for smartphones but seemed oblivious to the possibility that an economic recession could materially dent growth.
  • While the talk is all about growth and the new opportunity, very little was said about the coming recession and the effect that ever-increasing hardware specifications will have on the ability of smartphones to continue getting cheaper and cheaper.
  • We see a negative impact from two sides:
  • First the fact that consumers have less disposable income to spend on high end devices.
  • Second, the pressure to compete with Apple, whose iPhone volumes have overtaken RIM's, is causing more technology to be crammed into phones earlier.
  • This has the effect of increasing the cost to build smartphones which means that price declines to consumers will slow or stop entirely.
So, is there any possible justification for paying so little attention to the likely onset of economic hard times?

Here's one argument to consider. The Symbian Foundation is about, not the next ten months, but the next ten years. The general buzz at the show derives from expectation of a potentially huge long-term payback, rather than any evaluation of short-term rewards. Just as the original vision of Symbian, on its formation in 1998, contemplated up to ten years into the future, the creation of the Symbian Foundation likewise has an eye on market evolution up to 2018. Any recession between now and then is a lesser effect.

Well, I'm sympathetic to that view. However, it gives us no reason to breezily overlook the likely pain and disruption caused to the smartphone industry by an economic downturn.

After all, I vividly remember the distress inside Symbian during the crash of the dot com bubble. During that time, smartphone projects were being cancelled thick and fast - even when the projects looked to be full of real promise. These projects were cancelled on account of lack of finances to back more speculative developments. Development resources in our customers (and also in our customers' customers) became focused instead on "safe bets" rather than on innovative high-risk high-reward projects. Symbian faced a real crisis. It took several long years to put that time behind us.

Could the same happen again? Perhaps. But I see several key differences:
  • This time, it will be the Symbian projects that are viewed as the safe bets: the Symbian software system is much more stable and proven than before
  • This time, many of the Symbian projects will be the lower-cost ones - because of reduced needs to integrate large swathes of new functionality (beyond that already provided in the core offering), and also because of the lower hardware requirements of the high-performance Symbian software
  • This time, consumers are already familiar with smartphone technology, and are increasingly enamoured with it, rather than this technology only appealing to a narrow segment of the population.

I hope this doesn't make me look complacent. Believe me, I'm not. I know it's going to be hard to turn the above prognosis into reality. But the enthusiasm and skills of the Symbian ecosystem (as manifest at the show) give me grounds for optimism.

In short, consumers may tend to prefer lower-cost smartphones, and this will benefit Symbian. Even though the Symbian phones will be comparatively inexpensive, they will deliver enough features (and a sufficiently good user experience) to win over end-users.

Wednesday, October 22, 2008

Winners of Symbian student essay contest

At the Smartphone Show yesterday, Symbian announced the results of our first Student Essay Contest, and called for entrants to a new contest - with an entry submission deadline of 31st January 2009.

The theme for the 2008 contest was "The next wave of smartphone innovation". The prize winners are as follows (listed in alphabetical order of surname):

  • Benoît Delville, Ecole Centrale de Lille, France: The hardware tech of smartphones. Benoît’s essay examines four factors which threaten to prevent the fuller adoption of smartphones.
  • Alexander Erifiu, University of Applied Sciences, Hagenberg, Austria: New interaction concepts in mobile games. Alexander’s essay describes a project the author carried out with some colleagues to increase the suitability of smartphones for certain types of games.
  • Andreas Jakl, Johannes Kepler University, Linz, Austria: Optical translator: word spotting and tracking on smartphones. Andreas’s essay considers some developments that will enable advanced new applications that take advantage of the high quality camera technology that is currently widely available on smartphones.
  • Florian Lettner, University of Applied Sciences, Hagenberg, Austria: Smartphones in home automation. Florian’s essay investigates the possible use of smartphones in a number of practical situations, including several in the home.
  • Pankaj Nathani, Bhavnagar University, Gujarat, India: Improved development and delivery methodologies. Pankaj’s essay focuses on the fact that developers can face many challenges in developing and delivering novel or evolved services on smartphones.
  • Milen Nikolov, The College at Brockport, State University of New York, Brockport, USA: Exploiting social and mobile ad hoc networking to achieve ubiquitous connectivity. Milen’s essay examines a particular example of what is known as a ‘Mobile Ad hoc Network’ (MANET) involving smartphones.
  • Aleksandra Reiss, Petrozavodsk State University, Russia: The next waves of smartphone innovation. Aleksandra’s essay is targeted at discovering what new functionality can be added to smartphones in the near future.
  • Sudeep Sundaram, University of Bristol, UK: Situation aware maintenance mate. Sudeep’s essay reviews possible uses of a smartphone in coordination with a head mounted display, where for example, a user could see the positioning of electrical wires in a wall and carry out diagnostics.
  • Iftekhar Ul Karim, BRAC University, Dhaka, Bangladesh: Opportunities with smartphone technologies for the base of the pyramid. Iftekhar’s essay challenges readers to consider novel uses of smartphones for users in the so-called ‘base of the pyramid’ – the four billion poorest people on the planet.
  • Alejandro Vicente-Grabovetsky, University of Cambridge, UK: The smartphone of the future: A powerhouse or a mere terminal? Alejandro’s essay explores the potential for the smartphone to act as a ‘social computer’ as opposed to merely copying features from the ‘personal computer’.

My congratulations to the prizewinners! There are thought-provoking elements in all of the winning essays. For extracts and summaries, see developer.symbian.com/essays.

The contest received many other essays that also contained interesting and valuable observations. My recommendation to entrants of future contests is that essays are more likely to be awarded prizes if they:

  • Concentrate on making a small number of points well, rather than on trying to cover a large number of different points;
  • Address specific issues, rather than describing abstract theories;
  • Have a clear structure and a logical flow of argument;
  • Back up their claims by providing evidence (for example, references).

My goals for the 2008 contest were threefold:

  1. To encourage university students to carry out research on topics of interest to Symbian, its wider community and the mobile industry;
  2. To find out where the most interesting research was being carried out;
  3. To stimulate interest in Symbian’s emerging University Research Relations programme.

Following the success of our 2008 contest, we're repeating it in 2009. The deadline for submission for the next Symbian Student Essay Contest is 31st January 2009. The overall theme for this new contest is “Architectures to enable breakthroughs for mobile converged devices.” Students are encouraged to address one or more of the following topics in their essays:

  1. Software development that takes best advantage of multiple processor cores
  2. Allocation of responsibilities between managed code and native code
  3. Delivering maximum power from the hardware and the networks to applications
  4. Security and privacy concerns in mobile device architectures
  5. Taming the complexity of mobile system architecture: the role of open source
  6. Enabling devices, applications and services that appeal to huge new groups of users
  7. The role of system architecture in significantly improving consumer experience.

Winners of the 2009 contest will receive £1,000, with runners-up earning special commendations. For the rules of this contest, see www.symbian.com/universities.

Tuesday, October 21, 2008

Open Source: necessary but not sufficient

[Author's note: a version of this article is appearing in print today, to mark the first day of the Symbian Smartphone Show. I thought that people might like to read it online too.]

Building the software system for complex smartphone products is one of the toughest engineering challenges on the planet. The amount of software in a high-end phone has been roughly doubling, each year, for the last ten years. In order to reap market success, smartphone software has to meet demanding requirements for performance, reliability, and usability. What’s more, to avoid missing a fast-moving market window, the software needs to pass through its integration process and reach maturity in a matter of months (not years). As I said, it’s a truly tough problem.

In broad terms, there are two ways in which a company can seek to solve this kind of tough problem:
  1. Seek to keep careful control of the problem, and rely primarily on resources that are under the tight direction and supervision of the company;
  2. Seek to take advantage of resources that are outside the control of the company.

The attraction of the first approach is that it’s easier to manage. The attraction of the second approach is that, in principle, the company can take better advantage of the potential innovation created by users and developers that are outside the company.

Writers and academics who study how innovation works in industry sometimes use the terms “closed innovation” and “open innovation” to describe these two approaches. In his pioneering book “Open Innovation: The New Imperative for Creating and Profiting from Technology”, Henry Chesbrough lists the following contrasts between open innovation and closed innovation:

The “closed innovation” mindset:

  1. The smart people in our field work for us
  2. To profit from R&D we must discover it, develop it, and ship it ourselves
  3. If we discover it ourselves, we will get to the market first
  4. The company that gets an innovation to market first will win
  5. If we create the most and the best ideas in the industry, we will win
  6. We should control our IP, so that our competitors don’t profit from our ideas.

The “open innovation” mindset:

  1. Not all the smart people work for us. We need to work with smart people inside and outside our company
  2. External R&D can create significant value; internal R&D is needed to claim some portion of that value
  3. We don’t have to originate the research to profit from it
  4. Building a better business model is better than getting to market first
  5. If we make the best use of internal and external ideas, we will win
  6. We should profit from others’ use of our IP, and we should buy others’ IP whenever it advances our own business model.

In the modern world of hyper-complex products, easy communication via the Internet and other network systems, and the “Web 2.0” pro-collaboration zeitgeist, it is easy to understand why the idea of open innovation receives a lot of support. It sounds extremely attractive. However, the challenge is how to put these ideas into practice.

That’s where open source enters the picture. Open source removes both financial and contractual barriers that would otherwise prevent many users and external developers from experimenting with the system. For this reason, open source can boost open innovation.

However, in my view, there’s a lot more to successful open innovation than putting the underlying software platform into open source. We mustn’t fall into the trap of thinking that, because both these expressions start with the same adjective (“open”), the two expressions are essentially equivalent. They’re not.

Indeed, people who have studied open innovation have reached the conclusion that there are three keys to making open innovation work well for a firm (or platform):

  • Maximising returns to internal innovation
  • Incorporating external innovation in the platform
  • Motivating a supply of external innovations.

Let’s dig more deeply into the second and third of these keys.

Incorporating external innovation in the platform

The challenge here isn’t just to stimulate external innovation. It is to be able to incorporate this innovation into the codelines forming the platform. That requires the platform itself to be both sufficiently flexible and sufficiently stable. Otherwise the innovation will fragment the platform, or degrade its ongoing evolution.

It also requires the existence of significant skills in platform integration. Innovations offered by users or external developers may well need to be re-engineered if they are to be incorporated in the platform in ways that meet the needs of the user community as a whole, rather than just the needs of the particular users who came up with the innovation in question.

  • This can be summarised by saying that a platform needs skills and readiness for software codeline management, if it is to be able to productively incorporate external innovation.

Codeline management in turn depends on skills in:

  • Codeline gate-keeping: not accepting code that fails agreed quality criteria – no matter how much political weight is carried by the people trying to submit that code
  • Reliable and prompt code amalgamation: being quick to incorporate code that does meet the agreed criteria – rather than leaving these code submissions languishing too long in an in-queue
  • API management, system architecture, and modular design – to avoid any spaghetti-like dependencies between different parts of the software
  • Software refactoring – to be able to modify the internal design of a complex system, in the light of emerging new requirements, in order to preserve its modularity and flexibility – but without breaking external compatibility or losing roadmap momentum.
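To make the gate-keeping and prompt-amalgamation points above concrete, here is a minimal sketch in Python. All the names and criteria are invented for illustration - a real gate would run test suites and API-compatibility checks rather than consult boolean flags - but the two principles it encodes are the ones listed above: the agreed criteria are applied uniformly (the submitter's political weight is not an input), and the whole in-queue is drained promptly.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    author: str
    tests_pass: bool
    breaks_public_api: bool
    meets_style_rules: bool

def passes_gate(sub: Submission) -> bool:
    # Apply the agreed quality criteria uniformly; note that the
    # author's identity or seniority plays no part in the decision.
    return (sub.tests_pass
            and not sub.breaks_public_api
            and sub.meets_style_rules)

def process_queue(queue):
    # Drain the whole in-queue in one pass, so code that meets the
    # criteria is amalgamated promptly rather than left languishing.
    accepted, rejected = [], []
    for sub in queue:
        (accepted if passes_gate(sub) else rejected).append(sub.author)
    return accepted, rejected
```

The point of the sketch is the separation of concerns: the criteria live in one place (`passes_gate`), so changing them does not disturb the amalgamation machinery, and vice versa.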

Motivating a supply of external innovations

The challenge here isn’t just to respond to external innovations when they arise. It is to give users and external developers sufficient motivation to work on their ideas for product improvement. These parties need to be encouraged to apply both inspiration and perspiration.

  • Just as the answer to the previous issue is skilled software codeline management, the answer to this issue is skilled ecosystem management.

Ecosystem management involves a mix of education and evangelism. It also requires active listening (also known as “being open-minded”), and a willingness by the platform providers to occasionally tweak the underlying platform, in order to facilitate important innovations under consideration by external parties. Finally it requires ensuring that third parties can receive suitable rewards for their breakthroughs – whether moral, social, or financial. This involves the mindset of “growing the pie for mutual benefit” rather than the platform owner seeking to dominate the value for its own sake.

But neither software codeline management nor ecosystem management comes easily. Neither falls out of the sky, ready for action, just by virtue of a platform being open source. Nor can these skills be acquired overnight, by spending lots of money, or by hiring lots of intrinsically smart people.

Conclusion: On account of a legacy of more than ten years of trial and error in building and enhancing both a mobile platform and an associated dynamic ecosystem, the Symbian Foundation should come into existence with huge amounts of battle-hardened expertise in both software codeline management and ecosystem management. On that basis, I expect the additional benefits of open source will catalyse a significant surge of additional open innovation around the Symbian Platform. In contrast, other mobile platforms that lack this depth of experience are likely to find that open source brings them grief as much as it brings them potential new innovations. For these platforms, open source may result in divisive fragmentation and a dilution of ecosystem effort.

Footnote: For more insight about open innovation, I recommend the writings of Henry Chesbrough (mentioned above), Wim Vanhaverbeke, and Joel West.

Saturday, October 11, 2008

Serious advice to developers in tough times

As I mentioned in my previous article, the FOWA London event on "The Future of Web Apps" featured a great deal of passion and enthusiasm for technology and software development systems. However, as I watched the presentations on Day Two, I was repeatedly struck by a deeper level of seriousness.

For example, AMEE Director Gavin Starks urged the audience to consider how changes in their applications could help reduce CO2 emissions. AMEE has exceptionally large topics on its mind: the acronym stands for "Avoiding Mass Extinctions Engine". Gavin sought to raise the aspiration level of developers: "If you really want to build an app that will change the world, how about building an app that will save the Earth?" But this talk was no pious homily: it contained several dozen ideas that could in principle act as starting points for new business ventures.

On a different kind of serious topic, Mahalo.com CEO Jason Calacanis elicited some gasps from the audience when he dared to suggest that, if startups are really serious about making a big mark in the business world, they should consider firing, not only their "average" employees, but also their "good" employees - under the rationale that "good is the enemy of the great". The resulting audience Q&A could have continued the whole afternoon.

But the most topical presentation was the opening keynote by Sun Microsystems Distinguished Engineer Tim Bray. It started with a bang - with the words "I'm Scared" displayed in huge font on the screen.

With these words, Tim announced that he had, the previous afternoon, torn up the presentation he was previously planning to give - a presentation entitled "What to be Frightened of in Building A Web Application".

Tim explained that the fear he would now address in his talk was about global economic matters rather than about usage issues with the likes of XML, Rails, and Flash. Instead of these technology-focused matters, he would cover the subject "Getting through the tough times".

Tim described how he had spent several days in London ahead of the conference, somewhat jet lagged, watching lots of TV coverage about the current economic crisis. As he said, the web has the advantage of allowing everyone to get straight to the sources - and these sources are frightening, when you take the time to look at them. Tim explicitly referenced http://acrossthecurve.com/?p=1830, which contains the following gloomy prognosis:

...more and more it seems likely that the resolution of this crisis will be an historic financial calamity. Each and every step which central banks and regulators have taken to resolve the crisis has been met with failure. In the beginning, the steps would produce some brief stability.

In the last several days, the US Congress (belatedly) passed a bailout bill, the Federal Reserve has guaranteed commercial paper and in unprecedented coordination central banks around the globe slash base lending rates. Listen to the markets respond.

The market scoffs as Libor rises, stocks plummet and IBM is forced to pay usurious rates to borrow. There is no stability and no hiatus from the pain. It continues unabated in spite of the best efforts of dedicated people to solve it.

We are in the midst of an unfolding debacle. It is happening about us. I am not sure how or when it ends, but the end, when it arrives, will radically alter the way we live for a long time.

Whoever wins the US election and takes office in January will need prayers and divine intervention.

As Tim put it: "We've been running on several times the amount of money that actually exists. Now we're going to have to manage on nearer the amount of money that does exist." And to make things even more colourful, he said that the next few days could be like the short period of time in New Orleans after hurricane Katrina had passed, but before the floods struck (caused by damage brought about by the winds). For the world's economy, the hurricane may have passed, but the flood is still to come.

The rest of Tim's talk was full of advice that sounded, to me, highly practical: what developers should do to increase their chances of survival through these tough times. (There's a summary video here.) I paraphrase some highlights from my notes:

Double down and do a particularly good job. In these times, slack work could put your company out of business - or could cause your employer to decide your services are no longer necessary.

Large capital expenditures are a no-no. Find ways to work that don't result in higher management being asked to sign large bills - they won't.

Waterfalls are a no-no. No smart executive is going to commit to a lengthy project that will take longer than a year to generate any payback. Instead, get with the agile movement - pick out the two or three requirements in your project that you can deliver incrementally and which will result in payback in (say) 8-10 weeks.

Software licences are a no-no. Companies will no longer make large commitments to big licences for the likes of Oracle solutions. Open source is going to grow in prominence.

Contribute to open source projects. This is a great way to build professional credibility - to advertise your capabilities to potential new employers or business partners.

Get in the cloud. With cloud services, you only pay a small amount in the beginning, and you only pay larger amounts when traffic is flowing.

Stop believing in technology religions. The web is technologically heterogeneous. Be prepared to learn new skills, to adopt new programming languages, or to change the kinds of applications you develop.

Think about the basic needs of users. There will be less call for applications about fun things, or about partying and music. There will be more demand for applications that help people to save money - for example, the lowest gas bill, or the cheapest cell phone costs.

Think about telecomms. Users will give up their HDTVs, their SUVs, and their overseas holidays, but they won't give up their cell phones. The iPhone and the Android are creating some great new opportunities. Developers of iPhone applications are earning themselves hundreds of thousands of dollars from applications that cost users only $1.99 per download. Developers in the audience should consider migrating some of their applications to mobile - or creating new applications for mobile.

The mention of telecomms quickened my pulse. On the one hand, I liked Tim's emphasis on the likely continuing demand for high-value low-cost mobile solutions. On the other hand, I couldn't help noticing there were references to iPhone and Android, but not to Symbian (or to any of the phone manufacturers who are using Symbian software).

Then I reflected that, similarly, namechecks were missing for RIM, Windows Mobile, and Palm. Tim's next words interrupted this chain of thought and provided further explanation: "With the iPhone and Android, no longer are the idiotic moronic mobile network operators standing in the way with a fence of barbed wire between developers and the people who actually buy phones."

This fierce dislike for network operator interference was consistent with a message underlying the whole event: developers should have the chance to show what they can do, using their talent and their raw effort, without being held up by organisational obstacles and value-chain choke-points. Developers dislike seemingly arbitrary regulation. That's a message I take very seriously.

However, we can't avoid all regulation. Indeed - to turn back from applications to economics - lack of regulation is arguably a principal cause of our current economic crisis.

The really hard thing is devising the right form of regulation - the right form of regulation for financial markets, and the right form of regulation for applications on potentially vulnerable mobile networks.

Both tasks are tough. But the solution in each case surely involves greater transparency.

The creation of the Symbian Foundation is intended to advance openness in two ways:
  1. Providing more access to the source code;
  2. Providing greater visibility of the decisions and processes that guide changes in both the software platform and the management of the associated ecosystem.
This openness won't dissolve all regulation. But it should ensure that the regulations evolve, more quickly, to something that more fully benefits the whole industry.

Thursday, October 9, 2008

In search of software glamour

I keep running into the "glamour question". Scott from Mippin raised it again the other day, in a shrewd comment in response to Roger Nolan's recent analysis "Symbian’s open source challenge":

I think that one inherent disadvantage for Symbian compared to Apple and Android is the glamour factor. This can be demonstrated by looking at the comments stream to this excellent post. If it had been talking about Apple or Android it would have people crawling over themselves to comment. Symbian just does not elicit the same excitement. This means - more meaningfully perhaps - that developers gain more kudos for developing for one of the glamour platforms than for Symbian (despite its market share).
Scott suggests that one reason for the reduced excitement over Symbian lies in "the complexity of Symbian. It is just too complex and developers stay away". Previously, I've offered my own list of "Symbian passion killers" that can hinder developers from becoming fully inspired (and therefore fully productive) about creating software for Symbian OS. As I've said before, the plans for "Symbian 2.0" in the wake of the creation of the Symbian Foundation include several important projects to address passion killers.

I heard quite a lot more, today, about developer passion. I was attending Day One of FOWA - the Future of Web Apps expo, taking place at London's ExCeL conference centre. I experienced considerable déjà vu at this event, since the annual Symbian Smartphone Shows were held there from 2002 to 2007. The layout of the keynote hall and the so-called "university sessions" reminded me a lot of similar layouts from bygone Smartphone Shows. The audience seemed of comparable size too. But whereas the motivation of many who attend the Smartphone Show is to make business connections and to promote the success of their companies, the motivation I sensed from many of the FOWA attendees was rather different: it was to explore new technologies, and to exult in new products and new processes.

For example, Edwin Aoki, AOL Technology Fellow, included the following remarks in his keynote speech "Web apps are dead, long live web apps":
What drives developers? It's not just money. It's building out communities. It's building pride. It's dedication and passion, not dollars and pounds.
And I couldn't help noticing how frequently speakers used words like "amazing", "exciting", "awesome", "kickass", and "cool". At first I wondered if they were joking or being ironic, but then I realised they were un-selfconscious. They were simply being enthusiastic.

Blaine Cook, ex Chief Engineer at Twitter, and Joe Stump, Lead Architect of Digg, performed a dynamic two-hander on the subject of "Languages don't scale". Taking turns, they ripped into features of programming languages that, in their words, made the languages "suck". Thus "here's why PHP sucks..." and "here's why Ruby sucks..." and "Python sucks as well...". But this was just a prelude to their main theme, which is that you should beware asking committed developers to switch from one language to another. Language choice is often personal - and often heartfelt. According to the speakers, the scale performance issues that sometimes bedevil web applications only rarely come down to language issues; instead, they usually depend on hardware architecture or network architecture. Hence the advice:
Value happy coders! Happy coders are productive coders. Let them work with the languages they love!
Many of the speakers oozed passion. I was particularly impressed by Francisco Tolmasky, co-founder of 280 North. His presentation title hardly sounded earth-shattering: "Building Desktop Caliber Web Applications with Objective-J and Cappuccino". However, the delivery was captivating and uplifting. (And the technology of their product does look attractive...)

All this brings back to mind the glamour question: To what extent can Symbian's developer events match this kind of enthusiasm - an enthusiasm driven by love of product and love of technology, rather than (just) love of market opportunity and commercial reward? To what extent can Symbian OS become viewed as glamorous and exciting, rather than just some kind of incumbent?

Happily, there's a lot of fascinating technology on Symbian's roadmap. There are also new tools that should appeal to various different kinds of developers. For those who value choice of languages, there's a growing range of language options available for Symbian OS. For those who are interested in the hardware, there are literally scores of new phone models in the pipeline. Some of this will fall under public spotlight in under two weeks' time at the 2008 Smartphone Show.

This year, the show has moved from ExCeL to Earls Court. The more significant change is that, this year, there's a "Mobile Devfest" which is running alongside the main show:

Mobile DevFest is Symbian’s premier conference for developers and has been designed to provide developers with deep technical training and information on building mobile software solutions for the next generation of mobile phones powered by Symbian OS.

Mobile DevFest is the ideal developer event for anyone engaged in building, or interested in building mobile applications on Symbian OS.

Mobile DevFest is the best way to stay ahead of today’s mobile technologies. It provides in-depth technical sessions, delivered by industry experts in the mobile development space.

I'm eagerly looking forward to taking part - and to gauging the degree of passion at the show. And in the meantime, if you think your own new product or solution for the Symbian space is particularly exciting, I'll be pleased to hear about it!

Sunday, October 5, 2008

iWoz inspires iMAX

Last Wednesday, Apple co-founder Steve Wozniak addressed a gathering of several hundred business people in London's large-format IMAX cinema, as part of a series of events organised by the London Business Forum. The theme was "Apple Innovation". Since the IMAX is just 15 minutes walk from Symbian's HQ, this opportunity was too good for me to miss. I hoped Wozniak's account of Apple culture might shed some new light on the all-conquering iPhone. I was not disappointed.

Wozniak spoke for more than an hour, without slides, running through a smorgasbord of anecdotes from his own life history. It was riveting and inspiring. Later I realised that most of the material has already been published in Wozniak's 2006 book "iWoz: Computer geek to cult icon: How I invented the personal computer, co-founded Apple, and had fun doing it", which was given out at the event.

I warmed to Wozniak early on in his talk, when he described one of his early experiments in software - writing a program to solve the "Knight's tour" around a chessboard. I remembered writing a program to try to solve the same problem at roughly the same age - with a similar result. In my case, programs were sent off from school to the local Aberdeen University, where clerical staff typed them in and submitted them on behalf of children in neighbouring schools. This program was returned several days later with the comment that there was no output - operators had terminated it.

A few weeks later, there was a short residential course at the university for sixth form students, which I attended. I modified my program to tackle a 5x5 board instead, and was happy to see the computer quickly spitting out numerous solutions. I changed the board size to 6x6 and waited ... and waited ... and after around 10 minutes, a solution was printed out. Wozniak's experience was several years before mine. As he describes it, the computer he was using could do one million calculations a second - which sounded like a huge number. So the lack of any output from his program was a big disappointment - until he calculated that it would actually take the computer about 10^25 years to finish this particular calculation!
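For readers who never tried this themselves, here is a minimal sketch (not Wozniak's actual code, nor mine) of the kind of search involved. It uses plain backtracking, with candidate moves ordered by Warnsdorff's heuristic (try the square with the fewest onward moves first) so that the 5x5 and 6x6 boards finish quickly; the naive exhaustive 8x8 search is what defeated the mainframes of the era.

```python
# All eight moves a knight can make from a given square.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knights_tour(n, start=(0, 0)):
    """Return one open knight's tour of an n x n board as a list
    of (x, y) squares, or None if no tour exists from `start`."""
    visited = [[False] * n for _ in range(n)]

    def onward_degree(x, y):
        # How many unvisited squares are reachable from (x, y)?
        return sum(1 for dx, dy in MOVES
                   if 0 <= x + dx < n and 0 <= y + dy < n
                   and not visited[x + dx][y + dy])

    def search(x, y, path):
        visited[x][y] = True
        path.append((x, y))
        if len(path) == n * n:
            return True
        nexts = [(x + dx, y + dy) for dx, dy in MOVES
                 if 0 <= x + dx < n and 0 <= y + dy < n
                 and not visited[x + dx][y + dy]]
        # Warnsdorff's heuristic: most-constrained square first.
        for nx, ny in sorted(nexts, key=lambda p: onward_degree(*p)):
            if search(nx, ny, path):
                return True
        visited[x][y] = False  # dead end: backtrack
        path.pop()
        return False

    path = []
    return path if search(start[0], start[1], path) else None
```

On a 5x5 board a tour from the corner appears almost instantly; without the heuristic, the raw branching factor is what makes the 8x8 case explode.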

More than half the "iWoz" book covers Wozniak's life pre-Apple. It's in turn heart-warming and (when describing Wozniak's pranks and phreaking) gob-smacking.

The episode about HP turning down the idea of the Apple I computer was particularly thought-provoking. Wozniak was working at HP before Apple was founded, and being loyal to his company (which he firmly admired for being led by engineers who in turn deeply respected other engineers) he offered them the chance to implement the ideas he had devised outside work time for what would become, in effect, the world's first useful personal computer. Although his managers at HP showed considerable interest, they were not able to set aside their standard, well-honed processes in order to start work on what would have been a new kind of project. Wozniak says that HP turned him down five times, before he eventually resigned from the company to invest his energy full-time into Apple. It seems like a classic example of the Innovator's Dilemma - in which even great companies can fail "by doing everything right": their "successes and capabilities can actually become obstacles in the face of changing markets and technologies".

Via numerous anecdotes, Wozniak describes a set of characteristics which are likely to lead to product success:
  • Technical brilliance coupled with patience and persistence. (Wozniak tells a fascinating account of how he and one helper - Randy Wigginton, at the time still at senior high school - created a brand new floppy disk drive controller in just two weeks, without any prior knowledge of disk drives);
  • A drive for simplicity of design (such as using a smaller number of parts, or a shorter algorithm) and superb efficiency of performance;
  • Users should feel an emotional attachment to the product: "Products should be obviously the best";
  • Humanism: "The human has to be more important than the technology".

There are shades of the iPhone experience in all these pieces of advice - even though the book iWoz was written before the iPhone was created.

There are even stronger shades of the iPhone experience in the following extracts from the book:

The Apple II was easy to program, in both BASIC (100 commands per second) and machine language (1M commands per second)... Within months dozens of companies started up and they were putting games on cassette tape for the Apple II; these were all start-up companies, but thanks to our design and documentation, we made it easy to develop stuff that worked on our platform...

... the computer magazines had tons of Apple II product ads for software and hardware. Suddenly the Apple II name was everywhere. We didn't have to buy an advertisement or do anything ourselves to get the name out. We were just out there, thanks to this industry of software programs and hardware devices that sprang up around the Apple II. We became the hot fad of the age, and all the magazines (even in the mainstream press) started writing great things about us. Everywhere you looked. I mean, we couldn't buy that kind of publicity. We didn't have to.

In this way, the Apple II quickly reached sales figures far higher than anyone had dared to predict. One other factor played a vital role:

VisiCalc was so powerful it could only run on the Apple II: only our computer had enough RAM to run it.

But sales bandwagons can lose their momentum. The iPhone bandwagon will falter, to the extent that other smartphones turn out to be more successful at running really valuable applications (such as, for example, applications that can run in background, in ways that aren't possible on the iPhone).

Apple also lost some of its momentum with the less reliable Apple III product that followed the Apple II. Wozniak has no doubts about the root causes for the failure of the Apple III: "it was developed by committee, by the marketing dept". This leads on to the disappointing advice that Wozniak gives in the final chapter of his book: "Work alone"!

Here, I part company with Wozniak. I've explained before my view that "design by committee" can achieve, with the right setup, some outstanding results. That was my experience inside Psion. However, I do agree that the process needs to involve some first-class product managers, who have a powerful and authentic vision for the product.

Thursday, October 2, 2008

Open source religion

Roger Nolan, my long-time former colleague from both Psion and Symbian, raised some challenging points in his recent piece "Symbian’s open source challenge" on the VisionMobile blog. As Roger sees it, the challenge for Symbian is to get the best out of open source without becoming so fixated by the idea of open source that we fail to address the burning requirement for improved user experience. The worry is that technological or process considerations will get in the way of creating simply delightful products.

In Roger's own words - comparing the possible future evolution of Symbian software with the history of Nokia's Linux-based "maemo" platform for mobile Internet tablets:
Sadly Maemo is ... driven from a technology soapbox. This time, it’s not a features arms race, it’s open-source-or-die. The Maemo team did not sit down and say “Let’s build a great UI for an internet tablet” they sat down and said “What can we do with open source” - open source is the religion, not ease of use and making great devices that are delightful to use.

As Symbian becomes the Symbian foundation and transitions to an open source model, I hope that the open source community will take some of the burden of implementing every last codec and piece of middle-ware and the Symbian foundation can focus on UIs and ease of use. Unfortunately, I fear that they will be overcome following Maemo’s open-source religion.
In other words, is Symbian going on a free software crusade, or are we adopting open source for solidly pragmatic reasons?

My answer is that it's a bit of both, but with a strong emphasis on the pragmatic side of the scale.

The archetypal free software crusader, of course, is Richard Stallman. The 2002 book "Free as in Freedom" by Sam Williams is a sympathetic, interesting and easy-to-read account of Stallman and his very considerable impact on the world of software - but it's no hagiography.

The early chapters in the book take a friendly approach to Stallman's personal idiosyncrasies. Reading these chapters, it's easy to develop a strong liking for this pioneering crusader. To my surprise, I found a lot of resonance between Stallman's life experiences and, on a smaller scale, my own; for example, we share backgrounds as prodigious mathematicians who were not afraid to be outsiders. (And it seems we're both interested in life extension.)

The last few chapters provide a kind of balance, by highlighting some of the problems caused within the Free and Open Source movements by Stallman's inflexibility, apparent micro-management, and under-developed project management skills.

The narrative in the book jumps around a lot, moving backwards and forwards in time all over the place. Some readers may find that distracting, but I liked it, since it helps to show the remarkable wholeness and integrity of Stallman's conceptions.

The entire text of this book is available online at http://www.faifzilla.org/. Chapter 8 contains a stark example of the clash between the "quasi-religious" approach and the pragmatic one:
Stallman says competitive performance and price, two areas where free software operating systems such as GNU/Linux and FreeBSD already hold a distinct advantage over their proprietary counterparts, are red herrings compared to the large issues of user and developer freedom.

"It's not because we don't have the talent to make better software," says Stallman. "It's because we don't have the right. Somebody has prohibited us from serving the public. So what's going to happen when users encounter these gaps in free software? Well, if they have been persuaded by the open source movement that these freedoms are good because they lead to more-powerful reliable software, they're likely to say, 'You didn't deliver what you promised. This software's not more powerful. It's missing this feature. You lied to me.' But if they have come to agree with the free software movement, that the freedom is important in itself, then they will say, 'How dare those people stop me from having this feature and my freedom too.' ..."

...the underlying logic of Stallman's argument - that open source advocates emphasize the utilitarian advantages of free software over the political advantages - remains uncontested. Rather than stress the political significance of free software programs, open source advocates have chosen to stress the engineering integrity of the hacker development model. Citing the power of peer review, the open source argument paints programs such as GNU/Linux or FreeBSD as better built, better inspected and, by extension, more trustworthy to the average user...

When an audience member asks if, in shunning proprietary software, free software proponents lose the ability to keep up with the latest technological advancements, Stallman answers the question in terms of his own personal beliefs. "I think that freedom is more important than mere technical advance," he says. "I would always choose a less advanced free program rather than a more advanced nonfree program, because I won't give up my freedom for something like that. My rule is, if I can't share it with you, I won't take it."

Such answers, however, reinforce the quasi-religious nature of the Stallman message. Like a Jew keeping kosher or a Mormon refusing to drink alcohol, Stallman paints his decision to use free software in the place of proprietary in the color of tradition and personal belief. As software evangelists go, Stallman avoids forcing those beliefs down listeners' throats. Then again, a listener rarely leaves a Stallman speech not knowing where the true path to software righteousness lies.
Now the nearest thing to a Symbian religion is the published list of our corporate values: Excellence, Innovation, Passion, Integrity, Collaboration, People. We take these values very seriously - as we do our vision: that Symbian OS will be the most widely used software platform on the planet.

Questions such as the extent of our adoption of open source, or usage of proprietary software, are, in the end, weighed up against that list of values. Open source will lead, we believe, to greater collaboration, and to more innovation. That's a good reason to support it. But it's not an end in itself.

Indeed, adopting selected open source principles is only one of the big change initiatives that are taking place in Symbian. As I've mentioned before, we're also adopting selected principles of enterprise agile. What's more, we're looking forward to significantly closer inter-working between the development teams in Symbian and in S60, which will allow faster delivery of important new technology to the market. And last - but definitely not least - there's a whole series of measures to enable improved user experience on Symbian-powered phones. The UI that's on the just-announced Nokia 5800 XpressMusic device is a significant step in this direction.

Wednesday, October 1, 2008

The student syndrome

Entries for Symbian's 2008 Student Essay Contest have just closed. The deadline for submission of entries was midnight (GMT) on 30 September 2008.

The contest has been advertised since June. What proportion of all the entries do you suppose were submitted in the final six hours before the deadline expired? (Bear in mind that, out of a total competition duration of more than three months, six hours is about 1/400 of the available time.)
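The "about 1/400" figure is easy to sanity-check; the ~100-day window below is my own rough assumption, based on "advertised since June" and the 30 September deadline:

```python
# Rough check of the "about 1/400" figure quoted above.
contest_hours = 100 * 24        # assumed ~100-day contest window, in hours
final_window_hours = 6          # the last six hours before the deadline
fraction = final_window_hours / contest_hours
print(round(1 / fraction))      # about 400
```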

I'll give the answer at the end of this article. It surprised me - though I ought to have anticipated the outcome. After all, for many years I've been telling people about "The Student Syndrome".

I became familiar with the concept of the student syndrome some years ago, while reading Eliyahu Goldratt's fine business-oriented novel "The Critical Chain".

Like all Goldratt's novels, Critical Chain mixes human interest with some intriguing ways of analysing business-critical topics. The ideas in these books had a big influence on the evolution of my own views about how to incorporate responsiveness and agility into large software projects where customers are heavily reliant on the software being delivered at pre-agreed dates.

Here's what I said on the topic of "variable task estimates" in the chapter "Managing plans and change" in my own 2005 book "Symbian for software leaders":

A smartphone project plan is made up from a large number of estimates for how long it will take to complete individual tasks. If the task involves novel work, or novel circumstances, or a novel integration environment, you can have a wide range of estimates for the length of time required.

It’s similar to estimating how long you will take to complete an unfamiliar journey in a busy city with potentially unreliable transport infrastructure. Let’s say that, if you are lucky, you might complete the journey in just 20 minutes. Perhaps 30 minutes is the most likely time duration. But in view of potential traffic hold-ups or train delays, you could take as long as one hour, or (in case of underground train derailments) even two hours or longer. So there’s a range of estimates, with the distribution curve having a long tail on the right hand side: there’s a non-negligible probability that the task will take at least twice as long as the individual most likely outcome.

It’s often the same with estimating the length of time for a task within a project plan.

Now imagine that the company culture puts a strong emphasis on fulfilling commitments, and never missing deadlines. If developers are asked to state a length of time in which they have (say) 95% confidence they will finish the task, they are likely to give an answer that is at least twice as long as the individual most likely outcome. They do so because:
  • Customers may make large financial decisions dependent on the estimate – on the assumption that it will be met;
  • Bonus payments to developers may depend on hitting the target;
  • The developers have to plan on unforeseen task interference (and other changes);
  • Any estimate the developers provide may get squashed down by aggressive senior managers (so they’d better pad their estimate in advance, making it even longer).

Ironically, even though such estimates are designed to be fulfilled around 95% of the time, they typically end up being fulfilled only around 50% of the time. This fact deserves some careful reflection. Even though the estimates were generous, it seems (at first sight) that they were not generous enough. In fact, here’s what happens:

  • In fulfilment of “Parkinson’s Law”, tasks expand to fill the available time. Developers can always find ways to improve and optimise their solutions – adding extra test cases, considering alternative algorithms and generalisations, and so forth;
  • Because there’s a perception (in at least the beginning of the time period) of there being ample time, developers often put off becoming fully involved in their tasks. This is sometimes called “the student syndrome”, from the observation that most students do most of the preparation for an exam in the time period just before the exam. The time lost in this way can never be regained;
  • Because there’s a perception of there being ample time, developers can become involved in other activities at the same time. However, these other activities often last longer than intended. So the developer ends up multi-tasking between two (or more) activities. But multi-tasking involves significant task setup time – time to become deeply involved in each different task (time to enter “flow mode” for the task). So yet more time is wasted;
  • Critically, even when a task is ready to finish earlier than expected, the project plan can rarely take advantage of this fact. The people who were scheduled for the next task probably aren’t ready to start it earlier than anticipated. So an early finish by one task rarely translates into an early start by the next task. On the other hand, a late finish by one task inevitably means a late start for the next task. This task asymmetry drives the whole schedule later.

In conclusion, in a company whose culture puts a strong emphasis upon fulfilling commitments and never missing deadlines, the agreed schedules are built from estimates up to twice as long as the individually most likely outcome, and even so, they often miss even these extended deadlines...
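The dynamics described above can be illustrated with a small Monte Carlo sketch. All the numbers here are illustrative assumptions of my own (not data from the book): a right-skewed lognormal task duration with median around 30, and a generous per-task slot of twice that. The sketch compares three regimes: early finishes banked and passed forward, early finishes wasted at each hand-off, and the Parkinson's law / student syndrome case where no task ever finishes its slot early.

```python
import random

def chain_lateness(n_tasks=20, slot=60.0, trials=4000, seed=42):
    """Estimate how often a chain of tasks misses its overall deadline,
    under three regimes for how early finishes are (or aren't) used."""
    rng = random.Random(seed)
    late = {"banked": 0, "floored": 0, "parkinson": 0}
    deadline = n_tasks * slot
    for _ in range(trials):
        # Right-skewed "true" durations: median ~30, long tail to the right.
        durations = [rng.lognormvariate(3.4, 0.5) for _ in range(n_tasks)]

        # 1. Early finishes are banked: each task starts the moment its
        #    predecessor ends, so slack anywhere helps everywhere.
        if sum(durations) > deadline:
            late["banked"] += 1

        # 2. Early finishes are wasted: no task can start before its
        #    scheduled slot, but late finishes do push successors later.
        t = 0.0
        for i, d in enumerate(durations):
            t = max(t, i * slot) + d
        if t > deadline:
            late["floored"] += 1

        # 3. Parkinson's law / student syndrome: work expands (or is
        #    deferred) so no task finishes inside its slot early; only the
        #    tail-end overruns remain, and they all accumulate.
        t = 0.0
        for i, d in enumerate(durations):
            t = max(t, i * slot) + max(d, slot)
        if t > deadline:
            late["parkinson"] += 1
    return {k: v / trials for k, v in late.items()}
```

With these assumptions, the "banked" chain almost never misses its deadline, the "floored" chain misses it occasionally, and the "parkinson" chain misses it most of the time, even though every individual slot is generous - echoing the observation that estimates designed for 95% confidence end up being met far less often.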

This line of analysis is one I've run through scores of times, in discussions with people, in the last four or five years. It feeds into the argument that the best way to ensure customer satisfaction and predictable delivery, is, counter-intuitively, to focus more on software quality, interim customer feedback, agile project management, self-motivated teams, and general principles of excellence in software development, than on schedule management itself.

It's in line with what Steve McConnell says:

  • IBM discovered 20 years ago that projects that focused on attaining the shortest schedules had high frequencies of cost and schedule overruns;
  • Projects that focused on achieving high quality had the best schedules and the highest productivities.

Symbian's experience over many years bears out the same conclusion. The more we've focused on achieving high quality, the better we've become with both schedule management and internal developer productivity.

As for the results of the student syndrome applied to the Symbian Essay Contest:

  • 54% of the essays submitted to the competition were received in the final six hours (approximately the final 1/400 of the time available);
  • Indeed, 16% of the essays submitted were received in the final 60 minutes.

That's an impressively asymmetric distribution! (It also means that the competition judges will have to work harder than they had been expecting, right up to the penultimate day of the contest...)