Friday, November 28, 2008

Why can't we all just get along?

Blogger Tomaž Štolfa asks me, in a comment to one of my previous posts,
I am also wondering why you are not trying to explore a non-os specific scenario?

Developers and service designers do not want to be bound to a single platform when developing a service for the masses. So it would make much more sense to see a bright future with cross-platform standards set by an independent party (W3C?).

If the industry will not agree on standards quickly enough Adobe (or some other company) will provide their own.
It's a good question. I'm actually a huge fan of multi-platform standards. Here are just a few of many examples:
  • Symbian included an implementation of Java way back in v4 of Symbian OS (though the OS was called "EPOC Release 4" at the time);
  • Symbian was a founder member of the Open Mobile Alliance - and I personally served twice on the OMA Board of Directors;
  • I have high hopes for initiatives such as OMTP's BONDI, which seeks to extend the usefulness of web methods on mobile devices.

Another example of a programming method that can be applied on several different mobile operating systems is Microsoft's .NET Compact Framework. Take a look at this recent Microsoft TechEd video in which Andy Wigley of Appa Mundi interviews Mike Welham, CTO of Red Five Labs, about the Red Five Labs Net60 solution that allows Compact Framework applications to run, not only on Windows Mobile, but also on S60 devices.

There's no doubt in my mind that, over time, some of these intermediate platforms will become more and more powerful - and more and more useful. The industry will see increasing benefits from agreeing and championing fit-for-purpose standards for application environments.

But there's a catch. The catch applies not to the domain of add-on, after-market solutions, but to the domain of device creation.

Much of the software involved in device creation cannot be written using these intermediate platforms. Instead, native programming is required - and that involves exposure to the underlying operating system. That's when the inconsistencies at the level of native operating systems become more significant (a short code sketch below illustrates the kind of special-casing that results):

  • Differences between clearly different operating systems (eg Linux vs. Windows Mobile vs. Symbian OS);
  • Differences between different headline versions of the same operating system (eg Symbian OS v8 vs. Symbian OS v9);
  • Differences between different flavours of the same operating system, evolved by different customers (eg Symbian OS v7.0 vs. Symbian OS v7.0s);
  • Differences between different customisations of the same operating system, etc, etc.

(Note: I've used Symbian OS for most of these examples, but it's no secret that the Mobile Linux world has considerably more internal fragmentation than Symbian. The integration delays in that world are at least as bad.)
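To give a concrete flavour of what these differences mean for a developer, here is a minimal, purely illustrative sketch: the macro names, functions, and behaviours are hypothetical (not drawn from any real codebase), but native code accumulates dozens of forks of exactly this shape, and each fork is a place where a subtle platform difference can bite.

```cpp
// Hypothetical example: per-platform special-casing in native C++ code.
// The macros and behaviours are invented for illustration only.
#include <iostream>

void StartCamera()
    {
#if defined(HYPOTHETICAL_OS_V8)
    // One OS version expects the driver to be opened before power is requested...
    std::cout << "open driver, then request power\n";
#elif defined(HYPOTHETICAL_OS_V9)
    // ...while the next major version reverses the order and adds a security
    // check that fails quietly if the caller lacks the right capability.
    std::cout << "request power, check capability, then open driver\n";
#else
    // Some other native OS, with a different driver model again.
    std::cout << "generic fallback path\n";
#endif
    }

int main()
    {
    StartCamera();
    return 0;
    }
```

Multiply that pattern across networking, telephony, multimedia, and power management, and across every OS flavour listed above, and the integration risk becomes clear.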

From my own experience, I've seen many device creation projects very significantly delayed as a result of software developers encountering nasty subtle differences between the native operating systems on different devices. Product quality suffered as a result of these project schedule slips. The first loser was the customer, on encountering defects or a poor user experience. The second loser was the phone manufacturer.

This is a vexed problem that cannot be solved simply by developing better multi-os standard programming environments. Instead, I see the following as needed:

  1. Improved software development tools that alert systems integrators more quickly to the likely causes of unexpected instability or poor performance on phones (including those problems which have their roots in unexpected differences in system behaviour); along this line, Symbian has recently seen improvements in our own projects from using the visual tools included in the Symbian Analysis Workbench;
  2. A restructuring of the code that runs on the device in order to allow more of that code to be written in standard managed code environments - Symbian's new FreeWay architecture for IP networking is one step in this direction;
  3. Where possible, APIs used by aspects of the different native operating systems should become more and more similar - for example, I like to imagine that, one day, the same device driver will be able to run on more than one native operating system;
  4. And, to be frank, we need fewer native operating systems; this is a problem that will be solved over the next couple of years as the industry gains more confidence in the overall goodness of a small number of the many existing mobile operating systems.

The question of technical fragmentation is, of course, only one cause of needless extra effort within the mobile industry. Another big cause is that different players in the value chain constantly face the temptation to grab elements of value from adjacent players. Hence, for example, the constant tension between network operators and phone manufacturers.

Some elements of this tension are healthy. But, just as for the question of technical fragmentation, my judgement is that the balance is considerably too far over to the "compete" side of the spectrum rather than the "cooperate" side.

That's the topic I was discussing a few months back with Adam Shaw, one of the conference producers from Informa, who was seeking ideas for panels for the "MAPOS '08" event that will be taking place 9-10 December in London. Out of this conversation, Adam came up with the provocative panel title, "Can’t We All Just Get Along? Cooperation between operators and suppliers". Here's hoping for a constructive dialog!

Sunday, November 23, 2008

Problems with panels

As an audience member, I've been at the receiving end of some less-than-stellar panel discussions at conferences in the last few months. On these occasions, even though there's good reason to think that the individuals on the panels are often very interesting in their own right, somehow the "talking heads" format of a panel can result in low energy and low interest. The panellists make dull statements in response to generic questions and ... interest seeps away.

On the other hand, I've also recently seen some outstandingly good panels, where the assembled participants bring real collective insight, and the audience pulse keeps beating. Here are two examples:

The format of this fine RSA panel was in the back of my mind as I prepared, last Monday, to take part in a panel myself: "What's so smart about Smartphone Operating Systems", at the Future of Mobile event in London. I shared the stage with some illustrious industry colleagues: Olivier Bartholot of Purple Labs, Andy Bush of the LiMo Foundation, Rich Miner of Android, James McCarthy of Microsoft, and the panel chair, Simon Rockman of Sony Ericsson. I had high hopes of the panel generating and conveying some useful new insights for the audience.

Alas, for at least some members of the audience, this panel fell into the "less-than-stellar" category mentioned above, rather than the better examples:

  • Tomaž Štolfa, writing in his blog "Funky Karaoke", rated this panel as just 1 out of 5, with the damning comment "a bunch of mobile OS guys, talking about the wrong problems. Where are cross platform standards?!?"; Tomaž gave every other panel or speaker a rating of at least 3 out of 5;
  • Adam Cohen-Rose, in his blog "Expanding horizons", summed up the panel as follows: "This was a rather boring panel discussion: despite Simon’s best attempts to make the panellists squirm, they stayed very tame and non-committal. The best bits was the thinly veiled spatting between Microsoft and Google — but again, this was nothing new…";
  • The Twitter back-channel for the event ("#FOM") had remarks disparaging this panel as "suits" and "monologue" and "big boys".

It's true that I can find other links or tweets that were more complimentary about this panel - but none of these comments pick this panel out as being one of the highlights of the day.

As someone who takes communication very seriously, I have to ask myself, "what went wrong?" - and, even more pertinently, "what should I do differently, for future panels?".

I toyed for a while with the idea that over-usage of Twitter by some audience members diminishes their ability to concentrate sufficiently and to pick out what's genuinely interesting in what's being said. This is akin to Nicholas Carr's argument that "Google is making us stupid":

"Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle..."

After all, I do think that I said something interesting when it was my turn to speak - see the script I prepared in advance. But after more reflection, I gave up on the idea of excusing the panel's poor rating by that kind of self-serving argument (which blames the audience rather than the panellists). That was after I remembered my own experience of being on the receiving end of lots of uninspiring panels - as I mentioned earlier. Further, I remembered that, when these panels started to become boring, my own attention would wander ... so I would miss anything more interesting that was said later on.

So on reflection, here are my conclusions, for avoiding similar problems with future panels:

  1. Pre-prepared remarks are fine. There's nothing wrong, in itself, with having something prepared to say, even if it takes several minutes to deliver. These opening comments can and should provide better context for the Q&A part of the panel that follows;
  2. However, high energy is vital; especially with an audience where people might get distracted, I ought to be sure that I speak with passion, as well as with intellectual rigour; this may be hard when we're all sitting down (that's why sofa panels are probably the worst of all), but it's not impossible;
  3. The first requirement is actually to be sure the audience is motivated to listen to the discussion - the panel participants need to ensure that the audience recognise the topic as sufficiently relevant. On reflection, our "mobile operating systems" panel would have been better placed later on in the agenda for the day, rather than right at the beginning. That would have allowed us to create bridges between problems identified in earlier sessions, and the solutions we wanted to talk about;
  4. "Less is more" can apply to interventions in panels as well as to product specs (and to blogs...); instead of trying to convey so much material in my opening remarks, I should have prioritised at most two or three soundbites, and looked to cover the others during later discussion.

These are my thoughts for when I participate as a panellist on someone else's panel. When I am a chair (as I'll be at the Symbian Partner Event next month in San Francisco) I'll have different lessons to bear in mind!

Friday, November 21, 2008

Emulating the human brain

Artificial Intelligence (AI) already does a lot to help me in my life:
  • The real-time route calculation (and re-calculation) capabilities of my TomTom satnav system are extremely handy;
  • The automated language translation functionality inside Google web-search, whilst far from perfect, often allows me to understand at least the gist of webpages written in languages other than English;
  • The intelligent recommendation engine of Amazon frequently brings books to my attention that I am glad to investigate further.
On the other hand, the field of general AI has failed to progress as quickly as some of its supporters over the years had hoped. The Wikipedia article on the History of AI lists some striking examples of significant over-optimism among leading AI researchers:
  • 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."
  • 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."
  • 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
  • 1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."
Prospects for fast progress with general AI remain controversial. As we gather more and more silicon power into smartphones and other computers, will this mean these devices become more and more intelligent? Or will they simply be fast rather than generally intelligent?

In this context, one interesting line of analysis is to consider a separate but related question: to what extent will it be possible to create a silicon emulation of the brain itself (rather than to focus on algorithms for intelligence)?

My friend Anders Sandberg, Neuroethics researcher at the Future of Humanity Institute, Oxford University, will be addressing this question in a presentation tomorrow afternoon (Saturday 22nd November) in Central London. The presentation is entitled "Emulating brains: silicon dreams or the next big thing?"

Anders describes his talk as follows:
The idea of creating a faithful copy of a human brain has been a popular philosophical thought experiment and science fiction plot for decades. How close are we to actually doing it, how could it be done, and what would the consequences be? This talk will trace trends in computing, neuroscience, lab automation and microscopy to show how whole brain emulation could become feasible in the mid-term future.
The talk is organised by the UKTA. Last weekend, at the Convergence08 "unconference" in Mountain View, California, Anders gave an earlier version of the same talk. George Dvorsky blogged the result:

Convergence08: Anders Sandberg on Whole Brain Emulation

The term 'whole brain emulation' sounds more scientific than it does science fiction like, which may bode well for its credibility as a genuine academic discipline and area for inquiry.

Sandberg presented his whole brain emulation roadmap which had a flowchart like quality to it -- which he quipped must be scientific because it was filled with arrows.

Simulating memory could be very complex, possibly involving chemical transference in cells or drilling right down to the molecular level. We may even have to go down to the quantum level, but no neuroscientist that Anders knows takes that possibility seriously...

As Anders himself told me afterwards,
...interest was high but time limited - I got a lot of useful feedback and ideas for making the presentation better.
I'm expecting a fascinating discussion.

Wednesday, November 19, 2008

New mobile OSes mean development nightmares

Over on TechRadar, Dan Grabham has commented on one of the themes from Monday's Future of Mobile event in the Great Hall in High Street Kensington, London:
The increase in mobile platforms caused by the advent of the Apple iPhone and Google's Android are posing greater challenges for those who develop for mobile. That was one of the main underlying themes of this week's Future of Mobile conference in London.

Tom Hume, Managing Director of developer Future Platforms, picked up on this theme, saying that from a development point of view things were more fragmented. "It's clear that it's an issue for the industry. I think it's actually got worse in the last year or so."

Indeed, many of the panellists representing the major OS vendors said that they expected some kind of consolidation over the coming years as competition in the mobile market becomes ever fiercer.
The theme of collaboration vs. competition was one that I covered in my own opening remarks on this panel. Before the conference, the panel chairman, Simon Rockman of Sony Ericsson, had asked the panellists to prepare a five minute intro. I'll end this posting with a copy of what I prepared.

Before that, however, I have another comment on the event. One thing that struck me was the candid comments from many of the participants about the dreadful user experience that mobile phones deliver. So the mobile industry has no grounds for feeling pleased with itself! This was particularly emphasised during the rapid-fire "bloggers 6x6 panel", which you can read more about from Helen Keegan's posting - provocatively entitled "There is no future of mobile". By the way, Helen was one of the more restrained of that panel!

So, back to my own remarks - where I intended to emphasise that, indeed, we face hard problems within our industry, and need new solutions:

This conference is called the Future of Mobile – not the Present Day of Mobile – so what I want to talk about is developments in mobile operating systems that will allow the mobile devices and mobile services of, say, 5 years time – 2013 – to live up to their full potential.

I believe that the mobile phones of 2013 will make even the most wonderful phones of today look, in comparison, jaded, weak, slow, and clunky. It’s my expectation that the phones used at that time, not just by technology enthusiasts and early adopters, but also by mainstream consumers, will be very considerably more powerful, more functional, more enchanting, more useful, more valuable, and more captivating than today’s smartphones.

To get there is going to require a huge amount of sophisticated and powerful software to be developed. That’s an enormous task. To get there, I offer you three contrasts.

The first contrast is between cooperation and competition.

The press often tries to portray some kind of monster, dramatic battle of mobile operating systems. In this battle, the people sitting around this table are fierce competitors. It’s the kind of thing that might sell newspapers. But rather than competition, I’m more interested in collaboration. The problems that have to be solved, to create the best possible mobile phone experiences of the next few years, will require cooperation between the people in the companies and organisations represented around this table – as well as with people in those companies and organisations that don’t have seats here at this moment, but which also play in our field. Instead of all of us working at odds with each other, spreading our energies thinly, creating incomplete semi-satisfactory solutions that are at odds with each other, it would be far better for us to pool more of our energies and ideas.

I’m not saying that all competition should be stopped – far from it. An element of competition is vital, to prevent a market from becoming stale. But we’ve got too much of it just now. We’ve got too many operating systems that are competing with each other, and we’ve got different companies throughout the value chain competing with each other too strongly.

Where the industry needs to reach is around 3 or 4 major mobile operating systems – whereas today the number is somewhere closer to 20 – or closer to 200, if you count all the variants and value-chain complications. It’s a fragmentation nightmare, and a huge waste of effort.

As the industry consolidates over the next few years, I have no doubt that Symbian OS will be one of the small number of winning platforms. That brings me to my second contrast – the contrast between old and new – between past successes and future successes.

Last year, Symbian was the third most profitable software company in the UK. We earned licensing revenues of over 300 million dollars. We’ve been generating substantial cash for our owners. We’re in that situation because of having already shipped one quarter of a billion mobile phones running our software. There are at present some 159 different phone models, from 7 manufacturers, shipping on over 250 major operator networks worldwide. That’s our past success. It grows out of technology that’s been under development for 14 years, with parts of the design dating back 20 years.

But of course, past success is no guarantee of future success. I sometimes hear it said that Symbian OS is old, and therefore unsuited to the future. My reply is that many parts of Symbian OS are new. We keep on substantially improving it and refactoring it.

For example, we introduced a new kernel with enhanced real-time capabilities in version 8.1b. We introduced a substantial new platform security architecture in v9.0. More recently, there’s a new database architecture, a new Bluetooth implementation, and new architectures for IP networking and multi-surface graphics. We’re also on the point of releasing an important new library of so-called “high level” programming interfaces, to simplify developers’ experience with parts of the Symbian OS structure that sometimes pose difficulty – like text descriptors, active objects, and two-phase object construction and cleanup. So there’s plenty of innovation.
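As a small illustration of why those libraries are welcome, here is a minimal sketch of the classic two-phase construction idiom as it stands today (CExample and its member are invented names; the pattern itself is the standard one):

```cpp
// A minimal sketch of Symbian C++ two-phase construction.
// CExample is an invented class name; the idiom is the standard one.
#include <e32base.h>   // CBase, CleanupStack, ELeave

class CExample : public CBase
    {
public:
    static CExample* NewL()
        {
        CExample* self = new (ELeave) CExample();  // phase 1: cannot leave
        CleanupStack::PushL(self);                 // protect against leaves
        self->ConstructL();                        // phase 2: may leave
        CleanupStack::Pop(self);
        return self;
        }
    ~CExample() { delete iBuffer; }
private:
    CExample() {}                                  // trivial, non-leaving
    void ConstructL()
        {
        iBuffer = HBufC::NewL(64);                 // heap allocation that may leave
        }
private:
    HBufC* iBuffer;                                // heap descriptor (text buffer)
    };
```

The intent of the new libraries, as I understand it, is to let developers keep the same safety guarantees with far less of this boilerplate.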

The really big news is that the pace of innovation is about to increase markedly – for three reasons, all tied up with the forthcoming creation of the Symbian Foundation:

  1. The first reason is a deeper and more effective collaboration between the engineering teams in Symbian and S60. This change is happening because of the acquisition of Symbian by Nokia. By working together more closely, innovations will reach the market more quickly.
  2. The second reason is because of a unification of UI systems in the Symbian space. Before, there were three UI systems – MOAP in Japan, UIQ, and S60. Now, given the increased flexibility of the latest S60 versions, the whole Symbian ecosystem will standardise on S60.
  3. The third reason is because of the transition of the Symbian platform – consisting of Symbian OS together with the S60 UI framework and applications – into open source. By adopting the best principles of open source, Symbian expects to attract many more developers than before to participate in reviewing and improving and creating new Symbian platform code. So there will be more innovation than before.
This brings me to the third of the three contrasts: openness vs. maturity.

Uniquely, the Symbian platform has a stable, well-tested, battle-hardened software base and software discipline, that copes well with the hard, hard task of large-scale software integration, handling input from many diverse and powerful customers.

Because of that, we’ll be able to cope with the flood of innovation that open source will send our way. That flood will lead to great progress for us, whereas for some other software systems, it will probably lead to chaos and fragmentation.

In summary, I see the Symbian platform as being not just one of several winners in the mobile operating system space, but actually the leading winner – and being the most widely used software platform on the planet, shipping in literally billions of great mobile devices. We’ll get there, because we’ll be at the heart of a huge community of impassioned and creative developers – the most vibrant developer ecosystem on the planet. Although the first ten years of Symbian’s history has seen many successes, the next ten years will be dramatically better.

Footnote: For other coverage of this event, see eg Tom Hume, Andrew Grill, Vero Pepperrell, Jemima Kiss, Dale Zak, and a very interesting Twitter channel (note to self: it's time for me to stop resisting Twitter...)

Sunday, November 16, 2008

Schrodinger's Rabbits

Long before I ever heard of smartphones, or the C++ programming language, or even C, I was intrigued by quantum mechanics. In November 1979, as a sophomore undergraduate, I was fascinated to read an article in the latest edition of the Scientific American: "The Quantum Theory and Reality", written by French theoretical physicist Bernard d'Espagnat. As recorded in the Wikipedia article on d'Espagnat, this article contains the stunning quote,
The doctrine that the world is made up of objects whose existence is independent of human consciousness turns out to be in conflict with quantum mechanics and with facts established by experiment.
What particularly struck me was the claim that "facts established by experiment" were at odds with common-sense ideas about reality. These experiments involved the now-famous "correlation at a distance" experiments inspired by a paper originally authored in 1935 by Albert Einstein and two co-workers: Boris Podolsky and Nathan Rosen. The initials of the authors - EPR - became synonymous with these experiments. Particularly when viewed through the analysis of John Bell, who devised some surprisingly counter-intuitive inequalities applicable to correlations between results in EPR experiments, these experiments seemed to defy all explanation.

Early in 1980, Professor Mary Hesse of the History and Philosophy of Science department at Cambridge gave one of the then-frequent lunchtime presentations on mathematical topics, to students (like me) sufficiently interested in such topics to give up their free time in pursuit of greater understanding of mathematics. Prof Hesse chose the philosophical problems of quantum mechanics as her subject for the meeting. I listened carefully, to find out if there were any good rebuttals to the claims made by d'Espagnat. My conclusion was that the whole area was decidedly weird. As months passed, I also asked various maths lecturers about this - but their advice was generally not to think about these questions!

Several years later, I chose Philosophy of Science as the area for my postgraduate studies, with a particular focus on trying to make sense of quantum mechanics. During that time, I even made my first trip to Finland - not to visit Nokia (since I had never heard of them at that time), but to attend a conference in 1985 in picturesque Joensuu. It was a conference to commemorate 50 years since the publication of the EPR paper. Nathan Rosen, then aged 76, was the guest of honour.

The more I studied the philosophical problems of quantum mechanics, the more I came to respect what initially seemed to be the weirdest and most unlikely solution of all. This is the so-called "Many worlds" interpretation (though, as it turns out, the name is misleading):
  • Originally proposed by Hugh Everett III, in 1957;
  • It refuses to introduce some kind of demarcation between the quantum realm, where superposition ("wavelike behaviour") is allowed, and the classical realm, where things need to be more definite;
  • Instead, it takes very seriously the idea that macroscopically large objects also spread out over a range of diverse states - in a so-called quantum superposition;
  • This includes the shocking and apparently absurd notion that even we humans end up (all the time) in a superposition of different states;
  • For example, although I subjectively feel, as I type these words now, that this is the unique instance of myself, there are countless other instances of myself, spread out in a wider multiverse, all having diverged from this particular instance as a result of cascading quantum interactions;
  • In some of these other instances, I am employed by companies other than Symbian (my employer for the last ten years in this instance); in yet other instances, Symbian was never created, or I remained in academia instead of joining the world of business, or human civilisation was destroyed when the Cuban missile crisis went wrong, or the values of physical constants were not capable of giving rise to complex matter - and so on.

If objections to this idea come to your mind, it's very likely that the same objections came to my mind during the years I pursued my postgraduate studies. For example, to the objection "why don't we feel ourselves splitting", comes the reply given by Hugh Everett himself:

Well, Copernicus made the analysis that the Earth was moving around the sun, undoing thousands of years of belief that the sun was going around the Earth, and people asked him, If the Earth is moving around the sun, then why don't I feel the Earth move?

In time, I deprioritised my postgraduate studies, to take a series of jobs, first as a part-time university supervisor, then as a maths tutor at a sixth form college, and then (from 1988) as a software engineer. But occasionally, I come across a link that re-awakens my fascination with quantum theory and the many worlds interpretation. Recently, there have been quite a lot of these links:

  • The son of Hugh Everett is a reasonably famous singer and guitarist in his own right - Mark Everett, also sometimes known as "Mr E" or just "E";
  • Mark Everett has just released an autobiography "Things the Grandchildren Should Know" which addresses his growing awareness of his father's remarkable thinking (Hugh Everett died, of a heart attack, in 1982, when Mark was just 19);
  • There has also been a PBS documentary on this same topic, "Parallel worlds, parallel lives", which has generated considerable media interest (such as this piece in the Scientific American);
  • Coincidentally, various conferences have taken place in the last year or so, commemorating the fiftieth anniversary of Everett's original thesis;
  • For example, several people I remember from my own postgraduate studies days took part in a conference "Everett at 50" at Oxford.

With this growing abundance of material about Everett's ideas, I'd like to highlight what I believe to be one of the best books on the subject. It's "Schrodinger's Rabbits: The Many Worlds of Quantum", written by Colin Bruce. It deserves to be a lot better known:

  • The author has a pleasant writing style, mixing in detective story writing and references to science fiction stories, with analysis of philosophical ideas;
  • There's no complex maths to surmount - though the reader will have to think carefully, going through various passages (the effort is worth it!);
  • Unlike many books which seem to repeat the same few themes spread over many chapters, each chapter in this book introduces important new concepts - which is another reason why it's rewarding to read it;
  • The book highlights some significant difficulties faced by the many worlds theories, but still (in my view) makes it clear that these theories are more likely to be true than false.

Alternatively, for a book that is even wider in its scope (though less convincing in some of its arguments), try "The Fabric of Reality: The Science of Parallel Universes and Its Implications" by David Deutsch - who in addition to breaking new ground in thinking about the philosophy of quantum mechanics, also happens to be a pioneer of the theory of quantum computing.

Finally, for a book that generally leaves readers in no doubt that any "common sense" interpretation of quantum mechanics fails, take a look at the stunningly well-written "Quantum Reality: Beyond the New Physics" by Nick Herbert.

Tuesday, November 11, 2008

Symbian Partner Event, San Francisco, 4th Dec

Historically, admission to Symbian Partner Events has been restricted to signed-up members of Symbian's Partner Network. However, for our event at the Palace Hotel in San Francisco on Thursday 4th December, we're going to open up participation.

Some parts of the day will still be restricted to signed partners. However, most of the proceedings on the day will be open to a wider group of attendees - such as mobile developers, journalists, the open source community, and representatives of companies that may be considering partnering with Symbian.

Space will be limited so anyone thinking of attending should register their interest as soon as possible via the event website.

Full details of speakers, panellists, and other sessions at the event will be published on the event website shortly. In the meantime, here are a few highlights:
  • Keynote presentations from a leading member of the open source community, senior representatives from network operators and phone manufacturers, Symbian executives, and the management of the Symbian Foundation;
  • "Fast Forward" technology seminars
  • An open roundtable discussion on "Succeeding in the US: the key factors"
  • "Symbian Foundation Platform Architecture Overview"
  • "Symbian Foundation Q&A".

There will also be an exhibition of partner products and solutions, as well as ample opportunity to network with movers-and-shakers of the global mobile industry.

Footnote: Here's the LinkedIn entry for this event.

Monday, November 3, 2008

Mobile 2.0 keynote

Earlier today, I had the privilege to deliver the opening keynote at the Mobile 2.0 event in San Francisco. This posting consists of a copy of the remarks I prepared.

The view from 2013

My topic is Open Source, as a key catalyst for Mobile Innovation 2.0.

Let’s start by fast forwarding five years into the future. Imagine that we are gathered for the 2013 “Mobile 2.0” conference – though the name may have changed by that time, perhaps to Mobile 3.0 or even Mobile 4.0 – and perhaps the conference will be taking place virtually, with much less physical transportation involved.

Five years into the future, we may look back at the wonder devices of today, 2008: the apparently all-conquering iPhone, the Android G1, the Nokia E71, the latest and greatest phones from RIM, Windows Mobile, and so on: all marvellous devices, in their own ways. From the vantage point of 2013, I expect that our thoughts about these devices will be: “How quaint! How clunky! How slow! How did we put up with all the annoyances and limitations of these devices?”

This has happened before. When the first Symbian OS smartphones reached the market in 2002 – the Nokia 7650, and the Sony Ericsson P800 – they received rave reviews in many parts of the world. These Symbian smartphones were celebrated at the time as breakthrough devices with hitherto unheard of capabilities, providing great user experiences. It is only in retrospect that expectations changed and we came to see these early devices as quaint, clunky, and slow. It will be the same with today’s wonder phones.

Super smart phones

That’s because the devices of five years’ time will (all being well) be so much more capable, so much slicker, so much more usable, and so much more enchanting than today’s devices. If today’s devices are smart phones, the devices of 2013 will be super smart phones.

These devices will be performing all kinds of intelligent analysis of data they are receiving through their sensors: their location and movement sensors, their eyes – that is, their cameras – and their ears – that is, their always-on recording devices.

They’ll also have enormous amounts of memory – both on-board and on-network. Based on what they’re sensing, and on what they know, and on their AI (artificial intelligence) algorithms, they’ll be guiding us and informing us about all the things that are important to us. They’ll become like our trusted best friends, in effect whispering insight into our ears.

We can think of these devices as being like our neo neo-cortex. Just as primates and especially humans have benefited from the development of the neo-cortex, as the newest part of our brains in evolutionary terms, so will users of super smartphones benefit from the enhanced memory, calculation powers, and social networking capabilities of this connected neo neo-cortex.

In simple terms, these devices can be seen as adding (say) 20 points to our IQs – perhaps more. If today’s smartphones can make their users smarter, the super smartphones of 2013 can make their users super smart.

Solving hard problems

That’s the potential. But the reality is that it’s going to be tremendously hard to achieve that vision. It’s going to require an enormously sophisticated, enormously capable, mobile operating system.

Not everyone shares my view that operating systems are that important. I sometimes hear the view expressed that improvements in hardware, or the creation of new managed code environments, somehow reduce the value of operating systems, making them into a commodity.

I disagree. I strongly disagree. It's true that improvements in both hardware and managed code environments have incredibly important roles to play. But there remain many things that need to be addressed at the operating system level.

Here are just some of the hard tasks that a mobile operating system has to solve:
  • Seamless switching between different kinds of wireless network – something that Symbian’s FreeWay technology does particularly well;

  • Real-time services, not only handling downloads of extremely large quantities of data, but also manipulating that data in real time – decompressing it, decrypting it, displaying it on screen, storing it in the file system, etc;

  • All this must happen without any jitter or delay – even though there may be dozens of different applications and services all talking at the same time to several different wireless networks;

  • All this rich functionality must be easily available to third party developers;

  • However, that openness of developer access must coexist with security of the data on the device and the integrity of the wireless networks;

  • And, all this processing must take place without draining the batteries on the device;

  • And without bamboozling the user due to the sheer scale and complexity of what’s actually happening;

  • And all this must be supported, not just for one device, but in a way that can be customised and altered, supporting numerous different form factors and usage models, without fragmenting the platform.

Finally, please note that all this is getting harder and more complex: every year, the amount of software in a top-range phone approximately doubles. So in five years, there’s roughly 32 times as much software in the device. In ten years, there could be roughly 1000 times as much software.
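(For what it's worth, the arithmetic behind those figures assumes the doubling really is annual: five doublings give 2^5 = 32 times as much software, and ten doublings give 2^10 = 1024, roughly 1000 times as much.)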

Engaging a huge pool of productive developers

With so many problems that need to be solved, I will say this. The most important three words to determine the long-term success of any mobile operating system are: Developers, Developers, Developers. To repeat: the most important three words for the success of the Symbian platform are: Developers, Developers, Developers.

We need all sorts and shapes and sizes of developers – because, as I said, there are so many deep and complex problems to be solved, as the amount of software in mobile phone platforms grows and grows.

No matter how large and capable any one organisation is, the number of skilled developers inside that organisation is only a small fraction of the number outside. So it comes down to enabling a huge pool of productive and engaged developers, outside the organisation, to work alongside the original developers of the operating system – with speed, creativity, skill, and heartfelt enthusiasm. That’s how we can collectively build the successful super smart phones of the future.

Just two weeks ago, the annual Symbian Smartphone Show put an unprecedented focus on developers. We ran a Mobile DevFest as an integral part of the main event. We announced new developer tools, such as the Symbian Analysis Workbench. We will shortly be releasing new sets of developer APIs (application programming interfaces) in new utility libraries, to simplify interactions with parts of the Symbian programming system that have been found to cause the most difficulty – such as text descriptors, two-phase object construction and cleanup, and active objects.
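To show what the last of those looks like in practice, here is a bare-bones sketch of a classic active object; the class name and the timer example are purely illustrative, but the RunL and DoCancel structure is the standard pattern that the new utility libraries aim to make easier to work with:

```cpp
// A minimal sketch of a Symbian active object: asynchronous requests are
// issued, then completed cooperatively via the active scheduler.
// CTimerWaiter is an invented name used for illustration.
#include <e32base.h>   // CActive, CActiveScheduler, CleanupStack (pulls in e32std.h for RTimer, User)

class CTimerWaiter : public CActive
    {
public:
    static CTimerWaiter* NewL()
        {
        CTimerWaiter* self = new (ELeave) CTimerWaiter();
        CleanupStack::PushL(self);
        self->ConstructL();
        CleanupStack::Pop(self);
        return self;
        }
    ~CTimerWaiter() { Cancel(); iTimer.Close(); }
    void Wait(TTimeIntervalMicroSeconds32 aDelay)
        {
        iTimer.After(iStatus, aDelay);   // issue the asynchronous request
        SetActive();                     // tell the scheduler a request is outstanding
        }
private:
    CTimerWaiter() : CActive(EPriorityStandard)
        {
        CActiveScheduler::Add(this);     // register with the thread's scheduler
        }
    void ConstructL() { User::LeaveIfError(iTimer.CreateLocal()); }
    void RunL() { /* called by the scheduler when the timer completes */ }
    void DoCancel() { iTimer.Cancel(); }
private:
    RTimer iTimer;
    };
```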

The critical role of open source

But the biggest step to engage and enthuse larger numbers of developers is to move the entire Symbian platform into open source.

This will lower the barriers of entry – in fact, it will remove the barriers of entry. It will allow much easier study of the source code, and, critically, much easier participation in the research and creation of new Symbian platform software. We are expecting a rapid increase in collaboration and innovation. This will happen because there are more developers involved, and more types of developers involved.

That’s why the title of my talk this morning is “Open Source: Catalyst for Mobile Innovation 2.0”. The “2.0” part means greater collaboration and participation than ever before: people not just using the code or looking at the code, but recommending changes to it and even contributing very sizeable chunks of new and improved code. The Open Source part is one of the key enablers for this transformation.

Necessary, but not sufficient

However, Open Source is only one of the necessary enablers. Open Source is a necessary but not sufficient ingredient. There are two others:

  1. A stable and mature software base, with reliable processes of integration, which I’ll talk more about in a moment;

  2. Mastery of the methods of large-scale Agile development, allowing rapid response to changing market needs.

Fragmentation inside an operating system

Here’s the problem. Fragmentation is easy, but Integration is hard.

Fragmentation means that different teams or different customers pull the software in different directions. You end up, in the heat of development in a fast-moving market, with different branches that are incompatible with each other and which can’t easily be joined together. The result is that solutions created by developers for one of these branches fail to work on the other branches. A great deal of time can be wasted debugging these issues.

Here, I speak from bitter experience. During the rapid growth days of Symbian, we lost control of aspects of compatibility in our own platform – despite our best efforts. For example, we had one version called 7.0s and another called 7.0, but lots of partners reported huge problems moving their solutions between these two versions. Because of resulting project delays, major phones failed to come to the market. It was a very painful period.

Nowadays, in the light of our battle-hardened experience, Symbian OS is a much more mature and stable platform, and it is surrounded and supported by an ecosystem of very capable partners. In my view, we have great disciplines in compatibility management and in codeline management.

The result is that we have much better control over the integration of our platform. That puts us in a better position to handle rapid change and multiple customer input. That means we can take good advantage of the creativity of open source, rather than being pulled apart by the diverse input of open source. Other platforms may find things harder. For them, open source may bring as many problems as it brings solutions.

Fragmentation across operating systems

This addresses the fragmentation inside a single operating system. But the problem remains of fragmentation across different operating systems.

Although competition can be healthy, too many operating systems result in developers spreading their skills too thinly. The mobile industry recognises that it needs to consolidate on a smaller number of mobile operating systems moving forwards. The general view is that there needs to be consolidation on around three or (at the most) four advanced mobile operating systems. Otherwise the whole industry ends up over-stretched.

So, which will be the winning mobile operating systems, over the next five years? In the end, it will come down to which phones are bought in large quantities by end users. In turn, the choices offered to end users are strongly influenced by decisions by phone manufacturers and network operators about which operating systems to prefer. These companies have four kinds of issues in their minds, which they want to see mobile operating systems solve:

  • Technical issues, such as battery life, security, and performance, as well as rich functionality;

  • Commercial issues, such as cost, and the ability to add value by differentiation;

  • Political issues, in which there can be a perception that the future evolution of an operating system might be controlled by a company or organisation with divergent ulterior motivations;

  • Reliability issues, such as a proven track record for incrementally delivering new functionality at high quality levels in accordance with a pre-published roadmap.

A time for operating systems to prove themselves

Again, which will be the winning operating systems, over the next five years? My answer is that it is slightly too early to say for sure. The next 12-18 months will be a time of mobile operating systems proving themselves. Perhaps three or four operating systems will come through the challenge, and will attract greater and greater support, as customers stop hedging their bets. Others will be de-selected (or merged).

For at least some of the winning smartphone operating systems, there will be an even bigger prize, in the subsequent 2-3 years. Provided these operating systems are sufficiently scalable, they will become used in greater and greater portions of all phones (not just smartphones and high-end feature phones).

Proving time for the Symbian Foundation platform

Here’s how I expect the Symbian platform to prove itself in the next 12-18 months. Our move to open source was announced in June this year, and we said it could take up to two years to complete it. Since then, planning has been continuing, at great speed. Lee Williams, current head of the S60 organisation in Nokia, and formerly of Palm Inc and Be Inc, has been announced as the Executive Director of the Symbian Foundation.

The Foundation will make its first software release midway through the first half of 2009. Up till that point, access to internal Symbian OS source code is governed by our historical CustKit Licence and DevKit Licence. There’s a steep entry price, around 30,000 dollars per year, and a long contract to sign to gain access, so the community of platform developers has been relatively small. From the first Symbian Foundation release, that will change.

The source code will be released under two different licenses. Part will be open source, under the Eclipse Public Licence. This part has no licence fee, and is accessible to everyone. The other part will be community source, under an interim Symbian Foundation Licence. This is also royalty free, but there is a small contract that companies have to sign, and a small annual fee of 1,500 dollars. I expect a large community to take advantage of this.

This interim community source part will diminish, in stages, until it vanishes around the middle of 2010. By then, everything will be open source. We can’t get there quicker because there are 40 million lines of source code altogether, and we need to carry out various checks and cleanups and contract renegotiations first. But we’ll get there as quickly as we can.

There’s one other important difference worth highlighting. It goes back to the theme of reducing fragmentation. Historically, there have been three different UIs for Symbian OS: S60, UIQ, and MOAP(S) used in Japan. But going forwards, there will only be one UI system: S60, which is nowadays flexible enough to support the different kinds of user models for which the other UI systems were initially created.

To be clear, developers don’t have to wait until 2010 before experimenting with this software system. Software written to the current S60 SDK will run fine on these later releases. We’ll continue to make incremental compatible releases throughout this time period.

What you should also see over this period is that the number of independent contributors will increase. It won’t just be Nokia and Symbian employees who are making contributions. It will be like the example of the Eclipse Foundation, in which the bulk of contributions initially came from just one company, IBM, but nowadays there’s a much wider participation. So also for the Symbian Foundation, contributions will be welcome based on merit. And the governance of the Foundation will also be open and transparent.

The view from 2013

I’ll close by returning to the vision for 2013. Inside Symbian, we’ve long had the vision that Symbian OS will be the most widely used software platform on the planet. By adopting the best principles of open source, we expect we will fulfil this vision. We expect there will in due course be, not just 100s of millions, but billions of great devices, all running our software. And we’ll get there because we’ll be at the heart of what will be the most vibrant software ecosystem on the planet – the mobile innovation ecosystem. Thank you very much.