Sunday, June 29, 2008

The enhancement of the dream

Did this week's announcements about the Symbian Foundation herald "The end of the dream", as Michael Mace suggests?

No matter how it works out in the long run, the purchase of Symbian by Nokia marks the end of a dream -- the creation of a new independent OS company to be the mobile equivalent of Microsoft. Put a few beers into former Symbian employees and they'll get a little wistful about it, but the company they talk about most often is Psion, the PDA company that spawned Symbian. ...

What makes the Psion story different is that many of the Psion veterans had to leave the UK, or join non-UK companies, in order to become successful. Some are in other parts of Europe, some are in the US, and some are in London but working for foreign companies. This is a source of intense frustration to the Psion folks I've talked with. They feel like not only their company failed, but their country failed to take advantage of the expertise they had built.

I understand the thrust of this argument, but I take a different point of view. Rather than seeing this week's announcement as "the end of the dream", I see it as enabling "the enhancement of the dream".

During the second half of 2007, Symbian's executive team led a company-wide exercise to find a set of evocative, compelling words that captured what we called "The Symbian Story". Some of the words we came up with were new, but the sentiment they conveyed was widely recognised as deriving from the deep historic roots of the company. Here are some extracts:
  • The world is seeing a revolution in smarter mobile devices
  • Convergence is real, happening now and coming to everyone, everywhere
  • Our mission is to be the OS chosen for the converged mobile world
  • No one else can seize it like we can
  • Our talented people, building highly complex software, have established a smartphone OS that leads the industry
  • We welcome rapid change as the way to stay ahead
  • We'll work together to fulfil our potential to be the most widely used software on the planet, at the heart of an inspiring, exciting and rewarding success story.

This story - which we might also call a dream, or a vision - has by no means ended with this week's announcements. On the contrary, these steps should accelerate the outcome that's been in our minds for so long. There will be deeper collaboration and swifter innovation - making it even more likely that the Symbian platform will become in due course the most widely used on the planet.

But what about the dream that Symbian (or before it, Psion) could be "the next Microsoft"?

In terms of software influence, and setting de facto standards, this dream still holds. In terms of boosting the productivity and enjoyment of countless people around the world, through the careful deployment of smart software which we write, the dream (again) still holds. In terms of the founders of the company joining the ranks of the very richest people in the world, well, that's a different story, but that fantasy was never anything like so high in our motivational hierarchy.

What about the demise of "British control" over the software? Does the acquisition of UK-based Symbian by Finland-based Nokia indicate yet another "oh what might have been" for the United Kingdom plc?

Once again, I prefer to take a different viewpoint. In truth, the software team ceased long ago to be dominated by home-bred British talent. The present Symbian Leadership Team includes one person from Holland and one from Norway. Half of the Research department that I head were born overseas (in Russia, Greece, and Canada). And during the Q&A with Symbian's Nigel Clifford and Nokia's Kai Oistamo, at all-hands meetings of Symbian employees in London on the 24th of June, questions were asked in almost every accent under the sun. So rather than seeing Symbian as a British-run company, it's better to see us as a global company that happens to be headquartered in London, and which benefits mightily from talent born all over the world.

Not only do we benefit from employees born worldwide, we also benefit (arguably even more) from our interactions with customers and partners the world over. As Symbian morphs over the next 6-9 months into a new constellation of organisations (including a part that works inside Nokia, and a part that has an independent existence as the Symbian Foundation), these collaborative trends should intensify. That's surely a matter for celebration, not for remorse.

The five laws of fragmentation

As discussion of the potential for the Symbian Foundation gradually heats up, the topic of potential fragmentation of codelines keeps being raised. To try to advance that discussion, I offer five laws of fragmentation:

1. Fragmentation can have very bad consequences

Fragmentation means there's more than one active version of a software system, and that add-on or plug-in software which works fine on one of these versions fails to work well on other versions. The bad consequences are the extra delays this causes to development projects.
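
To make this concrete, here's a minimal sketch of how fragmentation bites developers. It's hypothetical C++, not real Symbian code: the same unchanged add-on source builds happily against one version of a platform interface, but fails against a divergent version that made a seemingly small change.

  // plugin_compat.cpp - hypothetical sketch of fragmentation (not real
  // Symbian code). Build normally to simulate "platform v1.0"; build with
  // -DPLATFORM_V2 to simulate a divergent release that made a "small"
  // interface change.
  #include <cstdio>

  #ifndef PLATFORM_V2
  // Platform v1.0: the interface the add-on was written against.
  struct TelephonyService {
      int Dial(const char* number) {
          std::printf("Dialling %s\n", number);
          return 0;
      }
  };
  #else
  // Platform v2.0: the same interface, now with an extra mandatory parameter.
  struct TelephonyService {
      int Dial(const char* number, int line) {
          std::printf("Dialling %s on line %d\n", number, line);
          return 0;
      }
  };
  #endif

  // The add-on, unchanged. It builds and runs against v1.0, but the very
  // same source fails to compile against v2.0.
  int main() {
      TelephonyService svc;
      return svc.Dial("+441234567890");
  }

Multiply that one-line mismatch across hundreds of interfaces - and across binary compatibility as well as source compatibility - and you have the kinds of delays described below.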

Symbian saw this with the divergence between our v7.0 and v7.0s releases. (The little 's' was sometimes said to stand for "special", sometimes for "strategic", and sometimes for "Series 60".) UIQ phones at the time were based on our v7.0 release. However, the earliest Series 60 devices (such as the Nokia 7650 "Calypso") had involved considerable custom modifications to the lower levels of the previous Symbian OS release, v6.1, and these turned out to be incompatible with v7.0. As a pragmatic measure, v7.0s was created: it had all of the new technology features introduced for v7.0, but kept application-level compatibility with v6.1.

On the one hand, v7.0s was a stunning success: it powered the Nokia 6600 "Calimero", by far the largest-selling Symbian OS phone up to that time. On the other hand, the incompatibilities between v7.0 and v7.0s caused no end of difficulties for developers of add-on or plug-in software for phones based on these two versions:
  • The incompatibilities weren't just at the level of UI - UIQ vs. Series 60
  • There were also incompatibilities at many lower levels of the software plumbing - including substantial differences in implementation of the "TSY" system for telephony plug-ins
  • There were even differences in the development tools that had to be used.

As a result, integration projects for new phones based on each of these releases ran into many delays and difficulties.

Symbian OS v8 was therefore designed as the "unification release", seeking as much compatibility as possible with both of the previous branches of codeline. It made things considerably better - but some incompatibilities still remained.

As another example, I could write about the distress caused to the Symbian partner ecosystem by the big change in APIs moving from v8 to v9 (changes due mainly to the new PlatSec system for platform security). More than one very senior manager inside our customer companies subsequently urged us in very blunt language, "Don't f****** break compatibility like that ever again!"
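
What does "not breaking compatibility" look like in practice? Here's one minimal sketch of the discipline involved, again in hypothetical C++ rather than real Symbian APIs: extend an interface by adding an overload that delegates to the new code, so that the originally published signature survives untouched. (Binary compatibility is harder still, since it also constrains details like class layout and the ordering of virtual functions, but the underlying principle is the same: change by addition, not by mutation.)

  // Hypothetical sketch (not a real Symbian API) of evolving an interface
  // without breaking existing callers: add an overload and delegate,
  // rather than changing the published signature.
  #include <cstdio>
  #include <string>

  class MessageSender {
  public:
      // v2: the new capability, expressed as an additional overload.
      int Send(const std::string& to, const std::string& body, int priority) {
          std::printf("To %s (priority %d): %s\n",
                      to.c_str(), priority, body.c_str());
          return 0;
      }

      // v1: the originally published signature, kept intact. Old callers
      // still compile and behave as before; internally it delegates.
      int Send(const std::string& to, const std::string& body) {
          return Send(to, body, /*priority=*/0);
      }
  };

  int main() {
      MessageSender s;
      s.Send("+441234567890", "written against v1");     // old caller
      s.Send("+441234567890", "written against v2", 1);  // new caller
      return 0;
  }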

Looking outside the Symbian world, I note the following similar (but more polite) observation in the recent Wall Street Journal article, "Google's Mobile-Handset Plans Are Slowed":

Other developers cite hassles of creating programs while Android is still being completed [that is, while it is undergoing change]. One is Louis Gump, vice president of mobile for Weather Channel Interactive, which has built an Android-based mobile weather application. Overall, he says, he has been impressed by the Google software, which has enabled his company to build features such as the ability to look up the weather in a particular neighborhood.

But he says Weather Channel has had to "rewrite a few things" so far, and Google's most recent revision of Android "is going to require some significant work," he says.

2. Open Source makes fragmentation easier

If law 1 was obvious (even though some open source over-enthusiasts seem to be a bit blind to it), law 2 should be even clearer. Access to the source code of a system (along with the ability to rebuild the system) makes it easier for people to change that system to suit their own development purposes. If the platform doesn't meet a particular requirement of a product being built from it, hey, you can roll up your sleeves and change the platform. So the trunk platform stays on v2.0 (say) while your branch effectively defines a new version, v2.0s (say). That's one of the beauties of open source. But it can also be the prelude to fragmentation and all the pain that ensues.

The interesting question about open source is to figure out the circumstances in which fragmentation (also known as "forking") occurs, and when it doesn't.

3. Fragmentation can't be avoided simply by picking the right contract

Various license contracts for open source software specify circumstances in which changes made by users of an open source platform need to be supplied back into the platform. Different contracts specify different conditions, and this can provoke lengthy discussions. However, for the moment, I want to sidestep these discussions and point out that contractual obligations, by themselves, cannot cure all fragmentation tendencies:

  • Even when users of a platform are obligated to return their changes to the platform, and do so, it's no guarantee that the platform maintainers will adopt these changes
  • The platform maintainers may dislike the changes made by a particular user, and reject them
  • Although a set of changes may make good sense for one set of users, they may involve compromises or optimisations that would be unacceptable to other users of the platform
  • Reasons for divergence might include use of different hardware, running on different networks, the need to support specific add-on software, and so on.

4. The best guarantee against platform fragmentation is powerful platform leadership

Platform fragmentation has some similarities with broader examples of fragmentation. What makes some groups of people pull together for productive collaboration, whereas in other groups, people diverge following their own individual agendas? All societies need both cooperation and competition, but when does the balance tilt too far towards competition?

A portion of the answer is the culture of the society - as reflected in part in its legal framework. But another big portion of the answer is in the quality of the leadership shown in a society. Do people in the group believe that the leaders of the group can be relied on, to keep on "doing the right thing"? Or are the leaders seen as potentially misguided or incompetent?

Turning back to software, users of a platform will be likely to stick with the platform (rather than forking it in any significant way) if they have confidence that the people maintaining the trunk of the platform are:

  1. well-motivated, for the sake of the ecosystem as a whole
  2. competent at quickly and regularly making valuable new high quality releases that (again) meet the needs of the ecosystem as a whole.

Both the "character" (point 1) and the "competence" (point 2) are important here. As the Stephen Coveys (father and son) have repeatedly emphasised, you can't get good trust without having both good character and good competence.

5. The less mature the platform, the more likely it is to fragment, especially if there's a diverse customer base

If a platform is undergoing significant change, users can reason that it's unlikely to coalesce any time soon into a viable new release. They'll then be more inclined to carry on working with their own side version of the platform, rather than waiting, for what could be a long time, for the evolving trunk to meet their own particular needs.

This tendency is increased if there are diverse customers, who each have their own differing expectations and demands for the still-immature software platform.

In contrast, if the core of the platform is rock-solid, and changes are being carefully controlled to well-defined areas within the platform, customers will be more likely to want to align their changes with the platform, rather than working independently. Customers will reason that:

  • The platform is likely to issue a series of valuable updates, over the months and years ahead
  • If I diverge from the platform, it will probably be hard, later on, to merge the new platform release material into my own fork
  • That is, if I diverge from the platform, I may gain short-term benefit, but then I'll likely miss out on all the good innovation that subsequent platform releases will contain
  • So I'd better work closely with the developers of the trunk of the platform, rather than allowing my team to diverge from it.

Footnote: Personally, I see the Symbian Foundation codeline as considerably more mature (tried and tested in numerous successful smartphones) than the codeline in any roughly similar mobile-phone-oriented Linux-based foundation. That's why I expect the Symbian Foundation codeline to come under less fragmentation pressure. I also believe that Symbian's well-established software development processes (careful roadmap management, compatibility management, system architecture review, modular design, overnight builds, peer reviews, and systematic and extensive regression testing) are set to transfer smoothly into this new and exciting world, maintaining our track record of predictable high-quality releases and further lessening the risks of fragmentation.

Friday, June 27, 2008

Aubrey de Grey's preposterous campaign to cure aging


At first sight, Aubrey de Grey is clearly preposterous. Not only does he look like a relic of the middle ages, with his huge long beard, but his ideas on potentially "curing aging" within the present generation apparently run counter to many well-established principles of science, society, philosophy, and even religion. So it's no surprise that his ideas arouse some fervent opposition. See for example a selection of the online comments to the article about him, "The Fight to End Aging Gains Legitimacy, Funding", in today's Wired:

Guess what, jackasses... we're supposed to die! Look up the 2nd law of thermodynamics, you might learn something. We've even evolved molecular mechanisms to make sure our cells can't reproduce beyond a certain point... check out "Hayflick limit" on Wikipedia. The stark biological reality is that we are here to pass along our genes to our progeny and then DIE. What the hell, wasn't this settled back in the 1800s? Why are we debating this stupidity?
and

Aging and death is an evolutionary response to cancer in mammals. You'll have to resolve the cancer issue (and remember kids - cancer is actually a whole lot of different but related diseases) before you can resolve the aging and death issue.
However, first appearances can be deceptive. I had my first serious discussions with Aubrey at the "Tomorrow's People" conference in Oxford in March 2006. Not only did I pose my own questions, I listened and observed with increasing admiration as Aubrey addressed issues posed by other audience members, and during many coffee breaks as the conference progressed. Later that year, in August, I had the chance to continue these discussions and observations at Transvision 2006 in Helsinki. (By the way, as well as being home to the world's leading mobile phone manufacturer, Finland hosts a disproportionate number of self-described transhumanists; perhaps both facts reflect an unusually pragmatic yet rational approach to life.) I saw that Aubrey has good, plausible answers to his critics. You can find many of these answers on his extensive website.

Since that time, I've been keen to watch Aubrey speak whenever the opportunity arises. Unfortunately, I'll miss the conference that's happening at UCLA this weekend: "AGING: The Disease - The Cure - The Implications" - which has a session this afternoon (4pm West Coast time) that's open to the general public. However, I'm eagerly looking forward to some good debate at the July 12 meeting of the UKTA, at Birkbeck College in London, where Aubrey will be one of the speakers on the topic, "Living longer and longer yet healthier and healthier: realistic grounds for hope?". (If you're interested in attending, and you use Facebook, you can indicate your interest and RSVP here.)

As I've come to see it, addressing aging by the smart and imaginative uses of technology fits well with the whole programme of medicine (which constantly intervenes to prevent nature taking its "natural toll" on the human body). It also has some surprising potential cost-saving benefits, as aging-related diseases are responsible for a very significant part of national health expenditure. But that's only the start of the argument. To help explore many of the technical byways of this argument, I strongly recommend Aubrey's 2007 book, "Ending Aging: The rejuvenation breakthroughs that could reverse human aging in our lifetime".

In terms of disruptive technology trends (some of which I study in my day job), this is about as big as it gets.

I'll end by quoting from today's Wired article:

"In perhaps seven or eight years, we'll be able to take mice already in middle age and treble their lifespan just by giving them a whole bunch of therapies that rejuvenate them," de Grey said. "Gerontologists all over, even my most strident critics, will say yes, Aubrey de Grey is right."

Even as he imagines completing Gandhi's fourth step, de Grey always keeps his eye on the ultimate prize -- the day when the aging-as-disease meme reaches the tipping point necessary to funnel really big money into the field.

"The following day, Oprah Winfrey will be saying, aging is a disease and let's fix it right now," de Grey said.

Wednesday, June 25, 2008

A tale of two meetings

In the past, I've enjoyed several meetings of the London Skeptics in the Pub ("SitP"). More than 100 people cram into the basement meeting space of Penderel's Oak in Holborn, and listen to a speaker cover a contentious topic - such as alternative medicine, investigating the paranormal, the "moon landings hoax". What's typically really enjoyable is the extended Q&A session in the second half of the meeting, when the audience often dissect the speaker's viewpoint. Attendee numbers have crept up steadily over the nine years the group has existed. It's little surprise that the group was voted into the Top Ten London Communities 2008 by Time Out.

Last night, the billed speaker was the renowned (many would say "infamous") climate change denier, Fred Singer. The talk was advertised as follows:
Global Warming: Science, Economics, and some Moral Issues: What Al Gore Never Told You.

The science is settled: Evidence clearly demonstrates that carbon dioxide contributes insignificantly to Global Warming and is therefore not a 'pollutant.' This fact has not yet been widely recognized, and irrational Global Warming fears continue to distort energy policies and foreign policy. All efforts to curtail CO2 emissions, whether global, federal, or at the state level, are pointless -- and in any case, ineffective and very costly. On the whole, a warmer climate is beneficial. Fred will comment on the vast number of implications.
Since this viewpoint is so far removed from consensus scientific thinking, I was hoping for a cracking debate. And indeed, the evening started well. Singer turned out to be a better speaker than I expected. Even though he's well into his 80s, he spoke with confidence, courtesy, and good humour. And he had some interesting material:
  • A graph that seemed to show that global temperature has not been rising over the last ten years (even though atmospheric CO2 has incontrovertibly been rising over that time period)
  • A claim that all scientific models of atmospheric warming are significantly at variance with observed data (and therefore, we shouldn't give these models much credence)
  • Suggestions that global warming is more strongly influenced by cosmic rays than by atmospheric CO2.

(The contents of the talk were similar to what's in this online article.)

So I eagerly anticipated the Q&A. But oh, what a disappointment. I found myself more and more frustrated:

  • Quite a few of the audience members seemed incapable of asking a clear, relevant, concise question. Instead, they tended to go off on tangents, or went round and round in circles. (To my mind, the ability to identify and ask the key question, without distraction, is an absolutely vital skill for the modern age.)
  • Alas, the speaker could not hear the questions (being, I guess, slightly deaf owing to his advanced age), so they had to be repeated by the meeting moderator, who was standing at the front next to the speaker
  • The moderator often struggled to capture the question from what the audience member had said, so there were several iterations here
  • Then the speaker frequently took a LONG time to answer the question. (He was patient and polite, but he was also painstakingly SLOW.)

Result: lots of time wasted, in my view. No one landed anything like a decisive refutation of the speaker's claims. There were lots of good questions that should have been asked, but time didn't allow it. I also blamed myself: I hadn't done any research prior to the meeting (I had been pretty busy on other matters for the last few days), and I couldn't do my usual trick of looking up information on my smartphone during the meeting (via Google, Wikipedia, etc), because network reception was very poor in the part of the basement where I was standing. In conclusion, although the discussion was fun, I don't think we got anything like the best possible discussion that the speaker's presentation deserved.

I mention all this, not just because I'm deeply concerned about the fearsome prospects of runaway global warming, but also because I'm interested in the general question of how to organise constructive debates that manage to reach to the heart of the matter (whatever the matter is).

As an example of a meeting that did have a much better debate, let me mention the one I attended this evening. It was hosted by Spiked, and was advertised as follows:

Nuclear power: what's the alternative? The future of energy in Britain

As we seek to overcome our reliance on fossil fuels, what are the alternatives? Offshore turbines and wind farms are often cited as options but can they really meet more than a fraction of the UK’s energy needs? If not, is nuclear power a viable alternative? Public anxieties about nuclear plants’ safety, their susceptibility to terrorist attacks, and the problem of safely disposing of radioactive waste persist. But to what extent are these concerns justified? Is the real issue the public’s perception of both the risks and potential of nuclear energy? Ultimately, does nuclear energy, be it the promise of fusion or the reality of fission, finally mean we can stop guilt-tripping about energy consumption?

Instead of just one speaker, there were five, who had a range of well-argued but differing viewpoints. And the chairperson, Timandra Harkness (Director of Cheltenham Science Festival's Fame Lab) was first class:

  • She made it clear that each speaker was restricted to 7 minutes for their opening speech (and they all kept to this limit, with good outcomes: focus can have wonderful results)
  • Then there were around half a dozen questions from the floor, asked one after the other, before the speaker panel were invited to reply
  • There were several more rounds of batched up questions followed by responses
  • Because of the format, the speakers had the option of ignoring the (few) irrelevant questions, and could concentrate on the really interesting ones.

For the record, I thought that all the speakers made good points, but Keith Barnham, co-founder of the solar cell manufacturing company Quantasol, was particularly interesting, with his claims for the potential of new generation photovoltaic concentrator solar cells. (This topic also featured in an engrossing recent Time article.) He recommended that we put our collective hope for near-future power generation "in the [silicon] industry that gave us the laptop and the mobile phone, rather than the industry that gave us Chernobyl and Sellafield". (Ouch!) Advances in silicon have time and again driven down the prices of mobile phones; these benefits will also come quickly (Barnham claimed) to the new generation solar cells.

But the conclusion I want to draw is that the best way to ensure a great debate is to have a selection of speakers with complementary views, to insist on focus, and to chair the meeting particularly well. Yes, collaboration is hard - but when it works, it's really worth it!

Footnote: the comparison between the Skeptics in the Pub meeting and the Spiked one is of course grossly unfair, since the former is run on a shoestring (there's a £2 charge to attend) whereas the latter has a larger apparatus behind it (the entry charge was £10, payable in advance; and there's corporate sponsorship from Clarke Mulder Purdie). But hey, I still think there are valid learnings from this tale of two different meetings - each interesting and a good use of time, but one ultimately proving much more satisfactory than the other.

Tuesday, June 24, 2008

Symbian 2-0

Months of planning culminated this morning with the announcement of an intended dramatic evolution for Symbian – an evolution that should decisively advance the Symbian platform toward its long-anticipated status of being the most widely used software platform on the planet.

The announcement of the Symbian Foundation comes on the very first day of the second decade of Symbian’s existence. It also sets the scene for a much wider participation by companies and individuals in the development and deployment of novel services and applications for all sorts of new and improved Symbian-powered mobile devices. Because this second decade of Symbian’s history should witness radically greater collaboration than before, the designation “Symbian 2.0” seems doubly apt.

Subject to the deal receiving regulatory approval, I envision a whole series of far-reaching changes to take place in the months and years ahead:

  • It will become possible for the best practices of Open Source Software to be applied in and around the software platform that is best suited to smartphones

  • Closer working relations between personnel from Symbian and S60 teams will result in more efficient development, accelerating the rate at which the overall platform improves

  • The lower barriers to usage of the Symbian platform should mean that the number of customers and partners will rocket

  • The unification of the formerly separate UI systems will further increase the attractiveness of the new platform

  • The platform will be royalty free – which will be another factor to boost usage

  • Because of increased adoption of the platform, the ecosystem will also grow, through the virtuous cycle in which OS volume and ecosystem value reinforce each other

  • For all these reasons, smartphone innovation should jump forward in pace, to the potential benefit of all participants in the ever expanding, ever richer, converged mobile industry
  • Customers and partners alike – both newcomers and old-timers – will be on the lookout for fresh options for differentiation and support

  • In short, there will be lots of new opportunities for people with knowledge of the Symbian platform.

Great credit is due to Symbian’s shareholders, and especially to Nokia, for enabling and driving this bold and powerful initiative.

Of course, with such a large change, there’s considerable uncertainty about how everything will work out. Many people will be unsure exactly where they, personally, will end up in this new world. Lots of details remain to be decided. But the basic direction is clear: participants in the Symbian 2.0 ecosystem will be part of a much bigger game than before. It’s going to be exciting – and no doubt somewhat scary too. As Symbian’s first CEO, Colly Myers, used to say, “Let’s rock and roll!”

Postscript: For a clear rationale of some key aspects of the Symbian Foundation plan, take a look at what my Symbian colleague John Forsyth has to say, here.

Monday, June 23, 2008

Fragmentation is easy, integration is hard

The Wall Street Journal reports today that "Google's Mobile-Handset Plans Are Slowed". The Inquirer picks up the story and adds a few choice morsels of its own: "Depressing news as Google's Android delayed":
However, life’s little crises just kept getting the Android down and now apparently some mobile network operators like Sprint Nextel, have abandoned any attempt to get an Android on the market until 2009. This is purportedly because the majority of Google's attention and resources have been going to Sprint’s competitor T-Mobile USA, who still hope to have an Android mobile out by the end of Q4. We have it on good authority (from un-named sources of course) that Sprint actually asked Google “Do you want me to sit in the corner and rust, or just fall apart where I'm standing?”...

Director of mobile platforms at Google, Andy Rubin, gloomily noted that trying to develop software while the company’s irritating partners kept pushing for new features, was a time-consuming task. "This is where the pain happens", he sighed.

I recognise this pain. It's something that has occurred many times during Symbian's history. That's why I've emphasised a dilemma facing Android: Fragmentation is easy, but integration is hard. Coping with multiple forceful customers at the same time, while your codebase is still immature, is a really tough challenge. Glitzy demos of v2 features don't help matters: they drive up interest that needs to be deflated, as you have to explain to customers that, no, these features aren't ready to include in the software package for their next phones, despite looking brilliant on YouTube. Instead, the focus needs to go on the hard, hard task of integration.

Sunday, June 22, 2008

Reasons why humans will be smarter by 2020

Alvis Brigis has published a provocative article on FutureBlogger, "How smart will humans be by 2020?" The article looks at technology and social trends which can provide so-called IA - "Intelligence Amplification". (IA is sometimes expanded, instead, to "Intelligence Augmentation".)

Alvis produces a compelling list of intelligence-amplifying trends:

  • Widening bandwidth (Faster internet connections, pervasive WiFi...)
  • Growing global information
  • Evolving social media (including Wikipedia...)
  • Video-to-video chat
  • Evolving 3D and immersive media (including Second Life, Google Earth, and GTA4)
  • Better search
  • New interface products (including touchscreens, mini-projectors, haptic feedback...)
  • Improved portable battery power
  • Time-savers (such as robots and more efficient machines)
  • Translators (akin to the Babelfish of HHGG)
  • Rising value of attention (including more relevant targeted ads)
  • Direct brain-computer interfaces
  • Health benefits (from advances in nanotech, biology, pharma, etc).

One reason I'm personally very positive about smartphones is that I believe in their potential to "make their users smarter". I've often said, only half-joking, that I view my Psion Series 5mx PDA as my "second brain", and my current smartphone as my "third brain". Convenient and swift access to information from any location, whenever the need arises, is only part of the benefit. The organiser capabilities can play a big role too - as does the connectivity to people and communities (rather than just to information stores). So in my mind, the potential of smartphones includes people who increasingly:

  • Know what's important
  • Know what they want to achieve in life
  • Know how to get it.

PS For wider thoughts about the reasons for improved intelligence, see this recent interview by Alvis Brigis of James Flynn (the discoverer of what has come to be known as the "Flynn effect").

PPS I'd like to include the FutureBlogger posts in my blogroll (at right), but every time I feed Blogger the URL http://www.memebox.com/futureblogger to include in the blogroll, it gets changed into a link to a different blog. Does anyone know how to fix this?

Saturday, June 21, 2008

Open minds about open source

There’s been a surprising amount of heat (not to mention vitriol) in the responses to recent blog postings from Ari Jaaksi of Nokia on the topic of the potential mutual benefits of a constructive encounter between Open Source developers and the companies who make money from mobile telephony.

Ari’s message (in "Some learning to do?", and again in "Good comments from Bruce") is that there’s a need for two-way learning, and for open minds. To me, that seems eminently sensible. This topic has so many angles (and is changing so quickly) that we shouldn’t expect anyone to have a complete set of answers in place. But quite a few online responses take a different stance, basically saying that there’s nothing for Open Source developers to learn – they know it all already – and that any movement must be on the side of the mobile phone business companies. The mountain will have to come to Mohammed.

At the same time as I’ve been watching that debate (with growing disbelief), I’ve been thumbing my way through the 500+ page book “Perspectives on Free and Open Source Software”. This book contains 24 chapters (all written by different authors), one introduction (by the joint editors of the book: Joseph Feller, Brian Fitzgerald, Scott Hissam, and Karim Lakhani), one foreword (by Michael Cusumano), and one epilogue (by Clay Shirky). The writers range in their attitudes toward Open Source, all the way from strong enthusiasm to considerable scepticism. They’ve all got interesting things to say. But they have several things in common (which sets them apart from the zealotry in the online blog responses):

  • An interest to find and then examine data and facts
  • A willingness to engage in dialog and debate
  • A belief that Open Source is now well established, and won’t be disappearing – but also a belief that this observation is only the beginning of the discussion, rather than the end.

Another thing I like about the book is the way the Introduction sets out a handy list of questions, which readers are asked to keep in their minds as they review the various chapters. This makes it clear, again, that there’s still a lot to be worked out, regarding the circumstances in which Open Source is a good solution to particular technical challenges.

It’s a bit unfair to try to summarise 500+ pages in just a few paragraphs, but the following short extracts give a good flavour, in my view. From Michael Cusumano’s foreword:

Most of the evidence in this book suggests that Open Source methods and tools resemble what we see in the commercial sector and do not themselves result in higher quality. There is good, bad, and average software code in all software products. Not all Open Source programmers write neat, elegant software modules, and then carefully test as well as document their code. Moreover, how many “eyeballs” actually view an average piece of Open Source code? Not as many as Eric Raymond would have us believe.

After reading the diverse chapters in this book, I remain fascinated but still skeptical about how important Open Source will be in the long run and whether, as a movement, it is raising unwarranted excitement among users as well as entrepreneurs and investors…

The conclusion I reach … is that the software world is diverse as well as fascinating in its contrasts. Most likely, software users will continue to see a co-mingling of free, Open Source, and proprietary software products for as far as the eye can see. Open Source will force some software products companies to drop their prices or drop out of commercial viability, but other products and companies will appear. The business of selling software products will live on, along with free and Open Source programs.

And from Clay Shirky’s epilogue:

Open Source methods can create tremendous value, but those methods are not pixie dust to be sprinkled on random processes. Instead of assuming that Open Source methods are broadly applicable to the rest of the world, we can instead assume that they are narrowly applicable, but so valuable that it is worth transforming other kinds of work, in order to take advantage of the tools and techniques pioneered here.

If I have one complaint about the book, it is that it is already somewhat dated, despite having 2005 as its year of publication. Most of the articles appear to have been written a couple of years earlier than the publication date, and sometimes refer in turn to research done even before that. Five or six years is a long time in the fast-moving world of Open Source.

Thursday, June 19, 2008

Seven principles of agile architecture

Agile software methodologies (associated with names like "Scrum" and "eXtreme Programming") have historically been primarily adopted within small-team projects. They've tended to fare less well on larger projects.

Dean Leffingwell's book "Scaling Software Agility: Best practices for large enterprises" is the most useful one that I've found, on the important topic of how best to apply the deep insights of Agile methodologies in the context of larger development projects. I like the book because it's clear (easy to read) as well as being profound (well worth reading). I liked the book so much that I invited Dean to come to speak at various training seminars inside Symbian. We've learned a great deal from what he's had to say.

As an active practitioner who carries out regular retrospectives, Dean keeps up a steady stream of new blog articles that capture the evolution of his thinking. Recently, he's been publishing articles on "Agile architecture", including a summary article that lists "Seven principles of agile architecture":
  1. The teams that code the system design the system
  2. Build the simplest architecture that can possibly work
  3. When in doubt, code it out
  4. They build it, they test it
  5. The bigger the system, the longer the runway
  6. System architecture is a role collaboration
  7. There is no monopoly on innovation.
Dean says he's working on an article that pulls all these ideas together. I'm looking forward to it!

Wednesday, June 18, 2008

The dangers of fragmentation

My comments on mobile Linux fragmentation at the Handsets World event in Berlin were picked up by David Meyer ("Doubts raised over Android fragmentation") and prompted a response by Andy Rubin, co-founder of Google's Android team. According to the reports,

On a recent comment by Symbian's research chief, David Wood, that Android would eventually end up fragmented, Rubin said it's all part of the open source game.

Raising the example of a carrier traditionally having to wait for a closed platform developer to release the next version of its software to "enable" the carrier to offer new services, Rubin said carriers could just hire a developer internally to speed up that process without waiting any longer.

"If that fragmentation is what [Wood] is talking about, that's great--let's do it," said Rubin.


Assuming these reports are accurate, they fall into the pattern of emphasising the short-term benefits of fragmentation, but de-emphasising the growing downstream compatibility problems of a platform being split into different variants. They make fragmentation sound like fun. But believe me, it's not!

I noticed the same pattern while watching a panel on Open Source in Mobile at one of the Smartphone Summit events that take place the day before CTIA. The panel moderator posed the question, "Is fragmentation a good or bad thing?" The first few panellists were from consultancies and service providers. "Yes", they said, smiling, "Fragmentation gives more opportunity for doing things differently - and gives us more work to do." (I paraphrase, but only slightly.) Then came the turn of a VP from one of the phone manufacturers who have struggled perhaps more than most with the variations and incompatibilities between different mobile Linux software stacks. "Let's be clear", came the steely response, "fragmentation is a BAD thing, and we have to solve that problem".

Luigi Licciardi, EVP of Telecom Italia, made similar remarks at the ‘Open source in Mobile’ conference in Madrid in September 2007. He said that one thing his network operator needs, in any software platform it would consider using in its mobile phones, is ‘backwards compatibility’ - in other words, a certain level of stability. (This sounds simple, but I know from my own experience that backwards compatibility requires deep skill and expertise in the midst of a rapidly changing marketplace.) Moreover, the software platform has to be responsive to the needs of individual operators: an operator needs to be able to go to a company and say, “give us these changes and modifications”. He also said that the platform needs to be open to applications for network connections and end users, but closed to malware; in other words, it has to have a very good security story. (Incidentally, I believe Symbian uniquely has a very strong security story, with platform security built deep into the operating system.) Finally, he emphasised that “a fragmented Linux is of no interest to operators”.

This topic deserves more attention. Let me share some analysis from a transcript of a talk I gave at the Olswang "Open Source Summit" in London last November:

The point is that there is a great tendency in the mobile phone space for mobile Linux variants to fragment and split. This was first drawn to my attention more than two years ago by Avi Greengart, a US-based analyst. He said that mobile Linux is the new Unix, meaning that, despite the best intentions of all involved, it keeps splitting off in its own separate directions.

So why is that happening? It is happening first of all because fragmentation is easy. This means that you can take the code and do whatever you like with it. But will these changes be brought back inside the main platform? Well I claim that, especially in a fast moving market such as smartphones, integration is hard. The changes tend to be incompatible with each other. Therefore it is my prediction that, on average, mobile Linux will fragment faster than it unifies.

It is true that there are many people who say it is very bad that there are all these different mobile Linux implementations. It is very bad because it has caused great problems for developers: they have to test against so many stacks. These people ask themselves, “Can't we unify things?” And every few months there is a new group that is formed and says, in effect, “Right, we are going to make a better job of unifying mobile Linux than the last lot, they weren’t doing it fast enough, they weren’t doing it seriously enough, so we are going to change that.” But I see the contrary, that there is a greater tendency to fragment in this space than to unify, and here’s why.

It is always easier and quicker to release a device-specific solution than to invest the extra effort to put that functionality into a reusable platform, and then on into a device. In other words, when you are racing to market, when the market leaders are rushing away from you and leaving more and more clear blue water between you and them, it is much more tempting to say, “well I know we are supposed to be thinking in terms of platform, but just for now I am going to serve the needs of my individual product.”

Interestingly we had the same problem in the early days of Symbian. One of the co-founders of Symbian, Bill Batchelor, coined the phrase “the Symbian Paradox”, which is that we found it hard to put functionality into the platform, rather than just serve it out to eager individual customers via consultancy projects. But we gradually learned how to do that, and we gradually put more and more functionality into the platform, suitable for all customers, and therefore more and more customer projects benefited more widely.

So why is mobile Linux fragmenting in a way that perhaps open source in other areas isn’t fragmenting? First, it is an intense, fast moving industry. Symbian as the market leader, together with our customers, is bringing out significant new functionality every three or four months. So there is no time for other people to take things easy and properly invest in their platforms. They are tempted to cut corners – to the detriment of the platform.

Second, if you look at how some of these consortia are meant to work, they are meant to involve various parties contributing source code. If you look at some of their architecture diagrams, you might get one company in, say, Asia, contributing one chunk of software which is meant to be used by other companies the world over. Well, guess what happens in a real-life project? Another company – let’s say one trying to ship a Linux-based phone in America – takes that chunk of software, and surprise, surprise, the software doesn’t work: it fails to get FCC approval, it doesn’t meet the network operators’ needs, or there are bugs that only show up on the networks in America. So what do they say? They say to the first group (the people out in Asia), “Would you mind stopping what you are doing and coming to fix this? We are desperate for this fix to our software.” The group in Asia says, “Well, we are very sorry; we are struggling hard, and we are behind as well; we would rather prioritise our own projects, if you don’t mind – shipping our own software, debugging it on different networks.”

At this point you may raise the question: isn’t open source meant to be somewhat magical, in that you can all just look at it and fix it anyway, without needing the original authors to come and fix it? But here we reach the crux of the matter. The problem is that there is just too much code. These are vast systems – not just a few hundred lines, or even a few thousand lines, but hundreds of thousands or even millions of lines of code in these components, all interfacing together. So somebody looks at the code and thinks, “Oh gosh, it is very complicated”, and they look and they look and they look, and eventually they think, “Well, if I change this line of code, it will probably work” – but then, without realising it, they have broken something else. And the project takes ages and ages to progress.

Compare this to the following scenario: some swimmers are so good that they can actually swim across the English Channel, all the way from England to France. Suppose one of them now says, “Yes, I have cracked swimming; what will I do next? Oh, I will swim all the way across the Atlantic – after all, it is just water, and I have already swum the Channel, so what is different about an ocean?” Well, that is the kind of difference between the places where open source has already been doing well and the broader oceans, with all the complications of full telephony in smartphones.

So what happens next in this situation? Eventually one company or another may come up with a fix to the defects they faced. But then they try and put it back in the platform, and the first company very typically disagrees, saying “I don’t like what you have done with our code, you have changed it in a very funny way, it isn’t the way we would have done it”. And so the code fragments – one version with the original company, and the other in the new company. That is how it ends up that mobile Linux fragments more than it unifies.

I say this firstly because I have contacts in the industry who lead me to believe that this is what happens. Secondly, we have the same pressures inside Symbian, but we have learned how to cope with them. We often get enhancements coming back from a customer engagement project which at first don’t quite fit into the main OS platform; but we have built up the highly specialised skills needed to do this integration.

As I mentioned, integration is hard. You need a company that is clearly responsible for it, and capable of doing it. This company needs to be independent and trustworthy, motivated not by any ulterior motive but by occupying just one place in the value chain, doing one job only: creating large-scale customer satisfaction through volume sales of the platform.

Monday, June 16, 2008

Anticipating the next ten years of smartphone innovation

This June, Symbian is celebrating its tenth anniversary. As someone who has been a core member of Symbian’s executive management team throughout these ten roller-coaster years, I’d like to share some of my personal reflections on the remarkable smartphone innovations that have taken place over that time – and, in that light, to consider what the next ten years may bring.

It was on 24 June 1998 that the formation of Symbian was announced to the world. The industry’s leading phone manufacturers were to cooperate to fund further development of the operating system known at the time as EPOC32 (a name dating from the inception of the OS, four years earlier, inside the UK-based PDA manufacturer Psion). The funding would enable the operating system to power numerous diverse models of advanced mobile phones – known, by virtue of their rich programmability, as “smartphones”. The news echoed far and wide. In time, the funding repaid investors handsomely: more than 200 million Symbian-based smartphones have already been sold, earning our customers substantial profits. It’s not just our direct customers that have benefited: a fertile ecosystem of partner companies is sharing in an ongoing technological and market success.

But there have been many road bumps along the way – and many surprises. Perhaps the biggest surprise was the degree of difficulty in actually bringing smartphones to market. We time and again under-estimated the complexity of the entire evolving smartphone software system – mistakenly thinking that it would take only around 12 months for significant new products to mature, whereas in reality the effort required was often considerably higher. To our dismay, numerous potential world-beating products were cancelled, on account of lengthy gestation periods. Or, when they did reach the market, their window of opportunity had passed, so their sales were disappointing. For each breakthrough Symbian-based phone that set the market alight, there were almost as many others that were shelved, or failed to live up to expectations. For this reason, incidentally, when I see commentators becoming highly excited about the prospects of possible new smartphone operating systems, I prefer to reserve my judgement. I know that, just because an industry giant is behind a new smartphone solution, it does not follow that early expectations will be translated into tangible unit sales. With ever-increasing feature requirements, operator specifications, and usability demands, smartphone software keeps on growing in complexity. It requires tremendous skill to integrate an entire software stack to meet a rapidly evolving target. If you pick a sub-optimal smartphone OS as your starting point, you’ll be storing up more trouble for yourself.

Another surprise was in some of the key characteristics of successful smartphones. In 1998, we failed to anticipate that most mobile phones would eventually contain a high quality digital camera. It was only after several years that we realised that the “top secret” (and therefore rarely discussed) features of forthcoming products from different customers were actually the same – namely, an embedded camera application. More recently, the prevalence of smartphones with embedded GPS chips has also been a happy surprise. Mapping and location services are in the process of transforming mobile phones today, in a similar way to their earlier transformation by still and then video cameras. This observation strengthens my faith in the critical importance of openness in a smartphone operating system: the task of the OS provider isn’t to impose a single vision of the future of mobile phones, but to enable different industry players to experiment, as easily as possible, with bringing their differing visions into reality.

As a measure of the progress with smartphone technology, let’s briefly compare the specs of two devices: the Ericsson R380, which was the first successful Symbian-powered smartphone (on sale from September 2000 – and a technological marvel in its day), and the recent best-seller, Nokia’s N95 8GB:
  • The R380 had a black and white touch screen, whereas the N95 screen has 16 million colours
  • The R380 ran circuit switched data over GSM (2G), whereas the N95 runs over HSDPA (3.5G)
  • The R380 supported WAP browsing, whereas the N95 has full-featured web browsing
  • The R380 had only a small number of built-in applications (PIM, plus some utilities and games), whereas the N95 includes GPS, Bluetooth, wireless LAN, FM radio, a 5 mega-pixel camera, and a set of built-in applications that’s far too long to list here!

Another telling difference between these two time periods is in the number of Symbian smartphone projects in progress (each with significant resources allocated to it). During the first years of Symbian’s existence, the number of different projects could be counted on the fingers of two hands. In contrast, at the end of March 2008, there were no fewer than 70 distinct smartphone models under development, from all the leading phone manufacturers. That’s a phenomenal pipeline of future market-leading products.

Although smartphones have come a long way in the last ten years, the next ten years are likely to witness even more growth and innovation:

  • Component prices will continue to fall – resulting in smartphones at prices to suit all pockets
  • Quality, performance, and robustness will continue to improve, meaning that the appeal of smartphones extends beyond technology aficionados and early adopters, into the huge mainstream audience of “ordinary users” for whom reliability and usability have pivotal importance
  • Word of mouth will spread the news that phones can have valuable uses other than voice calls and text messages: more and more users are discovering the joys of mobile web interaction, mobile push email, mobile access to personal and business calendars and information, and so on
  • The smartphone ecosystem will continue to devise, develop, and deploy interesting new services for smartphones, addressing all corners of human life and personal need
  • The pipeline of forthcoming new smartphone models will continue to strengthen.

It is no wonder that analysts talk about a time, not so far into the future, when there will be one billion smartphones in use around the world. The software at the heart of the majority of these devices will have a good claim to being the most widely used software on the planet. Symbian OS is in pole position to win that race, but of course nothing can be taken for granted.

Symbian’s understanding of the probable evolution of smartphones over the decade ahead is guided, first and foremost, by the extraordinary insight we gain from the trusted relationships we have built up and nurtured over many years with the visionaries, leaders, gurus, and countless thoughtful foot soldiers in our customer and partner companies. As the history of Symbian has unfolded, these relationships of “customer intimacy” have deepened and flourished: our customers and partners have seen that we treated their insights and ideas with respect and with due confidentiality – and that has prompted them to share even more of their thinking (their hopes and their fears) about the future of smartphones. In turn, this shapes our extensive roadmap of future enhancements to Symbian OS technology.

To provide additional checks on our thinking about future issues and opportunities for smartphones, Symbian is inaugurating an essay contest, which is open to entries from students at universities throughout the world. Up to ten essays will win a prize of £1000 each – essays need to be submitted before the end of September, and winners will be announced at the Symbian Smartphone Show in October. Essays should address the overall theme of “The next wave of smartphone innovation”. For details of how to enter the contest, see http://www.symbian.com/news/essaycontest/.

As a guide for potential entrants, Symbian has announced a set of six research sub-themes, which are also areas that Symbian believes deserve further investigation in universities or other research institutions:

  1. Device evolution / revolution through 2012-2015: The smartphones of the future are likely to be significantly different from those of today. Although today’s smartphones have tremendous capability, those reaching the market in 2012-2015 are likely to put today’s devices into the shade. What clues are there about the precise characteristics of these devices?
  2. Improved development and delivery methodologies: The dramatically increasing scale and complexity of smartphone development projects mean that these projects tend to become lengthy and difficult – posing significant commercial challenges. What improvements in methodology could keep these projects on schedule and on budget?
  3. Success factors for mobile applications and mobile operating systems: What are the factors that significantly impact adoption of mobile software? What can be done to address the factors responsible for low adoption?
  4. Possible breakthrough applications and markets: The search for “killer apps” for smartphones continues. Are there substantial new smartphone application markets waiting to be unlocked by new features at the operating system level?
  5. Possible breakthrough technology improvements: Smartphone applications and services depend on underlying technology, which will come under mounting stress due to increased demands from data, processing, throughput, graphics, and so on. What breakthrough improvements could relieve these stresses?
  6. Improved university collaboration methods: What are the most effective and efficient ways for universities and Symbian to work together?

For lists of questions for each of these sub-themes, see www.symbian.com/news/essaycontest/topics/.

The evolution of the “smartphone” concept itself is particularly important. Whereas successful smartphones have mainly been portrayed so far as “phones first” and as “communications-centric devices”, they are nowadays increasingly being appreciated and celebrated for their computer capabilities. Some of our customers have already been emphasising to end users that their latest devices are “multimedia computers” or even instances of “computer 2.0”. Personally I prefer the name “iPC” (short for “inter-personal computers”) as a likely replacement for “smartphone”. Whereas Symbian’s main technology challenges in the last ten years tended to involve telephony protocols, our main technology challenges of the next ten years will tend to involve concepts from contemporary mainstream computing.

The scale of the future opportunity for iPCs dwarfs that for smartphones, just as the scale of the opportunity for smartphones dwarfed that of the original PDAs. But there’s nothing automatic or easy about this. We’ll have to work just as hard and just as smart in the next ten years, to solve some astonishingly difficult problems, as we’ve ever worked in the past. We’ll need all our wisdom and ingenuity to navigate some radical transitions in both market and technology. Here are just some of the ways in which devices of 2018 will differ from those of 2008.

  • From the WWW to the WWC: Nicholas Carr has written one of the great technology books of 2008. It’s called “The Big Switch: Rewiring the World, from Edison to Google”. With good justification, Carr advances the phrase “world wide computer” to describe what the WWW (world wide web) is becoming: a hugely connected source of massive computing power. Terminals – both PCs and iPCs – are increasingly becoming like sockets, which connect into a grid that provides intelligent services as well as rich data. The consequences of this are hard to foretell, but there will be losers as well as winners. The local intelligence on the iPC will act as a smart portal into a much mightier intelligence that lives on the Internet.
  • Harvesting power from the environment: Efficient usage of limited battery power has been a constant hallmark of Symbian software. With ever greater bandwidth and faster processing speeds, the demands on batteries will become even more pressing. Future iPCs might be able to sidestep this challenge by drawing power from their environment. For example, the BBC recently reported how a contraption connected to a person’s knee can generate enough electricity, reasonably unobtrusively, from just one minute of walking, to power a present-day mobile phone for 30 minutes (the rough arithmetic behind this claim is sketched just after this list). Ultra-thin nano-materials that convert ambient light into electricity are another possibility.
  • New paradigms of usability: Given ever larger numbers of applications and greater functionality, no system of hierarchical menus is going to be able to provide users with an “intuitive” or “obvious” guide to using the device. It’s like the way the original listing “Jerry’s Guide to the World Wide Web” – a hierarchically organised set of links, later known as “Yahoo” – was eventually displaced by search engines as the generally preferred entry point to the ever richer variety of web pages. For this reason, UIs on iPCs look likely to become driven by intelligent front-end search engines, which respond to user queries by offering seamless choices between offline and online functionality. Smart search will be supported by smart recommendations. (A toy illustration of such a front end also follows the list.)
  • Short-cutting screen and keyboard: Another drawback of present day smartphones is the relatively fiddly nature of screen and keyboard. How much more convenient if the information in the display could somehow be conveyed directly to the biological brain of the user – and likewise if detectors of brain activity could convert thought patterns into instructions transmitted to the iPC. It sounds mind-boggling, and perhaps that’s what it is, in a literal sense. Nano-technology could make this a reality sooner than we imagine.
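As a sanity check on that BBC report: if one minute of harvesting powers thirty minutes of phone use, the phone’s average draw must be about one-thirtieth of the harvester’s output. Here is a minimal sketch of that arithmetic; note that the 5 watt harvester output is an assumption on my part (a figure widely quoted for knee-brace prototypes), not a number stated above.

```cpp
#include <iostream>

int main() {
    // Assumed harvester output (not from the BBC report itself):
    const double harvesterPowerW = 5.0;
    const double harvestTimeS    = 60.0;         // one minute of walking
    const double phoneTimeS      = 30.0 * 60.0;  // thirty minutes of phone use

    const double energyJ     = harvesterPowerW * harvestTimeS;  // 300 J harvested
    const double phonePowerW = energyJ / phoneTimeS;            // implied average draw

    std::cout << "Energy harvested: " << energyJ << " J\n"
              << "Implied average phone power: " << phonePowerW << " W\n";
    // Prints roughly 0.17 W - a plausible average draw for a 2008-era phone.
    return 0;
}
```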
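And to make the search-driven UI idea concrete, here is a toy sketch of a front end that keeps a single index covering both on-device features and online services, and ranks both against a user query. The entries and the scoring rule are invented purely for illustration; a real engine would draw on far richer relevance signals and live online sources.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// One index spans offline (on-device) and online capabilities.
struct Entry {
    std::string name;
    bool online;  // true if fulfilled by a network service
};

// Crude relevance: a prefix match beats a substring match beats no match.
int score(const Entry& e, const std::string& query) {
    if (e.name.compare(0, query.size(), query) == 0) return 3;
    if (e.name.find(query) != std::string::npos) return 2;
    return 0;
}

int main() {
    std::vector<Entry> index = {
        {"calendar (on device)", false},
        {"calendar sharing (web service)", true},
        {"camera", false},
        {"maps and navigation (web service)", true},
    };
    const std::string query = "calendar";

    // Rank matching entries, offline and online alike, in one list.
    std::stable_sort(index.begin(), index.end(),
                     [&](const Entry& a, const Entry& b) {
                         return score(a, query) > score(b, query);
                     });
    for (const Entry& e : index)
        if (score(e, query) > 0)
            std::cout << e.name << (e.online ? " [online]" : " [offline]") << '\n';
    return 0;
}
```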

If some of these thoughts sparked your interest, I suggest you mark the dates 21-22 October in your diary. That’s when Symbian will bring a host of ecosystem experts together, at the free-to-attend Symbian Smartphone Show in London. It will be your chance to hear 10 keynote presentations from major industry figures and over 60 seminars led by marketplace experts. You’ll be able to network with over 4000 representatives from major handset vendors, content providers, network operators, and developers. To register, visit smartphoneshow.com. Much of the discussion will focus on the theme, “The next wave of smartphone innovation”. Your contributions will be welcome!

Accelerating Future

Michael Anissimov writes a blog called "Accelerating Future". I keep finding well-written articles in it. Here are just three examples:
  1. A recent piece giving an upbeat, well-supported argument in favour of the transformative potential of molecular nanotechnology - responding to (and refuting, step by step) a more skeptical assessment by Richard Jones in the recent IEEE special report on The Singularity;

  2. Another recent piece that gently and thoughtfully chides some of the less careful advocates of Singularity-style thinking;

  3. An older, introductory piece with a fascinating and provocative list of technologies that have enormous potential to significantly enhance human life and human society.

For this reason, I'd recommend Michael's blog as a great watering hole for anyone who (like me) is interested in the thoughtful development and application of technology to significantly enhance human mental and physical capabilities.

Friday, June 13, 2008

It was twenty years ago, today


13th June 1988 - twenty years ago today - was the day I started work at Psion. I arrived at the building at 17 Harcourt Street, with its unimpressive facade that led most visitors to wonder whether they had come to the wrong place. When the photo on the left was taken, the premises were used by Symbian, and a "Symbian" sign had been affixed outside. But on my first visits, I noticed no signage at all - although I later discovered the letters of the word "Psion" barely visible in faded yellow paint.

Unimpressive from the outside, the building looked completely different on the inside. Everyone joked about the "Tardis effect" - since it seemed impossible for such a small exterior to front a considerably larger interior. In fact, Psion had constructed a set of offices running behind several of the houses in the street - but planning regulations had prevented any change in the house fronts themselves. Apparently, as Grade II listed buildings, their original exteriors could not be altered. Or so the story went.

I worked under the guidance of Richard Harrison and Charles Davies on software to be included in a word processor application on Psion's forthcoming "Mobile Computer" laptop device. My very first programming task was an optimised Find routine. After two weeks, I found myself thinking, "Don't these people realise I'm capable of working harder?" But I soon had more than enough tough software tasks to handle, and I've spent the twenty years since very far from a state of boredom. On the contrary, it's been a roller-coaster adventure.
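To give a flavour of what optimising a Find routine can involve: one classic technique is the Boyer-Moore-Horspool algorithm, which compares the pattern from its last character and can skip ahead by up to the full pattern length on a mismatch. The sketch below is an illustrative reconstruction of the general technique, not the actual Psion code.

```cpp
#include <array>
#include <cstddef>
#include <iostream>
#include <string>

// Boyer-Moore-Horspool search: one classic way to optimise a word
// processor's Find routine. Illustrative reconstruction only.
std::size_t find(const std::string& text, const std::string& pattern) {
    const std::size_t n = text.size(), m = pattern.size();
    if (m == 0 || m > n) return std::string::npos;

    // Bad-character table: how far we may shift on a mismatch.
    std::array<std::size_t, 256> shift;
    shift.fill(m);
    for (std::size_t i = 0; i + 1 < m; ++i)
        shift[static_cast<unsigned char>(pattern[i])] = m - 1 - i;

    for (std::size_t pos = 0; pos + m <= n; ) {
        std::size_t i = m;
        while (i > 0 && text[pos + i - 1] == pattern[i - 1]) --i;
        if (i == 0) return pos;  // full match found at pos
        // Shift according to the last character of the current window.
        pos += shift[static_cast<unsigned char>(text[pos + m - 1])];
    }
    return std::string::npos;
}

int main() {
    const std::string text = "the quick brown fox jumps over the lazy dog";
    std::cout << find(text, "fox") << '\n';  // prints 16
    return 0;
}
```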

Back in 1988, the software development team in Harcourt Street had fewer than 20 people in it. Eight years later, when Psion Software was formed as a separate business unit, there were 88 in the team - which, by that time, also occupied floors in the nearby Sentinel House. Two more years saw the headcount grow to 155 by the time Psion Software turned into Symbian (24 June 1998). Today, our headcount is around 1600. It's growth I could not have imagined during my first few years of work. Nor could I have imagined that descendants of the software from the venerable "Mobile Computer" (MC400) would be powering hundreds of millions of smartphones worldwide.

(You can read more about the long and interesting evolution of Psion's software team in my book "Symbian for software leaders: principles of successful smartphone development projects".)

Thursday, June 12, 2008

Handsets World event, Berlin

The Informa Handsets World series of events tends to bring together a knowledgeable crowd of speakers and attendees. The latest one, in Berlin this week, was no exception.

I gave a couple of presentations, which seemed to go down well:
  • "Hardware and software enabling powerful devices: mobile power without heat and without confusion"
  • "Refuting the claim that value in handset software is evaporating (it's actually growing!)".

Here's what I saw as some of the highlights from the event:

1.) Ari Jaaksi, VP of Devices R&D at Nokia, gave an upbeat yet pragmatic account of "Nokia's Vision for Wireless Handsets", focusing on growing practical collaboration between open source advocates and people who understand "ugly business realities". See ZDNet for a write-up.

2.) Toshio Miki, Associate Senior VP & Managing Director of NTT DoCoMo, speculated that the MOAP platform which runs on NTT DoCoMo phones in Japan would before long support Android applications: an Android environment would sit on top of MOAP, with Android apps running in parallel to (a) Java apps and (b) native apps. "This is my personal prediction", he said. (There are two variants of MOAP: one is powered by Linux, and the other is powered by Symbian OS.)

3.) Daniel Meredith, Head of Handset and Device Marketing at T-Mobile, said that the most important change he would like to see in the mobile industry is to "remove all closed OSes". In response to the same question, Guido Arnone, Director of Terminals at Vodafone, asked the industry to "improve out-of-the-box usability". And Simon Rockman, Head of Requirements and Applications at Sony Ericsson, asked the industry to realise that users of lower-cost phones in different parts of the world typically want applications and features that are NOT simply cut-down versions of those on higher-specced phones: for example, in India, there's a requirement for mobiles to be able to receive AM radio broadcasts.

4.) Aditya Kaul, Senior Analyst at Pioneer Consulting, gave a fascinating report on how phone manufacturers were looking at taking advantage of nanotechnology in forthcoming wireless devices. He covered possible uses of carbon nanotubes, quantum dots, and spintronics, and gave a special mention to MEMS.

5.) Morten Grauballe, EVP at Red Bend, urged ISVs to realise that "developing software" was only the first of three problems that need to be solved. The other two are "deploying software" and "managing software".

6.) Francis MacDougall, Founder and CTO of GestureTek, showed some impressive videos of the new kinds of user interaction which are enabled when phones can sense motion and gestures (either via accelerometers, or via clever analysis of the camera viewfinder image). A toy sketch of the accelerometer approach appears below.
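To give a flavour of the accelerometer route: a very simple detector can flag a "shake" whenever the measured acceleration departs from gravity by more than a threshold across several samples. The thresholds and sample data below are invented for illustration; production systems (GestureTek's included) rely on far more sophisticated signal processing.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// A toy shake detector. Thresholds and data are invented for illustration.
struct Sample { double x, y, z; };  // acceleration in m/s^2

bool isShake(const std::vector<Sample>& samples,
             double thresholdMs2 = 4.0, int minHits = 3) {
    const double g = 9.81;
    int hits = 0;
    for (const Sample& s : samples) {
        const double magnitude = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
        // Count samples whose magnitude departs noticeably from gravity.
        if (std::abs(magnitude - g) > thresholdMs2) ++hits;
    }
    return hits >= minHits;
}

int main() {
    const std::vector<Sample> still  = {{0, 0, 9.8}, {0.1, 0, 9.7}, {0, 0.1, 9.8}};
    const std::vector<Sample> shaken = {{12, 3, 9.8}, {-14, 1, 9.7},
                                        {15, -2, 9.9}, {-13, 0, 9.8}};
    std::cout << "still:  " << (isShake(still)  ? "shake" : "no shake") << '\n'
              << "shaken: " << (isShake(shaken) ? "shake" : "no shake") << '\n';
    return 0;
}
```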

Wednesday, June 11, 2008

Technology and the risks of global catastrophe

I'm a passionate enthusiast about the capabilities of technology. But at the same time, I'm keenly aware of its potential to wreak havoc and destruction. So I'm eagerly looking forward to the UKTA technology debate on Saturday (14th June):

Technology risks and the survival of humanity: Is emerging technology more likely to destroy human civilisation or to radically enhance it?

This is taking place at Birkbeck College, central London, from 2pm to 4pm. Everyone is welcome to attend - there's no charge. (If you Facebook, you can RSVP here.)

This event will in some ways be a preview of a considerably longer event taking place in Oxford during July: The Future of Humanity Institute's conference on "Global Catastrophic Risks". Speakers at this later conference include:
  • Professor Jonathan Wiener, current President of the Society for Risk Analysis;
  • Professor Steve Rayner, Director of the James Martin Institute for Science and Civilisation;
  • Professor William Potter, Nonproliferation Studies at the Monterey Institute of International Studies;
  • Sir Crispin Tickell, a leading authority on the interaction between science and global governance, and an advisor on climate change to successive British Prime Ministers;
  • Eliezer Yudkowsky, Research Fellow, Singularity Institute for Artificial Intelligence;
  • Mike Treder, co-founder and Executive Director, Center for Responsible Nanotechnology;
  • Professor Bill Napier, Honorary Professor, Institute for Astrobiology, Cardiff University.

I'm anticipating a lot of thought-provoking discussion. For this conference, advance registration is essential. There's about a week left before registration closes.

Tuesday, June 10, 2008

Symbian Insight

From Sept 2005 to Nov 2006, I wrote 13 articles under the heading "Symbian Insight", which were published on http://www.symbian.com/.

These are still available at http://www.symbian.com/symbianos/insight/index.html.