Sunday, August 31, 2008

Intellectual property and open source

I've just finished reading my third book in two months on the topic of open source licensing. The three books are:
  1. Heather Meeker's "The Open Source Alternative: Understanding Risks and Leveraging Opportunities" - which I reviewed here;
  2. Lawrence Rosen's "Open Source Licensing: software freedom and intellectual property law" - which I reviewed here;
  3. Van Lindberg's "Intellectual property and open source: a practical guide to protecting code".

My headline summary is that all three books are well worth reading. They overlap to an extent, but they come at their shared subject from very different viewpoints, so each book has lots of good material that you won't find in the others.

Van Lindberg targets his book at software engineers. He uses many analogies between legal concepts and deeply technical software engineering concepts. For example (to give a flavour of many of the clever pieces of writing in the book):

"One way to think about private goods is to analogize them to locks or mutexes in a multithreaded program. A number of different threads may want to use a protected resource, but control of the lock around the resource is rivalrous..."

Somewhat unexpectedly, the first half of the book hardly mentions open source. There's good reason for this. The first seven chapters of the book cover the basic principles of intellectual property (IP), including patents, copyrights, trademarks, trade secrets, licences, and contracts. I found the very first chapter to be particularly engrossing, as it set out the philosophical foundations for IP. Van Lindberg highlighted the utilitarian justification for IP, in terms of legal measures to counter what would otherwise be two sorts of market failures:

  • "The cost of creating knowledge is high, but the cost of consuming it is low.... Therefore there is a societal incentive to not create as much knowledge as we would ideally like to have" (hence the utilitarian rationale for copyright)
  • "Secrets are more valuable to you personally, but shared knowledge is more valuable to society.... The resource is valuable to you because you have a key, but it is worthless to everyone else" (hence the utilitarian rationale for patents).

As I said, the very first chapter was particularly engrossing, but I thought the other early chapters dragged a bit. Although all the material was interesting, there were rather too many details for my liking.

Chapter eight ("The economic and legal foundations of open source software") went back to philosophical principles, in an attempt to pinpoint what makes open source different from proprietary software. The difference, according to Van Lindberg, is that:

  • Proprietary software is driven by corporate business goals (which inevitably involve profit-maximisation, and therefore - he claimed - a tension between what's best for the customers and what's best for the shareholders)
  • Open source software is driven by cooperative goals, in which the goals of the customers have primacy. (Note the difference between the similar-looking words corporate and cooperative.)

This chapter also runs a pretty compelling extended comparison between proprietary software and open source software, on the one hand, and banks and credit unions, on the other hand. Again, the first member of each pair is driven by shareholder goals, whereas the second member of each pair is driven by customer goals (the legal owners are the same people as the customers).

The primary task of open source licences, according to this analysis, is to support cooperation. In more detail, Van Lindberg says that open source licences are intended to solve the "Programmer's Dilemma" version of the well-known "Prisoner's Dilemma" problem from game theory:

"Open source licences serve two functions in a game-theoretic context. First, they allow programmers to signal their cooperative intentions to each other. By placing their code under a licence that allows cooperation, programmers indicate to their peers that they are willing to participate in a cooperative solution. Second... licences are based in copyright law, which allows the original developer to dictate (to some extent) the users and uses of his code. The legal penalties associated with copyright violations change the decision matrix for other programmers, leading to a stable cooperative (and optimal) solution."

This game-theoretic framing (like everything else in the book) is thought-provoking. But I'm not fully convinced. I think it places too much importance on the licence aspect of open source. Yes, picking a good licence is important - but it's insufficient to guarantee the kind of cooperative behaviour that will make an open source project a real success. And as I've argued elsewhere, picking the right licence is no guarantee against the software fragmenting. But despite this quibble, I still think the ideas in this chapter deserve wide readership.

The second half of the book changes gear. With the first eight chapters having carefully outlined the underlying legal framework, the remaining six chapters walk through the kind of real-life IP concerns that will face someone (whether an individual developer, or a company) who wants to become involved in an open source project:

  • Issues with standard employment contracts that probably specify that everything you work on - even in your spare time - belongs to your company, and which you therefore are not free to assign to an open source project
  • General guidelines on choosing between some of the more popular open source licences
  • Legal complications over how to accept patches and other contributions, from outsiders, into your project
  • Particular issues with the GPL
  • Reverse engineering
  • Creating a non-profit organisation or foundation (recommended if your project becomes larger).

There's lots of good advice here. Every chapter of this part of the book has important material - but I was slightly disappointed with some parts. For example, given the careful attention to patents in the first half of the book (where two chapters were devoted to this topic), I was expecting more analysis of how some of the major open source licences differ in their approach to patent licences and patent retaliation clauses. On reflection, that's something that the other two books (ie by Meeker and Rosen) handle better.

The chapter on the issues with the GPL confirmed and extended the opinion about that licence which I'd picked up from my previous reading: the interpretation of the GPL is subject to great uncertainty, owing to its many ambiguities. The chapter includes a lengthy "Questions and answers" section, in which the answer to nearly every question is "Maybe" or "It depends". (Apart from the last question, which is "Can I depend on the answers in this Q&A to keep me out of trouble?"; the answer to this is "No, this is our best understanding of copyright law as it stands right now, but it could change tomorrow - and nobody really knows...")

Giving more evidence for this view of the ambiguities surrounding the GPL, Van Lindberg mentions an essay by Matt Asay, "A Funny Thing Happened on the Way to the Market". Here's an extract from that essay:

"I asked two prominent representatives of the Free Software Foundation – Eben Moglen, general counsel, and Richard Stallman, founder – to clarify thorny issues of linkage to GPL code, and came up with two divergent opinions on derivative works in specific contexts..."

"...it is telling how widely their responses diverge – there appear to be no definitive answers to the question of what constitutes a derivative work under the GPL, not even from the holders of the licenses in question."

This looks decisive, but it could be argued that this quote from Matt Asay is itself misleading, since Matt's article goes on to state that:

"Fortunately, as I will detail below, this issue has largely gone away, as it has become accepted practice to dynamically link to GPL code [without that code becoming part of the GPL program]. Linus Torvalds helped to build momentum for such a reading of the GPL. While some argue that kernel modules, including device drivers, must be GPL, Torvalds has stated: This [GPL] copyright does *not* cover user programs that use kernel services by normal system calls – this is merely considered normal use of the kernel, and does *not* fall under the heading of 'derived work.'"

However, Van Lindberg seems to be right that the official FAQ about the GPL, maintained by the Free Software Foundation, advocates a stricter interpretation:

"Q: Can I release a non-free program that's designed to load a GPL-covered plug-in?

"A: It depends on how the program invokes its plug-ins. For instance, if the program uses only simple fork and exec to invoke and communicate with plug-ins, then the plug-ins are separate programs, so the license of the plug-in makes no requirements about the main program.

"If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. In order to use the GPL-covered plug-ins, the main program must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when the main program is distributed for use with these plug-ins.

"If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.

"Using shared memory to communicate with complex data structures is pretty much equivalent to dynamic linking."

Do these ambiguities over the GPL really matter? It's hard to be sure, but I'm personally glad that the Symbian Foundation plans to adopt a licence - the EPL - which avoids these issues.

I'm also glad to have taken the time to read this book - it's helped my understanding grow, in many ways.

Footnote: My thanks go to Moore Nebraska for drawing my attention to the Van Lindberg book.

Saturday, August 30, 2008

Anticipating the singularity

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
The first time I read these words, a chill went down my spine. They were written in 1965 by IJ Good, a British statistician who had studied mathematics at Cambridge University pre-war, worked with Alan Turing and others in the highly secret code-breaking labs at Bletchley Park, and was involved in the creation of the Colossus computer ("the world's first programmable, digital, electronic, computing device").

The point where computers become better than humans at generating new computers - or (not quite the same thing) the point where AI becomes better than humans at generating new AI - is nowadays often called the singularity (or, sometimes, "the Technological Singularity"). To my mind, it's a hugely important topic.

The name "Singularity" was proposed by maths professor and science fiction author Vernor Vinge, writing in 1993:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended...

"When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale...

"From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control...

"I think it's fair to call this event a singularity ("the Singularity" for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown..."
If Vinge's prediction is confirmed, the Singularity will happen within 30 years of 1993, namely by 2023. (He actually says, in his paper, "I'll be surprised if this event occurs before 2005 or after 2030".)

Of course, it's notoriously hard to predict timescales for future technology. Some things turn out to take a lot longer than expected. AI is a prime example. Progress with AI has frequently turned out to be disappointing.

But not all technology predictions turn out badly. The best technology prediction of all time is probably that made by Intel co-founder Gordon Moore. Coincidentally writing in 1965 (like IJ Good, mentioned above), Moore noted:

"The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer..."

For more than forty years, Moore's Law has held roughly true - with (as revised by Moore himself) the doubling period taking around 24 months instead of 12 months. And it is this persistent growth in computing power that leads other writers - most famously, Ray Kurzweil - to continue to predict the reasonably imminent onset of the singularity. In his 2005 book "The Singularity Is Near: When Humans Transcend Biology", Kurzweil picks the date 2045.
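
A quick back-of-envelope calculation (my own arithmetic, not Moore's) shows just how fast a 24-month doubling period compounds: roughly a million-fold over forty years.

    # Growth under Moore's Law with a 24-month doubling period.
    def moore_growth(years, doubling_period=2.0):
        return 2 ** (years / doubling_period)

    print(moore_growth(10))   # ~32x in a decade
    print(moore_growth(40))   # ~1,048,576x over forty years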

Intel's present-day CTO, Justin Rattner, reviewed some of Kurzweil's ideas in his keynote on the future of technology at the Intel Developer Forum in San Francisco on the 21st of August. The presentation was called "Crossing the chasm between humans and machines".

To check what Justin said, you can view the official Intel video available here. There's also a brief slide-by-slide commentary at the Singularity Hub site, as well as lots of other web coverage (eg here and here). Justin said that the singularity "might be only a few decades away", and his talk includes examples of the technological breakthroughs that will plausibly be involved in this grander breakthrough.

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn't necessarily mean that software performance increases to match. As has been remarked, "software gets slower, more rapidly than hardware gets faster". (This is sometimes called "Wirth's Law".) If your algorithms scale badly, fixing the hardware will just delay the point where your algorithms fail.
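
A toy calculation of my own makes the point starker: if an algorithm's running time grows like 2^n with problem size n, even a thousand-fold hardware speedup enlarges the biggest solvable problem by fewer than ten units of n.

    import math

    # Largest n solvable in a fixed time budget, when running time ~ 2**n.
    # A k-fold faster machine only adds log2(k) to the feasible n.
    def max_n(speedup, baseline_n=40):
        return baseline_n + math.log(speedup, 2)

    print(max_n(1))      # 40.0 on today's machine
    print(max_n(1000))   # ~49.97 -- 1000x the hardware, barely 10 more units of n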

So it's not just the hardware that matters - it's how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn't necessarily imply intelligence.

But just because software is an unknown, it doesn't mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It's also possible they could be over-pessimistic. It's even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

"Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM's Blue Gene/L, running an algorithm from 1977, or an 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977's algorithm would take ten years, and an Apple II with 2007's algorithm would take three years...

"[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970's theory and a Blue Gene."

Another researcher who puts more emphasis on the potential breakthrough capabilities of the right kind of software, rather than hardware, is Ben Goertzel. Two years ago, he gave a talk entitled "Ten years to the Singularity if we really try." One year ago, he gave an updated version, "Nine years to the Singularity if we really really try". Ben suggests that the best place for new AIs to be developed is inside virtual worlds (such as Second Life). He might be right. It wouldn't be the first time that significant software breakthroughs happened in arenas that mainstream society regards as peripheral or even objectionable.

Even bigger than the question of the plausible timescale of a future technological singularity is the question of whether we can influence the outcome, to be positive for humanity rather than a disaster. That will be a key topic of the Singularity Summit 2008, which will be held in San Jose on the last Saturday of October.

The speakers at the summit include five of the people I've mentioned above, along with 16 other named speakers - including many that I view as truly fascinating thinkers.

The publicity material for the Singularity Summit 2008 describes the event as follows:

"The Singularity Summit gathers the smartest people around to explore the biggest ideas of our time. Learn where humanity is headed, meet the people leading the way, and leave inspired to create a better world."

That's a big claim, but it might just be right.

Wednesday, August 27, 2008

Sympathy for the operators

It's not just Apple and the iPhone that are the subject of some extreme views (particularly in North America). Network operators also provoke some red-hot far-out responses. But whereas the iPhone tends to provoke unduly strong admiration, the operators tend to provoke unduly strong opprobrium.

For example, here are some verbatim comments that came up in a private piece of research conducted in and around Silicon Valley earlier this year:

"Everyone in tech has rope burns around their necks from doing business with the carriers. They hung themselves trying to do carrier deals."

"The operator is an adversary, not a partner."

"The basic problem with mobile is that the operators are in the way".

In London, the sentiment is less blatant, but it's still present. During almost every Mobile Monday London event that I've attended, sooner or later some question from the audience takes a semi-joking, semi-serious pot shot at network operators, blaming them for one or other aspect of lack of openness. I find these comments uncomfortable - first, because I count many friends among employees of network operators, and second, because it seems to me that the issue is considerably more nuanced than this kind of easy scapegoating suggests.

It was for this reason that I deeply enjoyed discovering and reading the recent article "Open=Beta?" by former Qualcomm SVP Jeffrey Belk. The article started by recounting some of the usual criticisms:
"The application community, and the Venture Community that finances them, are rightly tired of what can be perceived as a small global cadre of folks in the carrier community, as well as egregious application qualification processes, putting a chokehold on the deployment of innovation in the applications space. And the chokehold is stifling innovation and growth of the wireless data applications business".
However, as Jeffrey goes on to say,

"But as usual, the truth is not so simple..."

"One REALLY cool company got a trial with a few operators. A small problem: Their application bricked (i.e. killed dead, dead, dead) the phones of some of the trial users. Other applications, usually written by folks that are accustomed to the massive memory and hard drives of PCs or Mac, are WAAAY too resource intensive (memory, processing power) for anything but the top tier of smart phones, and even with that tiny market, performance of the apps is often suspect. Another application (fixed now), kept a persistent data connection up between the phone and carrier network. This is a huge issue, as a lot of users still don’t have unlimited data, and if an app is sucking data without them knowing it, it could be costing them a fortune, let along putting an anchor on the data performance of the operator’s network..."

"For the operators, the simplistic reality is that the bottom line is the bottom line, and they CANNOT allow applications to either 1) raise their costs structures or 2) damage their brand/customer base, because when things go wrong with an application or phone, people blame the operator. Every piece of research I have seen (or conducted) over the past decade makes that point clear. And just why do operators need to protect their network? A few months ago, I saw a CEO of a major operator speak. In the Q&A, he mentioned as an answer to a question that it costs him a minimum of $8 per phone call to answer a support call. That’s not counting the pissed off customer factor and dilution of the brand that he’s spent billions of dollars or euros building..."

"At a recent developers conference, an operator was telling a story of a section of a city where service parameters all went to hell, cells shrinking, customers losing service. They tracked it back to an enterprise customer that had gotten permission to test a new piece of hardware, and that hardware was causing nasty network effects. Bad. Another example, not as brutal, is when several applications that, when I asked about some trial metrics, with persistence (or even no persistence but frequent network access) were impeding customers' ability to make or receive voice calls. Very bad, again -- something operators just ain’t gonna allow because when these things start to happen, customers are going to look down at their phone, look at the logo on the phone, and get pissed at their operator. And pissed off customers churn. And churn makes operators' metrics look bad. And bad metrics make financial markets unhappy. And unhappy financial markets make operator executives lose their jobs. So it just won’t happen."

In between the paragraphs I've quoted, there's lots more interesting analysis. There's no easy answer, Jeffrey suggests, except for some first-class development work by applications providers, who need to take the time to understand the special complications of mobile. Applications won't be tolerated on the network, even with the label "Open", if they are in reality only beta quality (or "pre-beta"), and risk significant network and support costs.

Is that the end of the story? Unfortunately, there is one more twist. Application developers often perceive that network operators have an additional motivation, lying behind their defensible motivation to preserve the quality of the network. That additional motivation is less defensible: it's to block the kind of innovative services that could divert some of the network operator's highly valued revenues to alternative services. Presumably that's part of the reason why network operators appear to dislike open phones that support wireless VoIP.

That looks like another issue for which there's no easy answer. It's an issue that transcends mobile operating systems.

Sunday, August 24, 2008

Market share is no comfort

In the discussion of whether Symbian and Nokia are fundamentally threatened (or even “irrelevant”) in the face of the huge market buzz around the Apple iPhone, I take no comfort in the fact that Symbian’s share of the global smartphone market is an order of magnitude larger than that of the iPhone. Therefore I disagree with those replies to my previous blog post that highlighted Symbian’s very considerable market share lead, worldwide (but admittedly not in the USA), over the iPhone.

At first sight, strong market leadership should count for a lot. It should trigger a virtuous cycle effect. More phones should attract more developers (who are interested in their apps running on large numbers of phones) which should result in more software tailored to that platform, which should in turn increase the attractiveness of these phones to end users. And that should result in even more phones being sold, and so on – virtuous cycle.

And in reality, a powerful virtuous cycle effect does exist. An experienced and sophisticated ecosystem (“ES”) has grown up around the Symbian operating system (“OS”) and is continuously adding more value to this platform. The OS-ES virtuous cycle does work. However, it’s not invulnerable.

The history of the technology industry is full of examples of companies who were in similar leadership positions to that currently held by Symbian, but whose markets were transformed by disruptive new entrants. Harvard Business School professor Clayton Christensen is deservedly applauded for his description and analysis of how market disruption takes place:
  • Celebrated examples include how the leading providers of mini-computers, such as DEC, Data General, Wang, Nixdorf, and Prime, failed to appreciate the significance of the initially small market that grew up around fledgling personal computers. These manufacturers saw little profit in that market. But when PC technology improved and the surrounding ecosystem matured, it was too late for these erstwhile computing giants to take leading roles in the new industry (despite the considerable effort they eventually expended, without success, on that new cause).
  • An earlier example, also told by Christensen (in "Seeing what's next: using theories of innovation to predict industry change"), concerns the disruption caused by the invention of the telephone to the communications industry of that era (1870s): market leader Western Union evaluated the new technology created by Alexander Graham Bell, but concluded it lacked the power to handle the long-range business communications from which the company made most of its profits. Again, technology improved and new business relationships formed, faster than Western Union could respond - with Western Union being plunged into decline as a result.
And there’s more. MIT professor James Utterback elegantly recounts many intriguing and salutary examples in his book “Mastering the dynamics of innovation: How Companies Can Seize Opportunities in the Face of Technological Change”. The book shows how familiar technologies such as refrigeration, electrical lighting, and plate glass, were all clear underdogs at the time of their initial market introduction, and faced serious competition from entrenched industrial alliances whose technologies (such as large-scale ice transportation, or gas lighting) themselves appeared to be regularly improving.

Could the iPhone fit into a similar pattern? It might. There are possible futures in which, say, more than half of all phones sold in the world have iPhone technology inside them. I don’t see that as the most likely future – far from it! – but it does have a certain logic to it:
  1. The iPhone is in many ways a simpler product proposition than existing smartphones (just as PCs were simpler than mini-computers). There are considerably fewer applications built into the iPhone than you can find in a standard S60 phone. That relative simplicity means that some feature-focused users will decide not to use the device. But the device taps into a new market that is arguably underserved by previous offerings. This is the very considerable market of users who don’t need every bell and whistle in feature-packed smartphones, but who are ready for a better experience than can be had from ordinary phones.
  2. The iPhone uses physical components that “break the rules” regarding cost: they’re considerably more expensive to manufacture than most other smartphones, and this makes the device more expensive to purchase. However, again, it may be that now is the right time to break this rule: a greater number of users may be willing to bear this additional cost (in view of the additional benefits that buys them).
  3. The iPhone isn't growing its ecosystem from scratch; it can benefit from a crossover effect from various components that were already in place in Apple's pre-iPhone product offerings. Principally, the highly-evolved iTunes distribution mechanism plays a big part in ensuring a good end-user experience with the iPhone.
  4. The iPhone has put special emphasis upon a number of usability aspects, including the graphics "wow", the UI itself, the mobile web browsing experience, and the discovery and installation of new applications. Users have been drawn to these aspects of the device, even though the device lacks other aspects that are present (and well-evolved) in other smartphones.
  5. Despite what some critics have said, these innovations aren't (all) easy for other companies to copy. The "look" can be mimicked, but the "feel" is the result of countless small design and implementation details, that are anchored in a sophisticated underlying software system.

For another analogy, the iPhone is similar to the initial Palm Pilot devices, which fared much better in the market than earlier attempts at pen-input handheld devices. The Palm Pilot delivered less than these other devices (such as the Apple Newton, the Casio Zoomer, and the General Magic "Magic Cap") but provided a much more usable experience.

So, let's evaluate this scenario. Do disruptive new market entrants always succeed in reaching market leadership position? Of course not. Although it is difficult for market leaders to respond to this kind of change of rules in their industry, it's not impossible.

Here's one counter-example: Microsoft and the Internet. Initially, it did look as though Netscape was succeeding in building an impregnable position by bringing a compelling new product to market in an area that Microsoft had previously ignored - an Internet browser. But Microsoft managed to turn around the situation, by dint of two measures:

  1. Clear internal recognition, from the highest leadership, of the fundamentally changing market landscape
  2. Swift and effective execution, continued over many years.

I'm loath to compare Nokia/Symbian to Microsoft, but in this case the comparison has merit.

What's more, I expect that it will become clear, over the next year or so, just how much the Symbian Foundation is itself changing the rules of the mobile industry - and (crucially) enabling companies who use this software to change the rules even further. If you think the iPhone is innovative, you're right, but you ain't seen nothing yet.

Tuesday, August 19, 2008

Nokia and the valley iPhone super-fans

"Nokia's Software Problem", proclaimed an article in Forbes yesterday, that gave voice to excited Silicon Valley adulation over the can't-do-anything-wrong iPhone.

The article contained a report on a recent roundtable organised by Michael Arrington. Arrington himself is quoted in the article as pronouncing,
"I believe that Nokia and Symbian are irrelevant companies at this point."
Part of the problem, apparently, is that:
"Nokia sells hundreds of phone models and supports three different operating systems. No two phones work exactly the same way. Simple models like Nokia's 2610 aren't compatible with the Symbian software used on Nokia's best handsets, such as the N95. Applications written for the iPhone, by contrast, will run on every iPhone."
Now there's such a thing as being a fan of the iPhone. That's understandable. Indeed, there are many great features to the iPhone. It's proved to be an impressive device. What's much less understandable is when this fanship extends into super-fanship of the type reported in this article, which makes people blind to:
  • the genuine merits of devices from other manufacturers (such as Nokia);
  • the likelihood that these manufacturers will come out with impressive new devices.
(I almost used a less polite word than "super-fanship" here, but hey, let's try to be objective.)

Let's get real. Of course there are big differences between different Nokia phones. Nokia supplies phones catering to very wide varieties of taste, usage model, and pocket. It's no surprise that different software is used to power these different devices. In contrast, up till now, there's really only one kind of iPhone. That makes it relatively easy for developers to write apps that work on (err) every kind of iPhone. However, the current iPhone isn't to everyone's taste. Some people love the big screen form factor, and are happy that there's no keyboard. Others would definitely prefer different form factors and UI mechanisms. Others again would prefer a far less expensive phone. If/when Apple produce a variety of phones comparable to that produced by Nokia, it will be interesting to see exactly how portable the different applications remain.

I have another reservation about the arguments in the Forbes article. The email capabilities of the N95 are criticised as being less immediately usable than those of the iPhone. However, a fairer comparison in this case would be with those Nokia phones that specialise in email connectivity. (Remember, there is more than one kind of Nokia phone...). The recently released E66 and E71 would be better comparators. (See eg here for one review of the E71.)

It's true that we can anticipate very interesting times, as forthcoming new Nokia phones reach the market in the months ahead. Naturally there will be impressive new smartphones from several other suppliers too (running both Symbian and non-Symbian operating systems). We can expect new kinds of user interface models, as different manufacturers build and riff on the innovations produced by their competitors - and bring out some totally new ideas of their own. In achieving these new effects, Symbian-powered phones can take advantage of the following features that are missing (so far) from the iPhone stable: Flash, Java, and the new ScreenPlay graphics architecture.

Looking slightly further afield, the new levels of openness enabled by the Symbian Foundation should have the additional benefits of providing new routes to market for Symbian technology, as well as more rapid collaborative development. If that's a "software problem", it's a problem of the most attractive sort!

Thursday, August 14, 2008

Who says that "design by committee" must always be bad?

Writing in EE Times yesterday, comparing the prospects of different mobile operating systems, Rick Merritt has a bit of illicit fun complaining about what he sees as inevitable slowness in the operation of the forthcoming Symbian Foundation:
Trailing behind

...it will take developers as long as two years to meld all the pieces of the unified open-source platform Nokia plans.

The environment will combine the best elements of the UIQ and MOAP(S) environments created by Sony Ericsson and Docomo, respectively. But it will not run existing apps written for those environments.

Worse, the unified Symbian will be defined by a complex set of interworking groups at the Symbian Foundation—including separate councils on feature road mapping, user interface and architecture—drawn from the dozen companies that make up the new foundation. What's the Finnish term for "design by committee"?

This description of the intended collaborative design and review mechanism of the Symbian Foundation implies that such a process is bound to be ineffective. The underlying idea is that committees (or "councils", to use the word from the Symbian Foundation whitepaper) are for losers, and never produce anything good.

It's true that many committees struggle. They can degenerate into talking shops. But not all committees have such a fate.

Indeed, what's proposed for the Symbian Foundation isn't something brand new. It's an evolution of a collaborative design and review mechanism that is already in place, and which has been working well for many years, guiding the evolution of Symbian OS up till now.

For example, Symbian TechCom ("Technology Committee") has met three or four times each year since 1998, and successfully performs many of the tasks slated for the new councils. Symbian TechCom membership includes leading technical specialists and senior product managers from phone manufacturers around the world. What makes TechCom work so well is:
  • A series of processes that have evolved over the ten years of TechCom's existence
  • Continuity of high-calibre personnel attending the meetings, who have learned how to work together effectively
  • Skilled management by the Symbian personnel responsible for the operation of this body
  • Excellent preparation before each meeting - and good follow up afterwards.

The skills of running an effective committee may sound boring, but believe me, if done right, they enable better decisions and a new level of combined buy-in to the conclusions eventually reached.

As another example, from further back in my professional career, I remember countless long discussions over aspects of the Psion EPOC suite of software. How should the Agenda app operate in such-and-such a circumstance, exactly what APIs should the OPL scripting language provide, which software features should be centralised to libraries and which left in individual apps...? These questions (and many many others) were decided by a process of debate and eventual consensus. It can be truly said that this software system was "designed by committee". Some people might think that's a criticism. On the contrary, it's a great strength.

In short, collaboration is hard, but when you've got the means to make it work, the outcome will be better (for complex problems) than if strong individuals work independently.

Let me briefly comment on the other two paragraphs from the above extract:

The [Symbian Foundation] environment will combine the best elements of the UIQ and MOAP(S) environments created by Sony Ericsson and Docomo, respectively. But it will not run existing apps written for those environments.

However, it will run existing apps written for the S60 environment - which is the majority implementation of Symbian OS.

it will take developers as long as two years to meld all the pieces of the unified open-source platform Nokia plans.

But there's no need to wait for two years before working with this software. That software will represent a smooth evolution of existing Symbian OS. Symbian OS will continue to be updated regularly, with new releases continuing to appear roughly two or three times each year. Any effort applied by developers to create solutions for these existing and forthcoming releases will be well worth it:

  • These solutions will run on the smartphone platform that has by far the largest market share
  • These solutions will also run on devices running the version of Symbian OS released in due course by the Symbian Foundation.

In conclusion, I don't agree with any implication that the Symbian Foundation is going to result in slower software development. On the contrary, the outcome will be deeper collaboration and swifter innovation.

Wednesday, August 13, 2008

There’s more to Open Innovation than Open Source

Here’s the challenge: How best to capitalise on the potential innovation that could in theory be created by users and developers who are based outside of the companies that are centrally responsible for a product platform?

This is the question of how best to make Open Innovation work. Recall the following contrasts between Open Innovation and so-called Closed Innovation - taken from the pioneering book by Henry Chesbrough, “Open innovation: the new imperative for creating and profiting from technology”:

The “closed innovation” mindset:
  1. The smart people in our field work for us
  2. To profit from R&D we must discover it, develop it, and ship it ourselves
  3. If we discover it ourselves, we will get to the market first
  4. The company that gets an innovation to market first will win
  5. If we create the most and the best ideas in the industry, we will win
  6. We should control our IP, so that our competitors don't profit from our ideas.

The “open innovation” mindset:

  1. Not all the smart people work for us. We need to work with smart people inside and outside our company
  2. External R&D can create significant value; internal R&D is needed to claim some portion of that value
  3. We don't have to originate the research to profit from it
  4. Building a better business model is better than getting to market first
  5. If we make the best use of internal and external ideas, we will win
  6. We should profit from others' use of our IP, and we should buy others' IP whenever it advances our own business model.
In the modern world of hyper-complex products, easy communication via the Internet and other network systems, and the “Web 2.0” pro-collaboration zeitgeist, it is easy to understand why the idea of Open Innovation receives a lot of support. The challenge, as I said, is how to put these ideas into practice.

It’s tempting to answer that the principal key to successful Open Innovation is Open Source. After all, Open Source removes both financial and contractual barriers that would otherwise prevent many users and external developers from experimenting with the system. (What’s more, “Open Innovation” and “Open Source” share the prefix “Open”!)

However, in my view, there’s a lot more to successful Open Innovation than putting the underlying software platform into Open Source.

To see this, it’s useful to review some ideas from the handy summary presentation by leading Open Innovation researcher Joel West, “Managing Open Innovation through online communities”. Joel makes it clear that there are three keys to making Open Innovation work best for a firm (or platform):
  1. Maximising returns to internal innovation
  2. Incorporating external innovation in the [platform]
  3. Motivating a supply of external innovations.

Let's dig more deeply into the second and third of these keys.

Incorporating external innovation in the platform

The challenge here isn’t just to stimulate external innovation. It is to be able to incorporate this innovation into the platform. That requires the platform itself to be both sufficiently flexible and sufficiently stable. Otherwise the innovation will fragment the platform, or degrade its ongoing evolution.

It also requires the existence of significant skills in platform integration. Innovations offered by users or external developers may well need to be re-engineered if they are to be incorporated in the platform in ways that meet the needs of the user community as a whole, rather than just the needs of the particular users who came up with the innovation in question.

  • This can be summarised by saying that a platform needs skills and readiness for software management, if it is to be able to productively incorporate external innovation.

Motivating a supply of external innovations

The challenge here isn’t just to respond to external innovations when they arise. It is to give users and external developers sufficient motivation to work on their ideas for product improvement. These parties need to be encouraged to apply both inspiration and perspiration.

  • Just as the answer to the previous issue is software management, the answer to this issue is ecosystem management.

But neither software management nor ecosystem management comes easy. Neither falls out of the sky, ready for action, just by virtue of a platform being Open Source. Nor can these skills be acquired overnight, by spending lots of money, or hiring lots of intrinsically smart people.

Ecosystem management involves a mix of education and evangelism. It also requires active listening, and a willingness by the platform providers to occasionally tweak the underlying platform, in order to facilitate important innovations under consideration by external parties. Finally it requires ensuring that third parties can receive suitable rewards for their breakthroughs – whether moral, social, or financial.

Conclusion: On account of a legacy of more than ten years of trial and error in building and enhancing both a mobile platform and an associated dynamic ecosystem, the Symbian Foundation will come into existence with huge amounts of battle-hardened expertise in both software management and ecosystem management. On that basis, I expect the additional benefits of Open Source will catalyse a dramatic surge of additional Open Innovation around the Symbian Platform. In contrast, other mobile platforms that lack this depth of experience are likely to find that Open Source brings them grief as much as it brings them potential new innovations.

Tuesday, August 12, 2008

Audacious goals

Martin Sauter asks: Which BHAGs are held by companies in the wireless space?

BHAG (Big Hairy Audacious Goal) is a memorable term introduced by Jim Collins and Jerry Porras in their watershed book, “Built to last: successful habits of visionary companies”. This book was widely read (and debated) within Psion in the mid 1990s. I vividly remember Psion CEO David Potter giving an internal talk on themes from that book relevant to Psion. That talk had a lasting effect.

As Martin mentions, Symbian has been driven for many years by the audacious idea that, one day, Symbian OS will be the most widely used software platform on the planet. But that’s only one of several BHAGs in my mind.

Personally I prefer to say that Symbian’s goal is to be the most widely used and most widely liked software platform on the planet. That’s because I see the latter element as being a key contributor towards the former element. My vision is that people of all dispositions and from all social groups the world over will have good reason to want to use devices running this software – and will be able to afford them.

Here’s another BHAG. Looking towards the activities of the Symbian Foundation (assuming that the regulatory authorities approve the deal that creates this foundation), I envision a time when the ten or so principal package owners for the Symbian Platform will be among the most widely admired and respected software engineers on the planet. Books and articles will frequently write about each of these principal package owners and their finely honed skills in software architecture, software quality, software usability, and large-scale software integration. These articles will celebrate the different backgrounds and different sponsor-companies of these principal package owners (and will no doubt also delve into the multi-faceted inter-personal relationships among this group of world-striding individuals). These individuals will be the pin-up superstars who inspire new generations of emerging world-class software engineers.

I have other large-scale aspirations concerning the future of the Symbian Foundation, but it’s not appropriate to talk about these for the moment. However, what I am happy to share is some audacious ideas for the evolution of the products that I expect to be created, based on Symbian OS, in the 15-25 years ahead:
  • The human-computer interaction will sooner or later evolve to become a far more efficient brain-computer interaction. Instead of device owners needing to type in requests and then view the results on a physical screen, it will be possible for them to think requests and then (in effect) intuit the results via inner mental vision. (Just as we all had to learn to type, we’ll have to learn to think anew, to use these improved interfaces, if you see what I mean.) So the rich information world of the internet and beyond will become available for direct mental introspection;
  • The smartphone devices of the future will be more than information stores and communications pathways; they will have powerful intelligence of their own. Take the ideas of a spell-checker and grammar-checker and magnify them to consider an idea-checker and an internal coach. So the smartphone will become, for those who wish it, like a trusted best friend;
  • Adding these two ideas together, I foresee a time when human IQ and EQ are both radically boosted by the support of powerful mobile always-connected electronic brains and their nano-connections into our biological brains. To be clear, such devices ought to make us wiser as well as smarter, and kinder as well as stronger. For a glimpse of what this might mean, I suggest you take the time to find out what happens to one of the key characters in Kevin Bohacz’s awkwardly titled but engrossing and audacious (I think that’s the right word in this context) novel “Immortality”.

There’s more. In addition to far-reaching ideas about the products that the operation of the Symbian Foundation will eventually enable, it’s also worth considering some far-reaching ideas about the problem-solving capabilities of the robust yet transparent open collaborative methods expected to be deployed by the Symbian Foundation (methods that build on best practice established in the first ten years of Symbian’s history). In other words, the potential benefits of richly skilled open collaboration go far beyond the question of how to create world-beating smartphones. As highlighted in the tour-de-force “The upside of down” by the deeply thoughtful Canadian researcher Thomas Homer-Dixon, the profound structural issues facing the future of our society (including climate change, energy shortage, weapons proliferation, market instability, fundamentalist abdication of rationality, and changing population demographics) are so inter-twined and so pervasive that they will require a new level of worldwide collaboration to solve them. Towards the end of his book, Homer-Dixon points to the transformative potential of open-source software mechanisms as inspiration for how this new level of collaboration can be achieved. It’s an intriguing analysis. Can open source save the world? Watch this space.

Footnote: Having the right BHAG is an important first step towards a company making a dent in the universe. But it’s only one of many steps. Although “Built to last” is a fine book, I actually prefer Jim Collins's later work, “Good to great: why some companies make the leap ... and others don't”. In effect, “Good to great” is full of acutely insightful ideas on how companies can make progress towards their BHAGs.

Monday, August 11, 2008

Connectivity failure

Sometimes you only really appreciate the value of a service when you can no longer access it.

For around the last four years, I've enjoyed having my corporate email pushed onto my smartphone. That means, wherever I've been, I've had a good idea of the items waiting for me when I open my email application on my PC - and I've very often been able to answer emails from my phone (and/or write new emails), without needing to trouble my PC. It's been a great productivity boost.

However, yesterday I crossed over the border from Peru to Bolivia, as part of a long anticipated two-week long family holiday in South America, and all the GPRS connectivity to my phone ceased. I've been in a state of mild shock ever since. I've not been able to access the services that I've come to take for granted in previous holidays and business travel around the world. That includes BlackBerry push email connectivity, frequent mobile access to information sites such as the BBC, Wikipedia, Google, and Facebook, as well as interaction with various mobile forums or discussion groups.

I'm sure I'll enjoy the sights in and around La Paz over the next 48 hours. For example, there's the ruins of Tiwanaku, a pre-Inca city which is said to have been, around 700 AD, the largest city anywhere in the world. But I'm also sure there will be many moments during these 48 hours when I'll be instinctively reaching for my smartphone, ready to look up some information snippet that will provide more context to what I'm seeing with my own eyes or hearing from the tour guide, and then I'll realise that, for the time being, I'm cut off from that richer information world.

Footnote: My internet connectivity is provided by Vodafone UK. None of the network operators that I can see from my phone, here in Bolivia, seem to have working GPRS roaming back to Vodafone. If anyone knows differently, I'll be delighted to hear from you!

Wednesday, August 6, 2008

Two fallacies on the value of software

Software is everywhere. Unfortunately, buggy software is everywhere too.

I'm writing this en route to a family holiday in South America - four countries in 15 days. The holiday starts with a BA flight across the Atlantic. At first sight, the onboard "highlife" entertainment system is impressive. My son asks: do they really have all these music CDs and movies available? "Moore's Law in action" was my complacent reply.

The first sign of trouble was when the flight attendant welcome announcement, along with the usual stuff about "if you sleep, please ensure your fastened seat belt is visible on top of your blanket", contained a dire warning that no one should try to interact with the video screens in any way while the system was going through its lengthy startup activity. Otherwise the system would be prone to freezing or some other malfunction.

It seems the warning was in vain. From my vantage point in the very back row of seats on the plane, as the flight progressed I could see lots of passengers calling over the flight attendants to point out problems with their individual systems. Films weren't available, touchscreen interactions were random, etc. The attendants tried resetting individual screens, but then announced that, because so many screens were experiencing problems, the whole system would be restarted. And, by the way, it would take 30 minutes to reboot. All passengers would need to keep their hands off the screen throughout that period of time, even though many tempting buttons advertising features of the entertainment system would be displayed on the screen during that time.

One flight attendant forlornly tried to explain the situation to me: "it's like when you're starting up a computer, you have to wait until it's completely ready before you can start using it". Well, no. If software draws a button on the screen, it ought to cope with a user doing what comes naturally and pressing that button. That's one of the very first rules of GUI architecture. In any case, what on earth is the entire system doing, taking 30 minutes to reboot?
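
The defensive pattern that was missing is elementary. Here's a minimal Tkinter sketch of my own (certainly not the airline system's actual design) showing the standard approach: if a button has to be visible before the system is ready, draw it disabled, so that pressing it is harmless.

    import tkinter as tk

    root = tk.Tk()

    # First rule of GUI architecture: if a button is visible, pressing it
    # must be safe. So it starts disabled, and clicks are simply ignored
    # until startup has finished.
    button = tk.Button(root, text="Watch a film", state=tk.DISABLED)
    button.pack()

    def finish_startup():
        # Only once initialisation is complete does the button accept input.
        button.config(state=tk.NORMAL, command=lambda: print("Playing..."))

    root.after(3000, finish_startup)  # simulate a (mercifully short) startup
    root.mainloop()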

To be fair, BA's inflight entertainment system is hardly alone in having this kind of defect. I've often seen various bizarre technobollocks messages scrolling on screens on the back of aeroplane seats. I also remember a Lufthansa flight in which the software controlling the reclining chairs (I was flying business class on that occasion) was clearly faulty - it would freeze, and all subsequent attempts to adjust the chair position would be ignored. The flight attendants that day let me into the secret that holding down three of the buttons simultaneously for a couple of seconds would forcibly reboot the system. It was a useful piece of knowledge!

And to be fair, when the system does work, it's great to have in-flight access to so much entertainment and information.

But I draw the following conclusion: Moore's Law is not enough. Moore's Law enables enormous amounts of data - and enormous amounts of software - to be stored on increasingly inexpensive storage media. But you need deep and wide-ranging skills in software creation if the resulting complex systems are actually to meet the expectations of reasonable end users. Software development, when done right, is going to remain high value-add for the foreseeable future.

"Moore's Law" is enough is the first fallacy on the value of software. Hot on its heels comes a second idea, equally fallacious:

The value of software is declining towards zero.

This second fallacy is wrapped up with a couple of ideas:

  1. The apparent belief of some people that all software ought to be sold free-of-charge
  2. The observation that the price of a fixed piece of software does tend to decline over time.

However, the second observation misses the important fact that the total amount of software is itself rapidly increasing - both in terms of bulk, and in terms of functionality and performance. Multiply one quantity which is slowly declining (the average price of a fixed piece of software) by another which is booming (the total amount of all software) and you get a product that refutes the claim that the value of software itself is declining towards zero.
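
Here's the arithmetic of that multiplication, with rates invented purely for illustration: if the average price of a piece of software falls 10% a year, while the total quantity of software grows 30% a year, then the total value still compounds at roughly 17% a year.

    # Invented rates, purely to show the shape of the argument.
    price, quantity = 1.0, 1.0
    for year in range(10):
        price *= 0.90       # average price falls 10% per year
        quantity *= 1.30    # total amount of software grows 30% per year

    print("Total value after 10 years: %.1fx" % (price * quantity))
    # 0.90 * 1.30 = 1.17, so total value grows ~17% a year despite falling prices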

Yes, it's reasonable to expect that individual pieces of software (especially those that have stopped evolving, or which are evolving slowly) will tend, over time, to be given away for free. But as new software is made available, and as existing software keeps on being improved, there's huge scope for value to be created, and for a portion of that value to be retained by first-rate developers.

Footnote: Even after the BA entertainment system restarted, there were still plenty of problems. Fast-forwarding through a film to try to get back to the previous viewing position was a very hit-and-miss affair: there was far too much latency in the system. The team responsible for this system should be hanging their heads in shame. But, alas, they're in plenty of company.

Sunday, August 3, 2008

Human obstacles to audacious technical advances

[A] French noblewoman, a duchess in her eighties, ..., on seeing the first ascent of Montgolfier's balloon from the palace of the Tuileries in 1783, fell back upon the cushions of her carriage and wept. "Oh yes," she said, "Now it's certain. One day they'll learn how to keep people alive forever, but I shall already be dead."

Throughout history, individual humans have from time to time dared to dream that technological advances could free us from some of the limitations of our current existence. Fantastic tales of people soaring into the air, like birds, go back at least as far as Icarus. Fantastic tales of people with lifespans exceeding the biblical "three score years and ten" go back at least as far as, well, the Bible. The French noblewoman mentioned above, in a quote taken from Lewis Lapham's 2003 Commencement speech at St. John's College Annapolis, made the not implausible connection that technology's progress in solving the first challenge was a sign that, in time, technology might solve the second challenge too.

Mike Darwin made the same connection at an utterly engrossing UKTA meeting this weekend. Since the age of 16 (he's now 53), Mike has been trying to develop technological techniques to significantly lower the temperature of animal tissue, and then to warm the tissue up again so that it can resume its previous function. The idea, of course, is to enable the cryo-preservation of people who have terminal diseases (and who have nominally died of these diseases), so that they can be revived at some time in the future when science has a cure for that disease.

Mike compared progress with the technology of cryonics to progress with the technology of powered manned flight. Renowned physicist Lord Kelvin had said as late as 1896 that "I do not have the smallest molecule of faith in aerial navigation other than ballooning". Kelvin was not the only person with such a viewpoint. Even the Wright brothers themselves, after some disappointing setbacks in their experiments in 1901, "predicted that man will probably not fly in their lifetime". There were a host of detailed, difficult engineering problems that needed to be solved by painstaking analysis. These included three kinds of balance and stability (roll, pitch, and yaw) as well as lift, power, and thrust. Perhaps it is no surprise that it was the Wright brothers, as accomplished bicycle engineers, who first sufficiently understood and solved this nexus of problems. Eventually, in 1903, they did manage one small powered flight, lasting just 12 seconds. Later that day, a flight lasted 59 seconds. That was enough to stimulate much more progress. Only 16 years later, John Alcock and Arthur Brown flew an airplane non-stop across the Atlantic. And the rest is history.

For this reason, Mike is particularly keen to demonstrate incremental progress with suspension and revival techniques. For example, there is the work done by Brian Wowk and Gregory Fahy and others on the vitrification and then reanimation of rabbit kidneys.

However, the majority of Mike's remarks were on topics other than the technical feasibility of cryonics. He spoke for over two hours, and continued in a formal Q&A session for another 30 minutes. After that, informal discussion continued for at least another 45 minutes, at which point I had to make my excuses and leave (in order to keep my date to watch Dark Knight that evening). It was a tour-de-force. It's hard to summarise such a lengthy, passionate, yet articulate presentation, but let me try:

  1. Cryonics is morally good
  2. Cryonics is technically feasible
  3. By 1968, cryonics was a booming enterprise, with many conferences, journals, and TV appearances
  4. However, cryonics has significantly failed in its ambitions
  5. Unless we understand the real reasons for these failures, we can't realise the potential benefits of this program
  6. The failures primarily involve people issues rather than technical issues
  7. In any case, we should anticipate fierce opposition to cryonics, since it significantly disrupts many core elements of the way society currently operates.

The most poignant part was the description of the people issues during the history of cryonics:

  • People who had (shall we say) unclear ethical propriety ("con-men, frauds, and incompetents")
  • People who failed to carry out the procedures they had designed - yet still told the world that they had followed the book (with the result that patients' bodies suffered grievous damage during the cryopreservation process, or during subsequent storage)
  • People who were technically savvy and emotionally very committed yet who lacked sufficient professional and managerial acumen to run a larger organisation
  • People who lacked skills in raising and handling funding
  • People who lacked sufficient skills in market communications - they appeared as cranks rather than credible advocates.

This rang a lot of bells for me. The technology industry as a whole (including the smartphone industry) often struggles with similar issues. The individuals who initially come up with a great technical idea, and who are its first champions, are often not the people best placed to manage the later stages of development and implementation of that idea. The transition between early stage management and any subsequent phase is tough. But it is frequently essential. (And it may need to happen more than once!) You sometimes have to gently ease aside people (ideally at the same time finding a great new role for them) who are your personal friends, and who are deeply talented, but who are no longer the right people to lead a program through its next stage. Programs often grow faster than people do.

I don't see any easy answers in general. I do agree with Mike on the following points:

  • A step-by-step process, with measurable feedback, is much preferable to reliance on (in essence) a future miracle that can undo big mistakes made by imprecise processes today (this is what Mike called "the fallacy of our friends in the future");
  • Feedback on experiments is particularly important. If you monitor more data on what happens during the cryopreservation process, you'll discover more quickly whether your assumptions are correct. Think again about the comparable experiences of the Wright brothers. Think also of the importance of carrying out retrospectives at regular intervals during a project;
  • Practice is essential. Otherwise it's like learning to drive by just studying a book for six months, and then trying to drive all the way across the country the first time you sit in the driver's seat;
  • The quality of the key individuals in the organisations is of paramount importance, so that sufficient energies can be unleashed from the latent support both in the organisation and in wider society. Leadership matters greatly.

Footnote: I first came across the reference to the tale of the venerable French duchess in the commentary to Eliezer Yudkowsky's evocative online reminiscences regarding the death of his 19-year-old brother Yehuda Nattan Yudkowsky.

Friday, August 1, 2008

Smartphone Show keynotes looking stronger than ever

If you've been keeping your eye on the Symbian Smartphone Show website, you'll have seen the plans for the keynote sessions taking shape over the last few weeks. The lineup looks particularly strong this year.

Day One (Tuesday 21st Oct) features:
  • Nigel Clifford, Symbian CEO, presenting on "Symbian - 10 years of innovation - the next wave: Symbian Foundation Vision"
  • Ho-Soo Lee, EVP of Mobile Solutions Center, Samsung
  • Rob Shaddock, Corporate VP of Motorola, presenting on "Innovating in an open mobile world".

These individual keynotes will be followed by a panel session, "Symbian Foundation - setting the future of mobile software free", with speakers from the Symbian Foundation board member companies.

Day Two (Wednesday 22nd Oct) features:

  • Kai Öistämö, EVP Devices, Nokia, presenting on "The future of smartphones"
  • Mats Lindoff, CTO of Sony Ericsson, presenting on "Sony Ericsson and the Symbian Foundation: Open to innovation and differentiation"
  • Benoit Schillings, CTO of Trolltech, presenting on "Symbian & Qt: the best of both worlds".

Again, these individual presentations will be followed by a panel session, "Who will win the runtime race":

As the consumer's appetite for increasingly advanced mobile services grows, the choice of which runtime environment should support these services becomes vitally important. With many different leading runtime environments hosted on Symbian OS, both the vendor and developer communities are keeping a close eye on which will emerge as the preferred environment.

The speakers on this second panel cover many of the key mobile runtime environments.

Of course, the keynotes are only one of many reasons to attend this show. For example, see here for the extended agenda for Day One, and here for the extended agenda for Day Two. And that only scratches the surface of the wider set of formal and informal activities that will take place.

It should be fascinating.

I've had the good fortune to be close to the heart of nearly all the major Psion and Symbian expo events, from 1992 onwards. The event in 2008 looks like it will top them all.

Footnote: There are likely to be more changes in the keynote lineup during the whirlwind months between now and the show itself. Check the official website for updates.