Sunday, December 28, 2008

The best book I read in 2008

I've had the pleasure of reading several dozen fine books in 2008 - here's a partial list of reviews. (One reason this list is "partial" is that I often neglected to assign the label "books" to relevant postings.)

As the year draws to a close, I'm ready to declare one book as being the most memorable and thought-provoking that I've read in the entire year: "The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom" by University of Virginia Associate Professor Jonathan Haidt. It's a tour de force in positive psychology.

The endorsement printed on the front cover is probably reason enough for anyone to read this book: "For the reader who seeks to understand happiness, my advice is: Begin with Haidt". It comes from Martin Seligman, professor of psychology at the University of Pennsylvania.

The stated purpose of the book is to consider "ten great ideas" about morality and ethics, drawn from Eastern and Western religious and philosophical traditions, and to review these ideas in the light of the latest scientific findings about the human condition. Initially, I was sceptical about how useful such an exercise might be. But the book quickly led me to set aside my scepticism. The result is greater than the sum of the ten individual reviews, since the different ideas overlap and reinforce each other.

Haidt declares himself to be both an atheist and a liberal, but with a lot of sympathy for what both theists and conservatives try to hold dear. In my view, he does a grand job of bridging these tough divides.

Haidt seems deeply familiar with a wide range of traditional thinking systems, from both East and West. He also shows himself to be well versed in many modern (including very recent) works on psychology, sociology, and evolutionary theory. The synthesis is frequently remarkable. I found myself re-thinking lots of my own worldview.

Here are some of the age-old themes that Haidt evaluates:

  • The mind is divided against itself - "the spirit is willing but the flesh is weak"
  • Perception is more important than external substance - "Life itself is but what we deem it"
  • Humans tend to be rank hypocrites - we notice the speck in others' eyes, without paying attention to the plank in our own
  • The golden rule of "reciprocity" lies at the heart of all morality
  • Personal fulfilment depends on giving up attachments
  • Personal happiness is best pursued by seeking to cultivate "virtues"
  • Lives need suffering and setbacks to allow people to reach higher states of development
  • Religion plays a unique role in creating cohesive cultures.

To be clear, the evaluation of these themes typically shows both their prevailing strengths and their limitations. (It was a bit of a jolt every time I read a sentence in the book that said something like "What the Buddha failed to appreciate is...")

The ideas that I have taken away from the book include the following:

  • A vivid metaphor of the mind as being a stubborn elephant of automatic desires, with a small conscious rider sitting on top of it (as illustrated in the picture on the front cover of at least some editions of the book);
  • In any battle of wills, the elephant is bound to win - but there are mechanisms through which the rider can distract and train the elephant;
  • The most reliable mechanisms for improving our mood are meditation, cognitive therapy, and Prozac;
  • There are hazards (as well as benefits) to promoting self-esteem;
  • Although each person has a "happiness set point" to which their emotional state tends to return after some time, there are measures that people can take to drive their general happiness level higher - these include the kind of personal relationships we achieve, the extent to which we can reach "flow" in our work, and the extent to which different "levels" of our lives "cohere";
  • Alongside the universally recognised human emotions like happiness, sadness, surprise, fear, disgust and anger that have typically been studied by psychologists, there is an important additional emotion of "elevation" that also deserves study and strengthening;
  • The usual criticisms of religion generally fail to do justice to the significant beneficial feelings of community, purity, and divinity that participation in religious activities can nurture - this draws upon some very interesting work by David Sloan Wilson on the role of religions in enabling group selection between different human societies.

Despite providing a lot of clarity, the book leaves many questions unresolved. I see that Haidt is working on a follow-up, entitled "The Righteous Mind: Why good people are divided by politics and religion". I'm greatly looking forward to it.

Footnote: "The happiness hypothesis" has its own website, here.

Saturday, December 27, 2008

Revocation infrastructure

In the quest to stop bad applications from doing damage to the data or operation of a phone (or running up large bills, or otherwise adversely impacting the phone network), possible approaches divide into two main routes:
  1. Put the main focus on checking and testing software (and the originator of the software) before it is allowed to be distributed or installed;
  2. Be permissive as regards the initial distribution and installation of software, but withdraw (or "revoke") these permissions if it becomes clear that the software has bad effects.
It seems to be the consensus view that it is impractical (if not impossible) to reliably identify bad software by any prior checking system. These checks will always fail on at least one criterion:
  • The tests will be insufficient to cover all usage conditions; applications which work well on some handsets on some networks may well go wrong on other handsets or other networks;
  • Any attempt to make the tests more reliable will introduce unacceptable time delays and cost.
The best that an application checking system can hope to accomplish is a quick sanity test - to spot significant errors. Inevitably, this means that some bad software will slip through the system. As a result, any anti-malware system on mobile phones needs to include at least some revocation component.

In principle, here's what revocation could accomplish:
  1. The process of releasing software (including alpha and beta versions) could be relatively quick and painless;
  2. An application that is subsequently found to generate problems on phones could be removed from distribution lists and application stores, to prevent anyone else from installing it;
  3. Messages could be sent to all phones on the network so that users who have already installed the application could be warned about these problems - and given the opportunity to uninstall it;
  4. In more extreme cases, these messages could cause the applications to be automatically uninstalled, without waiting for the approval of the user;
  5. In yet other cases, the developer who signed the application could be barred from signing any more applications - this could be appropriate in cases where the developer has been caught out making pirated zero-cost versions of commercial software.

This picture is attractive. However, we need to be aware that it relies on the existence of a "revocation infrastructure". One part of this infrastructure is the reliable identification of an application. This is accomplished via tamperproof digital signing. However, this is only the start of what's needed for revocation to work.
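
To make this concrete, here's a minimal sketch (in Python, with every name invented for the purpose - this is not Symbian's actual implementation) of the two checks an installer could perform: confirm that the package still matches its signature, then confirm that the signer hasn't been revoked since signing:

    import hashlib

    # Hypothetical revocation list: serial numbers of withdrawn signing
    # certificates, as might be fetched from a signing authority.
    REVOKED_SERIALS = {"0451-ACME-2008", "07F3-BADAPP-2008"}

    def content_hash(package_bytes):
        # Hash of the package content; any tampering changes this value.
        return hashlib.sha256(package_bytes).hexdigest()

    def allow_install(package_bytes, signed_hash, signer_serial):
        # 'signed_hash' stands in for the hash recovered from a verified
        # digital signature; a real installer would perform full public-key
        # signature verification rather than comparing hashes directly.
        if content_hash(package_bytes) != signed_hash:
            return False  # tampered since signing
        if signer_serial in REVOKED_SERIALS:
            return False  # signer barred since signing
        return True

    # An untampered package from a non-revoked signer installs...
    original = b"app binary v1.0"
    assert allow_install(original, content_hash(original), "1234-GOODDEV")
    # ...but the same package, patched, no longer matches its signature.
    cracked = b"app binary v1.0 (patched)"
    assert not allow_install(cracked, content_hash(original), "1234-GOODDEV")

Note how much sits outside this fragment: who maintains the revocation list, how phones fetch it (and who pays for the data traffic), and what happens to already-installed applications. Those are exactly the infrastructure questions discussed below.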

It was because of the lack of a developed revocation infrastructure that the original Symbian Signed scheme followed route 1 above (putting the main focus on checking and testing software, and its originator, before allowing distribution or installation) rather than route 2 (permitting initial distribution and installation, but withdrawing those permissions if the software turns out to have bad effects).

Here are some of the issues with the mechanics of revocation:
  1. By default, checking at install time for revoked certificates is currently turned off for most (if not all) shipping Symbian phones;
  2. The user would in principle have to pay for the data traffic to check for revocation;
  3. Operators ought ideally to agree on something like a free dedicated access point which is supported across networks while roaming, etc., before it's acceptable to turn this on for the majority of users;
  4. Revocation checking is done on most phones only at software install time; there is limited current support for push revocation;
  5. If the revocation checking was defaulted to on, the user could still turn it off for most (if not all) devices;
  6. Software that deliberately or accidentally broke PlatSec partitioning of processes & data could disable the revocation check.
In addition, there are some issues with the policy of revocation:
  1. There is bound to be controversy over who has the authority to decide to revoke a certificate;
  2. Some applications that run without problems on some networks may cause problems to other networks; does this mean that revocation may need to be specific to individual networks?
  3. Some applications that users like and admire may be viewed as malware by other users;
  4. For example, users may have entered considerable amounts of data into an application that is subsequently forcibly uninstalled due to being revoked; users may complain about no longer having access to their data;
  5. Some application writers may seek to contest decisions to declare their software as malware.
I'm not saying these issues are insurmountable. There are candidate solutions for all these issues. But I do want to point out that revocation has its own costs.

My own view nowadays is that even a partially working revocation system would probably still be better than the current reliance on centralised testing of applications before they can be distributed.

By "partially working revocation" I mean a system that works by community reviews. Users who notice problems with applications would be encouraged to publicise these issues, so that the community as a whole can weigh up the evidence. Popular application stores would take this information into account in the material provided to describe the applications available for download.

In principle, users would be willing to pay money for a premium service from application stores, as follows:
  • The application store remembers which users have downloaded which applications;
  • If an application is subsequently deemed to be problematic (on, say, particular phones), then relevant users would be sent messages alerting them of this situation.
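
A toy sketch of the bookkeeping behind such a service (Python again; all names hypothetical, and a real store would also need a push-messaging channel):

    from collections import defaultdict

    class AppStore:
        def __init__(self):
            # application name -> set of users who have downloaded it
            self.downloads = defaultdict(set)

        def record_download(self, user, app):
            self.downloads[app].add(user)

        def flag_as_problematic(self, app, reason):
            # Returns the alerts that would be pushed to affected users.
            return ["To %s: '%s' has been flagged (%s); consider "
                    "uninstalling it." % (user, app, reason)
                    for user in sorted(self.downloads[app])]

    store = AppStore()
    store.record_download("alice", "BatteryEater")
    store.record_download("bob", "BatteryEater")
    store.record_download("bob", "NiceGame")
    for alert in store.flag_as_problematic(
            "BatteryEater", "drains battery on some handsets"):
        print(alert)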

In some ways, this premium service would be akin to the anti-virus monitoring solutions that are already available from some security specialist companies - although the implementation mechanism would be different.

Note finally that I'm not advocating opening all functionality to all developers, without any vetting. I believe that functionality such as AllFiles, DRM, and TCB, still needs to be carefully controlled, and cannot fall under a system of "use until revoked". One argument in support of this view has already been mentioned (point 6 in the list above of issues with the mechanics of revocation).

Thursday, December 25, 2008

Why good people fail to change bad things

2008 has been a year of great change in the Symbian world. Important change initiatives that were kicked off in previous years have gathered speed.

2008 has also seen change and trauma at many other levels, throughout the mobile industry and beyond. And the need for widespread change still remains. Daily - perhaps hourly - we encounter items that lead us to wonder: Why isn't someone getting this changed? Why isn't someone taking proper care of such-and-such a personal issue, family issue, social issue, organisational issue, political issue, educational issue, environmental issue, operating system issue, ecosystem management issue, usability issue, and so on?

I've attended quite a few "change facilitation workshops" and similar over the last 24 months. One thinker who has impressed me greatly, with his analysis of the causes of failure of change initiatives - even when good people are involved in these initiatives - is Harvard Business School Professor John Kotter. Kotter describes a series of eight steps which he recommends all significant change initiatives follow:
  1. Build a sense of urgency

  2. Establish an effective guiding coalition

  3. Create a clear, appealing vision

  4. Communicate, communicate, communicate

  5. Remove obstacles (“empower”)

  6. Celebrate small wins

  7. Follow through with wave after wave of change

  8. Embed the change at the cultural level.
Lots of other writers and speakers have their own different ways of describing the processes of successful change initiatives, but I find Kotter's analysis to be the most insightful and inspiring.

The main book that covers this eight stage process is "Leading Change" - a book that must rank high in the list of the most valuable business books ever written.

Subsequently, Kotter used the mechanism of an easily-read "cartoon book", "Our Iceberg Is Melting: Changing and Succeeding Under Any Conditions", in order to provide a gentle but compelling introduction to his ideas. It's a fable about penguins. But it's a fable with real depth. (I noticed it and purchased a copy in the Inverness airport bookshop one day, and had finished reading it by the time my plane south landed at Gatwick. I was already resolved to find my copy of "Leading Change" and re-read it.)

As Kotter emphasises, the steps in the eight-stage change leadership process have mirror images, which are the eight main reasons why change initiatives stumble:
  1. Lack of a sufficient sense of urgency;

  2. Lack of an effective guiding coalition for the change (an aligned team with the ability to make things happen);

  3. Lack of a clear appealing vision of the outcome of the change (otherwise it may seem too vague, having too many unanswered questions);

  4. Lack of communication for buy-in, keeping the change in people’s mind (otherwise people will be distracted back to other issues);

  5. Lack of empowerment of the people who can implement the change (lack of skills, wrong organisational structure, wrong incentives, cumbersome bureaucracy);

  6. Lack of celebration of small early wins (failure to establish momentum);

  7. Lack of follow through (it may need wave after wave of change to stick);

  8. Lack of embedding the change at the cultural level (otherwise the next round of management changes can unravel the progress made).
A few months ago, Kotter released yet another book on the subject of change initiatives that go wrong. Like "Our Iceberg Is Melting", this is another slim book - only 128 pages, set in a large typeface - making it another very quick read. But, again, the ideas have real merit. This book is called "A sense of urgency".

As the name implies, this book focuses more fully on the first stage of change initiatives. The biggest reason why significant change initiatives fail, in Kotter's considered view, is a lack of:

a real sense of urgency - a distinctive attitude and gut-level feeling that lead people to grab opportunities and avoid hazards, to make something important happen today, and constantly shed low-priority activities to move faster and smarter, now.
Instead, most organisations (and most people) become stuck in a combination of complacency and what Kotter describes as "false urgency":

  • Complacency is frequently fuelled by past successes and time-proven strengths - that may, however, prevent organisations from being fully aware of changes in circumstances, technologies, and markets;

  • False urgency involves more activity than productivity: "It is frenetic. It is more mindless running to protect themselves or attack others, than purposive focus on critical problems and opportunities. Run-run, meet-meet, talk-talk, defend-defend, and go home exhausted."
Kotter provides a helpful list of questions to help organisations realise whether they are suffering from over-complacency and/or false urgency:

  • Are critical issues delegated to consultants or task forces with little involvement of key people?

  • Do people have trouble scheduling meetings on important initiatives ("Because, well, my agenda is so full")?

  • Is candour lacking in confronting the bureaucracy and politics that are slowing down important initiatives?

  • Do meetings on key issues end with no decisions about what must happen immediately (except the scheduling of another meeting)?

  • Are discussions very inwardly focused and not about markets, emerging technologies, competitors, and the like? ...

  • Do people run from meeting to meeting, exhausting themselves and rarely if ever focusing on the most critical hazards or opportunities? ...

  • Do people regularly blame others for any significant problems, instead of taking responsibility and changing? ...
The centrepiece of "A sense of urgency" is a set of four tactics to increase a true sense of urgency:
  1. Bring the outside in. Reconnect internal reality with external opportunities and hazards. Bring in emotionally compelling data, people, video, sights, and sounds.

  2. Behave with urgency every day. Never act content, anxious, or angry. Demonstrate your own sense of urgency always in meetings, one-on-one interactions, memos, and email, and do so as visibly as possible to as many people as possible.

  3. Find opportunity in crises. Always be alert to see if crises can be a friend, not just a dreadful enemy, in order to destroy complacency. But proceed with caution, and never be naive, since crises can be deadly.

  4. Deal with the NoNos. Remove or neutralise all the relentless urgency-killers: people who are not skeptics but who are determined to keep a group complacent or, if needed, to create destructive urgency.
The rest of the book fleshes out these tactics with examples (taken from Kotter's extensive consulting and research experience) and additional checklists. To my mind, there's a great deal to learn from here.

Footnote: Kotter's emphasis on the topic of "real urgency" may seem to fly in opposition to one of the most celebrated messages of the literature on effectiveness, namely the principle that people should focus on matters that are important rather than matters that are merely urgent. In the renowned "first things first" language of Stephen Covey, people ought to prioritise "Quadrant two" (activities which are important but not urgent) over "Quadrant three" (activities which are urgent but not important).

To my mind, both Kotter and Covey are correct. We do need to start out by figuring out which activities are the most important. And then we have to ensure that we keep giving sufficient attention to these activities. Kotter's insight is that organisations and people can address this latter task by generating a sufficient sense of urgency around these activities. In other words, we should drive certain key targets out of Quadrant two into Quadrant one. That way, we'll be more likely to succeed with our key change initiatives.

Wednesday, December 24, 2008

Symbian Signed and pirated applications

In the spirit of "divide and conquer" I'd like to try again to focus on just one out of the many sub-topics that whirl around discussions of Symbian Signed. On this occasion, the particular sub-topic is:
  • Is there merit in using (or modifying) Symbian Signed processes to reduce the prevalence of pirated Symbian applications?
I stated the underlying requirement as follows in "Symbian Signed basics":
c. Reducing the prevalence of cracked software

To make it less likely that users will install “cracked” free versions of commercial applications written by third parties, thereby depriving these third parties of income.
The idea is simple enough:
  • A developer D0 creates an application A0, has it signed, and sells it for a fee
  • To avoid users making and distributing copies of that application, without paying additional fees to the developer, the developer includes an element of copy protection in the application
  • This restricts the application to running on a device identified by (say) an IMSI or an IMEI (see the sketch after this list)
  • Some users will be developers in their own right, who possess the programming skills to alter the application to bypass the copy-protection code, creating a cracked version A1
  • In principle, A1 can be copied and will run on a larger number of devices, thereby depriving the developer of additional income
  • However, because A1 is a tampered version of A0, the original signature is no longer valid, so A1 will fail to install.
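
To make the copy-protection step concrete, here's a minimal sketch (in Python, with every name and value hypothetical - this is not any particular developer's scheme) of a licence tied to the device's IMEI:

    import hashlib

    SECRET = b"value known only to developer D0"  # shipped inside A0

    def issue_licence(imei):
        # Generated by D0 at purchase time, for one specific device.
        return hashlib.sha256(SECRET + imei.encode()).hexdigest()

    def licence_is_valid(device_imei, licence):
        # Checked by A0 at start-up; fails on any other device.
        return issue_licence(device_imei) == licence

    lic = issue_licence("356938035643809")               # sold to one handset
    assert licence_is_valid("356938035643809", lic)      # runs there...
    assert not licence_is_valid("490154203237518", lic)  # ...nowhere else

The cracker's edit is typically just to bypass the call to licence_is_valid - and it is precisely that edit which breaks the original signature.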

On the other hand, any developer D1 can access the Symbian Signed mechanism to put a different signature onto the application A1, thereby completing the circumvention of the copy-protection mechanism. The lower the expense of obtaining a signature, and the easier that process becomes (for example, by removing an independent testing phase), the more likely it is that cracked but installable applications (like A1) will circulate.

This is where the requirement to "make it easier for developers to carry out widespread beta testing" comes into tension with the requirement to "reduce the prevalence of cracked software".

OK, having laid out the context, it's time for me to state my own opinion on the matter.

I suspect that piggy-backing on Symbian Signed is probably not the best route for a developer D0 to avoid pirate versions of their application A0 circulating. That's for the following reasons:

  1. It seems inevitable that the Symbian Signed mechanism will continue to become cheaper and easier to operate - in order to address the huge demand to "make it easier for developers to carry out widespread beta testing"
  2. The only kinds of apps which will be difficult for cracker developers D1 to re-sign are those which make use of some high-powered capabilities (like AllFiles or DRM or TCB), which in turn only apply to a small proportion of applications like A0.

So developers D0 ought instead to seek to use other copy-protection mechanisms - such as those involving DRM.

At the same time, the pressure for users to seek free copies of applications will diminish, provided the prices charged for these applications seem reasonable to large numbers of users. In turn, one thing that will allow these prices to remain low is if the population of users buying the applications is large, and if there is an efficient marketplace mechanism (akin to the iPhone AppStore) for users to discover and purchase applications.

(Aside: One more avenue to explore is if mechanisms could be put in place for developers to earn a proportion of ongoing network data or advertising revenues from the use of their application.)

To summarise: I'd like to take the question of "Reducing the prevalence of cracked software" off the Symbian Signed discussion table. (But I remain open to being persuaded otherwise.) That table is already cluttered enough, and the more we can remove from it, the easier it will be to reach a satisfactory consensus view.

Footnote: This posting is #3 out of N I expect to be making about Symbian Signed, where N could become as large as 10.

Sunday, December 21, 2008

Operators and the iPhone

John Strand, independent-minded CEO of Strand Consult, has reached some provocative iconoclastic conclusions about the iPhone.

An edition of "Strand Report" earlier this month was entitled "iPhone: an operator's worst friend". In short, although end-users frequently enjoy using an iPhone, the operators who spend money supporting iPhones on their networks enjoy the experience considerably less.

Since Strand Consult have spent 14 years building up an extensive network of connections among operators worldwide, it's worth taking the time to listen to their opinion on this matter.

Here are a few extracts from the Strand analysis:

Having iPhone customers using large data volumes sounds good, but when data is being sold at a flat rate, a high data consumption results in high production costs without the corresponding increased revenue. You could compare the operators’ attitude towards the iPhone's data consumption with a restaurant owner that has a "all you can eat for 10 Euro” buffet and that is proudest of the customers that eat the most!...

When you examine the iPhone data consumption, you will see that iPhone customers use their browser to view ordinary websites and that they often choose not to view the websites in XHTML - optimised for low bandwidth and mobile phone sized screens. In practice this results in that when an iPhone user browses a typical news site, an ordinary web page will be around 1 MB, while the mobile version of the same page will often be less than 100 Kb. It is significantly cheaper for an operator to produce 100 Kb data than it is to produce 1 MB data and it is much more fun to deliver 100 KB rather than 1 MB when you are selling data at a flat rate...

There are already a number of operators that have issued profit warnings related to their iPhone ventures and our research shows that there is not one single Apple partner in the world among the mobile operators that has increased their overall profit and market share due to the iPhone...

Across the world there is a huge market for unlocked iPhone's. People purchase a phone that has been marketed, sold and subsidised by an operator who thereafter does not receive the data traffic and revenue from that handset. These phones are most often used on other non-Apple partner networks, resulting in the Apple iPhone partner operator ending up with a high SAC, while another non-Apple partner only needs to sell a SIM-only product with a low SAC and attractive voice and data prices...

We know of a great many operators and MVNOs that have done good business on NOT being an Apple and iPhone partner. These operators let other operators subsidise handsets and instead sell SIM cards with inexpensive data traffic at competitive prices. Their low SAC gives them a positive cash flow on the customer far earlier than the Apple partner operators that are subsidising, marketing and selling iPhones...

The conclusion is simple. This is not good business for shareholders of operators that are Apple and iPhone partners - on the contrary it is far better business not to be an Apple and iPhone partner. Operators that choose not to carry iPhone products have an increased probability of serving their shareholders interests over those that move their management’s focus, subsidies, marketing and distribution power on a product that is as beautiful as Paris Hilton, but increases production costs...
Strand Consult return to these themes in their year-end article containing predictions for 2009, "2009 will be the Moment of Truth for many players in the telecoms sector":

Our analyses during 2008 have shown that there is not one operator that has increased their turnover, revenue or improved their market share due to the iPhone. In our latest iPhone analysis LINK we document that a number of operators have issued profit warnings based on the iPhone. We have documented that the closer partnership you have with Apple, the worst business case the iPhone becomes from an operator’s point of view.
I've spent a bit of time searching for substantive rebuttals to this analysis:
  • Some people have said that operators have indeed generated additional revenues from the iPhone - but that's not the same thing as additional profits;
  • Some have commented that the iPhone gives great pleasure to end-users, but that misses the point of the analysis;
  • It's also true that many third party developers have benefited from selling their applications on the iPhone, but, again, that misses the point of the analysis.
I see three possible interpretations:
  1. There are network operators who generate significant additional profits from their support of the iPhone, but they're keeping relatively quiet about this;
  2. The iPhone is indeed better news for developers and end-users than it is for the operators who support it;
  3. We're still in a transitional phase.

I think the third interpretation is the most likely. The mobile industry is in a time of very considerable flux. The iPhone has played an important role of opening people's eyes to the possibilities of smarter mobile devices, but that doesn't mean that operators will continue to be keen to actively support the iPhone. Instead, what I hear is that they're looking for phone platforms that are both complete and highly customisable.

Wednesday, December 17, 2008

Order from open source chaos

Various videos and PDFs from the recent Symbian Partner Event are now available online.

One video that amply repays viewing is Jay Sullivan of Mozilla speaking on "Chaos and order: a Mozilla story". You'll find it on the presentations page of the SPE website.

Mozilla's declared mission - "promote choice and innovation on the Internet" - has a lot in common with what Symbian is trying to do. One size does not fit all. Mozilla's declared methods - involving open source, weak copyleft, and an independent foundation - also resonate with those of the Symbian Foundation. Even the sizes of the organisations are broadly comparable (Jay mentioned that Mozilla has around 175 employees).

Mozilla has been travelling along this particular road a lot longer than Symbian. This helps to explain why many Symbian people in the audience were hanging intently on every word in the presentation.

The questions that the presentation sought to answer included:
  • How can your organisation harness openness (where more and more things happen in public), rather than fight it?
  • How do you get your customers to support each other (peer-to-peer support), rather than always going to the centre for support?
  • How can a comparatively small company take advantage of wide public support to compete with huge existing players?
  • How can 75 developers inside the company leverage 100s of external daily contributors, 1000s of less frequent contributors, 10s of 1000s of overnight testers, and around one million beta testers?

In part, the answer to these questions is to use appropriate tools. For example, Mozilla relies heavily on the Bugzilla bug-tracking database.

In part, the answer comes down to attitude. Mozilla have adopted widespread openness of information sharing: they use wikis and newsgroups, which are almost all publicly accessible. (The exception is a small amount of personnel information.) Another example: Everyone in the world is able to dial into the company weekly status update meeting. (Jay commented: "We know our competition dials in".)

What I personally found most interesting was Jay's analysis of the potential chaos that ensues from this openness. For example, there can be a great deal of "noise" in the online comments from all sorts of people: it's hard to filter postings that are based on reality from those based on speculation or fantasy. There's a constant trail of chat, with input from all over the world. Everyone can propose changes to the project. In such an environment, how can real work get done? How can you mediate among 50,000 people who all have ideas to improve a particular dialog box in the UI of an application? How do you deal with strongly vocal minorities?

The answers were fascinating (and deeply practical):

  • Open doesn't mean democracy
  • Decision-making is messy (but that doesn't mean you should step back from openness)
  • Be prepared to tolerate some messiness
  • Treat disagreements as negotiations
  • Managers of the project need to drive towards definite outcomes - focusing on what is the right outcome rather than who has the right ideas
  • Organise a chorus (rather than a chaos) around local leaders
  • Although anyone can propose changes, you need to earn significant amounts of credibility before you are allowed to implement a change
  • Ensure quality through multiple reviews
  • Review for performance regressions as well as for functionality
  • Educate participants about the vision and the mission of the project, which in turn allows greater micro-level decisions
  • Guide participants towards using the appropriate communication channels for particular topics, and to back up their assertions with research and data
  • Create small focused teams with responsibility for specific areas of product interest
  • Create a common language, to allow discussions to be more productive
  • You still need to have clearly identified decision makers, even though you push as much of the discussion out "to the edge" as possible.

These are good thoughts to keep in mind in the midst of the inevitable turmoil as the Symbian Foundation places 40 million lines of code into open source (and makes corresponding changes in processes) over the next 18 months.

Tuesday, December 16, 2008

Symbian Signed and control

My posting yesterday on "Symbian Signed basics" has attracted more comments (containing lots of thoughtful ideas as well as evident passion) than I can quickly answer.

For now, I'd like to respond to Ian, who raised the following point:

There is no need for signing to ensure safety from malware. That's what (platform) security is for.

Requiring signing without the option of user override is about control, pure and simple.

Can you give me a good reason why people should not have control of their property and why it should be in vendor's hands instead?
The first answer is that, when users purchase a phone, they typically enter into a contract with the supplier, and agree to be bound by the terms of that contract. In cases when the phone is being subsidised or supported by a network operator, the network operator only enters into the relationship on account of a set of assumptions about what the user is going to do with the phone. The network operator can reasonably seek to limit what the user does with the handset - even though the user has paid money for the device.

That's the reason, for example, why T-Mobile stipulated (and apparently received agreement from Google) that no application providing VoIP over cellular data could be installed onto the Android G1. Otherwise, the cost and revenue assumptions of T-Mobile would be invalidated. From Daniel Roth on Wired:

T-Mobile made a big deal about being one of the few carriers embracing open standards and open systems -- which is true. Yet just how open is a (sorry) open question. When I talked to Cole Brodman, the CTO of T-Mobile, after the event about what would stop something like Skype from designing a program that could run on the phone, negating the need for a massive voice plan, he said he had "worked with Google" to make sure Android couldn't run VOIP. "We want to be open in a way that consumers can rely on," is the way Brodman put it to me.
Here's another example. Suppose you spend a lot of money, buying a phone, and two months afterwards, you notice that the battery systematically runs down after only a few hours of use. You're naturally upset with the device, so you take it back to the shop where you bought it from, asking for your money back. Or you spend hours on the phone to the support agents of the network operator trying to diagnose the problem. Either way, the profit made by the handset manufacturer or the network operator from selling you that phone has probably been more than wiped out by the cost of them attending to this usability issue.

But suppose it turns out that the cause of the battery running flat is a third party application you installed which, unknown to you, burns up processor cycles in the background. Suppose it also turns out that you have been misled as to the origin of that application: when you installed it, you thought it said "This application has been supplied by your bank, Barclays", but you didn't notice that the certificate from the supplier said (eg) "Barclys" instead of "Barclays". You thought you could trust the website where you found this application, or the people who (apparently) emailed it to you, but it turns out you were wrong. However - and this is the point - you've even forgotten that you installed this app.

The second answer is that, even when we own items, we have social obligations as to what we do with them. We shouldn't play music too loudly in public places. We shouldn't leave garbage in public places. We shouldn't broadcast radio interference over networks. We shouldn't hog more than our fair share of pooled public resources. And, we shouldn't negatively impact the wireless networks (and the associated support infrastructure) on which our mobile phones live.

Both these answers are reasons in principle why users have to accept some limits on what they do with the mobile phones they have purchased.

The more interesting questions, however, are as follows:
  1. To what extent do actual application signing programs meet these requirements - and to what extent do these programs instead support other, less praiseworthy goals?

  2. Could variants of existing signing programs meet these requirements in better ways?
For example, consumers are already familiar with the idea that, when they disassemble the hardware of a device they have purchased, they typically invalidate the manufacturer warranty. (On my Psion Series 5mx, there's still a sticker in place, over a screw, that says "Warranty void if removed".) Would it be possible to educate handset users in a similar way that:
  • Their handsets start out in a situation of having a manufacturer warranty

  • However, if they install an unsigned application (or something similar), they are henceforth on their own, as regards support?
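
Device-side, the bookkeeping for such a scheme could be as simple as a one-way flag - here's a toy sketch (in reality the flag would need to live in tamper-resistant storage):

    class Handset:
        def __init__(self):
            self.warranty_support = True  # as shipped

        def install(self, app, signed):
            if not signed:
                # One-way transition: the software equivalent of
                # breaking the "Warranty void if removed" sticker.
                self.warranty_support = False

    phone = Handset()
    phone.install("BankingApp", signed=True)
    assert phone.warranty_support
    phone.install("HomebrewGame", signed=False)
    assert not phone.warranty_support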

Monday, December 15, 2008

Accelerating out of molasses

Michael Mace has posted a characteristically thoughtful article on his Mobile Opportunity blog:
Every time I think about Nokia and Symbian, I can't help picturing a man knee-deep in molasses, running as fast as he can. He's working up a sweat, thrashing and stumbling forward, and proudly points out that for someone knee-deep in molasses he's making really good time...
The posting is entitled "Nokia: Running in molasses". It arose from Mike reflecting on some of what he heard at the recent Symbian Partner Event (SPE) in San Francisco. The posting is well worth reading. I appreciate the issues that Mike raises. These issues are significant. But as you might expect, I have a somewhat different perspective on some of them.

Large software doesn't mean that software development has to go slow
Charles Davies, Symbian CTO, pointed out to us that Symbian OS has about 450,000 source files. That's right, half a million files. They're organized into 85 "packages"...
There are economies of scale as well as dis-economies of scale. The point of the careful division of the Symbian Platform software into packages is to enable each of the resulting packages to have greater autonomy - and, therefore, to progress more quickly.

There's one subtle point here. Many of the packages include teams from both Symbian and S60. This applies to cases where the separation of functionality between the two formerly distinct companies resulted in sub-optimal development. Now that Nokia's acquisition of Symbian has completed, these boundaries can be intelligently re-designed.

Disruption, size, and organisational design

This brings me to a comment on the ideas of Clayton Christensen. Here's another extract from Mike Mace's article:
If the folks at Nokia really think they are well positioned to crush Apple, they need to go re-read The Innovator's Dilemma. Being big is not a benefit in a rapidly-changing market with emerging segments.
Agreed, being big is no guarantee of being able to respond well to changing market conditions. That's why I'm personally a big fan of Agile. Agile can help established companies (whether large or small) to launch and embrace disruptions. As Scott Anthony, one of Christensen's co-authors, has recently commented in his article "Can Established Companies Disrupt?":
The data suggests that it is increasingly common for an established company to launch disruptive innovations. More and more incumbents are learning how to embrace disruptive principles such as:
  • Put the customer, and their important, unsatisfied job-to-be-done at the center of the innovation equation
  • Embrace the power of simplicity, convenience, and affordability
  • Create organizational space for disruptive growth businesses
  • Consider innovation levers beyond features and functions
  • Become world class at testing, iterating and adjusting
As I said, being big can have its advantages as well as its disadvantages, so long as individual parts of the company have sufficient autonomy. The hard part is knowing when to seek closer ties, and when to seek looser ties. One of Christensen's later books had some very interesting advice on that score. I can't remember for sure whether that book was "The Innovator's Solution" or "Seeing What's Next". The advice was that where performance remains a critical differentiator, you should look for a tight coupling. Where performance is already "good enough", you should seek a loose coupling - with open APIs and a choice of alternative solutions.

As soon as I read these words, some time around 2003-2004, I had a gut reaction that, one day, the relevant teams in Symbian software engineering and S60 software engineering ought to be combined. It took a long time for that insight to be fulfilled. But now that it's happening, there's plenty of good reason to expect the resulting combined company to start accelerating its development.

Development in parallel with change

Back to Mike Mace, commenting on the SPE presentation by Charles Davies:
Davies talked about the substantial challenges involved in open sourcing a code base that large. He said it will take up to another two years before all of the code is released under the Eclipse license. In the meantime, a majority of the code on launch day of the foundation will be in a more restrictive license that requires registration and a payment of $1,500 for access. There's also a small amount of third party copyrighted code within Symbian, and the foundation is trying to either get the rights to that code, or figure a way to make it available in binary format.

Those are all typical problems when a project is moving to open source, and the upshot of them is that Symbian won't be able to get the full benefits of its move to open source until quite a while after the foundation is launched. What slows the process down is the amount of code that Symbian and Nokia have to move. I believe that Symbian OS is probably the largest software project ever taken from closed to open source. If you've ever dealt with moving code to open source, you'll know how staggeringly complex the legal reviews are. What Nokia and Symbian are doing is heroic, scary, and incredibly tedious. It's like, well, running in molasses.
I have four comments on this:
  1. Even though the full transition to open source may take up to two years from the initial announcement of the foundation (that is, until mid 2010), there are plenty of other things happening in the meantime - with a series of interim releases that progressively convert more of the software from the community-source Symbian Foundation Licence to the open-source Eclipse Public Licence;
  2. There will be new technologies and new UI features in these interim releases;
  3. The interim releases should already achieve at least some of the considerable benefits of both open source and community source; the first packages which will become available under the EPL are being chosen so that independent developers can do useful things with some of them (including contributing back working code enhancements);
  4. The legal reviews may initially seem daunting, but with the help of modern code-scanning tools and with the advantage of "practice makes perfect", the process is likely to speed up considerably along the way.
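
As a flavour of what such code-scanning tools do, here's a drastically simplified sketch (hypothetical throughout; real due-diligence tools do vastly more) that flags source files whose headers carry an unexpected copyright notice:

    import os

    OWN_NOTICE = "Copyright (c) Symbian"  # hypothetical expected header

    def needs_legal_review(path):
        # Flag any file whose first 20 lines mention a copyright
        # other than the expected one.
        with open(path, errors="ignore") as f:
            head = [next(f, "") for _ in range(20)]
        return any("Copyright" in line and OWN_NOTICE not in line
                   for line in head)

    def scan_tree(root):
        flagged = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                if name.endswith((".cpp", ".h")):
                    path = os.path.join(dirpath, name)
                    if needs_legal_review(path):
                        flagged.append(path)
        return flagged

    # Print the files a human reviewer should look at first.
    for path in scan_tree("./src"):
        print(path)
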
Cool stuff in the lab

Mike ends the main part of his article as follows:
Nokia still has a lot of time to get it right. But do they really understand what needs to change? I can't tell, because all I usually get from them is monologues on how big their business is and how much cool stuff they have in the lab.
I accept that analysts must inevitably hedge their bets, regarding the extent of future success of the main mobile operating systems, until a period of proving over the next 12-24 months has shown what these operating systems can actually accomplish. I eagerly look forward to the day when more of the Symbian and Nokia roadmap of stunning new technology, new services, and new user experience attains greater visibility. When that happens, analysts are likely to come down off the hedge.

My own expectation is that the moves to integrate Symbian and Nokia, and to create the Symbian Foundation, will see a substantial speed up of innovation over that time period. But I'm not taking this for granted. After all, I'm well aware of the original subtitle of "The Innovator's Dilemma": "When new technologies cause great firms to fail".

Symbian Signed basics

It's not just Symbian that runs into some criticism over the operation of application certification and signing programs. (See eg the discussion on "Rogue Android apps rack up hidden charges".)

This is an area where there ought ideally to be a pooling of insights and best practice across the mobile industry.

On the other hand, there are plenty of conflicting views about what's best:
  • "Make my network more secure? Yes, please!"
  • "Make it easier to develop and deploy applications? Yes, please!"
If we go back to basics, what are the underlying requirements that lead to the existence of application certification and signing schemes? I append a list of potential requirements. I'll welcome feedback on the importance of various items on this list.

Note: I realise that many requirements in this list are not addressed by the current schemes.

a. Avoiding users suffering from malware

To avoid situations where users suffer at the hands of malware. By "malware", I mean badly behaved software (whether the software is intentionally or unintentionally badly behaved).

Examples of users suffering from malware include:
  1. Unexpectedly high telephone bills
  2. Unexpectedly low battery life
  3. Inability to make or receive phone calls
  4. Leakage without approval of personal information such as contacts, agenda, or location
  5. Corruption of personal information such as contacts, agenda, or location
  6. Leaving garbage or clutter behind on the handset, when the software is uninstalled
  7. Interference with the operation of other applications, or other impacts on handset performance.
b. Establishing user confidence in applications

To give users confidence that the applications they install will add to the value of the handset rather than detract from it.

c. Reducing the prevalence of cracked software

To make it less likely that users will install “cracked” free versions of commercial applications written by third parties, thereby depriving these third parties of income.

d. Avoiding resource-intensive virus scanners

To avoid mobile phones ending up needing to run the same kind of resource-intensive virus scanners that are common (and widely unloved) on PCs.

e. Avoiding networks suffering from malware

To avoid situations where network operators suffer at the hands of malware or unrestricted add-on applications. Examples of network operators suffering from such software include:
  1. Having to allocate support personnel for users who encounter malware on their handsets
  2. The network being overwhelmed as a result of data-intensive applications
  3. Reprogrammed cellular data stacks behaving in ways that threaten the integrity of the wireless network and thereby invalidate the FCC (or similar) approval of the handset
  4. DRM copy protected material, provided or distributed by the network operator, being accessed or copied by third party software in ways that violate the terms of the DRM licence
  5. Revenue opportunities for network operators being lost due to alternative lower-cost third party applications being available.
f. Keeping networks open

To prevent network operators from imposing a blanket rule against all third party applications, which would in turn:
  • Limit the innovation opportunities for third party developers
  • Limit the appearance of genuinely useful third party applications.
g. Avoiding fragmentation of signing schemes

To prevent network operators from each implementing their own application certification and approval schemes, thereby significantly multiplying the effort required by third party developers to make their applications widely available; far better, therefore, for the Symbian world to agree on a single certification and approval mechanism, namely Symbian Signed.

Sunday, December 14, 2008

The starfish and the spider

In my quest to understand the full potential of open and collaborative methods of working, I recently found myself re-reading "The Starfish and the Spider: The unstoppable power of leaderless organisations" by Ori Brafman and Rod Beckstrom.


I found this book to be utterly engrossing. I expect that its metaphor of the starfish vs. the spider will increasingly enter common parlance - the same way as "Tipping Point" did. In short:

  • A starfish has a fully de-centralised nervous system, and can survive and prosper when it undergoes an apparent "head-on" attack;
  • A spider has a CEO and a corporate headquarters, without which it cannot function.

The examples in the book show why there's a great deal at stake behind this contrast: issues of commercial revenues, the rise and fall of businesses, the operation of the Internet, and the rise and fall of change movements within society - where the change movements include such humdingers as Slave Emancipation, Female Equality, Animal Liberation, and Al Qaeda.

There are many stories running through the book, chosen both from history and from contemporary events. The stories are frequently picked up again from chapter to chapter, with key additional insights being drawn out. I found some of the stories to be familiar, but others were not. In all cases, the starfish/spider framework cast new light.

The book contains many implications for the question of how best to inspire and guide an open source ecosystem. Each chapter brought an important additional point to the analysis. For example:

  • Factors allowing de-centralised organisations to flourish;
  • The importance of self-organising "circles";
  • The significance of so-called "catalyst" personalities;
  • How successful de-centralised organisations often piggy-back on pre-existing networks;
  • How centralised organisations can go about combatting de-centralised opponents;
  • Issues about combining aspects of both approaches.

Regarding hybrid approaches: the book argues that smart de-centralisation moves by both GE and Toyota are responsible for significant commercial successes in these companies. EBay is another example of a hybrid. Managing an open source community surely also falls into this category.

The book spoke personally to me on another level. As it explains, starfish organisations depend upon so-called "catalyst" figures, who may lack formal authority, and who are prepared to move into the background without clinging to power:

  • Catalysts enable major reactions to take place, that would otherwise remain dormant;
  • They trigger the deployments of huge resources from the environment;
  • They make things happen, not by direct power, but by force of influence and inspiration.

There's a big difference between catalysts and CEOs. Think "Mary Poppins" rather than "Maria from Sound of Music". That gave me a handy new way of thinking about my own role in organisations. (I'm like Mary Poppins, rather than Maria! I tend to move on from the departments that I build up, rather than remaining in place.)

Saturday, December 6, 2008

Discovering the adaptive unconscious

Like most people, I sometimes behave in ways that surprise and disappoint either myself or other people who are observing me. I'm occasionally dimly aware of strong under-currents of passion that seem to have a life of their own. Of course I wonder to myself, what's going on?

The ancient Greek Delphic injunction is "know thyself". Modern writers use the phrase "emotional intelligence" to cover some of the same ground. As these modern writers point out, people who are manifestly unaware of their own emotions are unlikely to be promoted to positions of major responsibility within modern corporations or organisations.

Timothy Wilson's fascinating 2002 book "Strangers to ourselves - discovering the adaptive unconscious" takes a slightly different tack. Reading this book recently, I quickly warmed to its theme that - as implied in its title - our attempts to perceive and understand our own motivations can be a lot more difficult or counter-productive than we expect.


Through many examples, the book makes a convincing case that, in addition to our conscious mind, we have a powerful, thoughtful, intelligent, feelingful "adaptive unconscious" that frequently operates outside the knowledge of the conscious mind. It can be just as inaccessible to introspection by the conscious mind as is the operation of our digestive system. Because it is inaccessible, we can often be misled about why we do things (subsequently "fabricating" reasons to explain our behaviour, without realising that we are deceiving ourselves in the process). We can also be seriously misled about what we're feeling, and about what will make us happy.

This adaptive unconscious can often be at odds with our conscious mind:

  • Experiments described in the book show how people, who in their conscious mind are sincerely unprejudiced against (eg) people of other races, can harbour latent prejudices that result in significant discrimination against certain job applicants.
  • These unnoticed prejudices can even have fatal effects - if, for example, policemen have to react super-quickly to a potentially life-threatening situation, and mistakenly infer that (say) a black person is reaching for a gun in his pocket.
Of course, psychologists such as Freud have written widely on this general topic already. But the great merit of this book is that it provides a very balanced and thoughtful review of experimentation and analysis that has taken place throughout the 20th century into the unconscious mind. It puts Freud's ideas into a fuller context. For example, it shows the limitations of the idea that it is "repression" that keeps the activities of the unconscious mind hidden from conscious reflection. Repression is indeed one factor, but it's by no means the only one.

This book contains lots of thought-provoking examples about people's attempts to understand the well-springs of what motivates them. Here's one, from near the end of the book:

"When Sarah met Peter at a party, she did not think she liked him very much; in many ways he was not her type. However, afterwards, she found herself thinking about him a lot, and when Peter telephoned and asked her out for a date, she said yes. Now that she has agreed to the date, she discovers that she likes him more than she knew. This looks like an example of self-perception as self-revelation, because Sarah uses her behaviour to bring to light a prior feeling of which she was unaware, until she agreed to go our with Peter...

"But another possibility is that Sarah really did not like Peter at all when she first met him. She felt obligated to go out with him because he is the son of her mother's best friend, and her mother thought they would be a good match. Sarah does not fully realise this is the reason she said yes, and she mistakenly thinks. 'Hm, I guess I like Peter more than I thought I did, if I agreed to go out with him.' This would be an example of self-fabrication: Sarah misses the real reason for her behaviour...

"The difference between self-revelation and self-fabrication is crucial from the point of view of gaining self-knowledge. Inferring our internal states from our behaviour can be a good strategy if it reveals feelings of which we were previously unaware. It is not such a good strategy if it results in the fabrication of new feelings."

Another issue with gaining greater self-knowledge is that it can damage our self-confidence. The author argues that it can sometimes be beneficial to us to have a slightly inflated view about our talents. That way, we gain the energy to go about difficult tasks. (However, if the discrepancy between our own view and the reality is too great, that's another matter.)

The book concludes by urging that we follow another piece of advice from ancient times. The author quotes Aristotle approvingly: "We acquire [virtues] by first having put them into action... we become just by the practice of just actions, self-controlling by exercising self-control, and courageous by performing acts of courage". In short, "do good, to be good".

He goes on to say, "If we are dissatisfied with some aspect of our lives, one of the best approaches is to act more like the person we want to be, rather than sitting around analyzing ourselves."

The book has struck a real chord with me, but it leaves many questions in my mind. Next on my reading list in this same general field is "The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom" by Jonathan Haidt.

Friday, December 5, 2008

All Carbide C++ editions are now free of charge

One of the persistent "niggle points" with Symbian OS C++ development has been that developers had to pay significant amounts of money for the editions of the Carbide integrated development environment (IDE) that provided highly desired functionality such as on-target debugging.

So there's great news today: Carbide v2 now has ZERO licence fee for all editions:

Carbide.c++ 2.0 is now available with support for the latest technologies based on Symbian OS, such as S60 5th Edition and the Qt platform, and it offers significant improvements throughout.

In addition to the technical improvements, Carbide.c++ 2.0 is now available free of charge.

This has already been picked up by various bloggers, including Lucian Tomuta - Carbide.c++ - new and free (yes, like in "free beer") - and Simon Judge - Carbide.c++ 2.0 Free of Charge.

The cost reduction isn't the only piece of good news about this new version. As the Carbide product pages emphasise:
Improvements throughout Carbide.c++ have been designed to make developing Symbian OS C/C++ applications quicker and easier. These improvements include speed and accuracy in code completion, faster response in the Performance Investigator reporting tools, and new connection management for on-device debugging.
This news deserves to run and run.

Wednesday, December 3, 2008

Accelerating the transformation

As noted by Tom Krazit of CNET News, there was lots more news from yesterday's Nokia World event than merely the buzz about the newly announced, highly attractive N97. It was also announced that Nokia's acquisition of Symbian has been completed.

One practical impact of the completion of this deal is that preparation can now accelerate - for the forthcoming Symbian Foundation, and for the deep integration of the Symbian and S60 software engineering teams.

As Tom Krazit notes:
After entertaining the world press in Barcelona during the early part of this week, Symbian and Nokia executives will be in San Francisco later this week to discuss their plans for mobile computing and open source, and we'll have reports from the Symbian Partner Event on Thursday.
Personally, I'm about to board my flight to San Francisco for this event. I'm particularly looking forward to open and insightful discussion there - including the panel discussion on "Succeeding in the US: the key factors", where I'll be asking for comments and questions from the audience.

Just as I expect very significant amounts of wireless innovation to come from North America in the near future, I expect equally significant amounts - perhaps even more, further into the future - to come from China. Later this month I'll be speaking at an event in Beijing, about "Symbian Platform Development". I'm looking forward to learning a lot - since I plan on listening as well as speaking :-)

In case anyone would like to try to meet up while I'm in San Francisco or in Beijing, please get in touch.

Footnote: There's still time to register for the Partner Event.

Friday, November 28, 2008

Why can't we all just get along?

Blogger Tomaž Štolfa asks me, in a comment to one of my previous posts,
I am also wondering why you are not trying to explore a non-os specific scenario?

Developers and service designers do not want to be bound to a single platform when developing a service for the masses. So it would make much more sense to see a bright future with cross-platform standards set by an independent party (W3C?).

If the industry will not agree on standards quickly enough Adobe (or some other company) will provide their own.
It's a good question. I'm actually a huge fan of multi-platform standards. Here are just a few of many examples:
  • Symbian included an implementation of Java way back in v4 of Symbian OS (except that the OS was called "EPOC Release 4" at the time);
  • Symbian was a founder member of the Open Mobile Alliance - and I personally served twice on the OMA Board of Directors;
  • I have high hopes for initiatives such as OMTP's BONDI, which seeks to extend the usefulness of web methods on mobile devices.

Another example of a programming method that can be applied on several different mobile operating systems is Microsoft's .NET Compact Framework. Take a look at this recent Microsoft TechEd video in which Andy Wigley of Appa Mundi interviews Mike Welham, CTO of Red Five Labs, about the Red Five Labs Net60 solution that allows Compact Framework applications to run, not only on Windows Mobile, but also on S60 devices.

There's no doubt in my mind that, over time, some of these intermediate platforms will become more and more powerful - and more and more useful. The industry will see increasing benefits from agreeing and championing fit-for-purpose standards for application environments.

But there's a catch. The catch applies, not to the domain of add-on aftermarket solutions, but to the domain of device creation.

Lots of the software involved in device creation cannot be written for these intermediate platforms. Instead, native programming is required - and that involves exposure to the underlying operating system. That's when the inconsistencies at the level of native operating systems become more significant (the sketch after these examples shows the shape of the problem):

  • Differences between clearly different operating systems (eg Linux vs. Windows Mobile vs. Symbian OS);
  • Differences between different headline versions of the same operating system (eg Symbian OS v8 vs. Symbian OS v9);
  • Differences between different flavours of the same operating system, evolved by different customers (eg Symbian OS v7.0 vs. Symbian OS v7.0s);
  • Differences between different customisations of the same operating system, etc, etc.

(Note: I've used Symbian OS for most of these examples, but it's no secret that the Mobile Linux world has considerably more internal fragmentation than Symbian. The integration delays in that world are at least as bad.)
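
To make that fragmentation concrete, here's a minimal C++ sketch of the kind of conditional code that accumulates in native device-creation software. Everything in it is invented for illustration - the platform macros, the API names, and the stub implementations are hypothetical stand-ins, not real Symbian or Linux interfaces - but the shape is authentic: one branch per operating system, per version, per flavour.

    // Hypothetical illustration only: the macros and "platform APIs" below are
    // invented stand-ins. The point is the branch-per-flavour structure that
    // native device-creation code accumulates.
    #include <cstdint>
    #include <cstdio>

    // Stub platform APIs, defined here so the sketch compiles stand-alone.
    namespace LegacyPowerApi   // stands in for an older OS interface
    {
        inline void SetIdleTimeout(std::uint32_t aMicroseconds)
        {
            std::printf("legacy path: %u us\n", static_cast<unsigned>(aMicroseconds));
        }
    }

    namespace SecurePowerApi   // stands in for a newer, security-checked interface
    {
        inline void SetIdleTimeout(std::uint32_t aMilliseconds)
        {
            std::printf("newer path: %u ms\n", static_cast<unsigned>(aMilliseconds));
        }
    }

    #define PLATFORM_OS_V9 1   // pretend this build targets the newer OS version

    void ConfigurePowerSaving(std::uint32_t aIdleTimeoutMs)
    {
    #if defined(PLATFORM_OS_V8)
        // One headline version takes its timeout in microseconds...
        LegacyPowerApi::SetIdleTimeout(aIdleTimeoutMs * 1000);
    #elif defined(PLATFORM_OS_V9)
        // ...the next takes milliseconds, behind a security check.
        SecurePowerApi::SetIdleTimeout(aIdleTimeoutMs);
    #else
    #error "Another OS flavour or customisation means yet another branch"
    #endif
    }

    int main()
    {
        ConfigurePowerSaving(5000);   // each new flavour multiplies the testing load
        return 0;
    }

Multiply that pattern across the thousands of integration points in a phone's software stack, and the schedule slips described below are easy to understand.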

From my own experience, I've seen many device creation projects significantly delayed as a result of software developers encountering nasty, subtle differences between the native operating systems on different devices. Product quality suffered as a result of these schedule slips. The first loser was the customer, on encountering defects or a poor user experience. The second loser was the phone manufacturer.

This is a vexed problem that cannot be solved simply by developing better multi-OS standard programming environments. Instead, I see the following as needed:

  1. Improved software development tools that alert systems integrators more quickly to the likely causes of unexpected instability or poor performance on phones (including those problems which have their roots in unexpected differences in system behaviour); along these lines, Symbian has recently seen improvements in our own projects from use of the visual tools included in the Symbian Analysis Workbench;
  2. A restructuring of the code that runs on the device, in order to allow more of that code to be written in standard managed code environments - Symbian's new FreeWay IP networking architecture is one step in this direction;
  3. Where possible, APIs used by aspects of the different native operating systems should become more and more similar - for example, I like to imagine that, one day, the same device driver will be able to run on more than one native operating system;
  4. And, to be frank, we need fewer native operating systems; this is a problem that will be solved over the next couple of years, as the industry gains more confidence in a small number of the many existing mobile operating systems.

The question of technical fragmentation is, of course, only one cause of needless extra effort within the mobile industry. Another big cause is that different players in the value chain are constantly tempted to grab elements of value from adjacent players. Hence, for example, the constant tension between network operators and phone manufacturers.

Some elements of this tension are healthy. But, just as for the question of technical fragmentation, my judgement is that the balance is considerably too far over to the "compete" side of the spectrum rather than the "cooperate" side.

That's the topic I was discussing a few months back with Adam Shaw, one of the conference producers from Informa, who was seeking ideas for panels for the "MAPOS '08" event that will be taking place 9-10 December in London. Out of this conversation, Adam came up with the provocative panel title, "Can’t We All Just Get Along? Cooperation between operators and suppliers". Here's hoping for a constructive dialog!

Sunday, November 23, 2008

Problems with panels

As an audience member, I've been at the receiving end of some less-than-stellar panel discussions at conferences in the last few months. On these occasions, even though there's good reason to think that the individuals on the panels are often very interesting in their own right, somehow the "talking heads" format of a panel can result in low energy and low interest. The panellists make dull statements in response to generic questions and ... interest seeps away.

On the other hand, I've also recently seen some outstandingly good panels, where the assembled participants bring real collective insight, and the audience pulse keeps beating. Here are two examples:

The format of this fine RSA panel was in the back of my mind as I prepared, last Monday, to take part in a panel myself: "What's so smart about Smartphone Operating Systems", at the Future of Mobile event in London. I shared the stage with some illustrious industry colleagues: Olivier Bartholot of Purple Labs, Andy Bush of the LiMo Foundation, Rich Miner of Android, James McCarthy of Microsoft, and the panel chair, Simon Rockman of Sony Ericsson. I had high hopes of the panel generating and conveying some useful new insights for the audience.

Alas, for at least some members of the audience, this panel fell into the "less-than-stellar" category mentioned above, rather than the outstanding one:

  • Tomaž Štolfa, writing in his blog "Funky Karaoke", rated this panel as just 1 out of 5, with the damning comment "a bunch of mobile OS guys, talking about the wrong problems. Where are cross platform standards?!?"; Tomaž gave every other panel or speaker a rating of at least 3 out of 5;
  • Adam Cohen-Rose, in his blog "Expanding horizons", summed up the panel as follows: "This was a rather boring panel discussion: despite Simon’s best attempts to make the panellists squirm, they stayed very tame and non-committal. The best bits was the thinly veiled spatting between Microsoft and Google — but again, this was nothing new…";
  • The Twitter back-channel for the event ("#FOM") had remarks disparaging this panel as "suits" and "monologue" and "big boys".

It's true that I can find other links or tweets that were more complimentary about this panel - but none of these comments pick this panel out as being one of the highlights of the day.

As someone who takes communication very seriously, I have to ask myself, "what went wrong?" - and, even more pertinently, "what should I do differently, for future panels?".

I toyed for a while with the idea that over-usage of Twitter diminishes the ability of some audience members to concentrate sufficiently and to pick out what's genuinely interesting in what's being said. This is akin to Nicholas Carr's argument that "Google is making us stupid":

"Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle..."

After all, I do think that I said something interesting when it was my turn to speak - see the script I prepared in advance. But after more reflection, I gave up on the idea of excusing the panel's poor rating by that kind of self-serving argument (which blames the audience rather than the panellists). That was after I remembered my own experience of being on the receiving end of lots of uninspiring panels - as I mentioned earlier. Further, I remembered that, when those panels started to become boring, my own attention would wander ... so I would miss anything more interesting that was said later on.

So on reflection, here are my conclusions, for avoiding similar problems with future panels:

  1. Pre-prepared remarks are fine. There's nothing wrong in itself with having prepared remarks that take several minutes to deliver. These opening comments can and should provide better context for the Q&A part of the panel that follows;
  2. However, high energy is vital; especially with an audience where people might get distracted, I ought to be sure that I speak with passion, as well as with intellectual rigour; this may be hard when we're all sitting down (that's why sofa panels are probably the worst of all), but it's not impossible;
  3. The first requirement is actually to be sure the audience is motivated to listen to the discussion - the panel participants need to ensure that the audience recognise the topic as sufficiently relevant. On reflection, our "mobile operating systems" panel would have been better placed later on in the agenda for the day, rather than right at the beginning. That would have allowed us to create bridges between problems identified in earlier sessions, and the solutions we wanted to talk about;
  4. "Less is more" can apply to interventions in panels as well as to product specs (and to blogs...); instead of trying to convey so much material in my opening remarks, I should have prioritised at most two or three soundbites, and looked to cover the others during later discussion.

These are my thoughts for when I participate as a panellist on someone else's panel. When I am a chair (as I'll be at the Symbian Partner Event next month in San Francisco) I'll have different lessons to bear in mind!

Friday, November 21, 2008

Emulating the human brain

Artificial Intelligence (AI) already does a lot to help me in my life:
  • The real-time route calculation (and re-calculation) capabilities of my TomTom satnav system are extremely handy;
  • The automated language translation functionality inside Google web-search, whilst far from perfect, often allows me to understand at least the gist of webpages written in languages other than English;
  • The intelligent recommendation engine of Amazon frequently brings books to my attention that I am glad to investigate further.
On the other hand, the field of general AI has failed to progress as quickly as some of its supporters over the years had hoped. The Wikipedia article on the History of AI lists some striking examples of significant over-optimism among leading AI researchers:
  • 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."
  • 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."
  • 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
  • 1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."
Prospects for fast progress with general AI remain controversial. As we gather more and more silicon power into smartphones and other computers, will this mean these devices become more and more intelligent? Or will they simply be fast rather than generally intelligent?

In this context, one interesting line of analysis is to consider a separate but related question: to what extent will it be possible to create a silicon emulation of the brain itself (rather than to focus on algorithms for intelligence)?

My friend Anders Sandberg, Neuroethics researcher at the Future of Humanity Institute, Oxford University, will be addressing this question in a presentation tomorrow afternoon (Saturday 22nd November) in Central London. The presentation is entitled "Emulating brains: silicon dreams or the next big thing?"

Anders describes his talk as follows:
The idea of creating a faithful copy of a human brain has been a popular philosophical thought experiment and science fiction plot for decades. How close are we to actually doing it, how could it be done, and what would the consequences be? This talk will trace trends in computing, neuroscience, lab automation and microscopy to show how whole brain emulation could become feasible in the mid term future.
The talk is organised by the UKTA. Last weekend, at the Convergence08 "unconference" in Mountain View, California, Anders gave an earlier version of the same talk. George Dvorsky blogged the result:

Convergence08: Anders Sandberg on Whole Brain Emulation

The term 'whole brain emulation' sounds more scientific than it does science fiction like, which may bode well for its credibility as a genuine academic discipline and area for inquiry.

Sandberg presented his whole brain emulation roadmap which had a flowchart like quality to it -- which he quipped must be scientific because it was filled with arrows.

Simulating memory could be very complex, possibly involving chemical transference in cells or drilling right down to the molecular level. We may even have to go down to the quantum level, but no neuroscientist that Anders knows takes that possibility seriously...

As Anders himself told me afterwards,
...interest was high but time limited - I got a lot of useful feedback and ideas for making the presentation better.
I'm expecting a fascinating discussion.

Wednesday, November 19, 2008

New mobile OSes mean development nightmares

Over on TechRadar, Dan Grabham has commented on one of the themes from Monday's Future of Mobile event in the Great Hall in High Street Kensington, London:
The increase in mobile platforms caused by the advent of the Apple iPhone and Google's Android are posing greater challenges for those who develop for mobile. That was one of the main underlying themes of this week's Future of Mobile conference in London.

Tom Hume, Managing Director of developer Future Platforms, picked up on this theme, saying that from a development point of view things were more fragmented. "It's clear that it's an issue for the industry. I think it's actually got worse in the last year or so."

Indeed, many of the panellists representing the major OS vendors said that they expected some kind of consolidation over the coming years as competition in the mobile market becomes ever fiercer.
The theme of collaboration vs. competition was one that I covered in my own opening remarks on this panel. Before the conference, the panel chairman, Simon Rockman of Sony Ericsson, had asked the panellists to prepare a five minute intro. I'll end this posting with a copy of what I prepared.

Before that, however, I have another comment on the event. One thing that struck me was the candid comments from many of the participants about the dreadful user experience that mobile phones deliver. So the mobile industry has no grounds for feeling pleased with itself! This was particularly emphasised during the rapid-fire "bloggers 6x6 panel", which you can read more about from Helen Keegan's posting - provocatively entitled "There is no future of mobile". By the way, Helen was one of the more restrained of that panel!

So, back to my own remarks - where I intended to emphasise that, indeed, we face hard problems within our industry, and need new solutions:

This conference is called the Future of Mobile – not the Present Day of Mobile – so what I want to talk about is developments in mobile operating systems that will allow the mobile devices and mobile services of, say, 5 years time – 2013 – to live up to their full potential.

I believe that the mobile phones of 2013 will make even the most wonderful phones of today look, in comparison, jaded, weak, slow, and clunky. It’s my expectation that the phones used at that time, not just by technology enthusiasts and early adopters, but also by mainstream consumers, will be very considerably more powerful, more functional, more enchanting, more useful, more valuable, and more captivating than today’s smartphones.

Getting there is going to require a huge amount of sophisticated and powerful software to be developed. That’s an enormous task. To frame that task, I offer you three contrasts.

The first contrast is between cooperation and competition.

The press often tries to portray some kind of monstrous, dramatic battle of mobile operating systems. In this battle, the people sitting around this table are fierce competitors. It’s the kind of thing that might sell newspapers. But rather than competition, I’m more interested in collaboration. The problems that have to be solved, to create the best possible mobile phone experiences of the next few years, will require cooperation between the people in the companies and organisations represented around this table – as well as with people in those companies and organisations that don’t have seats here at this moment, but which also play in our field. Instead of all of us working at odds with each other, spreading our energies thinly, creating incomplete, semi-satisfactory solutions that clash with each other, it would be far better for us to pool more of our energies and ideas.

I’m not saying that all competition should be stopped – far from it. An element of competition is vital, to prevent a market from becoming stale. But we’ve got too much of it just now. We’ve got too many operating systems that are competing with each other, and we’ve got different companies throughout the value chain competing with each other too strongly.

Where the industry needs to reach is around 3 or 4 major mobile operating systems – whereas today the number is somewhere closer to 20 – or closer to 200, if you count all the variants and value-chain complications. It’s a fragmentation nightmare, and a huge waste of effort.

As the industry consolidates over the next few years, I have no doubt that Symbian OS will be one of the small number of winning platforms. That brings me to my second contrast – the contrast between old and new – between past successes and future successes.

Last year, Symbian was the third most profitable software company in the UK. We earned licensing revenues of over 300 million dollars. We’ve been generating substantial cash for our owners. We’re in that situation because of having already shipped one quarter of a billion mobile phones running our software. There are at present some 159 different phone models, from 7 manufacturers, shipping on over 250 major operator networks worldwide. That’s our past success. It grows out of technology that’s been under development for 14 years, with parts of the design dating back 20 years.

But of course, past success is no guarantee of future success. I sometimes hear it said that Symbian OS is old, and therefore unsuited to the future. My reply is that many parts of Symbian OS are new. We keep on substantially improving it and refactoring it.

For example, we introduced a new kernel with enhanced real-time capabilities in version 8.1b. We introduced a substantial new platform security architecture in v9.0. More recently, there’s a new database architecture, a new Bluetooth implementation, and new architectures for IP networking and multi-surface graphics. We’re also on the point of releasing an important new library of so-called “high level” programming interfaces, to simplify developers’ experience with parts of the Symbian OS structure that sometimes pose difficulty – like text descriptors, active objects, and two-phase object construction and cleanup. So there’s plenty of innovation.
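
An aside from the prepared script, for readers who haven’t met that last idiom: below is a minimal sketch of classic Symbian two-phase construction. The class is invented for illustration, but the pattern itself – NewL(), ConstructL(), new (ELeave), and the CleanupStack – is the standard one whose verbosity the new high-level libraries aim to reduce.

    // Classic Symbian two-phase construction; the class name is invented.
    // Allocation that can leave is kept out of the C++ constructor, so a
    // partially constructed object is never leaked.
    #include <e32base.h>

    class CExample : public CBase
        {
    public:
        static CExample* NewL(const TDesC& aName)
            {
            CExample* self = new (ELeave) CExample();
            CleanupStack::PushL(self);   // protect self while ConstructL() may leave
            self->ConstructL(aName);
            CleanupStack::Pop(self);
            return self;
            }
        ~CExample()
            {
            delete iName;
            }
    private:
        CExample() {}                    // first phase: must not leave
        void ConstructL(const TDesC& aName)
            {
            iName = aName.AllocL();      // second phase: allocation that may leave
            }
    private:
        HBufC* iName;
        };

The pattern is safe and memory-tight, but undeniably verbose for newcomers – hence the appeal of higher-level wrappers.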

The really big news is that the pace of innovation is about to increase markedly – for three reasons, all tied up with the forthcoming creation of the Symbian Foundation:

  1. The first reason is a deeper and more effective collaboration between the engineering teams in Symbian and S60. This change is happening because of the acquisition of Symbian by Nokia. With the teams working together more closely, innovations will reach the market more quickly.
  2. The second reason is the unification of UI systems in the Symbian space. Before, there were three UI systems – MOAP in Japan, UIQ, and S60. Now, given the increased flexibility of the latest S60 versions, the whole Symbian ecosystem will standardise on S60.
  3. The third reason is the transition of the Symbian platform – consisting of Symbian OS together with the S60 UI framework and applications – into open source. By adopting the best principles of open source, Symbian expects to attract many more developers than before to participate in reviewing, improving, and creating new Symbian platform code. So there will be more innovation than before.
This brings me to the third of the three contrasts: openness vs. maturity.

Uniquely, the Symbian platform has a stable, well-tested, battle-hardened software base and software discipline that cope well with the hard, hard task of large-scale software integration, handling input from many diverse and powerful customers.

Because of that, we’ll be able to cope with the flood of innovation that open source will send our way. That flood will lead to great progress for us, whereas for some other software systems, it will probably lead to chaos and fragmentation.

In summary, I see the Symbian platform as being not just one of several winners in the mobile operating system space, but actually the leading winner – the most widely used software platform on the planet, shipping in literally billions of great mobile devices. We’ll get there because we’ll be at the heart of a huge community of impassioned and creative developers – the most vibrant developer ecosystem on the planet. Although the first ten years of Symbian’s history have seen many successes, the next ten years will be dramatically better.

Footnote: For other coverage of this event, see eg Tom Hume, Andrew Grill, Vero Pepperrell, Jemima Kiss, Dale Zak, and a very interesting Twitter channel (note to self: it's time for me to stop resisting Twitter...)