Sunday, July 27, 2008

Understanding Open Source Licensing

"What's the best book to read for an introduction to Open Source?"
I've already given one set of answers to this question, in my article, "Clear thinking about open source". One reply to that article - from Joel West, a writer and researcher on Open Innovation and Open Source whose advice I value - urged me to include one more book in my reading list: Lawrence Rosen's "Open Source Licensing: software freedom and intellectual property law". This weekend I've finished reading it. And indeed, I do now endorse it as being clearly written yet also highly insightful.

Initially, I tended to shy away from this book, instead preferring the book by Heather Meeker that I covered in my earlier article. Both books focus on open source licensing issues, but Meeker's was published this year, whereas Rosen's dates from 2004. So Rosen's book makes no mention of GPL v3, or Sun's experience with open-sourcing Java, or even the Eclipse Public License (EPL) which the Symbian Foundation is likely to adopt. That makes Rosen's book appear out of date. However, I realised that one license which the book does cover (comprehensively) is the Common Public License (CPL), which is the precursor of the EPL and which differs from the EPL in very few places. Reassured, I dipped into the book - and then could hardly put it down.

In summary, I now recommend both the Meeker book and the Rosen book for their coverage of open source licensing. They complement each other nicely. There's a bit of overlap, but also lots of good material in each book that you won't find in the other.

Specifically, here are a few of the "aha" moments and other lessons I took away from Rosen's book:

1.) The ten principles of the Open Source Definition are actually quite hard to understand in places (this comment came as a relief to me, since I had been thinking the same thing).

2.) Patents and Copyrights should be approached as parallel sets of legal principles - the former applicable to ideas, and the latter to expressions of ideas. That's a far better approach than initially just thinking about Copyrights, and then trying to squeeze in considerations about Patents at the end.

3.) One of the key differences between different open source licenses is in the treatment of patent licenses - and in the different circumstances in which patent licenses (and/or copyright licenses) can be withdrawn in the wake of various kinds of patent infringement suits. There's a tricky balance that has to be drawn between the needs of both licensor and licensee concerning the continuing value of their respective patent portfolios.

4.) One piece of license evolution covered in the book - the difference between v2.0 and v2.1 of the Open Software License (OSL) - closely mirrors the principal difference between the CPL and the EPL: it's a reduction in the circumstances in which a patent license can be withdrawn when a licensee brings a separate patent infringement case against the licensor.

5.) The insistence in GPL v2 about not being compatible with other licenses that introduce additional restrictions (even restrictions that the initial drafters of GPL v2 had not considered), is a real drawback of that license, since it unnecessarily hinders aggregation of code written under similar but different licenses. (Possible restrictions that have emerged more recently include provisions for defence against patent infringement lawsuits or to protect the licensor's trademarks.)

6.) "... sections of the LGPL are an impenetrable maze of technological babble. They should not be in a general purpose software license." (page 124)

7.) Disclaimers of liability that are generally written into open source licenses may be overridden by general consumer legislation. Recognising this, the CPL (and hence the EPL) introduces a clause that allocates particular responsibility to "commercial contributors" to defend and indemnify all other contributors against losses, damages, or costs.

8.) One possible way for a company to make money from software is via the mechanism Rosen calls "Eventual Source": code is released as open source after some delay period, but recipients can elect to pay an early access license fee to be able to work with the code (under a non-open source license) ahead of its release as open source.
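To make the timing mechanics of "Eventual Source" concrete, here's a toy Python sketch of the mechanism Rosen describes. The function name, the 12-month delay, and the 30-day month are all my own illustrative assumptions, not anything specified in the book:

```python
import datetime

def license_terms(today, first_shipped, delay_months=12):
    """Which terms apply on a given date, for code first shipped on
    `first_shipped` under an "Eventual Source" embargo of `delay_months`.

    Before the embargo expires, recipients can pay an early-access fee
    for the code under a non-open-source license; afterwards, the same
    code is available to everyone as open source.
    """
    # Approximate months as 30 days for this illustration.
    open_date = first_shipped + datetime.timedelta(days=30 * delay_months)
    if today < open_date:
        return "early-access fee, non-open-source license"
    return "open source license"
```

For example, code first shipped in June 2007 under a 12-month embargo would still be fee-bearing in January 2008, but open source by 2009.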

I've still got lots of questions about open source licensing (for example, about the prospects for wider adoption of GPL v3, and about how successful Rosen's own preferred OSL is likely to be in the longer run). I'll be attending the Open Source in Mobile conference in Berlin in September, when I hope to find out more answers! (And no doubt there will be new questions too...)

Saturday, July 26, 2008

Naming the passion killers

Passion makes a big difference. Posters all over Symbian premises (and on our websites) boldly declare that we "are at our best when we... love working for Symbian, drive to succeed, believe in ourselves, and take pride in what we do..."

That's the Symbian description of the practical importance of passion. Along with people, collaboration, integrity, and excellence, passion is one of Symbian's six declared corporate values.

Like many other companies, Symbian each year carries out an internal employee satisfaction survey. The survey is conducted by an external agency, who provide us with information on how our results compare with broadly similar surveys held by other high-tech companies. In the most recent survey, aggregate Symbian employee views demonstrated strong Passion (80% positive rating). Of the six values, this one had the strongest support of all. The score also came in notably higher than the benchmark. In general, our employees enjoy working here, and put their hearts into their activities.

In some ways, "passion" is a longer word for "fun". The good news is that, on the whole, Symbian employees enjoy and value their work. The bad news, however, is as I covered in my previous blog posting, "Symbian, just for fun": many developers outside the company have a less positive feeling about working with Symbian OS software. They may persevere with writing Symbian OS software because their employer pays them to do so, and because of the somewhat attractive prospect of a share in a growing 200M+ unit market, but they often lack the kind of inner motivation and satisfaction that can put them into a super-productive state of "flow".

The encouraging responses I've received to that posting (both via email and online) strengthen my view that it's vitally important to identify and understand the inhibitors to developer flow - the killers of Symbian passion. That's a big topic, and I suspect I'll be writing lots more about it in the months ahead. But let's make a start.

Lack of clarity with Symbian Signed

The experience of my correspondent ilgaz is probably quite common:

I think the issue here is, we (even technical users) don't really get what should be signed, what shouldn't.
Ilgaz wanted to use a particular third party application (Y-Tasks by Dr Jukka), and thought that it would first need to be signed with a developer certificate. That proved to be an awkward process. However, it turns out that the application is ready to use (for many purposes) without any additional signing. So the attempt to get a developer certificate was unnecessary.

Some might say that Symbian Signed itself is intrinsically a passion killer. I disagree - as I've argued elsewhere. But what does kill passion here is the confusion about the rules for Symbian Signed. You can't expect flow from confusion. I see six causes for this confusion:
  1. Different devices implement Symbian Signed in different ways. Some devices helpfully support a setting to allow the installation of self-signed apps, as well as Symbian Signed ones. Others do not;
  2. Different operators have different views about what kinds of applications they want to allow on their phones;
  3. The subject of permissions for the different capabilities of different pieces of software is intrinsically complex;
  4. The operation of Symbian Signed has changed over time. It's great that it has improved, but some people still remember how it used to work, and that confuses them;
  5. "Once bitten, twice shy": past bad experiences sometimes over-colour present views on the topic;
  6. A small number of people seem to be motivated to spread particularly bad vibes about Symbian Signed.
In this situation, we can't expect to reverse all the accumulated mistrust and apprehension overnight. But the following steps should help:
  • Continue to seek to improve the clarity of communications;
  • Be alert to implementation issues (eg an overworked website - as experienced some months back) and seek to address them quickly;
  • Avoid a divergence of implementations of different application approval schemes by different network operators.
It's my profound hope that the attractive statements of common aims of openness, made by the various parties supporting the Symbian Foundation, will translate into a unity of approaches towards application approval schemes.

Lack of reprogrammable devices

Another correspondent, puterman, points out:
Getting people to develop apps just for fun is one thing, but getting them to hack the actual OS is another thing. For that to be of interest, there have to be open devices available, so that the developers can actually see their code running.
I agree with the importance of quick feedback to changes made in your software. If you change the lower levels of the software, you'll need to be able to re-program an actual device.

The Linux community shows the way here, with the Trolltech Greenphone and the FIC OpenMoko Neo1973 and FreeRunner devices. It's true that there have been issues with these devices. For example, Trolltech eventually discontinued the Greenphone, and the FIC devices have proved quite hard to purchase. However, as the Symbian Foundation software becomes increasingly open source, we can reasonably expect the stage-by-stage appearance of phones that are increasingly end-user re-programmable.

Lack of well-documented API support for "interesting" features of a phone

Marcus Groeber makes a series of insightful points. For example,
One of the main things mobile developers would want to do is make use of the unique features of a mobile phone (connectivity, built-in camera, physical interaction with the user). However, it is those areas where documentation is still most patchy and API support is erratic (CCameraAdvancedSettings anyone?).

In my view, this aspect of mobile development should be acknowledged to a much greater degree, and the documentation efforts focused accordingly: If there is a feature in a built-in app of the phone, chances are that a developer will want to try and improve on that. Can s/he?...

I believe that these moments of frustration - finding an API that looks useful in the SDK docs, then spending an evening writing an application that uses it, only to get KErrNotSupported in the end - is probably among the chief reasons for people abandoning their pet projects...

True, many "fun" programmers (me included) don't want to wade through tons of documentation and whitepapers before writing their first proof-of-concept - but to me this makes it even more important that the existing documentation is streamlined, accurate and compact.
Improving our developer documentation remains one of the top-priority goals at Symbian. In parallel, we're hoping that additional publications from Symbian Press (and others) will help to guide developers more quickly through the potential minefields of APIs for the more interesting functionality. The book "Quick Recipes on Symbian OS" (which I mentioned at the end of an earlier posting, "Mobile development in a hurry") is intended to address this audience.

Of course, as Simon Judge points out, sometimes it's not a matter of improving the documentation of existing APIs. Sometimes, what's required is to improve the APIs themselves.

API awkwardness across the UI-OS boundary

The last passion-killer I'll mention for now is another one raised by Marcus Groeber:
most of the "interesting" bits of developing for devices actually come from the licensee's layers of API (in my case, mostly S60), and I believe it is here where there is most work to be done, as well as the interface between the two...

The ad-hoc-ish nature of the S60 UI, which seems to require a lot of experimenting and guesswork for developing even very simple screen layouts that mimic closely what is already present in the phone in dozens of places. Even after years of development, I still consider the CAkn and CEik listbox classes a jungle.
As one of the original designers of the CEik listbox class hierarchy (circa 1995-6) perhaps I should keep my head low at this point! (Though I can claim little direct credit - or blame - for the subsequent evolution of these classes.)

However, the bigger point is the following: both Symbian and S60 have recognised for many years that the separation of the two software development teams into two distinct companies has imposed drawbacks on the overall design and implementation of the APIs of functionality that straddles the two domains. Keeping the UI and the OS separate had some positives, but a lot of negatives too. Assuming the acquisition by Nokia of Symbian receives regulatory approval, the resulting combined engineering teams should enable considerably improved co-design. The new APIs will, hopefully, inspire greater fascination and approval from those who use them!

Wednesday, July 23, 2008

Symbian, just for fun

"There are two kinds of OSS developers: the guys who do things for fun, and the guys who do OSS because they are paid to do so. In order for an open source project to really flourish and take over the world, you need both."
These comments were made a few days ago by Janne Jalkanen of Nokia, speaking in a personal capacity. I think Janne is completely right. My own view is that the only reliable way for the Symbian Foundation software to become the most widely used software platform on the planet, is if that software also becomes the most widely liked software platform on the planet.

The two kinds of OSS developers aren't completely distinct. Ideally the ones who are paid by their company to work on the software should also have a strong inner desire to do that work - to go the extra mile out of the sheer enjoyment and fascination they get from that software.

I've seen that kind of deep enthusiasm for software many times in my life. I first ventured onto online community discussion groups in the early 1990s, using the login name "dw2" on the CIX (Compulink Information eXchange) bulletin boards. The Psion devices of that time - running a 16-bit precursor to Symbian OS - could be programmed using an interpreted language called OPL. Hobbyists made increasingly creative use of the possibilities of that language, creating some highly impressive games, serviceable business applications, alternative personal information management functionality, and lots more besides. I was drawn into providing support and encouragement to this burgeoning community. Plucking an example at random from my archives, dating from September 1992, here's a reply I posted to someone who had been pushing the envelope of OPL functionality:

Access to C routines from Opl

I don't suppose it'll cause any harm to pre-announce something that Psion will shortly make available to Series3 Opl programmers. Namely a mechanism to access functionality written in a C library, from Opl. What will be possible is as follows:

  • Someone provides some C functionality in a so-called DYL library
  • Opl programs can hook into this functionality by means of the LibSend operating system service (CALL ($cf)).
Psion will make some suitable DYLs available, and it will be up to third parties to provide other general or specific DYLs. For example, in a hypothetical company writing software for the S3, out of a team of say six programmers, only one would need to understand C. All routine coding could be done in Opl, with only the performance-critical parts being done in C (together with a few parts that are technically out of the reach of pure Opl).

Even before you (BobG) raised this subject, Psion were working on a specific DYL to quicksort the index of a DBF file.

Regards, DavidW


That was 1992, when many enthusiasts were happy to while away their free time programming devices powered by EPOC16. Fast forward again to 2008. Janne goes on to say,
The problem with Symbian is that very, very few people touch it for fun. So I believe that while we can open source it, it is going to be very difficult to get people to participate out of their own free will, unless we are prepared to make very serious refactorings to the entire system.

My first instinct is to disagree with Janne here. I'd love to list lots of people I know who do seem to enjoy developing Symbian software, "just for fun". For example,

  • Python on S60 can be a real joy to use - and supports lots of extensions. (In many ways, Python is for Symbian OS in 2008 what OPL was for EPOC16 back in the 1990s.)
  • The forthcoming new Symbian graphics architecture ("ScreenPlay") and IP networking architecture ("FreeWay") are full of interesting software development opportunities
  • The PIPS libraries hide away many of the idiosyncrasies of native Symbian C++ development, and can increase the pleasure of porting certain types of applications to Symbian devices.
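To give a flavour of the first point, here's a minimal sketch in the PyS60 style: a couple of lines of Python driving native S60 UI widgets via the `appuifw` module. The exact calls are from my recollection of the PyS60 API, so treat this as a sketch to check against the official documentation; since `appuifw` only exists on a PyS60-equipped device, the sketch degrades to a stub anywhere else:

```python
# On an S60 phone with Python installed, appuifw gives one-line access
# to native UI widgets; off-device, we fall back to a harmless stub.
try:
    import appuifw

    def greet():
        name = appuifw.query(u"Your name?", "text")  # native query dialog
        appuifw.note(u"Hello, %s!" % name, "info")   # native info note
        return name
except ImportError:
    def greet():
        return "appuifw is only available on an S60 device"
```

Compared with the equivalent native C++ dialog code, that brevity goes a long way towards explaining the "real joy" factor.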

However, as Mike Rowehl rightly reminds all would-be Symbian blogging enthusiasts - like me! - the first duty of a blogger is to listen, rather than to speak:

I’m not saying that Nokia doesn’t have market share, I’m saying they don’t have developer mindshare and they haven’t captured the attention of new entrants. How often do you hear about people “fooling around with developing for Symbian” just for fun in their free time? I’ve attended developer focused events in a number of different areas and I’ve heard that very infrequently. Compare that to the number of times you run across people fooling around with iPhone or Android SDKs (or even Maemo for that matter). I’m filtering out all the Silicon Valley events cause we’re weird over here. But even of events in other areas - developers are paying way more attention to the other platforms. You can argue that all you want but it won’t go away, I’m just telling you what I hear. Do with it what you want. If you want to deny it though, you’ve already lost really.

And I can't deny that, as I search through the blogosphere and developer forums, I find the number of postings that are negative about the developer experience of Symbian and S60 kits significantly exceeds those that express heart-felt enjoyment with the experience. As much as I can find reasons to discount individual postings, I can't discount the overall weight of comments by such a diverse group of writers.

So all I can say is the following:

  • I see lots of API improvement projects inside the Symbian labs - such as the experimental forthcoming ZString class alternative to text descriptors, and the proposed RAll utility classes for simplified resource management - which should be warmly received by a wide audience
  • I believe Symbian's developer tools and documentation have improved significantly over the last few years, and are continuing to make big leaps forward (though the impressions some developers hold of these topics are unduly coloured by their past bad experiences with older tools or documentation)
  • A more transparent approach to planning and experimentation inside Symbian's development halls - as befits a switch to open source development - will generate more good ideas (and even some good will...)
  • Experimentation and quick starts on Symbian development projects will become easier.

(I also believe, by the way, that developers' enthusiasm for their experience on other platforms will decline, unless these other platforms learn to cope with some hard disciplines like binary compatibility and SDK quality control, as their market success grows. For related comments, see "The emperor's new handset".)

I close by making a commitment: improved developer experience will be central to the goals of the Symbian Foundation. If the number of people who develop for Symbian "just for fun" doesn't increase substantially, the Foundation will have failed in its objectives.

Sunday, July 20, 2008

Rationally considering the end of the world

My day job at Symbian is, in effect, to ensure that my colleagues in the management team don't wake up one morning to some surprising news and say, "Why didn't we see this coming?". That is, I have to anticipate so-called "Predictable surprises". Drawing on insight from both inside and outside of the company, I try to keep my eye on emerging disruptive trends in technology, markets, and society, in case these trends have the potential to reach some kind of tipping point that will significantly impact Symbian's success (for good, or for ill). And once I've reached the view that a particular trend deserves closer attention, it's my job to ensure that the company does devote sufficient energy to it - in sufficient time to avoid being "taken by surprise".

For the last few days, I've pursued my interest in disruptive trends some way outside the field of smartphones. I booked a holiday from work in order to attend the conference on Global Catastrophic Risks held at Oxford University's James Martin 21st Century School.

Instead of just thinking about trends that could destabilise smartphone technology and smartphone markets, I've been immersed in discussions about trends that could destabilise human technology and markets as a whole - perhaps even to the extent of ending human civilisation. As well as the more "obvious" global catastrophic risks like nuclear war, nuclear terrorism, global pandemics, and runaway climate change, the conference also discussed threats from meteor and comet impacts, gamma ray bursts, bioterrorism, nanoscale manufacturing, and super-AI.

Interesting (and unnerving) as these individual discussions were, what was even more thought-provoking was the discussion on general obstacles to clear thinking about these risks. We all suffer from biases in our thinking that operate at both individual and group levels. These biases can kick into overdrive when we begin to contemplate global catastrophes. No wonder some people get really hot and bothered when these topics are discussed, or else suffer strong embarrassment and seek to change the topic. Eliezer Yudkowsky considered one set of biases in his presentation "Rationally considering the end of the world". James Hughes covered another set in "Avoiding Millennialist Cognitive Biases", as did Jonathan Wiener in "The Tragedy of the Uncommons" and Steve Rayner in "Culture and the Credibility of Catastrophe". There were also practical examples of how people (and corporations) often misjudge risks, in both "Insurance and catastrophes" by Peter Taylor and "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" by Toby Ord and co-workers.

So what can we do, to set aside biases and get a better handle on the evaluation and prioritisation of these existential risks? Perhaps the most innovative suggestion came in the presentation by Robin Hanson, "Catastrophe, Social Collapse, and Human Extinction". Robin is one of the pioneers of the notion of "Prediction markets", so perhaps it is no surprise that he floated the idea of markets in tickets to safe refuges where occupants would have a chance of escaping particular global catastrophes. Some audience members appeared to find the idea distasteful, asking "How can you gamble on mass death?" and "Isn't it unjust to exclude other people from the refuge?" But the idea is that these markets would allow a Wisdom of Crowds effect to signal to observers which existential risks were growing in danger. I suspect the idea of these tickets to safe refuges will prove impractical, but anything that will help us to escape from our collective biases on these literally earth-shattering topics will be welcome.
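Hanson is also the inventor of the logarithmic market scoring rule (LMSR), the pricing mechanism behind many prediction markets. As a rough illustration of how such a market could turn individual trades into a collective probability estimate - my own sketch, not something presented at the conference - consider:

```python
import math

def lmsr_cost(q, b=100.0):
    """Hanson's LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)),
    where q[i] is the number of shares outstanding for outcome i and
    b controls market liquidity."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Current prices (which sum to 1, so they read as probabilities)."""
    weights = [math.exp(qi / b) for qi in q]
    total = sum(weights)
    return [w / total for w in weights]

def buy(q, outcome, shares, b=100.0):
    """A trader buys `shares` of `outcome`; the fee is C(q') - C(q)."""
    q_new = list(q)
    q_new[outcome] += shares
    return q_new, lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

A fresh two-outcome market prices each outcome at 0.5; when traders buy shares in (say) "refuge needed", its price rises, signalling to observers that the crowd judges that risk to be growing.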

(Aside: Robin and Eliezer jointly run a fast throughput blog called "Overcoming bias" that is dedicated to the question "How can we obtain beliefs closer to reality?")


Robin's talk also contained the memorable image that the problem with slipping on a staircase isn't that of falling down one step, but of initiating an escalation effect of tumbling down the whole staircase. Likewise, the biggest danger from the risks covered in the conference isn't that any one of them will occur in isolation, but that one might trigger a series of inter-related collapses. On a connected point, Peter Taylor mentioned that the worldwide re-insurance industry would have collapsed altogether if a New Orleans scale weather-induced disaster had followed hot on the heels of the 9-11 tragedies - the system would have had no time to recover. It was a sobering reminder of the potential fragility of much of what we take for granted.

Footnote: For other coverage of this conference, see Ronald Bailey's comments in Reason. There's also a 500+ page book co-edited by Nick Bostrom and Milan Cirkovic that contains chapter versions of many of the presentations from the conference (plus some additional material).

Thursday, July 17, 2008

Mobile development in a hurry

"Google Mobile are moving all development away from downloadable apps to the mobile web"
That's a message mjelly records Charles Wiles, product manager for Google Gears for mobile, as delivering at this week's MoMo London event.

I was at the same event. I'm not sure I remember hearing quite such an emphatic message as mjelly reports, but I do remember hearing the following:
  • Eric Schmidt (Google CEO) has been asking the Google Mobile team why they only make one app release every six months, whereas development of apps for the PC web browser happens much more quickly
  • Downloadable apps for mobile devices are fraught with problems - including BIG issues with device fragmentation
  • Taking Google Maps for mobile as an example: there are 10+ platforms to support, requiring hundreds of builds in total - it all adds up to PAIN
  • There must be a better way!
  • The better way is to deliver services through the mobile web, instead of via downloadable applications.

I've heard this kind of message at previous MoMo London events, from lots of different speakers. Downloadable applications (whether written in native C++ or in Java) introduce lots of problems with development, deployment, and usability, whereas mobile web apps are a whole world simpler. The message that comes across is: If you want rapid development that in turn allows rapid innovation, stick with the mobile web. It's not a message I've enjoyed hearing, but I can't deny that lots of speakers have said it (in various different ways).

But what made the presentation from Charles Wiles all the more interesting was that, after highlighting difficulties facing downloadable mobile apps, he was equally critical of mobile web applications (which run inside a web browser environment on the device):

  • Mobile web apps suck too!
  • JavaScript takes time to execute on mobile devices, and since it's single threaded, it blocks the UI
  • There's often high network latency
  • The mobile web apps lack access to location, the address book, and camera, etc.

It's for this kind of reason that Google has continued to release downloadable versions of their most popular applications. (Incidentally, pride of place on the Quick Access bar of my Nokia E61i idlescreen goes to the native C++ versions of Google Search and Google Maps. They're in that pole position because I find them both incredibly useful.)

It's also for this kind of reason that Apple's initial message about how to develop apps for the iPhone - that developers should just write web applications - was so poorly received. Would-be iPhone developers strongly suspected they could achieve better results, in many cases, by writing downloadable apps. This expectation has been vindicated by the heady events around the recent launch of the iPhone application store.

Four challenges facing mobile web apps

The four factors I generally highlight as limitations in mobile web applications vs. downloaded apps are:

  1. The UI provided by a web browser is general purpose, and is often sub-optimal for a more complex application on the small screen of a mobile device (one sign of the web browser UI's general unsuitability is when users are confronted with messages such as "Don't press the Back button now!" or "Only press the OK button once!")
  2. Applications need to be able to operate when they are disconnected from the network - as in an airplane or during a trip in an Olde World London underground train - or whenever reception is flaky. On a mobile device, the user experience of intermittently connected "push email" from the likes of BlackBerry is far more pleasant than an "always connected web browser" interface to server-side email
  3. Web applications suffer from lack of access to much of the more "interesting" functionality on the phone
  4. Web applications are often more sluggish than their downloaded equivalents.

Exploring two routes to improved mobile apps

So what is the best answer? Improve native mobile app development or improve mobile web app development? Unsurprisingly, the industry is exploring both routes.

To improve mobile web app development, a number of industry initiatives are under way.

Each of these initiatives (and I could have mentioned quite a few more) is significant, and each deserves wide support. Each of them also faces complications - for example, the more AJAX is included in a web application (addressing problem #1 of the four I listed above), the more sluggishly that application tends to run (exacerbating problem #4). And as web applications gain more access to underlying rich phone functionality, complex issues of security and application validation rear their heads again. I doubt if any of these complications are fatal, but they reinforce the argument for the industry also looking, in parallel, at initiatives to improve native mobile app development.

To improve native mobile app development, Symbian has been putting considerable effort over the last few years into improved developer tools, developer documentation, APIs, and so on. The results are encouraging, but the job is far from done.

Quick recipes on Symbian OS

One of the disincentives to doing native application development on Symbian phones is the learning curve that developers need to climb, as they become familiar with various programming idioms. That's a topic that Kari Pulli (Nokia Research Fellow) discussed with me when he visited Symbian HQ back in Fall 2006. Kari had in mind the needs of people (especially in universities) who were already good C++ developers, but who don't have a lot of spare time or inclination to learn brand new programming techniques.

We brainstormed possible titles for a new Symbian Press book specifically targeted at this important developer segment:

  • "Symbian programming in a hurry"?
  • "Hacking Symbian OS"?

In the months that followed, this idea bounced around inside Symbian, and gathered more and more support. The title changed in the process, to the more 'respectable' "Quick Recipes on Symbian OS". Michael Aubert stepped forward as the lead author - you can read an interview with him on the Symbian Developer Network. Happily, the book went on sale last month. For my hopes for the book, I append a copy of the foreword I wrote for the book:

This book has been designed for people who are in a hurry.

Perhaps you are a developer who has been asked to port some software, initially written for another operating system (such as may run on a desktop computer), to Symbian OS. Or perhaps you have to investigate whether Symbian OS could be suited to an idea from a designer friend of yours. But the trouble is, you don’t have much time, and you have heard that Symbian OS is a sophisticated and rich software system with a considerable learning curve.

If you are like the majority of software engineers, you would like to take some time to investigate this kind of task. You might prefer to attend a training course, or work your way through some of the comprehensive reference material that already exists for Symbian OS. However, I guess that you don’t have the luxury of doing that – because you are facing tight schedule pressures. There isn’t sufficient slack in your schedule to research options as widely as you’d like. Your manager is expecting your report by the end of the week. So you need answers in a hurry.

That’s why Symbian Press commissioned the book you are now holding in your hands. We are assuming that you are a bright, savvy, experienced software developer, who’s already familiar with C++ and with modern software programming methods and idioms. You are willing to work hard and can learn fast. You are ready to take things on trust for a while, provided you can quickly find out how to perform various tasks within Symbian OS. Over time, you would like to learn more about the background and deeper principles behind Symbian OS, but that will have to wait – since at the moment, you’re looking for quick recipes.

Congratulations, you’ve found them!

In the pages ahead, you’ll find recipes covering topics such as Bluetooth, networking, location based services, multimedia, telephony, file handling, personal information management – and much more. In most recipes, we provide working code fragments that you should be able to copy and paste directly into your own programs, and we provide a full set of sample code for download from the book's website. We have also listed some common gotchas, so you can steer clear of these potential pitfalls.

Since you are in a hurry, I will stop writing now (even though there is lots more I would like to discuss with you), so that you can proceed at full pace into the material in the following pages. Good speed!

Tuesday, July 15, 2008

MoMo London: the momentum continues

Mobile Monday is a worldwide phenomenon, with chapters in more than 60 cities. Typically, chapters hold one meeting most months, usually on the first (or second) Monday - though some smaller groups meet less frequently. I hear that the London chapter is among the liveliest.

Tonight, Mobile Monday London held its thirtieth speaker meeting. Checking back through my Series 5mx Agenda, I counted that I've attended 18 out of the 30, going back to my first attendance in December 2005. The reasons I keep returning to these events are:
  1. The networking opportunities are first class: all sorts of developers, entrepreneurs, VCs, project managers etc attend, from both large and small companies (including independent contractors)
  2. The presentations (which are deliberately kept short) and the demos that follow (which are kept even shorter) often convey new insight about the cutting edge of the mobile industry
  3. Disruptive yet thoughtful questions are asked by highly knowledgeable audience members, many of whom have personally been through a couple of business cycles, in different companies, experiencing the reality of technical ideas and business models similar to those being advocated by the presenters.

The quality of the Q&A alone often makes these meetings considerably more interesting and useful than some industry conferences which come with hefty price tags. That's the benefit of the collectively highly experienced MoMo London community.

The topic for this evening was "Enabling Location in Applications". The audience was enormous - being swelled, first by some members of the W3C who are attending a working meeting in London, and second by visiting members from overseas MoMo chapters (Germany, Estonia, Sweden, Spain, Boston, Italy, and New York, among others) who were in town to discuss the future international setup of the organisation. This was on top of the very sizeable more local audience.

All seven of the presentations / demos included interesting comments. Here are a few points that caught my attention:

  • Skyhook Wireless (who were the sponsors for this particular event) have a database of the locations of over 50 million wireless access points, including 16M+ in Europe alone. This database grows as the result of the records made by 500 drivers worldwide, including 200 in Europe (who have already driven some 750,000 km)
  • A (non-mobile phone) application of the Skyhook technology is explained by David Pogue in this video: the Eye Fi system of automatically geo-tagging photos taken by your digital camera, without involving any GPS receiver
  • Another partner of Skyhook is Trapster, who have an app for mobile phones that allows drivers to provide real-time alerts to one another about speed traps in the area
  • Google Gears provides a Geolocation API, which in turn could provide much of the basis of a similar API in HTML5; that's a reminder that (as stressed by the Google speaker, Charles Wiles) "Google Gears is much more than offline"
  • The demos and screenshots tended to show either the Nokia N95 or the iPhone; Andrew Scott of Rummble cheekily remarked that "It will take a long time before everyone has an iPhone - maybe two years"
  • Andrew touched on another sensitive point with a follow-up remark: "Mobile Network Operators are probably never going to waken up and realise that they shouldn't be charging for location information"
  • Both Andrew and Justin Davis of NinetyTen emphasised that mobile search and recommendations needed to be filtered, to give more prominence to entries that had been favourably reviewed by trusted contacts of the user
  • Uniquely of all the speakers, Mark White of Locatrix (who said he had flown all the way from Brisbane Australia to speak at this event) spent more time reviewing business model issues. "'Can do' doesn't mean 'can make money'", he emphasised
  • During the Q&A, the panel suggested it was only a matter of time before a free access API would be available, allowing applications to query central databases to find out the location of a cell with a given ID; any new startups who are working on providing this service would therefore be well advised to stop this at once.
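
As an aside, the core calculation behind Wi-Fi positioning of the Skyhook kind is easy to sketch. The toy Python below (my own illustration, with invented access-point coordinates and signal strengths - the real system is far more sophisticated) estimates a device's position as the signal-strength-weighted centroid of the known locations of the access points it can see:

```python
# Toy illustration of Wi-Fi positioning: estimate a device's position
# as the weighted centroid of the known locations of the access points
# it can currently see, weighting stronger signals more heavily.
# All coordinates and RSSI values below are invented for the example.

def estimate_position(observations):
    """observations: list of ((lat, lon), rssi_dbm) for visible APs."""
    # Convert RSSI (e.g. -40 dBm strong, -90 dBm weak) into a positive
    # weight: stronger signal -> AP is probably nearer -> higher weight.
    weighted = [(latlon, 10 ** (rssi / 20.0)) for latlon, rssi in observations]
    total = sum(w for _, w in weighted)
    lat = sum(latlon[0] * w for latlon, w in weighted) / total
    lon = sum(latlon[1] * w for latlon, w in weighted) / total
    return lat, lon

if __name__ == "__main__":
    visible_aps = [
        ((51.5000, -0.1000), -45),  # strong signal: probably a nearby AP
        ((51.5010, -0.0990), -70),
        ((51.4990, -0.1010), -85),  # weak signal: probably a distant AP
    ]
    lat, lon = estimate_position(visible_aps)
    print("Estimated position: %.4f, %.4f" % (lat, lon))
```

The estimate lands close to the strongest access point, which is the intuition that makes a big, accurate database of AP locations so valuable.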

Because the room was so full and was becoming pretty warm, the Q&A was stopped before it got into full gear, which was a bit of a pity. But lots of lively conversation continued in the reception area afterwards, over drinks.

To my mind, the energy and upbeat attitude of the meeting is testimony to:

  • The overall health of the mobile industry in and around London
  • The ever greater role of location elements in mobile applications.

I'll end by echoing the closing words of Mark White: "This is not the LBS industry of 2000. It's better". Users have learned about the general benefits of GPS and positioning from car-based satnav systems, and are now increasingly looking for similar benefits from their mobile phones.

Sunday, July 13, 2008

A picture is worth a thousand words: Enterprise Agile

Communication via words often isn't enough. You generally need pictures too.

For example, in seeking to explain to people about the merits of Agile over more traditional, "plan-based" software development methods, I've often found excerpts from the following sequence of pictures to be useful:

[The sequence of pictures is not reproduced here.]
The last two pictures in this series are an attempt to show how Agile can be applied in multiple layers in the more complex environment of large-scale ("enterprise-scale") software projects. Of course, it's particularly challenging to gain the benefits of Agile in these larger environments.

I drew these diagrams (almost exactly 12 months ago) after having read fairly widely in the Agile literature. So these diagrams draw upon the insights of many Agile advocates. Someone who influenced me more than most was Dean Leffingwell, author of the easy-to-read yet full-of-substance book "Scaling Software Agility: Best practices for large enterprises" that I've already mentioned in this blog. I'd also like to highlight the "How to be Agile without being Extreme" course developed and delivered by Construx as being particularly helpful for Symbian.

Dean has carried out occasional training and consulting engagements for Symbian over the last twelve months. One outcome of this continuing dialog is an impressive new picture, which tackles many issues that are omitted by simpler pictures about Agile. The picture is now available on Dean's blog:

[Picture not reproduced here.]
If the picture intrigues you, I suggest you pay close attention to the next few posts that Dean makes, where he promises to provide annotations to the different elements. This could be the picture that generates many thousands of deeply insightful words...

Footnote: I've long held that Open Source is no panacea for complex software projects. If you aren't world class in software development skills such as compatibility management, system architecture review, modular design, overnight builds, peer reviews, and systematic and extensive regression testing, then Open Source won't magically allow you to compete with companies that do have these skillsets. One more item to add to this list of necessary skills is enterprise-scale agile. (Did I call it "one more item"? Scratch that - there are many skills involved, under this one label.)

Friday, July 11, 2008

Into the long, deep, deep cold

My interest in smartphones stems from my frequent observation and profound conviction that these devices can make their human users smarter: more knowledgeable, more connected, and more in control. It's an example of the careful use of technology to make users that are, in some sense, better humans. Technology - including the wheel, the plough, the abacus, the telescope, the watch, the book, the steam engine, the Internet, and (of course) much more besides - has been making humans "better" (stronger, fitter, and cleverer) since the dawn of history. What's different in our age is that the rate of potential improvement has accelerated so dramatically.

The website "Better Humans" often has interesting articles on this theme of accelerating real-world uses of technology to enhance human ability and experience. This morning my attention was taken by some new articles there with an unusual approach to the touchy subject of cryonics. For example, the article "Cryonics: Using low temperatures to care for the critically ill" starts by quoting the cryobiologist Brian Wowk:

“Ethically, what is the correct thing to do when medicine encounters a difficult problem? Stabilize the patient until a solution can be found? Or throw people away like garbage? Centuries from now, historians may marvel at the shortsightedness and rationalizations used to sanction the unnecessary death of millions.”
The article (originally from a site with a frankly less-than-inspiring name, Depressed Metabolism) continues as follows:
In contemporary medicine terminally ill patients can be declared legally dead using two different criteria: whole brain death or cardiorespiratory arrest. Although many people would agree that a human being without any functional brain activity, or even without higher brain function, has ceased to exist as a person, not many people realize that most patients who are currently declared legally dead by cardiorespiratory criteria have not yet died as a person. Or to use conventional biomedical language, although the organism has ceased to exist as a functional, integrated whole, the neuroanatomy of the person is still intact when a patient is declared legally dead using cardiorespiratory criteria.

It might seem odd that contemporary medicine allows deliberate destruction of the properties that make us uniquely human (our capacity for consciousness) unless one considers the significant challenge of keeping a brain alive in a body that has ceased to function as an integrated whole. But what if we could put the brain “on pause” until a time when medical science has become advanced enough to treat the rest of the body, reverse aging, and restore the patient to health?

Putting the brain on pause is not as far fetched as it seems. The brain of a patient undergoing general anesthesia has ceased being conscious. But because we know that the brain that represents the person is still there in a viable body, we do not think of such a person as “temporarily dead.”

One step further than general anesthesia is hypothermic circulatory arrest. Some medical procedures, such as complicated neurosurgical interventions, require not only cessation of consciousness but also complete cessation of blood flow to the brain. In these cases the temperature of the patient is lowered to such a degree (≈16 degrees Celsius) that the brain can tolerate a period without any circulation at all. Considering the fact that parts of the human brain can become irreversibly injured after no more than five minutes without oxygen, the ability of the brain to survive for at least an hour at these temperatures without any oxygen is quite remarkable.
And so it continues. See also, by the same author, "Why is cryonics so unpopular?"
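
The survival-time figures in that passage line up with a standard back-of-envelope model: metabolic rate falls by a factor of Q10 (typically 2 to 3 for biological tissue) for every 10 degrees Celsius of cooling. A quick sanity check in Python, with Q10 = 3 as my assumed value:

```python
# Back-of-envelope check of the hypothermia numbers using the Q10 rule:
# metabolic rate drops by a factor of q10 for every 10 degC of cooling.
# q10 = 3.0 is an assumed (but typical) value for biological tissue.

def safe_arrest_minutes(baseline_minutes, temp_from_c, temp_to_c, q10=3.0):
    slowdown = q10 ** ((temp_from_c - temp_to_c) / 10.0)
    return baseline_minutes * slowdown

# 5 minutes of tolerable circulatory arrest at 37 degC body temperature
minutes = safe_arrest_minutes(5, 37, 16)
print("Estimated safe arrest at 16 degC: %.0f minutes" % minutes)  # roughly 50
```

Cooling from 37 to 16 degrees slows metabolism about tenfold, turning a five-minute tolerance into roughly fifty minutes - the same order of magnitude as the "at least an hour" quoted above.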

Is it really conceivable that the human body (or perhaps just the human head) could be placed into deep, deep cold, potentially for decades, and then subsequently revived and repaired, using the substantially improved technology of the future? Never mind conceivable, is it desirable?

I'm reminded of a book that made a big impression on me, several years ago - the provocatively titled "The first immortal" by James Halperin. It's written as fiction, but it's intended to describe a plausible future scenario. I understand that the author did a great deal of research into the technology of cryonics, in order to make the account scientifically credible.

As a work of fiction, it's no great shakes. The characterisation, the plotting, and the language is often laboured - sometimes even embarrassing. But the central themes of the book are tremendously well done. As a reader, you get to think lots of new thoughts, and appreciate the jaw-dropping ups and downs that cryonics might make possible. (By the way, some of the ideas and episodes in the book are very vivid indeed, and remain clearly in my mind now, quite a few years after I read the book.) As the various characters in the book change their attitudes towards the possibility and desirability of cryonic preservation and restoration, it's hard not to find your own attitude changing too.

Footnote: Aubrey de Grey, one of the speakers at tomorrow's UKTA meeting ("How to live longer and longer yet healthier and healthier: realistic grounds for hope?"), has put on public record the fact that he has signed up for cryopreservation. See here for some characteristically no-nonsense statements from Aubrey himself on this topic.

Inspiring the rising stars in universities

One of the goals I set myself for 2008 involves influencing university research departments around the world to become more active in the areas of smartphones and Symbian OS.

With that goal in my mind, I decided to accept an invite to the "Wireless 2.0" conference organised by Silicon South West, here in Bristol, where I've travelled for the event. I decided to attend because of the mix of both industry and university attendees.

The event hosted a "Rising Star Awards Dinner" this evening, where six university students studying electrical engineering (or a related degree) received special awards - a plaque and a handy amount of spending money. There was one winner from each of the six universities in the area covered by Silicon South West: Bath, Bournemouth, Bristol, Exeter, Plymouth, and West of England. It was heart warming to hear the personal testimonies of the winners (and their university tutors).

But links between commercial research departments and university research departments aren't always so rosy. Universities and industry have many overlapping interests, but also some conflicting cultures. I see Symbian as having had mixed success, historically, in relations with universities:
  • On the clearly positive side, we've run good graduate recruitment and induction programs, every year since 1993 (that was in the Psion days, pre-Symbian); these have gone from strength to strength.
  • On the increasingly positive side, 58 universities have enrolled into the Symbian Academy program, in which Symbian supports university lecturers to deliver academic courses on Symbian OS software development.
  • On the "could do better" side, there are still only a small number of truly productive ongoing research collaborations between Symbian and individual universities, in which findings from university research projects regularly feed into Symbian's roadmap (and vice versa).

It turns out that it's not just Symbian that feels somewhat uncomfortable about the limited benefits realised from attempted collaboration with universities. Other commercial companies have noted similar concerns. And this has even become a field of academic study in its own right, known (amongst other names) as UIC, meaning University-Industry Collaboration. My friend Joel West of San Jose State University recently attended a two-day conference on UIC at the University of California, Irvine, and wrote up his observations. There's lots to ponder there. For example, Joel described three pieces of advice on successful UIC negotiations, as given in a presentation by UIDP executive director Anthony Boccanfuso:

  1. A successful UI collaboration should support the mission of each partner. Any effort in conflict with the mission of either partner will fail. (Joel’s translation: all deals must be win-win)

  2. Institutional practices and national resources should focus on fostering appropriate long term partnerships between universities and industry. (It’s more than just the money)

  3. Universities and industry should focus on the benefits to each party that will result from collaborations by streamlining negotiations to ensure timely conduct of the research and the development of the research findings. (There is a finite window for commercialization)

With Symbian research projects, one additional hiccup has been the difficulties in allowing universities access to Symbian OS source code. Time and again we've been discussing an attractive-sounding joint research project with a university, when we've realised that the project would need more visibility of Symbian source code than was possible under the existing licensing rules. And that's constrained the kinds of projects we can consider. (This realisation was just one of many that led to an increasing desire inside the Symbian ecosystem to find ways to liberalise access to our source code - and thus helped to set the scene for the mega-decision to embrace open source principles.)

However, not all research requires close access to source code. With that thought in mind, Symbian Research decided a few weeks back to launch the Symbian Student Essay Contest. This involves students writing an essay of no more than eight pages on the general topic "The next wave of smartphone innovation - issues and opportunities with smartphone technologies". Up to ten students will receive a prize of UKP 1000. (See here for the contest rules.)

This prize contest shares some principles with the Silicon South West "Rising Star Awards":

  • We're seeking to encourage and reward individual students who show particular insight into this ever-more important set of ideas
  • We're also seeking to inspire individual universities to give a higher priority to this domain of study.

High quality essays from a university will indicate to Symbian that there is good smartphone expertise in that university. That's something we're particularly interested to find out, since Symbian Research needs to decide which universities worldwide should receive higher priority attention for future collaborative research projects. That's a tough decision to make.

Footnote: At tonight's dinner, Prof Joe McGeehan of the University of Bristol mentioned that wise heads had been advising him, ever since 1973, that "there's no future in research in wireless communications". Thankfully, he persistently ignored these skeptics, and the field has indeed grown and grown. There's now an impressive list of local south-west companies that have world-beating wireless technologies. I'm looking forward to hearing, tomorrow, what they have to say. The future of smartphones is, of course, a big part of "wireless 2.0", but there's lots more going on at the same time.

Tuesday, July 8, 2008

Taming the security risks of going open source

The Wireless Informatics Forum asks (here and here),

Will an open source model expose Symbian's security flaws?

I wonder what security implications are being presented to Symbian? In the computing world there’s plenty of debate about the impact of opening up previously proprietary code. The primary concern being that an open source model exposes code not only to benevolent practitioners but also to malevolent attackers...

With much of the mobile industry steering towards m-commerce initiatives, potential security risks must be considered...

How much of the legacy Symbian code will be scrapped and built from scratch according to open source best practice?


First, I agree with the cardinal importance of security, and share the interest in providing rock solid enablers for m-commerce initiatives.

But I'm reasonably optimistic that the Symbian codebase is broadly in a good state, and won't need significant re-writes. That's for three reasons:
  1. Security is something that gets emphasised all the time to Symbian OS developers. The whole descriptor system for handling text buffers was motivated, in part, by a desire to avoid buffer overrun errors - see my May 2006 article "The keystone of security".
  2. Also, every now and then, Symbian engineers have carried out intense projects to review the codebase, searching high and low for lurking defects.
  3. Finally, Symbian OS code has been available for people from many companies to look at for many years - these are people with CustKit or DevKit licenses. So we've already had at least some of the benefits of an open source mode of operation.
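
To give non-Symbian readers a flavour of point 1: a descriptor is, roughly, a buffer that always carries its own maximum length, so an attempted overrun can be caught and rejected rather than silently corrupting adjacent memory. The Python sketch below is my own illustration of that idea, not Symbian code:

```python
# Illustration (not Symbian code) of the idea behind descriptors:
# a buffer that knows its own maximum length, so an attempt to
# overrun it raises an error instead of corrupting adjacent memory.

class BoundedBuf:
    def __init__(self, max_length):
        self._max = max_length
        self._data = ""

    def append(self, text):
        if len(self._data) + len(text) > self._max:
            # Reject the write outright; the buffer is left unchanged
            raise OverflowError("buffer overrun: %d chars > max %d"
                                % (len(self._data) + len(text), self._max))
        self._data += text

    def __str__(self):
        return self._data

buf = BoundedBuf(8)
buf.append("hello")       # fits: 5 chars <= 8
try:
    buf.append("world!")  # would make 11 chars: rejected
except OverflowError as e:
    print("caught:", e)
```

Because every write is length-checked at the point of use, whole classes of buffer overrun exploit simply have nowhere to take hold.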
On the other hand, there's going to be an awful lot of code in the overall Symbian Foundation Platform - maybe 30+ million LOC. And that code comes from many different sources, and was written under different cultures and with different processes. For that reason, we've said it could be up to two years before the entire codebase is released as Open Source. (As my colleague John Forsyth explains, in the section entitled "Why not open source on day 1?", there are other reasons for wanting to take time over this whole process.) Of course we'd like to go faster, but we don't at this stage want to over-promise.

So to answer the question, I expect the lion's share of the Symbian codebase to stay in place during the migration, no doubt with some tweaks made here and there. Time will tell how much of the peripheral pieces of code need to be re-written.

Monday, July 7, 2008

Symbian signed and openness

The team at Telco2.0 have run some good conferences, and there's much to applaud in their Manifesto. Recently, the Telco2.0 blog has run a couple of hit-and-miss pieces of analysis on the Symbian Foundation. There's a lot of speculation in their pieces, and alas, their imagination has run a bit wild. The second of these pieces, in particular, is more "miss" than "hit". Entitled "Symbian goes open - or does it?", the piece goes most clearly off the rails when it starts speculating about Symbian Signed:
...the Symbian signing process doesn’t just apply to changes to Symbian itself — it applies to all applications developed for use on Symbian, at least ones that want to use a list of capabilities that can be summed up as “everything interesting or useful”. I can’t even sign code for my own personal use if it requires, say, SMS functionality. And this also affects work in other governance regimes. So if I write a Python program, which knows no such thing as code-signing and is entirely free, I can’t run it on an S60 device without submitting to Symbian’s scrutiny and gatekeeping. And you thought Microsoft was an evil operating system monopolist…
This makes the Symbian signing process sound awful. But wait a minute. Isn't there a popular book, "Mobile Python - rapid prototyping of applications on the mobile platform", written by Jurgen Scheible and Ville Tuulos, that highlights on the contrary just how simple it is to get going with sophisticated Python applications on S60 devices? Yep. And what do we find as early as page 45 of the book? A two-line program that sends an SMS message:
import messaging
messaging.sms_send("+14874323981", u"Greetings from PyS60")
I tried it. It took less than an hour to download and install the SIS files for the latest version of PyS60 from Sourceforge, and then to type in and run this program. (Of course, you should change the phone number before testing the app.) Nowhere in the process is there any submitting of the newly written program "to Symbian's scrutiny and gatekeeping". The fanciful claims of the Telco2.0 piece are refuted in just two lines of Python.

So what's really going on here? How is it that normally intelligent analysts and developers often commit schoolboy howlers when they start writing about Symbian Signed? (Unfortunately, the Telco2.0 writers are by no means unique in getting the Symbian Signed facts wrong.) And why, when people encounter glitches or frustrations in the implementation of Symbian Signed, are they often too ready to criticise the whole system, rather than being willing to ask what small thing they might do differently, to get things working again?

I suspect three broader factors are at work:

1. An over-casual approach to the threat of mobile malware

Symbian Signed is part of an overall system that significantly reduces the threat of mobile viruses and the like. Some developers or analysts sometimes give the impression that they think they stand immune from malware - that it's only a problem that impacts lesser mortals, and that the whole anti-malware industry is a "cure that's worse than the disease". Occasionally I sympathise with this view, when I'm waiting for my desktop PC to become responsive, with its CPU cycles seemingly being consumed by excessive scanning and checking for malware. But then I remember the horrors that ensue if the defences are breached - and I remember that the disease is actually worse than the cure.

If we in the mobile industry take our eye off the security ball and allow malware to take root in mobile phones in ways similar to the sad circumstances of desktop PCs, it could produce a meltdown scenario in which end users decide in droves that the extra intelligence of smart mobile phones brings much more trouble than it's worth. And smartphones would remain of only niche interest. For these reasons, at least the basic principles of Symbian Signed surely deserve support.

2. A distrust of the motivation of network operators or phone manufacturers

The second factor at work is a distrust of control points in the allocation of approvals for applications to have specific capabilities. People reason something like this:
  • OK, maybe some kind of testing or approvals process does make sense
  • But I don't trust Entity-X to do the approving - they have mixed motivations.

Entity-X could be a network operator, that may fear losing (for example) their own SMS revenues if alternative IM applications were widely installed on their phones. Or Entity-X could be a device manufacturer, like Apple, that might decide to withhold approval from third party iPhone applications that provide download music stores to compete with iTunes.

Yes, there's a potential risk here. But there are two possible approaches to this risk:
  1. Decide that there's no possible solution, and therefore the power of a system like Symbian Signed should be criticised and diminished
  2. Work to support more of the decision making happening in a fully transparent and independent way, outside of the influence of mixed motivations.
The second approach is what's happening with the Symbian Foundation. The intent with the Symbian Foundation is to push into the public sphere, not only more and more of the source code of the Symbian Platform, but also as much of the decision-making as possible - including the rules and processes for approval for Symbian Signing.

Incidentally, the likely real-world alternative to a single, unified scheme for reviewing and signing applications is that there will be lots of separately run, conflicting, fragmented signing schemes. That would be a BAD outcome.

3. A belief that openness trumps security

This brings us to the final factor. I suspect that people reason as follows:
  • OK, I see the arguments for security, and (perhaps) for quality assurance of applications
  • But Symbian Signed puts an obstacle in the way of openness, and that's a worse outcome
  • Openness is the paramount virtue, and needs to win.
As a great fan of openness, I find myself tempted by this argument from time to time. But it's a misleading argument. Freedom in everyday life depends on a certain stability in the environment (provided by, for example, a police force and environmental inspectors). Likewise, openness depends on a basic stability and reliability in the network, in the underlying software, and in the way the ecosystem operates. Take away these stabilising factors, and you'll lose the ability to meaningfully create innovative new software.

The intention behind Symbian Signed is to help maintain the confidence of the industry in the potential of smartphones - confidence that smartphones will deliver increasing benefits without requiring debilitating amounts of support or maintenance.

It's true that the rules of Symbian Signed can take a bit of learning. But hey, lots of other vital pieces of social or technical infrastructure likewise take time to appreciate. In my mind, the effort is well worth it: I see Symbian Signed as part of the bedrock of meaningful openness, instead of some kind of obstacle.

Sunday, July 6, 2008

Clear thinking about open source

"What's the best book to read for an introduction to Open Source?" That's a question I've been asked several times in the last fortnight - as many of my colleagues in and around Symbian have realised that Open Source is a more complex and more intriguing subject than they first thought. (Of course, the announcements of 24 June have had something to do with this increased interest level.)

I'm still not sure how to answer that question. Over the years, I've read lots of books about Open Source - but with the passage of time, I've forgotten what I've learnt from each book.

Two books that stick out in my mind, through the veil of intervening years, as particularly enjoyable are:

Of these, the latter stands out as an especially easy and engrossing read. (It also happens to be the first serious book read independently by all three members of my immediate family - my wife, my son, and myself.) But when I pulled these two books from my bookshelf the other day and checked their inside cover, where I usually record the date when I purchase a book, I realised I had read them both as long ago as 2001. And Open Source has moved on a lot since that time. So while both these books are great sources of historical insight, readers will need to turn elsewhere for more up-to-date info.

A more recent book I remember making a big impact on my thinking at the time (2005, according to the inside cover) was:

Flicking through that book again just now, I see so many interesting snippets in it that I'm tempted to try to squeeze it back into my already hopelessly overfull reading in-box, for a second-time-round read. But even a 2005 book is dated.

That brings me to the book I've just finished reading:

Heather Meeker is Co-Managing Shareholder at the East Palo Alto law firm Greenberg Traurig. I first saw Heather speak at the Olswang "Open Source Summit" in London in November 2007. I was impressed at the time by the clarity of her grasp of the legal issues surrounding Open Source. Heather's book has the same fine qualities:

  • It's primarily exposition (education) rather than advocacy (evangelism)
  • I had many "of course!" and "aha!" moments while reading it
  • There are some particularly clear diagrams
  • Crucially, the language is easy to read
  • Also crucially, the book is comfortable both with legal matters and with technical matters (e.g. aspects of C and C++).

So I would say this is the book to read for a good account of the legal aspects of open source.

One part that really shines comes about three quarters of the way through the book. It's by far the best analysis I've read of "The border dispute of GPL2". The question in the minds of many commercially-driven companies, of course, is whether they risk having to publish the source code of any of their own software that happens to interact with code (such as the Linux kernel) released under GPL. The book makes it strikingly clear that the commercial risks aren't just because the original drafters of the GPL are philosophically opposed to closed source software. They're also because of some deep-rooted ambiguities inside the license itself. To quote from page 188:

This is why attorneys who read the GPL quickly come to the conclusion that this phrase - upon which entire companies and development projects depend - is irretrievably vague.

And again from the footnote to page 189:

To provide context for nonlawyer readers, drafting unique (in the document) and unambiguous definitions is considered a baseline lawyering skill in transactional practice. Doing otherwise is generally a sign that the drafter is not a lawyer or, more precisely, does not have baseline drafting skills. If this seems harsh, consider that many programming languages require one, and only one, definition of a user-defined variable. (Some languages allow multiple definitions, or "overloading", but using this feature requires intimate knowledge of the rules used by the compiler or interpreter to resolve them.) Failing to understand these rules properly creates bugs. So, in a sense, multiple or conflicting definitions [such as occur in the GPL] in a legal document, without express rules to resolve them, is a "bug" in drafting.

I can well imagine senior managers in mobile phone companies getting more and more worried as they read this book, finding more and more reasons, chapter by chapter (not just the chapter on the Border Dispute), to fear eventual legal cases against them, if they have code of their own in a phone that interacts with a GPL kernel.

Perhaps inevitably, the book has less to say about the EPL - which is the license to be used by the Symbian Foundation. After all, GPL is (the book suggests) the "most widely used license on the planet". But the EPL has far fewer ambiguities, and is significantly more business-friendly.

Does v3 of GPL change matters? Not really. First, as the final chapters of the book make clear, many of the deep-rooted ambiguities remain, despite the massive (and impressive) work done by the drafting team for v3. Second, Linux is likely to remain on v2 GPL for the foreseeable future.

Thursday, July 3, 2008

Nanoscience and the mobile device: hopes and fears

Nokia's concept video of a future morphing mobile phone, released back in February, has apparently already been viewed more than two million times on YouTube. It's a clever piece of work, simultaneously showing an appealing vision of future mobile devices and giving hints about how the underlying technology could work. No wonder it's been popular.

So what are the next steps? I see that the office of Nokia's CTO has now released a five-page white paper giving more of the background to the technologies involved, which are collectively known as nanotechnology. It's available on Bob Iannucci's blog, and it's a fine read. Here's a short extract:

After a blustery decade of hopes and fears (the fountain of youth or a tool for terrorists?), nanotechnology has hit its stride. More than 600 companies claim to use nanotechnologies in products currently on the market. A few interesting examples:

  • Stain-repellant textiles. A finely structured surface of embedded "nanowhiskers" keeps liquids from soaking into clothing—in the same way that some plant leaves keep themselves clean.
  • UV-absorbing sunscreen. Using nanoparticulate zinc oxide or titanium dioxide, these products spread easily and are fully transparent —while absorbing ultraviolet rays to prevent sunburn.
  • Purifying water filters. Aluminum oxide nanofibers with unusual bioadhesive properties are formulated into filters that attract and retain electronegative particles such as bacteria and viruses.
  • Windshield defoggers. A transparent lacquer of carbon nanotubes connects to the vehicle’s electrical source to evenly warm up the entire surface of the glass.

Even more interesting, to my mind, than the explanation of what's already been accomplished (and what's likely to be just around the corner), is a set of questions listed in the white paper. (In my view, the quality of someone's intelligence is often shown more in the quality of the questions they ask than in the quality of the answers they give to questions raised by other people.) Here's what the white paper says on this score:

As Nokia looks toward the mobile device of 2015 and beyond, our research teams, our partner academic institutions, and other industry innovators are finding answers to the following questions:

  1. What will be the form factors, functionalities, and interaction paradigms preferred by users in the future?
  2. How can the device sense the user’s behavior, physiological state, physical context, and local environment?
  3. How can we integrate energy-efficient sensing, computing, actuation, and communication solutions?
  4. How can we create a library of reliable and durable surface materials that enable a multitude of functions?
  5. How can we develop efficient power solutions that are also lightweight and wearable?
  6. How can we manufacture functional electronics and optics that are transparent and compliant?
  7. How can we move the functionality and intelligence of the device closer to the physical user interface?
  8. As we pursue these questions, how can we assess—and mitigate— possible risks, so that we introduce new technologies in a globally responsible manner?

That's lots to think about! In response to the final question, one site that has many promising answers is the Center for Responsible Nanotechnology, founded by Mike Treder and Chris Phoenix. As Mike explains in his recent article "Nano Catastrophes", he's coming to Oxford later this month to attend a Conference on Global Catastrophic Risks, where he'll be addressing these issues. I'll be popping down that weekend to join the conference, and I look forward to reporting back on what I find.

This is a topic that's likely to run and run. Both the potential upsides and the potential downsides of nanotechnology are enormous. It's well worth lots more serious research.

Tuesday, July 1, 2008

Win-win: how the Symbian Foundation helps Google to win

Olga Kharif of Business Week has found an interesting new angle on the Symbian Foundation announcement, in her article "How Nokia's Symbian Move Helps Google":
Nokia rocked the wireless industry June 24 with news it would purchase the portion of Symbian, a maker of mobile-phone software, that it didn't already own—and then give away the software for nothing. ...

But Nokia's move may play right into Google's hands, by helping to nurture a blossoming of the mobile Web and spur demand for all manner of cell-phone applications—and most important, the ads sold by Google. "There's nothing to say that this isn't what Google's plan was all along," says Kevin Burden, research director, mobile devices at consultancy ABI Research. "They might have wanted a more open device environment anyway. This might have been Google's end game."
My comment on this analysis is: why does it need to be a bad thing for Nokia and Symbian, if the outcome has benefits for Google? If Google wins (by being able to sell more ads on mobile phones than before), does it mean that Nokia and Symbian lose? I think not. I prefer to see this as being mutually beneficial.

The truth is, many of the companies who provide really attractive applications and services for Symbian-powered phones are both complementors and competitors of Symbian:
  • RIM provide the first-class BlackBerry email service that runs on my Symbian-powered Nokia E61i and which I use virtually every hour I'm awake; they also create devices that run their own operating system, and which therefore compete with Symbian devices
  • Google, as well as working on Android, provide several of the other mobile applications that I use heavily on my E61i, including native Google Maps and native Google Search.

If companies like RIM and Google are able, as a result of the Symbian Foundation and its unification of the currently separate Symbian UIs (not to mention the easier accessibility of the source code), to develop new and improved applications for Symbian devices more quickly than before, then the attractiveness of these devices will increase. RIM and Google (and there are many others too!) will benefit from the increased services revenues which these mobile apps enable. Symbian and the various handset manufacturers who use the Symbian platform will benefit from increased sales and increased usage of the handsets that contain these attractive new applications and services. Win-win.

I see two more ways in which progress by any one of the open mobile operating systems (whether Android or the Symbian Platform, etc) boosts the others:

  1. The increasingly evident utility of the smartphones powered by any one of these operating systems helps spread word of mouth among end users that, hey, smartphones are pretty useful things to buy. So next time people consider buying a new phone, they'll be more likely to seek out one that, in addition to good voice and text, also supplies great mobile web access, push email, and so on. The share of smartphones out of all mobile phones will rise.
  2. Progress of these various open mobile operating systems will help the whole industry to see the value of standard APIs, free exchange of source code, open gardens, and so on. The role of open operating systems will increase and that of closed operating systems will diminish.

In both cases, a rising tide will lift all boats. Or in the words of Symbian's motto, it's better to seek collaboration than to seek competition.