Interactive Infographic: How To Handle The Media

Since 1988, Martin Banks and I have been running media skills training courses. Early on, we introduced an ‘architecture’ for the process. We drew it on flipcharts for a few years then, in 2004, we formalised it and started giving it out as part of our wallet-sized plastic business card. The model acts as an ‘aide-mémoire’ for all who’ve attended our training.


A few weeks ago, I was rummaging around (as you do) among some infographics – pictures that speak well over a thousand words – and took a shine to the interactive variety, where the graphic responds to the user’s actions.

I’d just been doing some training work with the Racepoint Group and, coincidentally, one of its US staff, Kyle Austin, wrote a blog post: Are Infographics the New Slide Shows? Good point, I thought, having just taken someone through our ‘architecture’.

So I set to work to convert our flat image into something a little more lively. Its aim is to refresh the memories of those who’ve attended our training and to give others an appreciation of how they might set about handling the media.

The first attempt was an animated .gif file with text boxes to expand on each element of the image. Horrible. Boring. Sequential. No user interaction. Didn’t lend itself to the web. Etc.

I wanted an interactive infographic that would work in more or less any browser and not depend on the presence of JavaScript, Flash or any other kind of plug-in. Just HTML and CSS. (I’d done some simple stuff before, here and here, so I was optimistic that it could be done.)

The second attempt was a graphic that the user could mouse over, highlighting image elements and showing the relevant text in a nearby box. The size was determined by my computer screen, which was a bit stupid because many of the people I’d like to share it with might have a smaller screen – an iPad for example.

So I reworked it with the iPad in mind. The hover can be achieved with a finger, even on the smallest graphical element. And while I was resizing everything, I added drop shadows and rounded corners to the text boxes.
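For the curious, the basic hover-reveal can be sketched in plain HTML and CSS, with no JavaScript at all. The fragment below is my own illustrative reconstruction, not taken from the actual infographic – the class names and file name are invented – but it shows the technique: a hidden caption made visible by `:hover`, plus the rounded corners and drop shadow mentioned above.

```html
<!-- Illustrative sketch only: class names and image are invented. -->
<style>
  .hotspot { position: relative; display: inline-block; }
  .hotspot .caption {
    display: none;                 /* hidden until the element is hovered */
    position: absolute;
    left: 105%; top: 0;
    width: 12em; padding: 0.5em;
    background: #ffffe0;
    border-radius: 8px;            /* rounded corners */
    box-shadow: 3px 3px 6px #888;  /* drop shadow */
  }
  .hotspot:hover .caption { display: block; }
</style>

<div class="hotspot">
  <img src="element.png" alt="One element of the model">
  <div class="caption">Explanatory text appears on hover – or, on a touch
  screen, on tap.</div>
</div>
```

On iPads of that era, a tap on the element triggers the hover state, which is why the finger-sized targets matter.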

If you’re interested, the end result is at How To Handle The Media.


I hope you enjoy it.

PS If anyone wants the gory technical details of how to do this sort of thing, I’ll pen another post. Just ask.

Creating a book from a blog (unintentionally, for free)

Guy Kewney, who I’ve known for many years, kept a little-known blog from which he let rip on whatever was bugging him at the time. In the past year or so, a lot of his commentary was about the cancer – its symptoms and treatment – that claimed his life on April 8.

On March 1st, he wrote a particularly poignant entry which, in summary, showed that he’d finally given up hope. This gave me the idea of starting a tribute blog to which people could post comments and stories for Guy to enjoy while he still could. Guy read the blog comments until very close to the end.

Yesterday, his wife Mary wrote to me to say, “I could never explain to you what a positive thing it was for Guy. It was truly life changing.” Which is wonderful to hear. Thank you Mary.

After his death, the tributes poured in, many of which appeared online. These were duly listed and linked to in the blog. Eventually, things dried up and it seemed a good time to ‘freeze’ the blog.

I wanted to create a CD of the blog, but getting it out of Typepad in a way that it could be read and navigated easily without an internet connection was difficult, to put it mildly. Then I stumbled across a program called website2PDF from spidersoft in Australia. By providing a list of the pages, it created (as you may have guessed) a .pdf file of the blog.

At first it was 54 pages but, by removing the ‘recent comment’ list and tweaking the layout, it ended up as a 39-page 1MB file. The next step is to print it and bind it. The print quality looks good but the font is pretty small because the blog design doesn’t take advantage of the full width of the paper. I am still wrestling with that problem…

I had paid the publisher for a full licence, to see if I could gain more control over the pdf layout, but that’s yet to arrive. (I thought these things were automatic. And, no, it didn’t go to my spam folder.) I did the whole job with the free trial version which, I think, lasts for 15 days, but I can’t find that information anywhere.

Bottom line? It’s great that website2PDF does a good job of capturing website pages (it doesn’t have to be a blog, by the way) to a pdf. You can choose to have hotlinks, automatic text and picture breaks, ActiveX, scripts and a host of different layouts. It was only $49, so it’s not a bank-breaking exercise and I felt it would have been worth it for this one job alone. But, of course, I do look forward to becoming a registered user because it’s sparked off some more ideas for easy eBook creation.

—–

Update: After failing to extract a response from the author (4 emails) I raised a dispute ticket with PayPal. This prompted an instant response from the author. Apparently, the automated licence system had failed.

If ‘semantic web’ annoys you, read on…

Say "semantic web" to a lot of people and the shutters on their brains come down. They may have lived through the disappointments of the AI or expert systems eras. Or they may simply know how impossibly tedious it would be to retrofit their web pages with semantic data.

Say "linked data" to them and they might ask "what's that?" with a reasonably open mind. At some point during the explanation, it will dawn on them that the terms are identical to those used in the semantic web. By then, of course, it's too late: they're hooked.

The basic idea is that web pages, html or otherwise, contain some information that links them to other web pages in a meaningful way. Nothing particularly new in that, you might say. But the meaningful bit in this context is not what the human reads – a bit of clickable text that takes you to another web page – but what a computer application can read and make sense of.

An example might be the statement: 'The prime minister is Gordon Brown'. This might be expressed as prime minister:Gordon Brown. And these elements, in turn, might point to well-defined explanations of the two concepts elsewhere on the web. In dbpedia.org/page/ the links would be Prime_minister and Gordon_Brown, respectively. Other authoritative sources include Freebase, the Guardian and the New York Times. The application might drill into these pages, plucking out useful information and following other links, which would have been defined in a similar fashion.

Of course, because this page has been published, it becomes a potential resource for others to link to. It rather depends what the page was about. The Gordon Brown entry, in this case, was just one element. It might have been 'The British Cabinet in March 2010', for example. And others might have found that information useful.

(If you want to experiment a bit, go to <sameAs> where you can whack in terms and read their definitions in plain text.)

Many public and not-so-public bodies have been making their resource or link information openly available. Friend of a Friend (or FOAF) provides a means of defining yourself. The Library of Congress has published its Subject Headings – a list of standard names which everyone may as well use to ensure consistency. But it's not essential: you (or someone else) can always declare equivalence using a sameAs or exactMatch type of relationship, e.g. 'Brown, Gordon' can be equated to 'Gordon Brown'.

As you rummage, you'll come across terms such as RDF, URI, graphs, triples and so on. These exist to clarify rather than confuse. The resource description framework (RDF) defines how information should be expressed. Fundamentally, each item is a triple comprising a subject, a predicate (or property) and an object, as in: Gordon Brown; is a; politician. A uniform resource identifier (URI) might define each of those elements. And the collection of triples is referred to as an RDF graph. Of course, you'll get exceptions, and finer nuances, but that's the basic idea.
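The triple idea is easy to sketch in code. The few lines of Python below model a tiny RDF-style graph as a set of (subject, predicate, object) tuples, with a simple pattern query, and include a sameAs triple of the kind mentioned earlier. Real linked data uses full URIs and libraries such as rdflib; the short names here are purely illustrative.

```python
# A toy RDF-style graph: each fact is a (subject, predicate, object) triple.
# Real linked data would use full URIs (e.g. http://dbpedia.org/page/Gordon_Brown);
# these short names are illustrative only.
graph = {
    ("Gordon_Brown", "is_a", "Politician"),
    ("Gordon_Brown", "holds_office", "Prime_minister"),
    ("Prime_minister", "is_a", "Political_office"),
    ("Brown,_Gordon", "sameAs", "Gordon_Brown"),  # declaring equivalence
}

def query(s=None, p=None, o=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything we know about Gordon Brown:
print(query(s="Gordon_Brown"))
# Every triple that references the Prime_minister concept, as subject or object:
print(query(s="Prime_minister") + query(o="Prime_minister"))
```

Following a URI that appears as an object back to all the triples that reference it – the `query(o=...)` call above – is, in miniature, how datasets get joined without their providers' explicit participation.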

The point of all this is that, as with the rest of the web, it must be allowed to flourish in a decentralised and scalable way, which means without central control, although open standards are very important and make life easier for all participants.

With this general introduction, it's possible to see how data sets can be joined together without the explicit permission or participation of the providers. You could find a URI and, from that, find all the other datasets that reference it, if you wanted to. Because of the common interest, you (or your application, more like) would be able to collect further information about the subject.

Talis is a UK company that's deep into this stuff. It's been going for around 40 years and was originally a library services provider. It has spread its wings somewhat and now divides its attention between education, library and platform services. The platform element is the part that's deeply into linked data. It recently set up a demonstration for the Department for Business, Innovation and Skills (BIS) to show some of the potential of this stuff. It takes RDF information from three sources – the Technology Strategy Board (TSB), Research Councils UK (RCUK) and the Intellectual Property Office (IPO) – and produces a heat map of activity in mainland Britain. You can see how much investment is going in, how many patents are being applied for and so on. You can zoom in to ever finer-grained detail and use a slider to see how the map changes over time. You can play with the Research Funding Explorer yourself or follow the links in this piece by Richard Wallis to see a movie.

The question in your mind must be, "All very well, but what's in it for me?" For a start, you can get hold of a lot of data which might be useful in your business – information about customers, sources of supply or geographic locations, for example. So, you may find value purely as a consumer. However, you may be able to give value by sharing data sets or taxonomies that your company has developed. This might sound like madness, but we've already seen in the social web that people who give stuff away become magnets for inbound links and reputational gains. In this case, you could become the authoritative source for certain definitions and types of information. It all depends what sort of organisation you are and how you want to be seen by others.

Are multi-touch surfaces heading your way?

In the days of black screens and green type, the arrival of colour was somewhat puzzling. If computers had got us so far without colour, who'd want it? Everyone, it seems.

Then came windows, icons, mice and pointers. Again, we were all happy with what we had. Why rewrite everything for some gimmicky whizzbang interface? As soon as you used an Apple Mac, you knew the answer. Ordinary people were suddenly able to do extraordinary things. But it wasn't until 11 years later, when Microsoft finally got its act together with Windows 95, that this interface started to become more or less ubiquitous.

And there we've stalled for 26 or 15 years, depending on whether you're a Mac or a PC fan. It works. Who wants more? Well, since the time the Macintosh came out, inventors have toiled in labs to bring us a more natural, direct interface based on fingers, hands and, in the case of horizontal displays, objects placed on the screen. In recent years pioneering companies like Perceptive Pixel, Apple and Microsoft have been selling multi-touch surface devices.

In the abstract, it all sounds jolly fine (apart from the potential for the unselfish sharing of germs). You can access, open, expand, move, rotate and contract information artefacts right there on the screen. They could be images or documents inside the computer. Some of the systems can even interact with other things lying on the screen's surface. The external artefacts might be coded underneath so the system knows what to do with them or they could be simple things like business cards or other documents, which can be scanned. In one case, a library in Delft would whizz up pictorial information about your post code as it read your library card (video here). The Microsoft Surface can recognise and communicate with a suitably enabled mobile phone. It can show the contents of your mobile phone in a notebook. Just slide items to and from the on-screen notebook, in order to update the phone contents.

You could throw a keyboard up or, indeed, a facsimile of any kind of device but the main potential at the moment seems to be exploration, manipulation and mark-up. Fingers are better at some things but certainly not everything. However, if your organisation needs to surface information to any audience, regardless of their computer skills or application knowledge, then this might be a better way to do it than the usual single touch, keyboard or mouse controls.

The Hard Rock Café in Las Vegas has a number of Microsoft Surface tables through which visitors can browse a growing part of the company's collection of rock memorabilia. The National Library of Ireland uses the same product to show rare books and manuscripts which would otherwise be kept from public view due to their fragility or value. The US military uses Perceptive Pixel's huge displays for God-knows-what but you can bet that some of it involves 3-D terrain, flying things and weapons. Then Apple, of course, has made the iPhone exceedingly sexy with its own gestural controls.

While the technology and the functions are intriguing and seductive, the question is whether they give sufficient advantage over what's being used today. They cannot replace the present range of control devices except in special application-specific situations. Just as mice and pointers didn't replace keyboards, nor will multi-touch replace current devices. They may complement them though, especially as they become part of the repertoire of the everyday laptop or PC.

Whenever new technologies come along, it's quite often the user department that takes them on board, side-stepping IT if possible. We saw it with PCs and spreadsheets. We saw it again with desktop publishing. And again with mobile phones and PDAs. But, eventually, either the users or the organisation realise that the greater benefit comes from integration. IT represents the great archive in the sky to which and from which intellectual artefacts can be stored and retrieved. And, once IT is involved, more things become possible; using the mobile phone as a terminal, access to and re-use of materials produced elsewhere in the company and, in the case of multi-touch, delivering the contents of information stores to the devices. Museums and libraries are, perhaps, obvious examples but some users would value a natural way to get at and drill into, say, statistical information by geography or find and explore whatever today's equivalent of a blueprint is.

Right now, you might see these multi-touch surface devices as a bit of a curiosity but, just as the mouse (first publicly demonstrated in 1968) moved into the mainstream eventually, so these things may become important to you and your organisation.

If you're interested, a great place to mug up on the background is Bill Buxton's Multi-Touch overview.

Knowledge Management: why not?

I will get back to 'proper' blogging soon. Honest. It's just been a bit mad round these parts as I've put my new life together. It's going well, I might add, but the blogging has been neglected. Another week or two should do the trick.

However, thanks to Computing, CIO and Information World Review, a few of my pieces have popped up in the last few days, all of them connected in some way to Knowledge Management. And, yes, I know that's a contradiction in terms. But put 'social' and KM together and magic starts to happen.

The three articles are:

Social tools take KM to a new level (Computing)

IT accumulates data but Web 2 shares knowledge (IWR)

Board level energy saving and environmental issues (CIO)

None of the titles is mine, of course. (They have sub-editors to dream these things up.)

The first was my response to a request to prepare a 'definitive guide to Knowledge Management'. The second was a column in which I postulate a kind of lifecycle of knowledge/information. And the third was created by plundering the business/CIO related threads in the recent IBM Eco-Efficiency Jam – another example of social/KM in action. It happily blended my two primary interests: human and environmental IT.

Happy reading. See you back here soon, I hope.

Is the eco-wind blowing your way?

It wasn't so long ago that all 'green' activity went under the heading of 'idealism'. Nothing wrong with that, but the so-called developed world is not big on that sort of thing. It doesn't put money in shareholders' pockets, satisfy fashion urges or keep children 'happy' at Christmas. (The quotes are because I remember our family's happiest festive times were when we were poorest.)

Now, with Copenhagen looming and the Carbon Reduction Commitment legislation just around the corner, large organisations especially are beginning to realise that a regulatory steamroller is heading their way and they will need to do something about it, quite regardless of their anthropogenic climate change beliefs, or otherwise.

In terms of business operations (as opposed to political lobbying or scientific research), it's probably best that organisations focus on what they can do which will deliver genuine benefits while minimising their impact on the environment. Generally speaking, benefits boil down to cost savings and increased revenues, although public bodies are likely to focus more on the former than the latter.

The tough bit is measurement. Like accounting and auditing, this is necessary to understand progress and to be able to report convincingly, both internally and externally. All the time the global warming, climate change and greenhouse gas discussions have been going on, it has all seemed a bit abstract, remote from day-to-day life. Now that we're facing the prospect of being nailed by governments, councils, customers and investors, the need to act has crystallised. What a fantastic opportunity for the IT world. If nothing else, it's very good at collecting and processing information in very high volumes and that's exactly what organisations need as they contemplate their own sustainability, in all senses of the word. (We shouldn't be so dazzled by the CO2 story that we forget about raw material use, water, waste and other forms of pollution.)

Some software companies – Access and Microsoft Dynamics spring to mind, although I'm sure there are others – got into carbon accounting before their clients were fully aware that this was going to become important. Hats off to them, and others like them, who knew what had to be done and just got on with it. They will, hopefully, reap the rewards of their early efforts.

As ever, the pioneers aren't usually the ones that walk away with the big prizes. While journalists and bloggers have been complaining about the apparent lack of action, the big guys were getting on with the job. They were watching what's going on out there, putting their own houses in order while preparing new products and services for market. Now they're emerging from the woodwork.

For example, CA managed to secure the name ecoSoftware for its SaaS-based measurement and reporting system aimed at medium to large businesses. It looks into every nook and cranny of a business in order to assess and report on its environmental health. It takes the whole sustainability perspective, rather than concentrating on carbon or energy alone, for example. It takes its feeds from just about anywhere – hand entered meter readings, electronic feeds and inputs from other recording systems. It integrates with other software, especially the major ERP systems. Users can drill in and out of detail, and filter the information in a variety of ways – by business process, by GHG Protocol Scopes, per shipping unit, by floor area, and so on. The end result is action plans based on genuine insights and, of course, the ability to measure progress.

Another company, 1E, is probably best known for its NightWatchman system which minimises the power use of the desktop computing estate. It has recently announced a system for measuring data centre energy use and identifying how much useful work each server is doing. It can change the power profile of each device according to the work it is (or is not) doing. The system reports what's been going on in charts and tables and users can drill into any unusual patterns. It will integrate natively with hardware vendors' own tools or scripts can be created for exceptions or new developments. Crudely stated, the point of the exercise is to maximise the business value delivered with the least use of equipment and energy.

These examples serve to show which way the wind is blowing. Organisations will find that the IT industry will be key to helping them get a grip on their environmental obligations and costs. It is a case of a win all round but, as is frequently the case, no one will win bigger than the IT industry itself. 

Moving to a new app? Mind the data trap

If you're anything like me, you're always on the lookout for software that will improve your life in some way. It might help you communicate more effectively and more widely, or simply get you through the working week more productively.

You frequently alight on something new, play with it for a bit, then decide that it's not for you. Probably because the user interface is too clunky, or maybe it's missing some favourite features of an otherwise inadequate existing system.

The search goes on. And you put up with the restrictions of what you've got.

When you find the right product, you then have a bunch of decisions to make, not least of which is "how easy will it be to switch?"

If you're talking about a move from one screencasting tool to another, for example, the move is relatively straightforward. Your old screencasts will still work, so introducing the new tool is largely a matter of learning how to use it. And, if others are to use it, of boiling the instructions down to the essentials in order to cut the 'time to value'. They can always pick up the finer points as they go along.

If you're talking about a system that requires you to move legacy information into it in order for it to become useful, then you have to consider seriously whether the promised benefits are worth the effort. The effort, of course, will vary according to the export/import capabilities of the software. Some software vendors make a point of being able to import their competitors' data, in which case you could be in luck. However, if your existing vendor is a smaller player, you may be denied this, unless it provides a standards-based export mechanism.

As an example, I've just spent many hours looking at Microsoft's OneNote. It held out the promise of organising my life and the information in it. But, for this promise to be fulfilled, I had to a) learn how to use it and b) move enough of my life into it to keep the Tebbo show on the road. a) took a few hours, but b) took many times that. The time consumed was my own. It wasn't the sort of thing I would have done on the company shilling, in case it was wasted.

After many years of using organisers of various kinds – ideas processors, outliners, mind mappers, databases and others, such as Lotus Agenda (1992) and Octave's Web (1989) – I was reacquainted with OneNote on a recent visit to Microsoft. It was incidental to the briefing, but it will become more ubiquitous with the arrival of Office 2010. Perhaps I'd dismissed it before because of its simple notebook-like interface. Or maybe I didn't like the 'container' approach to content elements. Whatever the reason, ignore it I did.

Yet it does what so much of the other software fails to do: it provides useful capabilities using a familiar metaphor. Everyone can understand notebooks, sections and pages. And, on those pages: text, drawings, images and hyperlinks. Getting stuff in and out is simple, in the main, but if it isn't then add-ons and third-party tools are available to help. It has some shortcomings but, for me, the important thing is that it held out sufficient promise that I gave up a huge chunk of weekend and holiday time to get my data in. (Context: I already use Office Pro.)

Moving to new software is never easy, but learning to use it is often the easiest bit. The hardest bit is if you have to move heaps of legacy data across. You can consider yourself successful if the systems and people around you don't notice the change.

Mind-mapping with MindJet and MindGenius

Ever since Tony Buzan started popularising mind-mapping in 1974, it's had a bit of an uphill struggle to reach the mainstream. Over sixty commercial applications are available for the PC, the Mac and the web. A sprinkling of others are available for the Pocket PC, iPhone and BlackBerry. And you'll even find open source and freeware versions.

So mind-mapping is an industry, albeit a bit of a niche one. And the products/services keep on coming. October saw announcements from two well-known players, MindJet and MindGenius, which suggested that the mind-mapping world has yet to run out of puff.

MindJet has blended communications and mind-mapping into a single web-based collaboration service with Catalyst. Its premise is that most so-called collaboration tools are actually communication tools, completely lacking an application at their heart with which participants can engage. It feels, with some justification, that a mind-mapping application is exactly the right thing for this. It's useful, easy to understand and the nodes can activate files inside their own applications.

The counter to this might be that a generalised voice-video-IM-screen-sharing communication service allows you to run whatever applications you like at the desktop. Either a scribe can do updates or, more clunkily, control can be passed between participants.

The second announcement of the month fits the latter category. It is a desktop application. MindGenius claims that, with an addressable market of 400 to 500 million English-speaking users, it can focus uncompromisingly on improving the mind-mapping experience for this particular market. And it does a good job. Information entry is slick, navigation can be through the graphical image or through a separate 'outliner' pane (called Map Explorer) and any notes attached to the selected entry are visible in another pane. It offers smooth two-way integration with Office applications such as Word, Excel and Project.

Mind-mapping started out as a very personal thing. The aim was to enable you to take notes effectively, learn quickly and plan easily. When personal computers came along, outliners grabbed our attention first, then the more graphical mind-mappers came along. As screens got bigger and resolution improved, so the visual mappers came into their own. But most people were either ignorant of the technique or they saw nothing wrong with sticking with paper and coloured pens.

Once the vendors twigged that mind maps could be used for project work and for effective communication, the brakes came off; MindJet, MindGenius and others now offer some good tools for facilitating projects from inception to completion. They also offer varying degrees of data exchange with other applications.

The thing to watch out for is how many brain cycles are consumed with actually operating the application as opposed to getting something done with it. Ideally, you want the program to more or less fade into the background while information is quickly transferred to the screen, moved around, navigated and absorbed.

Bearing this in mind, of the two applications mentioned, I must confess to a slight leaning towards MindGenius.

Am I qualified to comment? Well, I started using mind-maps in the mid-70s and wrote a mind-mapping program in 1981 which, incidentally, is still being published today from somewhere deep in Colorado. I've been using my own program habitually for 28 years and others as and when they find their way into my computer. If you'd prefer to follow a couple of subject experts, then I'd recommend Chuck Frey and Vic Gee.

Collaboration and Control

Once upon a time, the boundaries of IT management were fairly straightforward. All your customers were inside the company and exchanging digital information with the outside world was highly controlled, if it happened at all. Not only that, but you sat down and figured out the business needs and then bought or developed the appropriate software which you then ran in-house. The users were obliged to take what they were given. Not quite easy peasy, but close.

Nowadays, users have their own views. They want to collaborate electronically with each other and with the outside worlds of business partners, suppliers and customers. They want to hold webinars, share screens, instant message each other, maybe even work on wikis together and comment on each others' blogs. You have to decide whether to allow these things to happen formally or informally. If formal, at least you have some control over what holes you allow in the firewall. If informal, you've probably given them web access and told them to behave themselves. Although the social media brigade will say, "Trust everyone," only you will know if that's going to work in your organisation.

If you do try to restrict what users can do, you'll be surprised at how inventively they'll sidestep your controls. Research suggests that if they can, they will. You are driven by the need to keep the enterprise system secure. They are driven, usually, by achieving results in the most effective way. These two drivers are not usually compatible.

Knowing that 'collaboration without travel' is at the heart of their needs, you start looking around at what's available. Broadly speaking, the bottom line is a choice between an externally hosted service and one you look after yourself. The externally hosted approach is a bit nerve-wracking because all your company's digital collaborations will be stored on someone else's servers. What if something goes wrong? The service provider could fold or you could simply fall out with it. Can you get all your records back? Will they be in a usable form? This is the stuff nightmares are made of. Some very major vendors are beginning to offer such hosted services. Perhaps you'd feel more comfortable entrusting your data to an IBM, a Citrix Online or a Microsoft, for example.

But the alternative, hosting it all yourself, brings its own problems. Scaling is one, but that's probably fairly easy to address. What about your own users, who are now merrily collaborating with each other, being able to collaborate with external partners of various kinds? Your lock-down could end up as a lock-out. And, in these days of close collaboration between organisations, this could be greatly to your detriment.

If partners, suppliers or customers are running different collaboration systems to you (as many will), be wary of the glib salesperson who assures you that interoperability is a piece of cake. Ask to talk to real users with similar needs to your own. Find out if your licence terms allow you to extend membership of your collaboration systems beyond the firewall. Ask a few of your business partners if they would be happy to work in this way. After all, they may be just as nervous about engaging beyond their own firewall.

It's so easy to find private systems that satisfy internal collaboration and security needs. The danger lies in forgetting that, over time, the constituency you serve is increasingly likely to involve ever larger numbers of outsiders.

New Lotus?

If you were in the market for collaboration software, what would your reaction be if a major software publisher offered you an all-singing, all-dancing suite of battle-hardened collaboration tools?

What if that publisher were IBM? What if it were Lotus? What if it were Microsoft? Bear in mind we're talking about the same set of tools, same quality in each case.

You have your preferences, right? And they have nothing to do with what's on offer. It's about perception of the brand. And that, as has been discussed exhaustively and exhaustingly, is an issue for a brand called Lotus. It can dance, sing, strip, do cartwheels and swing from a trapeze, but nothing it does will impress those who don't want to be impressed.

So why on earth does IBM persist in protecting the brand? Part of the answer lies in its existing base; let's not rock the boat for the 100 million-plus users. Part of it lies in a touching faith that the reality of the technical specs will trump the perceptions of the marketplace. As my colleague, Dale Vile, pointed out recently, the evidence suggests this is not the case. The respondents to the survey were readers of The Register, not best known for their love of Lotus, but this is the point – they are exactly the outsiders that IBM/Lotus needs to influence if it is ever to expand its market.

Let's forget any ideas of switching back-end servers and applications. If a company has Exchange and Outlook, or Thunderbird, or The Bat! (okay, I threw that in for good measure), then it's unlikely to change and, if Lotus ever thinks it will, its head needs examining. But some of the new Lotus offerings don't require you to switch anything. At best, the software will run on your existing equipment and operating systems; at worst, it needs a dedicated server – an appliance, in effect – and you don't need to fret too much about what's in it. Some of the offerings are provided as a service, so you don't even need to worry about installing, managing and updating the back-end, although you will still need to look after the clients.

You'd have thought that IBM/Lotus would be crowing about these things that don't depend on, let's say, a Domino server. You'd have thought it would be making the point right up front that the product is freestanding and can be popped onto a Windows or Linux server. But, no, it takes a while for non-Lotus folk to figure out just what can stand alone and what depends on a bit of Lotus-specific back-end stuff.

It's not like buying insurance, a camera or a car on the web. Some of those sites get you to 'radio-button' or 'check-box' your desires and a shortlist appears, each entry showing its primary attributes. It's a matter of minutes to drill down to the product that best suits you. The Lotus site makes you drill and drill and drill. To give it credit, at the lowest level, all the information is eventually given, but finding it requires some diligence.

You'll find no mentions of platforms on the Lotus product page unless you count the 'Collaborate in the Cloud' link to LotusLive. Drilling required. Click on 'Collaboration Software Products'. From there you can search by product category, product name or keyword. Since you're unlikely to know the names, category seems best. Or you could use keywords. 'Microsoft' pulls up just two hits, Quickr and Quickr Content Integrator (team sharing tools). Following the latter reveals that it offers both migration from and coexistence with SharePoint and Exchange public folders. Hurrah! But this is hardly platform independence. More digging needed. And then, deep in the bowels of the documentation, is a list of platforms – including Windows.

When it comes to the Lotus Connections social software tools, once again no clues are given to their platform requirements. It takes a further seven clicks to reach the information you need. Lo and behold, they can run on two flavours of Linux and three flavours of Windows Server plus, of course, IBM's AIX.

At least with the Symphony page, it takes only one click to find out the platforms. But why is Lotus so coy about some of its products being multi-platform? A mystery, to be sure. Perhaps someone in Lotus would care to comment?

To summarise, we have a company here that wants to expand its user base into the wider world but that a) is shy about telling us the information we need, b) makes that information horribly complicated to discover, and c) hides the good stuff behind a brand that carries a lot of well-deserved baggage.

Given the company's irrational attachment to the Lotus name, here are some suggestions: 1) Improve the website, at least for the multi-platform and cloud stuff; 2) Make sure that the multi-platform credentials are at least hinted at on the home page (sure, the software won't work with certain versions of operating systems, browsers, databases and so on, but this shouldn't affect the broad messaging – all it needs is a direct link to the detail); and 3) Think about changing the name.

In view of the foregoing, may I take a leaf out of the politicians' book and suggest 'New Lotus'?

No, I thought not.