Under the covers of a hand-crafted interactive infographic

In my last blog, I said I’d share the experience of creating an interactive infographic which needed only CSS/HTML and some .gif image files. I used mainly low-cost or free PC software.

Two bits took up a lot of time. One was figuring out how to deal with nested (and curved) shapes. The other was capturing all the coordinates for the hotspots. As I mentioned to a pal this morning, “It was mind-bending stuff. A bit like cleaning the windows of a skyscraper with a sponge and a bucket while learning to abseil.” Not one of my most eloquent analogies, perhaps, but it was off the cuff.

The starting point was an image of the basic architecture for handling the media plus a header line. The image is quite old, but I’m pretty sure I created it using the GIMP. I typed the elements of the header line into Word, then took a screen shot, pasted it into IrfanView, cut out the bit I wanted and dropped it into the GIMP. This was the base, or background, image for all that followed.


I thought about tarting up the architecture illustration – making the lines three-dimensional, for example – but it was beyond my competence. Having said that, the interactive infographic is intended to mirror my business card and the ‘greybeards’ slide deck. (That’s my post-hoc justification, anyway.)

So, to the task at hand:

I used CSS to recognise and display the three layers: background image, hotspots and replacement image. I’d done it before, but in different ways. (I Googled ‘CSS HTML hover hotspot map’ or similar and found the interesting work of Stu Nicholls.) I liked his idea of redefining the <i> element in CSS to display each replacement image.

If you want to see the actual code, just ‘view source’ of the web page. The CSS and the HTML are all in there together. (You’ll notice some ‘z-index’ references: these determine the stack level of each element: cursor, hotspot, replacement image and background image. The higher the number, the nearer the user.)
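For the curious, here is a minimal sketch of the kind of markup and CSS the technique involves. The file names, class names, coordinates and z-index values are all invented for illustration, and Stu Nicholls’ original code differs in detail:

```html
<style>
  #infographic { position: relative; }
  /* background image sits at the bottom of the stack */
  #infographic .base { position: absolute; left: 0; top: 0; z-index: 1; }
  /* each hotspot is an absolutely positioned anchor over the background */
  #infographic a.hot { position: absolute; display: block; z-index: 3; }
  /* the replacement image hides inside a redefined <i> element... */
  #infographic a.hot i { display: none; }
  /* ...and appears, nearest the user, when the hotspot is hovered */
  #infographic a.hot:hover i { display: block; position: absolute; z-index: 4; }
</style>

<div id="infographic">
  <img class="base" src="background.gif" alt="architecture">
  <!-- one hotspot per element of the diagram; coordinates are illustrative -->
  <a class="hot" href="#" style="left: 120px; top: 80px; width: 90px; height: 60px;">
    <i style="left: -120px; top: -80px;"><img src="replace1.gif" alt=""></i>
  </a>
</div>
```

The negative offsets on the `<i>` pull the replacement image back to the top-left of the container, so each replacement .gif can be drawn to overlay the whole background.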

I used the GIMP to create separate .gif files for the background and each of the replacement images. I felt it was cheating (compared with other image-switching work I’d done before, where I kept all the elements in a single file and shuffled them around, such as the grid here and the photo here). Doing it with separate files was fairly quick and straightforward and the files are tiny – less than 50k each.

UPDATE 27 JULY: They’re less than a fifth of that size now. Only the changes to the background image are stored in transparent .gif files.

I created the text boxes and drop shadows in PowerPoint for speed. I scaled the presentation pages to match the iPad-sized ‘architecture’ graphic, then took screen shots of eight boxes at a time and pasted them into IrfanView.

From there I cut and pasted each box in turn into the appropriate GIMP image. (Each lives in its own layer and they can be selectively merged, so they all start off transparent, apart from the background which shows through the whole lot.) Because the cut images were rectangular and the text boxes had curved corners, I had to use the GIMP’s tools to effect a repair when the underlying graphics were obliterated.

To create each .gif, I saved the visible image (the highlighted area and text box on a transparent background with the base, or background, image showing through).


The HTML simply bolted together the hotspots for each image. I used the TextPad text editor for the HTML and the CSS (which I embedded in the HTML header) so it’s all just one file.

The hotspots could only be defined as rectangles, which was okay for the triangles side but a nightmare for the circles side. (I admit I did try to figure out whether I could nest HTML’s ‘circle’, ‘rect’ and ‘poly’ image-map areas to achieve the desired effects – a long story, but I gave up.) The hotspots also needed to be ‘finger aware’ for iPad users, so I made some of them bigger than the underlying image element.

I worked out the hotspot coordinates by cursoring over the GIMP image and taking the top/left height/width readings. To keep myself on track, I temporarily gave the hotspots a light grey background colour and used TextPad’s ‘View/In Web Browser’ option to check progress.
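One way (not necessarily the one used here) to cover a circular element with rectangular hotspots is to stack a few rectangles over it, each carrying the same replacement image, so wherever the cursor or finger lands the result looks identical. A rough sketch, with invented coordinates and file names:

```html
<!-- three rectangles roughly covering one circle; each shows the same
     replacement image on hover, so the joins are invisible to the user -->
<a class="hot" href="#" style="left: 130px; top: 200px; width: 60px; height: 20px;">
  <i><img src="circle1.gif" alt=""></i>
</a>
<a class="hot" href="#" style="left: 110px; top: 220px; width: 100px; height: 60px;">
  <i><img src="circle1.gif" alt=""></i>
</a>
<a class="hot" href="#" style="left: 130px; top: 280px; width: 60px; height: 20px;">
  <i><img src="circle1.gif" alt=""></i>
</a>
```

The temporary grey background trick mentioned above is handy here too: colour each rectangle and you can see at a glance how well the stack approximates the circle.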


Elegant it is not. But it works well enough.

And that, my friends, is it. No JavaScript, no Flash, no dependencies on anything except an HTML file (with embedded CSS) and a deck of .gif images.

Interactive Infographic: How To Handle The Media

Since 1988, Martin Banks and I have been running media skills training courses. Early on, we introduced an ‘architecture’ for the process. We drew it on flipcharts for a few years then, in 2004, we formalised it and started giving it out as part of our wallet-sized plastic business card. The model acts as an ‘aide-mémoire’ for all who’ve attended our training.


A few weeks ago, I was rummaging through (as you do) some infographics – pictures that speak well over a thousand words – and took a shine to the interactive variety, where the graphic responds to the user’s actions.

I’d just been doing some training work with the Racepoint Group and, coincidentally, one of its US staff, Kyle Austin, wrote a blog post: Are Infographics the New Slide Shows? Good point, I thought, having just taken someone through our ‘architecture’.

So I set to work to convert our flat image into something a little more lively. Its aim is to refresh the memories of those who’ve attended our training and to give others an appreciation of how they might set about handling the media.

The first attempt was an animated .gif file with text boxes to expand on each element of the image. Horrible. Boring. Sequential. No user interaction. Didn’t lend itself to the web. Etc.

I wanted an interactive infographic that would work in more or less any browser and not depend on the presence of JavaScript, Flash or any other kind of plug-in. Just HTML and CSS. (I’d done some simple stuff before, here and here, so I was optimistic that it could be done.)

The second attempt was a graphic that the user could mouse over, highlighting image elements and showing the relevant text in a nearby box. The size was determined by my computer screen, which was a bit stupid because many of the people I’d like to share it with might have a smaller screen – an iPad for example.

So I reworked it with the iPad in mind. The hover can be achieved with a finger, even on the smallest graphical element. And while I was resizing everything, I added drop shadows and rounded corners to the text boxes.

If you’re interested, the end result is at How To Handle The Media.


I hope you enjoy it.


PS If anyone wants the gory technical details of how to do this sort of thing, I’ll pen another post. Just ask.

Creating a book from a blog (unintentionally, for free)

Guy Kewney, who I’ve known for many years, kept a little-known blog from which he let rip on whatever was bugging him at the time. In the past year or so, a lot of his commentary was about the cancer – its symptoms and treatment – that claimed his life on April 8.

On March 1st, he wrote a particularly poignant entry which, in summary, showed that he’d finally given up hope. This gave me the idea of starting a tribute blog to which people could post comments and stories for Guy to enjoy while he still could. Guy read the blog comments until very close to the end.

Yesterday, his wife Mary wrote to me to say, “I could never explain to you what a positive thing it was for Guy. It was truly life changing.” Which is wonderful to hear. Thank you Mary.

After his death, the tributes poured in, many of which appeared online. These were duly listed and linked to in the blog. Eventually, things dried up and it seemed a good time to ‘freeze’ the blog.

I wanted to create a CD of the blog, but getting it out of Typepad in a way that it could be read and navigated easily without an internet connection was difficult, to put it mildly. Then I stumbled across a program called website2PDF from spidersoft in Australia. By providing a list of the pages, it created (as you may have guessed) a .pdf file of the blog.

At first it was 54 pages but, by removing the ‘recent comment’ list and tweaking the layout, it ended up as a 39-page 1MB file. The next step is to print it and bind it. The print quality looks good but the font is pretty small because the blog design doesn’t take advantage of the full width of the paper. I am still wrestling with that problem…

I had paid the publisher for a full licence, to see if I could gain more control over the pdf layout, but that’s yet to arrive. (I thought these things were automatic. And, no, it didn’t go to my spam folder.) I did the whole job with the free trial version which, I think, lasts for 15 days, but I can’t find that information anywhere.

Bottom line? It’s great that website2PDF does a good job of capturing website pages (it doesn’t have to be a blog, by the way) to a pdf. You can choose to have hotlinks, automatic text and picture breaks, ActiveX, scripts and a host of different layouts. It was only $49, so it’s not a bank-breaking exercise and I felt it would have been worth it for this one job alone. But, of course, I do look forward to becoming a registered user because it’s sparked off some more ideas for easy eBook creation.


Update: After failing to extract a response from the author (4 emails) I raised a dispute ticket with PayPal. This prompted an instant response from the author. Apparently, the automated licence system had failed.

Is authority the first word or the last?

David Weinberger was one of the co-authors of the Cluetrain Manifesto which looked at the impact of the internet on markets and organisations. He wrote his first blog post in 1999 but didn't hit his stride until 2001. That's probably years before any of us had even heard of a blog, or social networking, which is where this particular blog post is heading.

He's a genuine guy and a deep thinker and he's written some great books and delivered some brilliant lectures.

Last week he came up with a gem about authority being increasingly about having the first word rather than the last. His exact words were:

"…in the old days, we took expertise and authority as the last word about a topic. Increasingly, the value of expertise and authority is as the first word – the word that frames and initiates the discussion."

This is at the heart of the dilemma that traditional management faces when trying to figure out the role of social networking in the organisation. Assuming the value is understood, it becomes mainly a cultural issue. Some organisations – especially knowledge-based ones – are predisposed to sharing, openness and self-direction, while others are more into command and control. In the first instance, management might define the desired outcome and leave employees to decide how to get there; Weinberger's "first word". And in the second, the process will be rigidly defined.

The management of each type of organisation adopts its style because it believes that is the most effective way to get things done. Although, it has to be said, persistence with a command and control approach could also be a function of fear (of the lunatics running the asylum) or laziness (because it's always been done this way).

Some organisations realise that some departments are well suited to a more collaborative, networked, style while others are best run in a traditional way. Marketing versus computer assembly, for example.

The problem such companies face is social networking creep. It's as if one lot of employees have freedom to work where they like and when they like providing they achieve their objectives and the other lot are chained to their desks or their workshops and have little discretion over their use of time and (in particular) online resources. The deskbound ones are the ones who will be eyeing the socially-networked crowd with some jealousy.

So what's a company to do? Widen access to online resources and social networking, even to those who don't really need it, and trust staff not to abuse it? Some companies who have done this believe that a lot of time and IT/communications resource is being wasted doing non-work-related things. Downloading and watching questionable movies probably being the most extreme example. (In philosophical terms, it's actually not so different to previous upheavals when companies abandoned switchboards for direct dial telephones or introduced email to all and sundry.)

One company solves the problem by allowing anyone to participate but it's thrown a firewall around all social activity – restricting it to dialogue between the company employees only, although it is about to invite some customers and business partners to participate. Another says, "do what you like, but achieve your work objectives." Another is trying to monitor all online activities in order to reprimand those who step out of line.

What's important is to figure out what strategy is best for your own organisation:

Do you keep communication behind the firewall? This would restrict access to valuable information and external contacts.

Do you let people do what they like as long as they achieve targets? This might work best if they pay their own communication costs.

Or do you monitor every move? Employees may find this distasteful, if not illegal in some countries.

In the end it comes down to trust. The social media evangelists will argue that it's only through openness, transparency and trust that organisations will move forward. Others will argue that this is nonsense and that some people simply cannot be trusted with these new tools.

The organisations that will have the toughest time are those that have embarked on the social thing, spread it too far and now wish they could stuff the genie back in the bottle. Perhaps an answer lies in ground rules and education…

To paraphrase Dr Weinberger, this blog is only the first word. If you have experience-based views on the foregoing, we'd love to hear them.

The business value of collaboration software

Originally published in CIO Online Feb 2009

At Lotus/IBM's recent Lotusphere the words 'business value' were repeatedly uttered by keynote presenters, but none really had time to expand beyond talking in terms of 'efficiency'. CIO Online went in search of answers that could help our readers in their assessment of collaboration initiatives.

The first and most obvious thing relates to culture. Not every organisation actually welcomes collaboration. It really does result in a flattening of hierarchies, the breaching of the silo walls and the by-passing of those who add no value. If you've been in the game long enough to remember the advent of email, you'll remember the fears of those middle managers who suddenly found themselves 'disintermediated'. So nothing really new, except that the kind of collaboration that is becoming increasingly popular has the potential to tie anyone directly to more or less anyone, regardless of internal or external boundaries.

Fortunately for the CIO, some systems are more manageable than others and, with the organisation's blessing, an approved collaboration system provides a measure of control without inhibiting the participants' legitimate actions. The alternative, to let anyone use whatever public systems take their fancy, is a recipe for inefficiency at best and trouble at worst. One thing that won't work is an outright ban. People who need to reach out will use this stuff anyway.

Many business folk fail to see the commercial benefits. Especially if they are used to seeing their kids using social software to superpoke their friends or share party pix. They might associate such software with frivolity and are afraid that their staff will use it just to waste time. In fact, within a business environment where full names and profiles are used, all posts are technically traceable and such abuse is fairly unlikely.

As the name implies, the whole point of social software is to help people find each other quickly and in a fashion most suited to the task at hand, while respecting the availability wishes of the participants. It's a wheel-oiling process on a grand scale. But what are the bottom line benefits?

Enter stage left, Luis Suarez, who's part of a team which provides guidance to a 600-strong volunteer social software evangelist community within IBM. This work is additional to their day jobs. While it's easy to explain and enthuse about the elements – profiles, communities, wikis, blogs, bookmarks, activities, instant messaging, and so on – it's quite another to remember to build the business case.

IBM reckons it saves £12.9M in improved search productivity and reduced travel per year. And this figure is probably growing. To get this in context, here are some usage statistics for IBM's social software activities from October last year: 515,000 profiles are accessed 6.4 million times a week; 1,800 online communities contain over a million messages and have 147,000 members; over 25,000 wikis are used by over 320,000 readers; 260,000 blog posts have been made and these have over 30,000 tags; 580,000 bookmarks have been stored by 20,000 users – these have over 1.4M tags; 50,000 activities (think of them as projects) have 425,000 entries and 80,000 users; and over four million instant messages are exchanged daily. That represents one heck of a lot of shared knowledge. IBM is something of a special case. Its sheer size pretty much guarantees that whatever resources staffers want, people or information, they'll be able to find it when they want it. But a company doesn't have to be that big to get similar advantages.

Suarez sat down with me to hammer through some of the main value-related benefits. Here are just six of them:

1) Find: people, places, information – quickly by using profiles, and other people's tags and bookmarks as accelerants.

2) Validate: people especially. What have they posted? What do others make of them? You could arrive at a shortlist for a project team much more quickly and at greatly reduced cost than before.

3) Direct dialogue: with customers (and suppliers), internal and external. This eliminates filtering and politics and leads to more rapid understanding. It could mean fixing things that have gone wrong or identifying new product and service opportunities.

4) Capture information: from people as they're working or reviewing online material. This could prove especially valuable if faced with staff churn or retirements.

5) Connections: spread internal innovation widely and rapidly – bad ideas don't get traction but good ones do.

6) Communities: increase staff morale and retention through a sense of belonging and recognition.

Every one of those has a business value. It may not be easy to calculate, and the effort may not be worth it. But it does require the organisation to have an open, collaborative and trusting culture. Without that, it can never work. But with it, social software can transform the way we collaborate and share information.

If ‘semantic web’ annoys you, read on…

Say "semantic web" to a lot of people and the shutters on their brains come down. They may have lived through the disappointments of the AI or expert systems eras. Or they may simply know how impossibly tedious it would be to retrofit their web pages with semantic data.

Say "linked data" to them and they might ask "what's that?" with a reasonably open mind. At some point during the explanation, it will dawn on them that the terms are identical to those used in the semantic web. By then, of course, it's too late, they're hooked.

The basic idea is that web pages, html or otherwise, contain some information that links them to other web pages in a meaningful way. Nothing particularly new in that, you might say. But the meaningful bit in this context is not what the human reads – a bit of clickable text that takes you to another web page – but what a computer application can read and make sense of.

An example might be understood as: 'The prime minister is Gordon Brown'. This might be expressed as prime minister:Gordon Brown. And these elements, in turn, might point to well-defined explanations of the two concepts elsewhere on the web. In dbpedia.org/page/ the links would be Prime_minister and Gordon_Brown, respectively. Other authentic sources include Freebase, the Guardian or the New York Times. The application might drill into these pages, plucking out useful information and following other links, which would have been defined in a similar fashion.

Of course, because this page has been published, it becomes a potential resource for others to link to. It rather depends what the page was about. The Gordon Brown entry, in this case, was just one element. It might have been 'The British Cabinet in March 2010', for example. And others might have found that information useful.

(If you want to experiment a bit, go to <sameAs> where you can whack in terms and read their definitions in plain text.)

Many public and not-so-public bodies have been making their resource or link information openly available. Friend of a Friend (or FOAF) provides a means of defining yourself. The Library of Congress has published its Subject Headings – a list of standard names which everyone may as well use to ensure consistency. But it's not essential: you (or someone else) can always declare equivalence using a SameAs or exactMatch type of relationship, e.g. 'Brown, Gordon' can be equated to 'Gordon Brown'.

As you rummage, you'll come across terms such as RDF, URI, graphs, triples and so on. These exist to clarify rather than confuse. The resource description framework (RDF) defines how information should be expressed. Fundamentally, each item is a triple comprising subject; predicate (or property); object, as in Gordon Brown; is a; politician. A uniform resource identifier (URI) might define each of those elements. And the collection of triples is referred to as an RDF graph. Of course, you'll get exceptions, and finer nuances, but that's the basic idea.
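A triple like 'Gordon Brown; is a; politician' can even be embedded in an ordinary web page using RDFa attributes, which is one route to the 'linked data' described above. This is only a sketch: the dbpedia and FOAF URIs below are plausible but illustrative, and the real vocabularies should be checked before use:

```html
<!-- subject:   the resource named in 'about'
     predicate: the 'property' and 'typeof' attributes
     object:    the text content and the type URI, respectively -->
<p about="http://dbpedia.org/resource/Gordon_Brown"
   typeof="http://dbpedia.org/ontology/Politician">
  <span property="http://xmlns.com/foaf/0.1/name">Gordon Brown</span>
  is a politician.
</p>
```

A human reader sees an ordinary sentence; an RDFa-aware application extracts the machine-readable triples from the same markup.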

The point of all this is that, as with the rest of the web, it must be allowed to flourish in a decentralised and scalable way, which means without central control, although open standards are very important and make life easier for all participants.

With this general introduction, it's possible to see how data sets can be joined together without the explicit permission or participation of the providers. You could find a URI and, from that, find all the other datasets that reference it, if you wanted to. Because of the common interest, you (or your application, more like) would be able to collect further information about the subject.

Talis is a UK company that's deep into this stuff. It's been going for around 40 years and was originally a library services provider. It has spread its wings somewhat and now divides its attention between education, library and platform services. The platform element is the part that's deeply into linked data. It recently set up a demonstration for the Department for Business, Innovation and Skills (BIS) to show some of the potential of this stuff. It takes RDF information from three sources – the Technology Strategy Board (TSB), Research Councils UK (RCUK) and the Intellectual Property Office (IPO) – and produces a heat map of activity in mainland Britain. You can see how much investment is going in, how many patents are being applied for and so on. You can zoom in to ever finer-grained detail and use a slider to see how the map changes over time. You can play with the Research Funding Explorer yourself or follow the links in this piece by Richard Wallis to see a movie.

For you, the question in your mind must be, "All very well, but what's in it for me?" For a start, you can get hold of a lot of data which might be useful in your business – information about customers, sources of supply or geographic locations, for example. So, you may find value purely as a consumer. However, you may be able to give value by sharing data sets or taxonomies that your company has developed. This might sound like madness, but we've already seen in the social web that people who give stuff away become magnets for inbound links and reputational gains. In this case, you could become the authoritative source for certain definitions and types of information. It all depends what sort of organisation you are and how you want to be seen by others.

Are multi-touch surfaces heading your way?

In the days of black screens and green type, the arrival of colour was somewhat puzzling. If computers had got us so far without colour, who'd want it? Everyone, it seems.

Then came windows, icons, mice and pointers. Again, we were all happy with what we had. Why rewrite everything for some gimmicky whizzbang interface? As soon as you used an Apple Mac, you knew the answer. Ordinary people were suddenly able to do extraordinary things. But it wasn't until 11 years later, when Microsoft finally got its act together with Windows 95, that this interface started to become more or less ubiquitous.

And there we've stalled for 26 or 15 years, depending whether you're a Mac or a PC fan. It works. Who wants more? Well, since the time the Macintosh came out, inventors have toiled in labs to bring us a more natural, direct, interface based on fingers, hands and, in the case of horizontal displays, objects placed on the screen. In recent years pioneering companies like Perceptive Pixel, Apple and Microsoft have been selling multi-touch surface devices.

In the abstract, it all sounds jolly fine (apart from the potential for the unselfish sharing of germs). You can access, open, expand, move, rotate and contract information artefacts right there on the screen. They could be images or documents inside the computer. Some of the systems can even interact with other things lying on the screen's surface. The external artefacts might be coded underneath so the system knows what to do with them or they could be simple things like business cards or other documents, which can be scanned. In one case, a library in Delft would whizz up pictorial information about your post code as it read your library card (video here). The Microsoft Surface can recognise and communicate with a suitably enabled mobile phone. It can show the contents of your mobile phone in a notebook. Just slide items to and from the on-screen notebook, in order to update the phone contents.

You could throw a keyboard up or, indeed, a facsimile of any kind of device but the main potential at the moment seems to be exploration, manipulation and mark-up. Fingers are better at some things but certainly not everything. However, if your organisation needs to surface information to any audience, regardless of their computer skills or application knowledge, then this might be a better way to do it than the usual single touch, keyboard or mouse controls.

The Hard Rock Café in Las Vegas has a number of Microsoft Surface tables through which visitors can browse a growing part of the company's collection of rock memorabilia. The National Library of Ireland uses the same product to show rare books and manuscripts which would otherwise be kept from public view due to their fragility or value. The US military uses Perceptive Pixel's huge displays for God-knows-what but you can bet that some of it involves 3-D terrain, flying things and weapons. Then Apple, of course, has made the iPhone exceedingly sexy with its own gestural controls.

While the technology and the functions are intriguing and seductive, the question is whether they give sufficient advantage over what's being used today. They cannot replace the present range of control devices except in special application-specific situations. Just as mice and pointers didn't replace keyboards, nor will multi-touch replace current devices. They may complement them though, especially as they become part of the repertoire of the everyday laptop or PC.

Whenever new technologies come along, it's quite often the user department that takes them on board, side-stepping IT if possible. We saw it with PCs and spreadsheets. We saw it again with desktop publishing. And again with mobile phones and PDAs. But, eventually, either the users or the organisation realise that the greater benefit comes from integration. IT represents the great archive in the sky to which and from which intellectual artefacts can be stored and retrieved. And, once IT is involved, more things become possible; using the mobile phone as a terminal, access to and re-use of materials produced elsewhere in the company and, in the case of multi-touch, delivering the contents of information stores to the devices. Museums and libraries are, perhaps, obvious examples but some users would value a natural way to get at and drill into, say, statistical information by geography or find and explore whatever today's equivalent of a blueprint is.

Right now, you might see these multi-touch surface devices as a bit of a curiosity but, just as the mouse (first publicly demonstrated in 1968) moved into the mainstream eventually, so these things may become important to you and your organisation.

If you're interested, a great place to mug up on the background is Bill Buxton's Multi-Touch overview.