Track what’s important with paper.li

To my (slight) shame, I periodically purge the list of people I follow on Twitter. I simply don't have the time to read the minutiae of some of their lives, despite the fact they occasionally come up with worthwhile gems.

Now, I've discovered a way of getting the best of both worlds: I can keep up with the more interesting/useful Tweets from whoever I like while keeping down the number of people I follow in Twitter itself.

So, three cheers for paper.li – a free service that builds online daily newspapers: from a Twitter user and the people they follow; from a Twitter list; or from a hashtag. It looks for Tweets that contain links and publishes an extract from each destination, crediting the Tweeter at the foot of the piece. You can click the headline to go to the original article or site.

Paper.li earns its money from small display ads dropped into your newspaper.

[Image: the front page of my 'envirolist' paper.li newspaper]

'envirolist' is my Twitter list of people who specialise in environmental and ethical stuff.

You probably can't see the detail in the above picture, but it has a 'trending topics' cloud and a live Tweet stream from the people in the list over on the right.

I currently have two papers running and I can create a further eight. My two are paper.li/tebbo and the one above, paper.li/tebbo/envirolist.

As a quick way to catch up on what's going on, paper.li is a corker. It requires minimal effort to set up a paper and, if you want, it will even announce each new edition to your Twitter followers, complete with some contributors' names.

[Image: the Tweet announcing a new edition of the envirolist paper]

[Update Sept 7: I switched the notification off two days ago. While no-one had complained to me, paper.li updates were beginning to annoy some Twitter users. This could only get worse as the service became more popular. Today, the company has changed the notification to top story only and it has dropped name plugs. It helps, but if anyone wants to follow my papers, the links are in my Twitter bio. I'm not switching notification back on.]

With paper.li's simplicity comes a lack of flexibility but, once you start complicating things, you have to start learning stuff. This turns (some) people off.

As it stands, paper.li reminds me of my first encounter with Google – it was a shock to just see a text box and a search button. And look what that led to…

One Tweet leads to an improved infographic

Although I was delighted with the infographic I ended up with in the last post, I knew it could be improved. If only I could figure out how to use just a single .gif, sliding the overlay elements into view as the finger or mouse hovered over them.

If you've not read the post, I used over 20 separate transparent .gif files, laying each one over the top of the background image in response to the mouse/finger hover position.

My CSS skills (or lack of them) meant I was spending days wrestling with the single .gif problem. And the work I'd already done was doing a good job anyway. The total size of all the files involved was about 180k and I figured I'd be unlikely to save more than a third of that. But, the elegance of a two-file solution (HTML/CSS in one, all the images in the other) appealed to the erstwhile programmer in me.

Just as I was about to give up, I thought I'd throw out a plea for a CSS wizard and, blow me down, one Ben Summers responded. I'd written a piece about his Information Systems company, OneIS, in 2008.

I explained pretty much what I said in the previous post and within the hour, he sent me an email outlining how he would tackle the problem. He broke the log jam. I'd already prepared the new graphic in anticipation, slightly jazzing up the images as I went along, so all that was left was to replace bits of my code with bits of Ben's and insert the coordinates of the various bits of the new image.

This is a shrunken version of the new image with a ghastly yellow to show up the transparent areas:

[Image: a shrunken version of the new .gif – transparent areas shown in yellow]

The layout looks a bit peculiar because each overlay element needed to occupy its own rectangular area.

Ben's CSS used the z-index property to make sure that the hover layer was nearest the user, the overlay layer came next and the background layer (taken from the top part of the .gif) sat at the bottom. In my fumbled attempts to achieve a result, I'd got the hover and overlay layers the wrong way round, which meant that the hotspots were often hidden by the overlay element. Ben's code did it right, of course.
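
In outline, the stacking boils down to something like this (a sketch of the idea with simplified selectors, not Ben's actual rules):

/* the higher the z-index, the nearer the user */
#diagram { position: relative; }                           /* background layer at the bottom */
#diagram a span.img { position: absolute; z-index: 50; }   /* overlay layer in the middle */
#diagram a span.p1,
#diagram a span.p2 { position: absolute; z-index: 100; }   /* hover (hotspot) layer on top */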

The hover layer was the bit that was giving me the most grief because I wanted to associate multiple hover areas with one overlay element. Here's the blended Ben/David answer for the transition element (the loopy thing about halfway down the image on the right):

/* the two hotspot rectangles for the transition element */
#tr  .p1  {left:168px; top:301px; width:74px; height:70px; }
#tr  .p2  {left:243px; top:330px; width:46px; height:35px; }

/* on hover, show the overlay element */
#diagram #tr a:hover span.img
{
  left: 172px; top: 307px;               /* where the overlay sits on the page */
  width: 407px; height: 124px;           /* its size */
  background-position: -553px -1222px;   /* where it lives within the single .gif */
}

The first two rules define the hotspots and the third determines the hover action, which is to display the overlay element described between the curly braces. There, the first line sets where the overlay should be placed, the second its size, and the third where the overlay can be found within the single .gif image.

The HTML part for this same element looks something like this (the snippet below is reconstructed from the CSS above, so treat it as approximate – 'view source' on the live page, linked below, has the real thing):

<span id="tr"><a href="#nogo"><span class="p1"></span><span class="p2"></span><span class="img"></span></a></span>

#tr stands for the transition element. The 'a href=' anchor goes nowhere on click (but it could take you off somewhere else). The first two spans deal with the hotspots while the third slides the overlay in between the hotspot layer and the background.

If you want to explore the CSS/HTML and .gif in detail, they're at http://www.tebbo.com/howtohandlethemedia/index.html and http://www.tebbo.com/howtohandlethemedia/newarch.gif

Of course, it didn't end there. I offered to give Ben a hand with something he's doing. And, who knows, that may end up the subject of another blog.

Cheers Ben. Cheers Twitter. And good luck to you if you plan to go down the interactive infographics route. I've quite got the taste for it and I'm already planning my next one.

Under the covers of a hand-crafted interactive infographic

In my last blog, I said I’d share the experience of creating an interactive infographic which needed only CSS/HTML and some .gif image files. I used mainly low cost or free PC software.

Two bits took up a lot of time. One was figuring out how to deal with nested (and curved) shapes. The other was capturing all the coordinates for the hotspots. As I mentioned to a pal this morning, “It was mind-bending stuff. A bit like cleaning the windows of a skyscraper with a sponge and a bucket while learning to abseil.” Not one of my most eloquent analogies, perhaps, but it was off the cuff.

The start point was an image of the basic architecture for handling the media plus a header line. The image is quite old, but I’m pretty sure I created it using the GIMP. I typed the elements of the header line into Word, then took a screen shot, pasted it into IrfanView, cut out the bit I wanted and dropped it into the GIMP. This was the base, or background, image for all that followed.

[Image: the base (background) image]

I thought about tarting up the architecture illustration – making the lines three-dimensional, for example – but it was beyond my competence. Having said that, the interactive infographic is intended to mirror my business card and the ‘greybeards’ slide deck. (That’s my post-hoc justification, anyway.)

So, to the task at hand:

I used CSS to define and display the three layers: background image, hotspots and replacement images. I’d done this before, but in different ways. (I Googled ‘CSS HTML hover hotspot map’ or similar and found the interesting work of Stu Nicholls.) I liked his idea of redefining the <i> element in CSS to display each replacement image.

If you want to see the actual code, just ‘view source’ of the web page. The CSS and the HTML are all in there together. (You’ll notice some ‘z-index’ references: these determine the stack level of each element: cursor, hotspot, replacement image and background image. The higher the number, the nearer the user.)
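
Stripped to its bones, the technique looks something like this – a minimal sketch with invented names, sizes and coordinates, not the page’s actual code:

<style>
  /* bottom layer: the background image */
  #map { position: relative; width: 600px; height: 450px; background: url(bg.gif); }
  /* an invisible rectangular hotspot positioned over one element of the image */
  #map a { position: absolute; left: 100px; top: 80px; width: 90px; height: 60px; z-index: 20; }
  /* the replacement image stays hidden... */
  #map a i { display: none; }
  /* ...until the hotspot is hovered, when it appears full-size over the background */
  #map a:hover i { display: block; position: absolute; left: -100px; top: -80px;
                   width: 600px; height: 450px; background: url(overlay.gif); z-index: 10; }
</style>
<div id="map"><a href="#nogo"><i></i></a></div>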

I used the GIMP to create separate .gif files for the background and each of the replacement images. I felt it was cheating (compared with other image-switching work I’ve done before where I kept all the elements in a single file and shuffled them around, such as the grid here and the photo here.) Doing it with separate files was fairly quick and straightforward and the files are tiny – less than 50k each.

UPDATE 27 JULY: They’re less than a fifth of that size now – only the changes to the background image are stored in transparent .gif files.

I created the text boxes and drop shadows in PowerPoint for speed. I scaled the presentation pages to match the iPad-sized ‘architecture’ graphic, then took screen shots of eight boxes at a time and pasted them into IrfanView.

From there I cut and pasted each box in turn into the appropriate GIMP image. (Each lives in its own layer and they can be selectively merged, so they all start off transparent, apart from the background which shows through the whole lot.) Because the cut images were rectangular and the text boxes had curved corners, I had to use the GIMP’s tools to effect a repair when the underlying graphics were obliterated.

To create each .gif, I saved the visible image (the highlighted area and text box on a transparent background with the base, or background, image showing through).

[Image: one of the overlay .gifs on its transparent background]

The HTML simply bolted together the hotspots for each image. I used the TextPad text editor for the HTML and the CSS (which I embedded in the HTML header) so it’s all just one file.

The hotspots could only be defined as rectangles, which was okay for the triangles side but a nightmare for the circles side. (I admit I did try to figure out whether I could nest the ‘circle’, ‘rect’ and ‘poly’ shapes of HTML’s image maps to achieve the desired effects – a long story, but I gave up.) The hotspots also needed to be ‘finger aware’ for iPad users, so I made some of them bigger than the underlying image element.

I worked out the hotspot coordinates by cursoring over the GIMP image and taking the top/left height/width readings. To keep myself on track, I temporarily gave the hotspots a light grey background colour and used TextPad’s ‘View/In Web Browser’ option to check progress.
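
That temporary rule need be nothing more exotic than something like this (using the invented names from the sketch above):

/* temporary: show every hotspot rectangle while lining up coordinates */
#map a { background-color: #ddd; }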

[Image: the hotspots given a temporary grey background]

Elegant it is not. But it works well enough.

And that, my friends, is it. No JavaScript, no Flash, no dependencies on anything except an HTML file (with embedded CSS) and a deck of .gif images.

Interactive Infographic: How To Handle The Media

Since 1988, Martin Banks and I have been running media skills training courses. Early on, we introduced an ‘architecture’ for the process. We drew it on flipcharts for a few years then, in 2004, we formalised it and started giving it out as part of our wallet-sized plastic business card. The model acts as an ‘aide memoire’ for all who’ve attended our training.

[Image: the wallet-sized plastic business card showing the ‘architecture’]

A few weeks ago, I was rummaging through some infographics (as you do) – pictures that speak well over a thousand words – and took a shine to the interactive variety, where the graphic responds to the user’s actions.

I’d just been doing some training work with the Racepoint Group and, coincidentally, one of its US staff, Kyle Austin, wrote a blog post: Are Infographics the New Slide Shows? Good point, I thought, having just taken someone through our ‘architecture’.

So I set to work to convert our flat image into something a little more lively. Its aim is to refresh the memories of those who’ve attended our training and to give others an appreciation of how they might set about handling the media.

The first attempt was an animated .gif file with text boxes to expand on each element of the image. Horrible. Boring. Sequential. No user interaction. Didn’t lend itself to the web. Etc.

I wanted an interactive infographic that would work in more or less any browser and not depend on the presence of JavaScript, Flash or any other kind of plug-in. Just HTML and CSS. (I’d done some simple stuff before, here and here, so I was optimistic that it could be done.)

The second attempt was a graphic that the user could mouse over, highlighting image elements and showing the relevant text in a nearby box. The size was determined by my computer screen, which was a bit stupid because many of the people I’d like to share it with might have a smaller screen – an iPad for example.

So I reworked it with the iPad in mind. The hover can be achieved with a finger, even on the smallest graphical element. And while I was resizing everything, I added drop shadows and rounded corners to the text boxes.

If you’re interested, the end result is at How To Handle The Media.

[Image: the interactive infographic on an iPad]

I hope you enjoy it.


PS If anyone wants the gory technical details of how to do this sort of thing, I’ll pen another post. Just ask.

Creating a book from a blog (unintentionally, for free)

Guy Kewney, whom I’d known for many years, kept a little-known blog in which he let rip on whatever was bugging him at the time. In the past year or so, a lot of his commentary was about the cancer – its symptoms and treatment – that claimed his life on April 8.

On March 1st, he wrote a particularly poignant entry which, in summary, showed that he’d finally given up hope. This gave me the idea of starting a tribute blog to which people could post comments and stories for Guy to enjoy while he still could. Guy read the blog comments until very close to the end.

Yesterday, his wife Mary wrote to me to say, “I could never explain to you what a positive thing it was for Guy. It was truly life changing.” Which is wonderful to hear. Thank you Mary.

After his death, the tributes poured in, many of which appeared online. These were duly listed and linked to in the blog. Eventually, things dried up and it seemed a good time to ‘freeze’ the blog.

I wanted to create a CD of the blog, but getting it out of Typepad in a form that could be read and navigated easily without an internet connection was difficult, to put it mildly. Then I stumbled across a program called website2PDF from spidersoft in Australia. Once I’d given it a list of the pages, it created (as you may have guessed) a .pdf file of the blog.

At first it was 54 pages but, by removing the ‘recent comment’ list and tweaking the layout, it ended up as a 39-page 1MB file. The next step is to print it and bind it. The print quality looks good but the font is pretty small because the blog design doesn’t take advantage of the full width of the paper. I am still wrestling with that problem…

I had paid the publisher for a full licence, to see if I could gain more control over the pdf layout, but that’s yet to arrive. (I thought these things were automatic. And, no, it didn’t go to my spam folder.) I did the whole job with the free trial version which, I think, lasts for 15 days, but I can’t find that information anywhere.

Bottom line? website2PDF does a good job of capturing website pages (it doesn’t have to be a blog, by the way) to a pdf. You can choose whether to have hotlinks, automatic text and picture breaks, ActiveX, scripts and a host of different layouts. At $49 it’s not a bank-breaking exercise, and I felt it would have been worth it for this one job alone. But, of course, I do look forward to becoming a registered user because it’s sparked off some more ideas for easy eBook creation.

—–

Update: After failing to extract a response from the author (4 emails) I raised a dispute ticket with PayPal. This prompted an instant response from the author. Apparently, the automated licence system had failed.

If ‘semantic web’ annoys you, read on…

Say "semantic web" to a lot of people and the shutters on their brains come down. They may have lived through the disappointments of the AI or expert systems eras. Or they may simply know how impossibly tedious it would be to retrofit their web pages with semantic data.

Say "linked data" to them and they might ask "what's that?" with a reasonably open mind. At some point during the explanation, it will dawn on them that the terms are identical to those used for the semantic web. By then, of course, it's too late: they're hooked.

The basic idea is that web pages, html or otherwise, contain some information that links them to other web pages in a meaningful way. Nothing particularly new in that, you might say. But the meaningful bit in this context is not what the human reads – a bit of clickable text that takes you to another web page – but what a computer application can read and make sense of.

Take an example: 'The prime minister is Gordon Brown'. This might be expressed as prime minister:Gordon Brown, and these elements, in turn, might point to well-defined explanations of the two concepts elsewhere on the web. In dbpedia.org/page/ the links would be Prime_minister and Gordon_Brown, respectively. Other authoritative sources include Freebase, the Guardian and the New York Times. An application might drill into these pages, plucking out useful information and following other links defined in a similar fashion.
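
In markup terms, the statement might be carried like this (an RDFa-flavoured sketch – the property URI is illustrative rather than checked against the DBpedia ontology):

<p about="http://dbpedia.org/resource/United_Kingdom">
  The prime minister is
  <a rel="http://dbpedia.org/ontology/primeMinister"
     href="http://dbpedia.org/resource/Gordon_Brown">Gordon Brown</a>.
</p>

A human reads an ordinary sentence with a link; an application reads a statement connecting two well-defined resources.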

Of course, because this page has been published, it becomes a potential resource for others to link to. It rather depends what the page was about. The Gordon Brown entry, in this case, was just one element. It might have been 'The British Cabinet in March 2010', for example. And others might have found that information useful.

(If you want to experiment a bit, go to <sameAs> where you can whack in terms and read their definitions in plain text.)

Many public and not-so-public bodies have been making their resource or link information openly available. Friend of a Friend (or FOAF) provides a means of describing yourself. The Library of Congress has published its Subject Headings – a list of standard names which everyone may as well use to ensure consistency. But it's not essential: you (or someone else) can always declare equivalence using a sameAs or exactMatch type of relationship, e.g. 'Brown, Gordon' can be equated to 'Gordon Brown'.
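
Declaring such an equivalence is itself just another statement. In Turtle notation (with a made-up URI standing in for the 'Brown, Gordon' record):

<http://example.org/authors/brown-gordon>
    <http://www.w3.org/2002/07/owl#sameAs>
    <http://dbpedia.org/resource/Gordon_Brown> .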

As you rummage, you'll come across terms such as RDF, URI, graphs, triples and so on. These exist to clarify rather than confuse. The resource description framework (RDF) defines how information should be expressed. Fundamentally, each item is a triple comprising a subject, a predicate (or property) and an object, as in: Gordon Brown (subject); is a (predicate); politician (object). A uniform resource identifier (URI) might identify each of those elements. And a collection of triples is referred to as an RDF graph. Of course, you'll find exceptions and finer nuances, but that's the basic idea.
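
Written out in full in Turtle notation, that example triple might read (the class URI is illustrative):

<http://dbpedia.org/resource/Gordon_Brown>
    <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
    <http://dbpedia.org/ontology/Politician> .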

The point of all this is that, as with the rest of the web, it must be allowed to flourish in a decentralised and scalable way, which means without central control, although open standards are very important and make life easier for all participants.

With this general introduction, it's possible to see how data sets can be joined together without the explicit permission or participation of the providers. You could find a URI and, from that, find all the other datasets that reference it, if you wanted to. Because of the common interest, you (or your application, more like) would be able to collect further information about the subject.

Talis is a UK company that's deep into this stuff. It's been going for around 40 years and was originally a library services provider. It has spread its wings somewhat and now divides its attention between education, library and platform services. The platform element is the part that's deeply into linked data. Talis recently set up a demonstration for the Department for Business, Innovation and Skills (BIS) to show some of the potential of this stuff. It takes RDF information from three sources – the Technology Strategy Board (TSB), Research Councils UK (RCUK) and the Intellectual Property Office (IPO) – and produces a heat map of activity in mainland Britain. You can see how much investment is going in, how many patents are being applied for and so on. You can zoom in to ever finer-grained detail and use a slider to see how the map changes over time. You can play with the Research Funding Explorer yourself or follow the links in this piece by Richard Wallis to see a movie.

The question in your mind must be, "All very well, but what's in it for me?" For a start, you can get hold of a lot of data that might be useful in your business – information about customers, sources of supply or geographic locations, for example. So you may find value purely as a consumer. However, you may also be able to give value by sharing data sets or taxonomies that your company has developed. This might sound like madness, but we've already seen in the social web that people who give stuff away become magnets for inbound links and reputational gains. In this case, you could become the authoritative source for certain definitions and types of information. It all depends what sort of organisation you are and how you want to be seen by others.

Are multi-touch surfaces heading your way?

In the days of black screens and green type, the arrival of colour was somewhat puzzling. If computers had got us so far without colour, who'd want it? Everyone, it seems.

Then came windows, icons, mice and pointers. Again, we were all happy with what we had. Why rewrite everything for some gimmicky whizzbang interface? As soon as you used an Apple Mac, you knew the answer: ordinary people were suddenly able to do extraordinary things. But it wasn't until 11 years later, when Microsoft finally got its act together with Windows 95, that this interface started to become more or less ubiquitous.

And there we've stalled for 26 or 15 years, depending on whether you're a Mac or a PC fan. It works. Who wants more? Well, ever since the Macintosh came out, inventors have toiled in labs to bring us a more natural, direct interface based on fingers, hands and, in the case of horizontal displays, objects placed on the screen. In recent years, pioneering companies like Perceptive Pixel, Apple and Microsoft have been selling multi-touch surface devices.

In the abstract, it all sounds jolly fine (apart from the potential for the unselfish sharing of germs). You can access, open, expand, move, rotate and contract information artefacts right there on the screen. They could be images or documents inside the computer. Some of the systems can even interact with other things lying on the screen's surface. The external artefacts might be coded underneath so the system knows what to do with them, or they could be simple things like business cards or other documents, which can be scanned. In one case, a library in Delft would whizz up pictorial information about your post code as it read your library card (video here). The Microsoft Surface can recognise and communicate with a suitably enabled mobile phone: it can show the phone's contents in an on-screen notebook, and you just slide items to and from the notebook to update the phone.

You could call up a keyboard or, indeed, a facsimile of any kind of device, but the main potential at the moment seems to be exploration, manipulation and mark-up. Fingers are better at some things but certainly not everything. However, if your organisation needs to surface information to any audience, regardless of their computer skills or application knowledge, then this might be a better way to do it than the usual single-touch, keyboard or mouse controls.

The Hard Rock Café in Las Vegas has a number of Microsoft Surface tables through which visitors can browse a growing part of the company's collection of rock memorabilia. The National Library of Ireland uses the same product to show rare books and manuscripts which would otherwise be kept from public view due to their fragility or value. The US military uses Perceptive Pixel's huge displays for God-knows-what but you can bet that some of it involves 3-D terrain, flying things and weapons. Then Apple, of course, has made the iPhone exceedingly sexy with its own gestural controls.

While the technology and the functions are intriguing and seductive, the question is whether they give sufficient advantage over what's being used today. They cannot replace the present range of control devices except in special, application-specific situations. Just as mice and pointers didn't replace keyboards, multi-touch won't replace current devices. It may complement them, though, especially as it becomes part of the repertoire of the everyday laptop or PC.

Whenever new technologies come along, it's quite often the user department that takes them on board, side-stepping IT if possible. We saw it with PCs and spreadsheets. We saw it again with desktop publishing. And again with mobile phones and PDAs. But, eventually, either the users or the organisation realise that the greater benefit comes from integration. IT represents the great archive in the sky to which and from which intellectual artefacts can be stored and retrieved. And, once IT is involved, more things become possible: using the mobile phone as a terminal; access to and re-use of materials produced elsewhere in the company; and, in the case of multi-touch, delivering the contents of information stores to the devices. Museums and libraries are, perhaps, obvious examples, but some users would value a natural way to get at and drill into, say, statistical information by geography, or to find and explore whatever today's equivalent of a blueprint is.

Right now, you might see these multi-touch surface devices as a bit of a curiosity but, just as the mouse (first publicly demonstrated in 1968) moved into the mainstream eventually, so these things may become important to you and your organisation.

If you're interested, a great place to mug up on the background is Bill Buxton's Multi-Touch overview.