Out-of-band communication


We are always taught that languages are for communicating, but people usually forget to specify what is meant to be communicated. A large fraction of what is communicated is not the official message, but some subtext: emotional, social. That subtext is usually not explicit. You learn that there is correct language, and you acquire the interpretation of deviations and style on the side: you learn that this kind of mistake is a sign of this social group, that this style comes from that region of the planet. Life would be pretty hard if you could not judge people based on their writing.

The internet has made it so easy for people to communicate between groups that we are faced with the problem of understanding each other without any context: sentences like “I play football” or “I live on the first floor” cannot be interpreted. Figures of speech like irony or exaggeration further confuse the conversation, to the point where it became necessary to make implicit communication explicit, for instance by using emoticons. Emoticons are one example of out-of-band communication, a narrative on a different level, but of course you could make it the main channel: Emojili is an app that lets you communicate using only emoji.

While we tend to think of text as a single flow, there is a layering of communication systems. Consider an old-school book; we have the following layers (theoretically, each layer except the first could be omitted):

  • The text.
  • The style of the text.
  • First subordinate level: parentheses.
  • Text formatting: italics, bold, fonts.
  • Second subordinate level: footnotes.
  • Page formatting.
  • Insets, figures and sidebars.
  • Third subordinate level: foreword, appendix, etc.

Like most things, those levels are linked to a culture: typography rules change from one country to another, and so do the rules of layout, and even the meanings of typography and font styling. Punctuation, capitalisation, even spaces are in a sense out-of-band information; you do not strictly need them to read the text, and there was a point in time when such artifices were optional. Nowadays they are considered an integral part of the text in western languages.

One type of out-of-band channel I like is furigana, a type of phonetic annotation used to give pronunciation hints for kanji that are not well known to the public, either because they are obscure, because they are not used in Japanese, or because the word they appear in is not read in a Japanese mode. For instance 上海 (Shanghai) would be read as Jokai in Japanese, or even Ueumi; furigana tells the reader the way the kanji ought to be read. This annotation is not just a help to the reader. Japanese has a phonetic alphabet, so you could just write シャンハイ (Shanghai), but writing it 上海 (シャンハイ) emphasises that this place is in China, shows the relationship with the Japanese writing system, and carries semantic information about the name of the place: top of the sea…

I often wish that this system were more widely used, as many words and names, having been taken out of their original context, do not have a pronunciation that can be guessed from their writing. This is particularly bad in English.

The web has brought another level of subtext, as text is now read not only by humans, but also by algorithms. The web browser is the simplest of these algorithms: it re-creates some of the subtext levels for the viewer, adapting the layout for the device the user has, but also translating some of the subtext; if the user is blind, the whole subtext must be transformed, as it cannot be expressed visually. Other algorithms transform, synthesise and aggregate the information: search engines are the most visible example, but the logic that builds the snippet for a page when you share it on a social network is another, and so are the systems that extract dates and tracking numbers from confirmation e-mails.

While many concentrated on features for building applications inside web pages, HTML5 actually contains many changes aimed at making the semantic information in a web page explicit:

  • Tags like <u> changed meaning: this tag now marks text that is stylistically different, for instance a proper noun in Chinese. Underlining should be done using CSS.
  • New tags like <time> indicate elements with a specific semantic meaning, with an attribute specifying the time in a machine-readable format.
  • The <input> tag now supports many more semantically defined input types, like phone numbers.
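A sketch of what this markup looks like in practice (the date, the phone field, and the CSS class are illustrative):

```html
<!-- <time> carries the machine-readable date in its attribute -->
<p>The concert is on
  <time datetime="2012-06-21T20:00">June 21 at 8 pm</time>.</p>

<!-- type="tel" lets browsers offer a phone-number keyboard -->
<label>Phone: <input type="tel" name="phone"></label>

<!-- <u> marks stylistically different text; the underline itself comes from CSS -->
<p><u class="proper-noun">上海</u> is a city in China.</p>
```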

This concept has always been present in the web (<address> was there from day one), but such tags were mostly abused by people trying to do page layout in web pages. Two things have happened since: more and more algorithms are parsing web pages, and more and more people on the web cannot handle the content in its original form, either because of a handicap, or because they do not understand the language.

What I find interesting is that the same thing is happening in the physical world: more and more things get annotated with barcodes and QR codes so that machines can make sense of them. They are just another form of subtitles, but in turn people are now learning to make them pretty, to stylise them, playing with the subtext once again…


A Beginner’s Guide to HTML

NCSA Mosaic Logo

One of the basic ideas of the web is that you do not copy data to your machine; instead you keep pointers (URLs) to stuff. The underlying assumption is that said pointers will remain valid in the future. When I first learned to write HTML, I clearly remember printing out the page A Beginner’s Guide to HTML, written by Marc Andreessen, so I could work offline, at home. I had dialup access, but loading a web page over a modem was just too slow. As the internet became faster and more prevalent, I stopped keeping paper versions of documentation, and moved on to more modern HTML features, like the ones supported by Netscape. I eventually lost the paper version.

In response to assorted requests and queries, I have written a simple “Beginner’s Guide” to writing documents in HTML. It’s up for grabs at http://www.ncsa.uiuc.edu/demoweb/html-primer.html at the moment; comments are welcome (but no complaints about my coverage or use of the IMG tag that Mosaic supports; it’s important internally).
The guide also points to a rudimentary primer on URL’s that might be of interest to Web beginners (certainly the number of people who have sent me Mosaic bug reports saying “URL ‘machine.com’ doesn’t connect to the ftp server”, etc., would seem to indicate that basic knowledge of URL’s is not yet a given on the net).

Finding that particular guide again was not completely trivial, so I’m now putting another mirror online here. This document confirms what I outlined in my post about image formats: what was considered a reasonable image file in those days is not anymore. We are still struggling with the format of video data, although that subject was already touched upon in 1993. In a sense, HTML5 is much closer to the spirit of HTML 1, with various teams trying to get something done instead of having a nice formalism.


ILBM Animations

I have been hacking around on my small JavaScript library to load ILBM/IFF files; once I had fixed HAM decoding, the next logical step was to add support for animations. This was a good opportunity for me to learn about animations in JavaScript, in particular window.requestAnimationFrame. I’m not 100% sure that the code is correct, as the rare specs I found about the CRNG chunk are pretty thin on details, but I have something that seems to work and does not completely kill the CPU of my laptop.
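My reading of the CRNG chunk is that each active range simply rotates the palette entries between its low and high indices at every tick. A minimal sketch of one such step; the function name, argument layout, and rotation direction are my own guesses, since the available documentation is so thin:

```javascript
// One colour-cycling step: rotate the palette entries in [low, high].
// The `reverse` flag corresponds to my reading of the CRNG flags field.
function cycleStep(palette, low, high, reverse) {
  const next = palette.slice();
  if (reverse) {
    // shift every entry down; the lowest wraps around to the top
    for (let i = low; i < high; i++) next[i] = palette[i + 1];
    next[high] = palette[low];
  } else {
    // shift every entry up; the highest wraps around to the bottom
    for (let i = low + 1; i <= high; i++) next[i] = palette[i - 1];
    next[low] = palette[high];
  }
  return next;
}
```

Driving this from window.requestAnimationFrame, throttled according to the CRNG rate field, then boils down to swapping the palette and repainting the canvas.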

The animation on this page is For This, an ILBM file with an animated colour palette, created by Vector (Peter Szalkai). Interestingly this image has a pixmap of 320 × 256 pixels, which hints at a PAL Amiga.


Ham decoding fixed

After I shared my library for displaying ILBM/IFF files, Amiga Graphics correctly pointed out that the HAM image handling was buggy. This is now fixed: I was not making a copy of the palette colour used in Hold & Modify, so it was getting modified, i.e. corrupted.

The following image, Angel, by Dennis Jacobson/JAKE Productions, is an ILBM/IFF Hold and Modify image based on a picture of Karen Velez from 1985. Note that the source pixmap is 400 × 320 pixels with a rendering ratio of 20 ÷ 11; the Amiga had strange video modes.


Amiga ILBM files

This image (MEDUSABL.IFF) is one of the few Atari ST IFF files I could find on the web; it is quite typical of the graphics of the time: low resolution, 4-bit palette.

The next big thing after the Commodore 64 was, of course, the Amiga. While a 16-bit processor with a 7 MHz clock is something you would use for a washing machine today, in those days it was something awesome. The Amiga was renowned for its graphical capabilities, as it could display more than the 16 fixed colours of the C64.

One graphical mode that was particularly interesting was Hold and Modify (HAM), which enabled the display of up to 4096 colours in a fixed image using only 6 bits per pixel (8 in later versions). HAM is basically a hack where each pixel can either be looked up in a colour table (the standard way of doing things in those days), or re-use (hold) the colour of the previous pixel and modify the value of one of the channels (red, green, or blue), hence the name Hold and Modify.
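A minimal sketch of HAM6 decoding for one scanline might look like this; the function name, the [r, g, b] representation with 4-bit channels, and the choice of start colour are my own assumptions:

```javascript
// HAM6: each 6-bit value is two control bits plus four data bits.
// 00 = palette lookup, 01 = modify blue, 10 = modify red, 11 = modify green.
function decodeHamLine(pixels, palette) {
  let prev = palette[0].slice(); // assume each line starts from register 0
  const out = [];
  for (const p of pixels) {
    const control = p >> 4;     // top two of the six bits
    const data = p & 0x0f;      // bottom four bits
    if (control === 0) {
      // plain palette lookup; copy the entry, otherwise a later
      // modify step would corrupt the palette itself
      prev = palette[data].slice();
    } else {
      const c = prev.slice();   // hold the previous colour…
      if (control === 1) c[2] = data;      // …and modify blue
      else if (control === 2) c[0] = data; // …or modify red
      else c[1] = data;                    // …or modify green
      prev = c;
    }
    out.push(prev);
  }
  return out;
}
```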

The graphical image format of the Amiga was ILBM, a particular instance of the IFF format. I wanted to see if I could write a parser for the IFF/ILBM format that would read the original file directly and display it in an HTML5 canvas. It was a fun project, as the ILBM format stores image data in a way that is completely different from the way things are done nowadays: instead of storing all the bits for a given pixel together, they are split between bitplanes; each pixel is represented by a single bit in multiple bitplanes. This was related to the way Denise, the graphics chip of the Amiga, handled graphics. I also had to write code to decompress the data.
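The bitplane layout can be sketched as a planar-to-chunky conversion; the names are mine, and I am simplifying one scanline per plane (real rows are padded to 16-bit multiples):

```javascript
// Planar-to-chunky conversion for one scanline: bitplane n contributes
// bit n of every pixel's palette index. `planes` is one byte array per
// bitplane, all covering the same row.
function planarToChunky(planes, width) {
  const out = new Uint8Array(width);
  for (let plane = 0; plane < planes.length; plane++) {
    for (let x = 0; x < width; x++) {
      const byte = planes[plane][x >> 3];      // 8 pixels per byte
      const bit = (byte >> (7 - (x & 7))) & 1; // leftmost pixel is the MSB
      out[x] |= bit << plane;
    }
  }
  return out;
}
```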

Interestingly this format shares some technology with the Macintosh: IFF reuses the idea of 32-bit type codes adopted by the Mac, and in return the Mac used an IFF-based format for sounds, AIFF. ILBM also uses the compression introduced by Apple. The IFF format still survives in a way: the WebP format uses the RIFF container, which is basically the little-endian version of IFF.
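For illustration, reading one of those chunk headers is straightforward; the function name is mine and error handling is omitted:

```javascript
// One IFF chunk header: a four-character type code followed by a
// big-endian 32-bit length (RIFF stores the same length little-endian).
function readChunkHeader(bytes, offset) {
  const type = String.fromCharCode(bytes[offset], bytes[offset + 1],
                                   bytes[offset + 2], bytes[offset + 3]);
  const length = ((bytes[offset + 4] << 24) | (bytes[offset + 5] << 16) |
                  (bytes[offset + 6] << 8) | bytes[offset + 7]) >>> 0;
  return { type, length };
}
```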

Anyway, I managed to get something that works: the code can load and display both palette and HAM ILBM files. I could not write code that handles the HALFBRITE format because I did not find any example image. My code parses the palette animation tables, but does not perform the animation, although it looks like it is possible: this blog post shows some impressive colour table animations using the gorgeous art of Mark Ferrari; it also uses ILBM files as a source, but pre-converts the data to JSON.

You can find the standalone image visualisation web page here and the JavaScript library that does the work here; the library is distributed under the BSD license.

Edit: changed the frame page code to be a bit more generic.
Edit: the library now has its own page; it now supports animations and EHB images.


Seamless iframes

HTML5 Logo by World Wide Web Consortium

One of the interesting additions of HTML5 is the notion of the seamless iframe, that is, an iframe that looks like it is part of the parent web page, inheriting its styles and such. While this feature is not widely available (in fact the WebKit nightly has only had it for a few days), I found an interesting use for it: testing new blog posts.

While WordPress offers some modicum of WYSIWYG editing, I tend to write the HTML myself, partly because I have heavily customised the CSS of the blog, partly because I tend to use features that are awkward to use in an editor, like tables, or are not supported at all, like using non-breaking thin spaces in front of punctuation in French texts, i.e. following the typographical rules.

More and more, I write blog entries in a text editor, which is more comfortable and much faster than using a web interface (I know, I’m old school). This also has the advantage of working offline, which, in a country where trains tend to go through tunnels, is a nice advantage. Still, it would be nice to be able to preview the general look of the page.

Now I can do this using a seamless iframe. I wrote a minimal web page that mimics the general structure of a WordPress blog: it basically loads the CSS, sets up a few structural divs, and that’s it. The innermost div in turn includes a minimal web page that contains the content of the blog entry, encapsulated in the single div that needs to be emulated.
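A minimal sketch of such a test frame (the file names, the CSS path, and the div ids are made up; a real WordPress theme will differ):

```html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="wp-content/themes/my-theme/style.css">
</head>
<body>
  <div id="page">
    <div id="content">
      <!-- the draft post lives in its own minimal page -->
      <iframe seamless src="draft-post.html"></iframe>
    </div>
  </div>
</body>
</html>
```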

You can have a look at the test frame, the actual blog-post snippet is here. Of course this will only work with a web-browser that supports seamless iframe.


The web browser of the PlayStation 3

PlayStation 3 Slim Model

This February, Sony did something unexpected: it did something smart. Included in the release notes for firmware update 4.11 is the following text:

The Internet browser has been updated for improved content display.
Some websites that could not be displayed correctly, including interactive websites, are now supported, and page layouts are now displayed with better accuracy.

Playing around revealed something interesting: the web browser is WebKit-based, version 531.22.8 according to the headers. Looking at the license acknowledgements, it seems to be based on the GTK API. The browser has the same interface as the previous one, so the change is quite subtle, but CSS and JavaScript support has significantly improved, which is good, because the previous browser was really bad. There are many missing features, including SVG images and video tag support, but this is a step in the right direction, and one can hope that Sony’s engineers can now iterate and add new features to the browser.
Mozilla 5.0 (Playstation 3 4.11) AppleWebKit/531.22.8 (KHTML, like Gecko)
Given the fact that Sony won’t release a new living-room console for at least one year, keeping the venerable PS3 relevant sounds like a good strategy to me.

What I find interesting is that many HTML5 additions designed for mobile phones are actually very relevant for a gaming console. Consider the following:

Video tag
The PS3 is typically connected to a TV or a big screen, and has hardware support for playing back H264 content.
Input forms
By default, the PS3 uses a soft keyboard, so having specialised keyboards for numbers, e-mail addresses, and ranges would make it much more usable.
Vibration API
Most PS3s come with DualShock controllers, which can produce vibrations.
Media Capture
The PS3 can be equipped with a webcam and a microphone.
WebGL
The PS3 has respectable OpenGL ES rendering capacities.

I suspect the main issue is going to be RAM: the PS3 is now six years old and has only 256 MB of RAM, but smartphones these days don’t have much more. It will be interesting to see whether Sony updates the browser, and at which pace, but at least it is now in a situation where progress is possible.


HTML5 spring cleaning

HTML5 Logo by World Wide Web Consortium

I have done some spring cleaning on this blog, mostly simplifying stuff by using two nice HTML5 features: video and ruby. The first fixes a long-standing mess in HTML: the lack of a standard way of putting video clips into web pages. There was a large debate about the codec to use, with Microsoft and Apple pushing for H264, Google pushing for WebM, and Firefox pushing for the Ogg format. The dust has not settled on this issue; most videos on this site are in H264 format simply because this is the format my mobile phone produces, and the format most mobile phones can render. While the current solution is not perfect, before I had to use a combination of embed and object tags, which was much worse.
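Embedding a clip now boils down to something like this (the file names and dimensions are illustrative; listing several sources lets each browser pick the codec it supports):

```html
<video controls width="640" height="360">
  <source src="clip.mp4" type="video/mp4">
  <source src="clip.webm" type="video/webm">
  Sorry, your browser does not support the video element.
</video>
```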

I took advantage of the cleanup to fix a few links that were still pointing to the old site’s address; in one case there were still links to the address free.fr, from seven years ago. I have also been playing around with microdata these days, so I added the relevant meta-data to the videos. I doubt this will have much of an impact, but I like the idea of embedding meta-data into the HTML (microdata is not a part of HTML5, by the way).


The ruby tags are more specialised, and bear no relationship to the Ruby programming language. The tag is surprisingly old: it was first suggested in 1996, and implemented as an extension by Internet Explorer 5; the fact that it took 15 years to standardise is quite depressing. Ruby tags allow the writer to add phonetic annotations to HTML text. The classical use case is adding furigana to Japanese kanji, but they can basically be used for any kind of annotation, for instance IPA phonetics. There are basically three tags: ruby, which acts as a container; rt, which contains the annotation; and rp, which wraps the fallback content shown when ruby is not supported (that tag typically contains parentheses). The source for the text in the box looks like this: <ruby>Zürich<rp>(</rp><rt>ˈtsyːrɪç</rt><rp>)</rp></ruby>

I have not back-converted all my existing content; the CSS tricks still work, and it would be a lot of work, but I will use the standard tags from now on.
