Power of 1 Voice

“The Power of 1 Voice” is an amazing multi-platform production on the topic of Australian democracy. The project comes from the folks at the Museum of Australian Democracy (MOAD) and their fine cast of partners. Through the University of Canberra’s Digital Treasures program, I was able to work with colleague Mitchell Whitelaw on one component of the Power of 1 project: a tangible data visualisation.

Power of 1 is fundamentally a conversation about Australian democracy, and a centrepiece of the production is a large survey representing 4 generations’ views. MOAD wanted to use the physical exhibition at Old Parliament House as a way to share some of the survey data, as well as keep the conversation going through inventive installation interfaces produced by the clever crew at ModProds. For Mitchell and me, the task was to consider the data within the exhibition space, and from the outset there was strong consensus on using physical representations rather than screens or projections. An excellent screen-based representation was produced by Small Multiples for SBS, so the on-site installation needed to provide a very different perspective on the survey results. Our aim was to provide a way for an audience to get up close and personal with the data – to literally get amongst it.

Generations room
The “Generations Room”.
Data columns
Data columns in the Generations Room.

The outcome, pictured above, is a landscape of “data columns”, with each column representing a particular survey proposition and the coloured segments indicating the responses by generation: Builders (1925-45), Baby-Boomers (1946-64), Gen-X (1965-79), and Gen-Y (1980-94). The clusters of columns are grouped according to a particular theme or question. In the above example the columns relate to the question “Have you ever engaged with politics or society via…”.

The work makes clear reference to the column graph and the ballot box, both essential elements of the project. The columns are simple devices, yet their scale and tangibility offer a novel way to experience the survey data. The emphasis here is not only on a reading of the data but also on an understanding that is phenomenological. I can connect our approach to Pragmatist notions of aesthetics (phenomenological, emergent, situated) but, more simply, our proposition is that walking amongst a set of data provides a very different kind of knowing from that gained by reading a chart on a page or computer screen.

Our work was inspired by some other excellent examples of tangible / physical data representations including: Sagmeister & Walsh’s “Happy Show“, Abigail Reynolds’ “Mount Fear“, and Sha Hwang & Rachel Binx’s “Meshu“.

“Happy Show”, Sagmeister & Walsh
mount fear
“Mount Fear”, Abigail Reynolds 2003

The American Museum of Natural History’s “Scales of the Universe” exhibit is also worth a mention: a wonderful example of how the sheer physicality of an installation can dramatically influence an audience’s appreciation of abstract data.

The data columns are gathered together in the “Generations Room”: a site presenting an overview of the survey and featuring a number of interfaces (analogue & digital) through which audience members can participate and share their views. In addition to the Generations Room, the exhibition consists of a custom installation for each of the 4 demographic groups, each themed accordingly and offering a unique mode for participating in the ongoing conversation about Australian democracy. The exhibition installations and the tech behind them have been artfully realised by ModProds.

It was a real privilege to be invited to contribute to the Power of 1 and to work with the fascinating dataset the production has generated. Tangible/non-screen data representations are something Mitchell and I have delved into previously (see for example Virga and Measuring Cup) and will continue to explore. It’s an area I see increasing in importance, particularly in the public sphere. The flat-screen (large and small) will continue to be a ubiquitous tool for data representation but there is great opportunity (and need) for alternative ways to bring data into public spaces.

Discover the Queenslander

Discover The Queenslander is a web-based interface to a wonderful collection of high-resolution scans of “The Queenslander”: a weekly supplement to the Brisbane Courier newspaper published between 1866 and 1939. Mitchell Whitelaw and I conducted the project through Digital Treasures, a research program focused on developing new ways to represent, access and apply digital cultural collections. There is lots to report on with this project (data manipulation, interface design, and working with Angular.js for starters) but for this entry I want to outline our investigation into colour. Colour is definitely one of the most striking characteristics of The Queenslander collection and we decided early in the project to use colour as a way to explore and classify the works – effectively employing colour as another form of metadata.

Massaging metadata

When working with image collections there is a heavy reliance on metadata. It’s the metadata that is most commonly used to describe and organise the items within a collection. Date, name, title, role, location, media, and format are typical meta keys that can be used to allow an audience to find specific items. These keys can also be used in concert to reveal connections and narratives enmeshed within a collection. For example, collect all images produced within a date range; or all images produced by a particular artist, in a specific location, and within a date range.
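
To make that concrete, here’s a tiny sketch in JavaScript (the item structure is invented for illustration) of filtering a collection by a combination of metadata keys:

const items = [
  { title: 'Cover, January 1923', artist: 'Artist A', year: 1923, location: 'Brisbane' },
  { title: 'Cover, June 1931', artist: 'Artist B', year: 1931, location: 'Brisbane' }
];

// all items by a particular artist, in a specific location, within a date range
const matches = items
  .filter(item => item.artist === 'Artist A')
  .filter(item => item.location === 'Brisbane')
  .filter(item => item.year >= 1920 && item.year <= 1935)
  .sort((a, b) => a.year - b.year); // the same keys serve equally well for sorting and grouping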

An added advantage of working with this kind of metadata is that it is easily legible (being text) and also easy to work with computationally. For example, sorting and grouping by values such as date, title, author, role, media, or format is a cinch from a technical perspective but can provide valuable fresh perspectives on a collection. In addition to such meta information there is also the data of the digital items themselves. The actual words of a text document can be employed for searching and organising a collection. When it comes to images there is no easily legible text, but there is the potential to work with pixels in much the same way.

Pixels as metadata

So while textual/numerical metadata is powerful and convenient, it’s also worth considering less legible and non-textual forms of metadata – the pixels in an image, for example. We are familiar with a pixel as a tiny tile in the mosaic which comprises a digital image, but in the context of digital collections pixels can serve as another valuable form of metadata. In the same way that we can filter a set of items based on their media type, or author, or publication date, we can filter items based on the values of their pixels.

Scale and fidelity

When working with colour as metadata, two key problems immediately crop up. The first is scale. Searching through every pixel in every image in a collection quickly becomes untenable. For example, a collection of 10,000 images, each at 150 x 250 pixels, contains 375 million pixels. So each time you want to search or sort by a particular colour, you need to compare 375 million pixels. That’s an awful lot of data-crunching and would require serious computing power to return results in a timely fashion. Add more images to the collection and the problem worsens.

The second key issue is fidelity. The standard 8-bit-per-channel RGB colour space supports up to 16.8 million colours, which is great when it comes to showing rich colours but impractical as a set of meta keys. Even amongst the 375 million pixels in our 10,000 image collection there will be very few exact colour matches – the pixels will almost always be slightly different.

Combining these issues, we have a process which is unreasonably slow (comparing 375 million pixels) producing a result which is unreasonably fussy (finding very few matches for any given colour).

Normalisation

One way to address both problems is with image normalisation. Normalisation reduces the fidelity of colour, making images visually coarse but more uniform. For example, instead of each pixel being one of 16.8 million possibilities we could restrict it to be one of the 139 prescribed colours in the CSS4 palette. 139 colours is much more manageable than 16.8 million and the incidence of colour matching in our 10,000 image collection is radically improved. You can see this approach in action at the Cooper Hewitt. A key advantage is that much of the colour matching can be pre-baked: the images can be processed offline and their CSS colours recorded just like any other metadata. So when a user selects CSS “crimson” (or #dc143c) the server can retrieve the relevant items just as it would with any other meta keyword (author, media, year, etc.). By comparison, in the live-computation example above the colours of 375 million pixels need to be compared every time a user selects a colour – there is no pre-baking and it is therefore much, much more computationally intensive (slow).
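
As a rough sketch of the pre-bake idea (the palette below is a tiny stand-in, not the full CSS list): snap each pixel to its nearest palette colour offline, tally the results, and store the tallies as ordinary metadata.

const palette = [
  { name: 'crimson', rgb: [220, 20, 60] },
  { name: 'navy', rgb: [0, 0, 128] },
  { name: 'wheat', rgb: [245, 222, 179] }
];

// find the nearest palette colour for a single pixel
function nearestPaletteColour(r, g, b) {
  let best = null, bestDist = Infinity;
  for (const c of palette) {
    const dist = (r - c.rgb[0]) ** 2 + (g - c.rgb[1]) ** 2 + (b - c.rgb[2]) ** 2;
    if (dist < bestDist) { bestDist = dist; best = c.name; }
  }
  return best;
}

// offline, per image: tally the palette names its pixels map to and record the tally
// alongside the other metadata – a request for “crimson” then becomes a simple lookup
function tallyImage(pixels) {
  const tally = {};
  for (const [r, g, b] of pixels) {
    const name = nearestPaletteColour(r, g, b);
    tally[name] = (tally[name] || 0) + 1;
  }
  return tally;
}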

The Cooper Hewitt example shows how fast and effective the pre-bake approach can be. However, I think the main limitation is the subtlety of the colours – a lot of colour character is lost in the process of normalisation. This kind of loss was particularly concerning for the Queenslander collection because the subtlety of the colours is so integral to the character of the images and the collection as a whole. Intent on finding a solution that preserved that colour subtlety without compromising speed and utility, we decided to pursue a par-cook approach.

Par-cooked

Our work combines the pre-baked and live-computation approaches into a kind of par-cooked solution. As with the pre-baked approach, we prepare normalised palettes for each image, but unlike the pre-bake we don’t prescribe the palette. Instead of forcing the image colours into a predefined CSS palette (or similar), we reduce them down to a 12-colour palette. The 12 colours of each image are determined by the image itself and, as a result, the individual local palettes are much truer to the character of each image.

Queenslander grid image
Grid image
Queenslander grid image - meta info
The image local palette
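
For the curious, deriving a local palette can be as simple as clustering an image’s pixels. The sketch below uses a basic k-means reduction – offered as an illustration of the idea rather than a description of our actual implementation.

// reduce an array of [r, g, b] pixels to k representative colours
function localPalette(pixels, k = 12, iterations = 10) {
  // seed the centroids from evenly spaced pixels
  let centroids = Array.from({ length: k }, (_, i) =>
    pixels[Math.floor(i * pixels.length / k)].slice());

  for (let iter = 0; iter < iterations; iter++) {
    const sums = centroids.map(() => [0, 0, 0, 0]); // r, g, b, count
    for (const [r, g, b] of pixels) {
      let best = 0, bestDist = Infinity;
      centroids.forEach(([cr, cg, cb], i) => {
        const d = (r - cr) ** 2 + (g - cg) ** 2 + (b - cb) ** 2;
        if (d < bestDist) { bestDist = d; best = i; }
      });
      sums[best][0] += r; sums[best][1] += g; sums[best][2] += b; sums[best][3]++;
    }
    centroids = sums.map((s, i) =>
      s[3] ? [s[0] / s[3], s[1] / s[3], s[2] / s[3]] : centroids[i]);
  }
  return centroids.map(c => c.map(v => Math.round(v))); // 12 swatches tuned to this image
}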

Because we tune our local palettes to each image, the process produces many more colours than the 139 of the CSS palette. As a result, unlike Cooper Hewitt, we cannot pre-bake colour matches and instead need to live-compute them. In the 10,000 image example above the live-compute approach did not scale – the more images/colours added the slower it became. However, in the case of the Queenslander the live-compute approach is feasible because of the modest collection size: 989 images, each with a palette of 12 colours, gives a maximum of 11,868 swatches to compare for each colour sort – not an issue for contemporary computers (even the mobile varieties).

When completing the live colour matching we also determine the colour “weight” of each matched image: items with a large quantity of the filter colour have a greater weighting than those with only small traces. This means we can sort the matched items by colour weight.

Queenslander colour sort
Items are sorted by colour weight – those with more red appear first.
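
Sketched in code, the live-compute step looks something like this (the swatch structure is hypothetical): compare the chosen filter colour against every image’s local swatches, keep the matches, and weight each image by how much of that colour it contains.

function matchByColour(images, filterRgb, threshold = 3000) {
  const dist = (a, b) =>
    (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;

  return images
    .map(image => {
      // each swatch: { rgb: [r, g, b], proportion: share of the image’s pixels }
      const weight = image.swatches
        .filter(s => dist(s.rgb, filterRgb) < threshold)
        .reduce((sum, s) => sum + s.proportion, 0);
      return { image, weight };
    })
    .filter(m => m.weight > 0)                // discard non-matches
    .sort((a, b) => b.weight - a.weight);     // heavier (more of the colour) first
}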

In addition to colour matches (and weights) the live-compute approach allows us to generate global palettes dynamically. Instead of the predefined 139 colours of the CSS palette, we generate a global palette of 64 colours based on the colour swatches of the items in the current selection. The process is similar to how the local image palettes are produced and like them, the global palette offers a truer representation of a particular set of images.

Queenslander colour ribbon
Queenslander global colour ribbon for all 989 images

Autonomous Dynamic Visualisation

I am a strong proponent of speculative design. The simple argument is that unshackling design from the usual constraints and social expectations gives designers the headroom to create the unusual and unexpected, or to take a critical or even antagonistic position. For research, speculative experimentation can drive innovation and open exciting new directions. When I create speculative work I do so without expectation of immediate practical or commercial application, but I know that ideas and techniques developed during a speculative process inevitably find their way into more conventional works. A fine example of this is my collaboration with Crowd Convergence.

Crowd Convergence
Crowd Convergence social media aggregation and moderation service

Crowd Convergence provide a social media moderation service which allows their customers to aggregate various streams and filter the results. It’s perfect for large-scale events like sporting matches, where organisers want to broadcast social media updates on a big screen and need to curate the content (i.e. block anything offensive). Having seen some of my Twitter experiments, Crowd Convergence contacted me about creating visualisers for their social media service. I was stoked to receive the invitation and somewhat surprised that my speculative work had found such a directly relevant application.

“AIR”, the first of the Crowd Convergence pieces, extended techniques from one of my earlier experiments to create a 3D motion graphics sequence. After displaying a status post, the view zooms through a cloud of status updates, twisting and turning to arrive at the next post in the stream. Below, AIR in situ at London’s “Clothes Show Live” and at the FINA World Junior Swimming Championships in Dubai.

AIR example
AIR in situ at London’s Clothes Show Live. From Stylonylon.
AIR Example 2
AIR in situ at the FINA World Junior Swimming Championships in Dubai.

Photowall, as the name suggests, renders a social stream as a fullscreen tiled wall of photos, overlaying the screen name of the author and any associated status text.

Photowall example
Photowall in situ at the 2014 Mobile World Congress in Barcelona.

Both of these works make extensive use of CSS 3D transforms: AIR for its rendering of 3D space, and Photowall for the transitions of each image tile. As I’ve stated previously (here and here), the evolution of CSS is a brilliant example of the changing practice of graphic design. Concepts and techniques derived from print design are being redefined and extended to address the transient qualities of the computer screen. It’s an active area that is evolving quickly.

Another aspect of the project worth noting is the format of the works. While they behave like fullscreen desktop software, they are in fact standard web applications utilising HTML, CSS and JavaScript, and running in a browser window. As someone who has dabbled with all sorts of IDEs and programming languages, it’s marvellous what can today be achieved within the context of the humble browser.

Crowd Convergence case study video of their work at the 2014 Mobile World Congress in Barcelona. It features glimpses of my AIR and Photowall visualisers, and also shows some interaction with the moderation console.

Another novel aspect of the works is their autonomy – unlike typical web interfaces, the Crowd Convergence pieces operate without direct user interaction. Instead of waiting for clicks or text input, the pieces poll the Crowd Convergence server and retrieve data containing the content to display and the instructions for its playback. So, while not interactive in the typical sense (buttons, hyperlinks, text input etc.), these autonomous works can be controlled to some extent through Crowd Convergence’s moderation console: stop, start, duration of transition, duration of hold, ordering of status posts, as well as customising features such as colours, typefaces, etc.
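
In outline, the autonomous loop is very simple. Here’s a minimal sketch (the endpoint and response shape are hypothetical, and modern fetch is used for brevity – the originals were built with the jQuery-era equivalents):

function render(playlist) {
  // placeholder: the real pieces animate each post according to the
  // hold/transition values supplied in the playlist
  console.log('displaying', playlist);
}

async function poll() {
  try {
    const response = await fetch('/playlist.json');  // hypothetical endpoint
    const playlist = await response.json();          // e.g. { posts, hold, transition }
    render(playlist);
  } catch (err) {
    console.warn('poll failed, will retry', err);    // cope with disruption, don’t die
  }
  setTimeout(poll, 5000);                            // and go again
}
poll();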

Of course, an important consequence of their autonomy is that the works need enough smarts to cope with variation or disruption. It’s definitely the more time-consuming aspect of production – testing for “what-ifs” and edge cases.

I think there is massive potential for autonomously visualising networked data and information – you only have to note the sheer quantity and scale of the public screens you encounter on a daily basis. The norm for many of these large public screens is to serve as billboards, displaying a queue of static posters, but there is no reason they can’t be used in more dynamic and interactive ways.

The Crowd Convergence collaboration is just one example of my recent forays into autonomous forms of visualisation. It is an area that is ripe for innovation, and one that I am continuing to explore.

Virga

May 1 2013 saw the launch of “Virga”: a data lighting sculpture produced in collaboration with renowned Australian designer Robert Foster of F!nk Design. Pat Coppel, of Make Designed Objects, invited Rob and me to complete the installation as part of the renovation of the flagship store in Carlton. If you’ve not had the opportunity to visit, Make Designed Objects is a wonderful slice of Melbourne retail, brimming with amazing designer wares. It also has a significant online presence, and the concept for the data sculpture was to create some kind of bridge between the bricks & mortar and online manifestations of the Make retail business. Pat describes the work:

Data Aesthetics in Retail Space is a collaborative project between Make Designed Objects, Robert Foster of Fink & Co and Geoff Hinchcliffe of University of Canberra’s Faculty of Arts & Design.

Virga (an observable streak or shaft of precipitation that falls from a cloud but evaporates or sublimes before reaching the ground) is the product of that collaboration; an LED light and data sculpture formed by internationally acclaimed designer-maker Robert Foster that colourfully expresses itself based on the digital data fed into its environment.

What data?
Any data we choose!
Want to watch a colourful representation of the seasonal nature of Make’s sales data?
Feed in the data.
The change in inner Melbourne maximum average daily temperatures from 1913-2013?
Feed in the data.

With bricks and mortar retailers rapidly migrating to the World Wide Web why not bring a bit of the World Wide Web back into bricks and mortar retail? Why not feed the traffic data from the Make website into Virga and see what happens?

The real joy of Virga lies in its abstract representation of a digital world in a rapidly evolving bricks and mortar retail environment.

And it looks way cool…

Virga
Virga data lighting installation

The commissioning of Virga is clear evidence of Make’s ongoing commitment to building an exceptional retail experience – both on and offline. It’s refreshing to see when there is so much doom & gloom about the future of business for Australian retailers. As Pat Coppel explains:

 In a market and world where increasingly we have a physical and virtual version of almost everything; people, businesses, streetscapes… it’s not surprising that online retail is booming. And while the news reports a great deal of fear for bricks and mortar retail, this ever changing market presents opportunities for innovation in brokering an extended relationship between the virtual and physical aspects of a retail business such as Make.

It is from within this landscape that Pat Coppel, Director of Make has initiated this collaboration between retailer, designer-maker and academic. The result is Virga; a beautiful sculptural installation that translates our technological data into a spectacular visual language of light and colour. As such, Virga creates a playful narrative around the virtual and physical instances of Make Designed Objects.

Virga
Virga

Now that the Carlton refurbishment is complete, Make are turning their attention to their online store and I’ll no doubt be posting about it soon. But for now, some details about the Virga development…

Virga comprises eleven individual lighting forms, each with two individually controllable RGBW (Red, Green, Blue, White) LED units. Inside each of the lighting forms is an Arduino Mega with a WiFly card for wireless network connection. The lighting array is orchestrated by a server-side application which parses the data and transmits instructions to each of the lighting units.

Each of the Arduinos acts as a simple slave and the heavy lifting is performed by the web server. The reasoning is that the Arduinos have a very limited amount of memory whereas the web server is powerful, fast and also much easier to edit and update than the physically concealed Arduinos.

Programming the Arduinos was new to me and coming from web scripting, the Arduino code seemed extremely brittle at first. However, it didn’t take too long to become acquainted and I was soon entranced by the wonder of working with something outside the screen. Fortunately I had the help of colleague Chris Hardy when it came to wiring the Arduinos to the LED lighting units. The trick was in finding the correct drivers for the LED units. Once the hardware was configured, I was able to experiment with the lights and get a feel for their tolerances and capabilities.

Ultimately the code I created for the Arduinos is akin to an animation class – its role is to understand lighting values as well as timing and transition values. One of the more difficult aspects of working with 11 independent asynchronous network devices is synchronising their timing. My solution was to create a sync process which is executed on start-up. It works like this: on start-up each unit connects to the network, checks in with the server and awaits further instructions. The server logs the arrival of each lighting unit and stores an adjustment value for each. Once all units have checked in, the server commences the choreography, sending each unit its time-adjusted lighting sequence.
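
To give a flavour of the server side, here’s a rough sketch of the check-in/choreography idea (Node-style JavaScript, with hypothetical names – the real implementation differs in detail):

const EXPECTED_UNITS = 11;
const units = new Map(); // unitId -> { socket, checkedInAt }

function sequenceFor(unitId) {
  // placeholder: the real server derives each unit’s colour/timing steps from the data feed
  return [{ r: 0, g: 0, b: 0, w: 255, hold: 1000, fade: 500 }];
}

function onUnitCheckIn(unitId, socket) {
  // log the arrival of each lighting unit
  units.set(unitId, { socket, checkedInAt: Date.now() });
  if (units.size === EXPECTED_UNITS) startChoreography();
}

function startChoreography() {
  const start = Date.now();
  for (const [unitId, unit] of units) {
    const adjustment = start - unit.checkedInAt; // per-unit timing adjustment
    unit.socket.write(JSON.stringify({
      unitId,
      adjustment,
      sequence: sequenceFor(unitId) // lighting values plus timing/transition values
    }));
  }
}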

Virga development
Virga development at F!nk workshop. Rob Foster (right).

In many respects, the delineation of roles and responsibilities on this project was determined by the particular expertise that Rob and I brought to it. Rob crafted the lighting forms, including the custom mounts and housings for the LED units, while I took care of the data side of things: programming Arduinos to control each lighting unit, and creating a server-side application to orchestrate the lighting array. But in this project the line between data and design was very blurry, and it required us both to be very hands-on at the meeting point of physical and digital. That meant lots of soldering of LEDs and testing in situ at the Make store.

On site installation
On site installation

I loved creating a work outside the LCD screen, working with hardware and collaborating with a designer whose knowledge of materials and material production is simply exceptional (I will have to do a post some time on Rob’s F!nk workshop and the astounding Jules Verne industrial contraptions he devises to realise his elegant design forms). It’s ironic that my “getting away from the LCD” project ultimately produced another screen analog: an array composition of pixel-like lighting forms. Mitchell Whitelaw’s eloquent article “After the Screen: Array Aesthetics and Transmateriality” highlights the interesting qualities and tensions in post-screen works such as Virga. Where the ubiquitous digital screen aims for generality (an ability to display any content at all) and self-effacing slightness (an attempt to disappear as a neutral substrate for content), Virga instead attempts to lower the resolution of the grid and emphasise the material presence of the array elements (Whitelaw). I think in the case of Virga, the tension between physical and digital is integral to its purpose: creating a dialogue between the physical and virtual manifestations of the Make business.

Virga opening night
Virga opening night

Pat Coppel, Rob Foster, Geoff Hinchcliffe
Pat Coppel, Rob Foster, Geoff Hinchcliffe

Desktop Magazine Interview

Desktop Magazine interviewed Mitchell Whitelaw and me late in 2012 for their Dec/Jan issue. The theme of the issue was “the future” and our interview addressed the future of data-graphics / data-viz. It was fun to offer some thoughts and a real privilege to appear amongst the stellar list of contributors that editor Heath Killen assembled, including Dan Hill, Casey Reas, Seb Chan, Stuart Candy, and Alexandra Daisy Ginsberg. Danielle Neville wrote a great opener and the issue also featured some work from Patrick Stevenson-Keating, as well as fellow Canberran Paul Krix. Now that the dust has settled on the print version, the good folks at Desktop Magazine have published our interview on their website in two parts: Part one, and Part two.

Not long after its web publication Part two prompted an unequivocal rebuttal (via Twitter) from dataviz pro Ben Hosken of Flink Labs.

Twitter is not an ideal forum for in-depth exchange, but I believe that this was the offending statement from the interview:

Where are things headed in the immediate future for data visualisation?
GH: I predict data visualisation becoming a normal part of graphic design practice. Data viz cannot be considered a special case, something to be ignored or left to others to deal with. Data is a constant in our society and its significance is only increasing. It’ll take a bit of work for graphic designers to become completely comfortable with the practices of data representation but as they do I think we’ll see data being used more commonly and creatively – something that I look forward to.

I didn’t mean to underestimate what is involved in dataviz, particularly when dealing with large &/or complex datasets and creating dynamic interactive works (D3, Tableau, etc). My point was only that graphic designers will increasingly be expected to possess some data literacy – to have the ability to work with data and prepare engaging visual representations. The same thing happened with web design – it was initially treated as a curiosity by graphic designers and graphic design education but is today a staple of the graphic design profession and essential for graphic design graduates (IMHO). That is not to suggest that graphic design graduates need to be web dev experts, but only that they need a decent literacy in design for web. Ditto for dataviz. If nothing else, the exchange highlights the ambiguity of the term “dataviz”. It’s a huge domain with plenty of scope for contribution from a diversity of practitioners.

As well as some blue sky thinking on the future of data-graphics/viz, the interview provides some context for the teaching and research that Mitchell and I, along with our colleagues, are conducting at the University of Canberra. If you’re interested, we have programs at all levels: Bachelors, Honours, Masters by coursework, Masters by research, and doctorate.

Links: Heath Killen interview | Flink Labs

3D

In this work, the tweet stream is represented in a virtual 3D space with older tweets in the far distance and more recent tweets in the near distance. While the piece has some practical logic governing its arrangement, its primary aim is one of affect, creating an atmosphere in which the viewer floats amongst their tweet stream. It is most effective at closer proximity, where the sense of scale and depth of field becomes exaggerated. It presents an ambient form of browsing where tweets are discovered by chance as they float into view. The overlapping and blurring of tweets gives a sense of a cacophony of voices rather than the tidy timeline that is the norm of Twitter clients.

3D tweets
Tweets in space

Double clicking on a tweet shifts the focus and zooms the view to its proximity. Double clicking in space resets the view. Single clicking a tweet highlights it and others by the author. Clicking the time marker on the left of the screen reveals a histogram of the tweet stream. Dragging the time marker to a new position changes the focal depth to the tweets within the chosen time band.

The main 3D effect (scaling and parallax scrolling) comes courtesy of CSS 3D transforms and the depth of field is achieved using CSS text-shadow and box-shadow. The fill of the text and boxes is transparent, their form being made entirely of the CSS shadow. The level of shadow is determined by the Z-depth of the element relative to the current focal depth. The CSS is along the lines of:

color: rgba(0,0,0,0); text-shadow: 0 0 2px rgba(0,0,0,0.75);

In the above CSS, the text’s color property is set to transparent. Its text-shadow has no x and y offset, 2 pixels of blur, and the shadow’s colour is set to black (0,0,0) with an alpha (opacity) of 0.75.

Example: 1 pixel blur, 2 pixel blur, 3 pixel blur, 4 pixel blur.
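
Driving the blur from depth is then just a little arithmetic. A sketch (the property names and scaling factor are arbitrary): the further an element sits from the current focal depth, the more blur it receives.

function applyDepthOfField(el, z, focalDepth) {
  const blur = Math.min(8, Math.abs(z - focalDepth) / 250);      // px of blur per unit of depth
  el.style.color = 'rgba(0,0,0,0)';                              // transparent fill
  el.style.textShadow = '0 0 ' + blur + 'px rgba(0,0,0,0.75)';   // the form comes from the shadow
}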

While the rendering of the general 3D effect (scaling and parallax) is really impressive (even on mobile devices), re-rendering the blur of elements is much slower and there is a noticeable delay each time the depth of field is adjusted. There’s no denying that it’s a crude technique, but until depth of field is included in CSS 3D transforms it serves as an easy hack for simulating depth of field for text and simple shapes. For a bit of comparison, here is an early sketch using Flash and the Papervision 3D library for rendering.

http://gravitron.com.au/3d/flash/sketch/3d.swf

The sketch uses a small sample of cached tweets (from some years back) and does not interact with the live Twitter API. Clicking a tweet once zooms to its proximity. Clicking the tweet a second time resets the zoom. Despite obvious differences between the Flash/ActionScript and HTML/CSS/JS contexts, the Flash version works in a very similar way to the native HTML version. As with the CSS version, the depth of field blurring is based on the z-depth of the element relative to the current focal depth. The obvious difference is that the focal depth is adjusted dynamically on the rollover of elements. This is an effect that is only possible in the CSS version with a very limited number of elements. The Flash sketch also slows with the addition of more elements but it manages pretty capably with the selection of tweets included.

 

SOFT INDUCTION

Within the CSS language there is increasing reference to properties of time and space in rules for 3D transforms, animation, and transitions. From a graphic design perspective it’s evidence of how design and layout concepts borrowed from print are evolving in response to the computational context. I say “computational context” rather than “screen” because the changes are about much more than a shift from dots to pixels. In addition to references to time and space there is also an increasing incidence of computational concepts and techniques. For example, CSS already has limited support for variables which allow programmatic assignment of values for colour, size, etc.

For some, like the creative code scene, the engagement with computation is explicit and sees designers employing programming languages like Java and C++ using tools like Processing and openFrameworks. But the evolution of CSS points to a more subtle permeation of computational concepts and techniques, and a softer induction of designers into computational/programmatic practices. For example, even a simple concept like using CSS media-queries for different device widths represents a form of programmatic design. And once familiar with elementary programmatic concepts a curious designer can scaffold to more sophisticated computational concepts and techniques, and fully featured programming languages. More simply, I’m describing a “top-down” rather than “bottom-up” approach to programming – designers developing programmatic techniques through familiar tools and technologies rather than approaching programming as a foreign language to learn.

At this juncture it’s very tempting to segue into a discussion of Minecraft, and how it manages to gently induct kids into a broad range of computer activities well outside the bounds of its virtual 3D world… But, instead I’ll save that for a separate spiel and leave this at that.

Links: http://gravitron.com.au/3d

Tweet Report

Tweet Report takes a Twitter stream and renders it as an interactive infographic poster. Instead of the usual focus on the textual content of tweets, the Tweet Report treats the entire tweet stream as a dataset. Looking at an example of the JSON data that the Twitter API delivers, it is apparent that it already contains various classifications and values: statuses count, friends count, followers count, created date, etc. In addition to using these existing values, the Tweet Report performs a significant amount of data processing on the JSON object, creating totals for things like hashtags, mentions, retweets, replies, etc, and recording relationships between totals and those who contributed to them.
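
As a simplified sketch of the kind of processing involved (the field names follow the Twitter API of the time; the totals shown are a small subset of what the Report actually records):

function tallyStream(tweets) {
  const totals = { hashtags: {}, mentions: {}, retweets: 0, replies: 0 };
  for (const tweet of tweets) {
    if (tweet.retweeted_status) totals.retweets++;
    if (tweet.in_reply_to_status_id) totals.replies++;
    for (const tag of tweet.entities.hashtags) {
      totals.hashtags[tag.text] = (totals.hashtags[tag.text] || 0) + 1;
    }
    for (const mention of tweet.entities.user_mentions) {
      totals.mentions[mention.screen_name] = (totals.mentions[mention.screen_name] || 0) + 1;
    }
  }
  return totals;
}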

With its infographic aesthetic, the Tweet Report is particularly inspired by the work of Nicholas Felton, whose “Feltron Annual Reports” are seminal works in the data graphics field. The Feltron Reports present the minutiae of Felton’s daily life as beautifully crafted info-graphic posters. My Tweet Report draws some obvious cues from Felton’s work and shares his focus on quantifying the unlikely and overlooked.

While at first it presents as a static poster, the Tweet Report is an interactive work that rewards the inquisitive reader. Clicking different elements reveals their relationships to other elements within the polls, charts and lists.

Tweet Report
The red highlighting indicates the relationships between different items.

Attributes such as author, time, words, hashtags and links can all be used as keys. Once clicked, the Report uses CSS classes to target related elements within a group. The resulting index of CSS classes is quite complex but remarkably the HTML DOM manages it without issue. Using jQuery to highlight a group of elements via their CSS class is typically achieved like so:

$('.target_css_class').css('background-color', 'red');

However, jQuery cannot target SVG elements (such as the slices in a pie chart, or lines on a graph) via their CSS class. As a work-around I use temporary global CSS styles – adding a style element to the head of the HTML document, adding styling attributes for a particular class and letting the browser do the highlighting work on both HTML and SVG elements. For example:

// inject a temporary <style> element into the document head
$('head').append('<style type="text/css" id="highlightStyles"></style>');
// add a rule for the target class – the browser then restyles matching HTML and SVG elements
$('#highlightStyles').append('.target_css_class { background-color: red; }');

 

G R I D S

In addition to a typical horizontal column structure, the layout also employs a vertical grid. Implementing the vertical grid requires additional processing as standard CSS does not currently address modular vertical spacing. The default elastic boxes of the DOM expand as required unless given specific width/height values, resulting in ragged alignment across columns. In the case of a vertical grid we need to maintain elasticity to accommodate content, but want to step any expansion in increments of a minimum line-height value, avoiding the ragged vertical alignment across columns.

Vertical Grid
On the left, the height of each element is based on its content. On the right, the height of each element is conformed to a regular line-height.

The content is rendered to the DOM as per the illustration on the left. A script then checks the height of each element and normalises it to a set line-height increment, as per the illustration on the right. So, if the line-height is 25 pixels, height 19 becomes 25, height 32 becomes 50, height 71 becomes 75, etc. You can also see the vertical grid at work in GRIDS.
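
The normalisation pass amounts to rounding each element’s rendered height up to the nearest multiple of the line-height. A minimal sketch (the selector is a stand-in):

const LINE_HEIGHT = 25; // px

document.querySelectorAll('.grid-item').forEach(el => {
  const natural = el.offsetHeight;                                 // e.g. 19, 32, 71...
  const snapped = Math.ceil(natural / LINE_HEIGHT) * LINE_HEIGHT;  // ...becomes 25, 50, 75
  el.style.height = snapped + 'px';
});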

 

SYSTEMS

Programming the vertical grid is just one example of computationally codifying graphic design conventions – looking for the patterns and formulas embedded within graphic artefacts and design processes and automating them with computer code. There are many examples of programmatic, systemised approaches in “traditional” graphic design (which I write about elsewhere in these Notes). For example, Gerstner’s 1964 book “Designing Programmes” describes programmatic approaches to design and layout. Tschichold was another pioneer of systemised design, with the Penguin Classics epitomising his approach (see Twitter Modern Classics).

Designing Programmes spread
A spread from “Designing Programmes” (www.thinkingform.com)

My point here is that graphic design has a rich tradition in systemised design approaches which are ideally suited to the computational context. By connecting with those traditions we can learn a great deal but also negate common print versus screen tensions. While I’m personally excited by notions of computational design, I don’t mean to suggest that imagination, expertise and aesthetic judgement can all be computationally codified – but I do think there is value in attempting to understand the rules inherent in design practices. Close examination of design practices can reveal implicit programmatic processes which we can then attempt to automate in the computer context. If all of this makes you feel paranoid that some software will soon be doing your design work, I think it’s worth highlighting that analysing traditional design processes tends to emphasise the sophistication of the discerning human designer – it’s common to find that things easily achieved by hand are extremely difficult to automate with code.

 

TECHY BITS

Tweet Report is entirely generative and does not use any external graphics (apart from fonts). The bar charts are made using HTML DOM elements and the pie charts are rendered in SVG with some help from D3.js. The Report uses Masonry to assist with transitioning the layout in response to changes in the browser width, and also makes extensive use of Kernit for the fine-grained kerning of heading text and the large numerals.
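
For a sense of the generative approach, here’s a minimal D3-style pie (the D3 v4+ API is assumed here for brevity, and the data values are placeholders):

const data = [42, 17, 8, 3]; // e.g. counts of tweets, retweets, replies, mentions
const arc = d3.arc().innerRadius(0).outerRadius(100);
const slices = d3.pie()(data);

const g = d3.select('body')
  .append('svg').attr('width', 220).attr('height', 220)
  .append('g').attr('transform', 'translate(110,110)');

g.selectAll('path')
  .data(slices)
  .enter().append('path')
  .attr('d', arc)
  .attr('class', (d, i) => 'slice-' + i); // CSS classes make the highlighting trick above possible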

 

Links: http://gravitron.com.au/report | D3.js | Masonry.js | Kernit.js

 

Physics

Physics would, on first appearance, seem to be one of the most abstract representations amongst my collection of Twitter works; however, it is a fairly straightforward data visualisation with a few twists. The basic logic of representation is much like a graph. Each tweeter appears as a disc featuring their avatar pic, the size of which is determined by the number of tweets they have made. Complexity grows with the addition of time: the Twitter timeline is replayed as a time-lapse sequence, with discs arriving in the order that the tweets were made. A further increase in complexity occurs with the inclusion of physics: new tweeters fall from the top of the browser and bounce amongst the existing pool of tweeters, the discs rotating and jostling for position. You can grab discs and hurl them about the browser, causing explosive collisions and rebounds. Clicking and holding a disc causes it to act like a magnet, attracting all of its associated retweets. And again, there is a satisfying eruption as the retweet discs bustle their way free from the pack to find their magnetic reference disc.

A short video screencast of Physics in use.

While it is certainly satisfying to smash different tweeters about the screen and feel the forces of gravity and momentum at work, the physics simulator does also fulfil a more pragmatic role within the piece. The issue with visualising a large set of differently sized objects, like our collection of tweeters, is how to organise them in an economical way within the confines of the display area. There is much research and literature dedicated to addressing organisational problems such as this but instead of following the stacking algorithms route I used the Box2D physics simulator as a layout engine. Not only can the physics simulator deal with the size and space issue, it also handles dynamic changes in the size and quantity of objects. For example, when changing from Tweets to Retweets, Box2D dynamically repositions all elements to accommodate their changing scale. Box2D is an impressive physics engine and it’s easy to appreciate why it is so widely employed in computer gaming.

physics nav
Changing scale from total tweets to retweets

Working with Box2D adds another layer of abstraction to what is already a fairly crowded context: the web “page” employs HTML to define objects, CSS to style the appearance of objects, JavaScript to control the behaviour of objects, and Box2D to determine the positioning of objects. The Box2D engine makes no reference to browser windows or pages and is completely context-independent, working with its own virtual world and coordinates. To position objects on screen, the Box2D coordinates need to be translated for the display context. In a web setting designers typically render a scene using the HTML canvas element, but in the Physics piece I work directly with the HTML Document Object Model (DOM). Each of the discs is actually a <div> element with CSS border-radius converting the box into a circle. It’s remarkable how good the DOM rendering is, both with regards to frame rate and to image quality (obviously, the larger the browser window the slower the rendering, but at sizes under 1200×900 the performance is pretty decent). When viewing the Physics piece on a high resolution Retina screen, all of the objects are rendered with startling clarity and precision. From a graphic design perspective, the combination of physics + DOM is another fascinating evolution of the web “page”.
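
The translation step is essentially a render loop: step the physics world, then copy each body’s position and rotation onto its matching <div>. A sketch (a Box2dWeb-style API is assumed, and the world/discs setup is omitted):

const SCALE = 30; // pixels per Box2D metre – a common convention

function tick() {
  world.Step(1 / 60, 10, 10);          // advance the simulation
  world.ClearForces();

  for (const { body, el } of discs) {  // discs: [{ body: b2Body, el: <div> }]
    const pos = body.GetPosition();
    el.style.left = (pos.x * SCALE) + 'px';
    el.style.top = (pos.y * SCALE) + 'px';
    el.style.transform = 'rotate(' + body.GetAngle() + 'rad)';
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);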

Links: http://gravitron.com.au/physics | Javascript port of Box2d | jQuery Box 2D version | Stats.js