Autonomous Dynamic Visualisation

I am a strong proponent of speculative design. The simple argument is that unshackling design from the usual constraints and social expectations gives designers the headroom to create the unusual and unexpected, or to take a critical or even antagonistic position. For research, speculative experimentation can drive innovation and lead research in exciting new directions. When I create speculative work, I do so without expectation of immediate practical or commercial application, but I know that ideas and techniques developed during a speculative process inevitably find their way into more conventional works. A fine example of this is my collaboration with Crowd Convergence.

Crowd Convergence
Crowd Convergence social media aggregation and moderation service

Crowd Convergence provide a social media moderation service which allows their customers to aggregate various streams and filter the results. It’s perfect for large-scale events like sporting matches, where organisers want to broadcast social media updates on a big screen and need to curate the content (i.e. block anything offensive). Having seen some of my Twitter experiments, Crowd Convergence contacted me about creating visualisers for their social media service. I was stoked to receive the invitation and somewhat surprised that my speculative work had found such a directly relevant application.

“AIR”, the first of the Crowd Convergence pieces, extended techniques from one of my earlier experiments to create a 3D motion graphics sequence. After displaying a status post, the view zooms through a cloud of status updates, twisting and turning to arrive at the next post in the stream. Below, AIR in situ at London’s “Clothes Show Live” and at the FINA World Junior Swimming Championships in Dubai.

AIR example
AIR in situ at London’s Clothes Show Live. From Stylonylon.
AIR Example 2
AIR in situ at the FINA World Junior Swimming Championships in Dubai.

Photowall, as the name suggests, renders a social stream as a fullscreen tiled wall of photos, overlaying the screen name of the author and any associated status text.

Photowall example
Photowall in situ at the 2014 Mobile World Congress in Barcelona.

Both of these works make extensive use of CSS 3D transforms; AIR for its rendering of 3D space, and Photowall for the transitions of each image tile. As I’ve stated previously (here and here), the evolution of CSS is a brilliant example of the changing practice of graphic design. Concepts and techniques derived from print design are being redefined and extended to address the transient qualities of the computer screen. It’s an active area that is evolving quickly.
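To give a flavour of the technique, here is a minimal sketch of a Photowall-style tile transition built on CSS 3D transforms; the class names, timing and the flip itself are my illustrative assumptions, not the production styles:

```css
/* Hypothetical sketch, not the actual Photowall css. */
.wall {
  perspective: 1000px; /* establishes the 3D viewing context for the tiles */
}
.tile {
  transform-style: preserve-3d;
  transition: transform 0.6s ease; /* animates any change to the transform */
}
.tile.entering {
  transform: rotateY(180deg); /* tile flips in around its vertical axis */
}
```

Because the browser composites 3D transforms on the GPU, transitions like this stay smooth even with a full wall of tiles.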

Another aspect of the project that is worth noting is the format of the works. While they behave like fullscreen desktop software, they are in fact standard web applications utilising html, css and javascript, and running in a browser window. As someone who has dabbled with all sorts of IDEs and programming languages, it’s marvellous what can be achieved today within the context of the humble browser.

Crowd Convergence case study video of their work at the 2014 Mobile World Congress in Barcelona. Features glimpses of my AIR and Photowall visualisers, and also shows some interaction with the moderation console.

Another novel aspect of the works is their autonomy – unlike typical web interfaces, the Crowd Convergence pieces operate without direct user interaction. Instead of waiting for clicks or text input, the pieces poll the Crowd Convergence server and retrieve data containing the content to display and the instructions for its playback. So, while not interactive in the usual sense (buttons, hyperlinks, text input, etc.), these autonomous works can be controlled to some extent through Crowd Convergence’s moderation console: stop, start, duration of transition, duration of hold, ordering of status posts, as well as customisable features such as colours, typefaces, etc.
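In javascript terms, that poll-and-render loop is roughly the following sketch; the payload fields, interval and merge logic are hypothetical placeholders, not Crowd Convergence’s actual API:

```javascript
// Hypothetical sketch of an autonomous visualiser's polling loop.
// Payload fields (holdMs, transitionMs, posts) are illustrative only.
function applyPlayback(state, payload) {
  // Merge any new playback instructions and content into the current state,
  // falling back to the existing values when a field is absent.
  return {
    ...state,
    holdMs: payload.holdMs ?? state.holdMs,
    transitionMs: payload.transitionMs ?? state.transitionMs,
    queue: payload.posts ?? state.queue,
  };
}

function startPolling(url, state, render, intervalMs = 5000) {
  // The piece drives itself: no clicks, just a timer and a server to ask.
  return setInterval(async () => {
    try {
      const res = await fetch(url);
      state = applyPlayback(state, await res.json());
      render(state);
    } catch (err) {
      // An autonomous piece must survive outages: keep playing the
      // current queue and simply try again on the next tick.
    }
  }, intervalMs);
}
```

The try/catch is doing real work here – with nobody at the keyboard, the piece has to degrade gracefully rather than error out.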

Of course, an important consequence of their autonomy is that the works need enough smarts to cope with variation or disruption. It’s easily the most time-consuming aspect of production – testing for “what-ifs” and edge cases.

I think there is massive potential for autonomously visualising networked data and information – you only have to note the sheer quantity and scale of the public screens you encounter on a daily basis. The norm for many of these large public screens is to serve as billboards, displaying a queue of static posters, but there is no reason they can’t be used in more dynamic and interactive ways.

The Crowd Convergence collaboration is just one example of my recent forays into autonomous forms of visualisation. It is an area that is ripe for innovation, and one that I am continuing to explore.

Driving Forces

In March 2014 I had the pleasure of presenting at the “Driving Forces” conference at the excellent ANU Art School. The conference tagline, “The Role of Artists and Designers in Interdisciplinary Research”, gives a pretty precise idea of what the conference was all about. Mitchell Whitelaw and I based our presentation on our experiences developing exploratory web interfaces for a beautiful collection of high-res scans of “The Queenslander”, an early 20th-century rural magazine. Our work was commissioned by the State Library of Queensland, a fantastic institution renowned for its progressive outlook regarding both physical and virtual manifestations of the modern library.

Discover the Queenslander preview
Work-in-progress

The conference, organised by Erica Seccombe, was really inspiring and we were fortunate to share our session with Nola Farman and Leah Heiss – I strongly recommend checking the work of both.

Our work, “Discover the Queenslander”, will be live on the SLQ site soon. Here is an abstract of our presentation…

An Interdisciplinary Machine: Reflections on Digital Practice-led Research
Mitchell Whitelaw and Geoff Hinchcliffe
Centre for Creative and Cultural Research
University of Canberra

Abstract

In this paper we reflect on our own practice as designers, artists and programmers to argue that computation is a fertile site of interdisciplinarity, and software production is an inherently creative field ideally suited to practice-led enquiry.

Computation has been interdisciplinary since its inception; the earliest computing machines were used by physicists, meteorologists, cryptographers and biologists. Computation is in a sense indifferent to disciplines, reducible ultimately to a simple set of formal operations. This is not to reduce or dissolve disciplinary differences, but to create a common ground, a machine that in its indifference fosters connections between disparate domains. And computation is intrinsically pragmatic – it makes things happen, a verb, not a noun. It does not simply link different domains, but engages them in action, in joint projects and creations.

While code and programming have been professionalised and “disciplined” through computer science and software engineering, the explosion of the Web and its accessible programming languages drew broad participation in new forms of software production. Twenty-five years later, we find computing at the heart of creative practice, as evidenced by the Art+Code movement.

Our recent work with the State Library of Queensland demonstrates this interdisciplinary pragmatism in action. Our work relies on the practical affordances of both code and code culture: powerful software toolkits ready to be appropriated and recombined for novel outcomes. Data becomes a creative material as well as a shared language; in this project heritage collection data links us with the concerns and conventions of historians, librarians, archivists and information managers. Computation enables our joint project: creating rich new forms of exploration and engagement with digital collections. Computation also privileges a practice-led approach. With code as a medium, we work through rapid ideation and experimentation towards an outcome that is only apparent in retrospect. We produce software, but we are not software engineers – rather we sketch, play, copy and paste; coding is a hands-on practice with its own pleasures and pitfalls, rather than a rationalised process. As artists and designers we respond to the specificity of each collection and the qualities of our materials; the solutions are bespoke, customised to each context, but also produce generalisable knowledge.

3D

In this work, the tweet stream is represented in a virtual 3D space with older tweets in the far distance and more recent tweets in the near distance. While the piece has some practical logic governing its arrangement, its primary aim is one of affect, creating an atmosphere in which the viewer floats amongst their tweet stream. It is most effective at closer proximity, where the sense of scale and depth of field becomes exaggerated. It presents an ambient form of browsing where tweets are discovered by chance as they float into view. The overlapping and blurring of tweets gives a sense of a cacophony of voices rather than the tidy timeline that is the norm of Twitter clients.

3D tweets
Tweets in space
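The depth arrangement described above could be sketched like this; the time-to-depth scale factor and the transform string are my illustrative choices, not the original code:

```javascript
// Sketch: place each tweet along the z-axis by age, with the newest
// nearest the viewer. The scale factor is an arbitrary illustrative value.
function zForTweet(timestampMs, newestMs, msPerUnit = 60000) {
  return -(newestMs - timestampMs) / msPerUnit; // older → more negative z → farther
}

function tweetTransform(x, y, z) {
  // Applied via element.style.transform, inside a container that has a
  // css perspective set, so scale and parallax come for free.
  return `translate3d(${x}px, ${y}px, ${z}px)`;
}
```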

Double clicking on a tweet shifts the focus and zooms the view to its proximity. Double clicking in empty space resets the view. Single clicking a tweet highlights it and other tweets by the same author. Clicking the time marker on the left of the screen reveals a histogram of the tweet stream. Dragging the time marker to a new position changes the focal depth to the tweets within the chosen time band.
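The time-marker behaviour reduces to a simple mapping from drag position to focal depth; a sketch, with the track geometry and depth range as assumed values:

```javascript
// Sketch: translate a drag position on the time marker into a focal depth.
// Which end of the track maps to the deepest tweets is an assumption here.
function focalDepthFromMarker(dragY, trackHeight, zNear, zFar) {
  const t = Math.min(1, Math.max(0, dragY / trackHeight)); // clamp to the track
  return zNear + t * (zFar - zNear); // linear interpolation through the scene
}
```

Every element’s blur can then be recomputed against the new focal depth, shifting the “in focus” band through the cloud of tweets.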

The main 3D effect (scaling and parallax scrolling) comes courtesy of CSS 3D transforms, and the depth of field is achieved using CSS text-shadow and box-shadow. The fill of the text and boxes is transparent, their form being made entirely of the CSS shadow. The amount of blur is determined by the Z-depth of the element relative to the current focal depth. The CSS is along the lines of:

```css
color: rgba(0,0,0,0);
text-shadow: 0 0 2px rgba(0,0,0,0.75);
```

In the above CSS, the text’s color property is set to fully transparent. Its text-shadow has no x or y offset, 2 pixels of blur, and a black shadow colour (0,0,0) with an alpha of 0.75.
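Putting that together, the depth-to-blur mapping can be sketched as follows; the scale factor and blur cap are my own illustrative values, not those of the original piece:

```javascript
// Sketch: map an element's distance from the focal plane to a shadow blur,
// reproducing the transparent-fill trick. Constants are illustrative only.
function depthOfFieldStyle(z, focalZ, pxPerUnit = 0.01, maxBlurPx = 6) {
  const blur = Math.min(maxBlurPx, Math.round(Math.abs(z - focalZ) * pxPerUnit));
  return {
    color: 'rgba(0,0,0,0)',                        // transparent fill
    textShadow: `0 0 ${blur}px rgba(0,0,0,0.75)`,  // form comes from the shadow
  };
}
```

Elements sitting at the focal plane get zero blur and read crisply; everything nearer or farther dissolves by degrees.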

Example: 1 pixel blur, 2 pixel blur, 3 pixel blur, 4 pixel blur.

While the rendering of the general 3D effect (scaling and parallax) is really impressive (even on mobile devices), re-rendering the blur of elements is much slower, and there is a noticeable delay each time the depth of field is adjusted. There’s no denying that it’s a crude technique, but until depth of field is included in CSS 3D transforms it serves as an easy hack for simulating depth of field for text and simple shapes. For a bit of comparison, here is an early sketch using Flash and the Papervision 3D library for rendering.

http://gravitron.com.au/3d/flash/sketch/3d.swf

The sketch uses a small sample of cached tweets (from some years back) and does not interact with the live Twitter API. Clicking a tweet once zooms to its proximity. Clicking the tweet a second time resets the zoom. Despite obvious differences between the Flash/Actionscript and html/css/js contexts, the Flash version works in a very similar way to the html native version. As with the css version, the depth of field blurring is based on the z-depth of the element relative to the current focal depth. The obvious difference is that the focal depth is adjusted dynamically on the rollover of elements – an effect that is only possible in the css version with a very limited number of elements. The Flash sketch also slows with the addition of more elements, but it manages pretty capably with the selection of tweets included.


Soft Induction

Within the CSS language there is increasing reference to properties of time and space in rules for 3D transforms, animation, and transitions. From a graphic design perspective it’s evidence of how design and layout concepts borrowed from print are evolving in response to the computational context. I say “computational context” rather than “screen” because the changes are about much more than a shift from dots to pixels. In addition to references to time and space, there is also an increasing incidence of computational concepts and techniques. For example, CSS already has limited support for variables, which allow programmatic assignment of values for colour, size, etc.

For some, like the creative code scene, the engagement with computation is explicit and sees designers employing programming languages like Java and C++ through tools like Processing and openFrameworks. But the evolution of CSS points to a more subtle permeation of computational concepts and techniques, and a softer induction of designers into computational/programmatic practices. For example, even a simple concept like using CSS media-queries for different device widths represents a form of programmatic design. And once familiar with elementary programmatic concepts, a curious designer can scaffold up to more sophisticated computational concepts and techniques, and fully featured programming languages. More simply, I’m describing a “top-down” rather than “bottom-up” approach to programming – designers developing programmatic techniques through familiar tools and technologies rather than approaching programming as a foreign language to learn.
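As a trivial sketch of that soft, programmatic flavour, here are the two features mentioned above side by side; the selectors and values are purely illustrative:

```css
/* A value declared once as a variable, then consumed wherever needed… */
:root { --accent: #e91e63; }
h1 { color: var(--accent); }

/* …and a media query: a conditional, evaluated by the browser. */
@media (max-width: 600px) {
  h1 { font-size: 1.2rem; }
}
```

Neither looks like “programming”, yet variables and conditionals are exactly the kind of elementary programmatic concepts a designer absorbs here without ever opening an IDE.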

At this juncture it’s very tempting to segue into a discussion of Minecraft, and how it manages to gently induct kids into a broad range of computer activities well outside the bounds of its virtual 3D world… But, instead I’ll save that for a separate spiel and leave this at that.

Links: http://gravitron.com.au/3d