Creating an algorithmic brand
By Kelly Milligan
Here at Thinkingbox we love technology. The collision of interesting tech and creative ideas is what gets our whole team excited. It keeps us engaged and inspired year in, year out.
Starting fresh
In late 2017 we kicked off an exciting new chapter for the Thinkingbox brand: something interesting, engaging and inspiring that explores our curiosity for tech. Something flexible and unexpected, maybe even unpredictable. We decided to build a generative artistic system that would enhance our brand’s creative representation through technology: something that would be extremely difficult to create without it.
The idea showed immediate potential: real-time generative graphics for our new website, for mobile, TV, or any screen in between, and print-resolution renders for business cards, slide decks, or full-wall prints. We would create the system once, then get an infinite number of unique results out of it over time.
Working with our technology to create art on a scale that would be impossible for mere human hands.
Generative Art
“Generative art refers to art that, in whole or in part, has been created with the use of an autonomous system.”
Generative and Algorithmic art have become an increasingly strong artistic force over the past several decades. Modern artists thrive in an environment where tools and concepts are more accessible than ever, and social media provides a platform for their work that extends to vast audiences.
In-browser graphics technologies like Canvas and WebGL enable complex visual exploration in a medium where anyone can experience it. The web is interactive by default, meaning the audience can, and is encouraged to, engage directly with the artwork, participate, and help create it themselves.
A unique challenge
The core creative direction for this new frontier of brand design was real-time, interactive artwork running in the browser. The key visual concept is windows of artwork revealed through a solid background, including primary typography displayed in a “knockout” style.
This came with technical challenges and performance concerns that required attention from the jump. The art needed to be lightweight enough that it wouldn’t have a large impact on the rest of the application. And browser standards don’t yet offer a way to create the text “knockout” effect we wanted; we’d need to build it ourselves.
With our creative and performance requirements in mind, we decided upon a WebGL setup for the artwork, and for displaying it within the “stencil” elements that reveal it:
- For the artwork, WebGL provides advanced rendering possibilities. It would allow us to obtain the desired effect while also running smoothly.
- To achieve the “knockout” effect, we’d need to somehow blend our text and background graphics together. If we could render our text in WebGL as well, this would be possible using a fragment shader.
Creating our Art
With our vision in mind, we knew we’d need to combine different types of noise to position and colourize a collection of particles or lines. They move across the screen over time, resetting once they leave the viewport. We also add data influences from outside sources, so that each person who views the artwork sees something perfectly unique: no two the same.
The particles are drawn each frame, moving slowly across the screen. Noise is applied to change their position in a natural, flowing style. Their colour uses nested noise functions to aim for an organic colour mix without any clear patterns.
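As a rough illustration, here’s a minimal sketch of a noise-driven particle update. It assumes the simplex-noise npm package (our choice for illustration; the noise source isn’t named here), and the coefficients are placeholders rather than production values.

```ts
import { createNoise3D } from 'simplex-noise'; // assumed noise library

const width = window.innerWidth;
const height = window.innerHeight;

const positionNoise = createNoise3D();
const colourNoise = createNoise3D();

interface Particle { x: number; y: number; hue: number; }

const particles: Particle[] = Array.from({ length: 500 }, () => ({
  x: Math.random() * width,
  y: Math.random() * height,
  hue: 0,
}));

function update(time: number): void {
  for (const p of particles) {
    // Sample a flow field: the noise value at the particle's position
    // becomes a heading, giving natural, flowing motion
    const angle = positionNoise(p.x * 0.002, p.y * 0.002, time * 0.0001) * Math.PI * 2;
    p.x += Math.cos(angle) * 0.5;
    p.y += Math.sin(angle) * 0.5;

    // Nested noise for colour: feed one noise sample into another
    // so the colour mix stays organic, with no obvious repetition
    const inner = colourNoise(p.x * 0.001, p.y * 0.001, time * 0.0002);
    p.hue = (colourNoise(inner, inner * 2.0, time * 0.0001) + 1.0) * 180.0;

    // Reset particles once they leave the viewport
    if (p.x < 0 || p.x > width || p.y < 0 || p.y > height) {
      p.x = Math.random() * width;
      p.y = Math.random() * height;
    }
  }
}
```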
In order to move from individual particles to strands, we composite each frame (the particle positions and colours) onto the accumulated result of all previous frames. That accumulated texture is slightly faded each frame, creating soft trails for each particle.
By adjusting the fade-out speed, we slowly “paint” each particle’s motion and colour changes across the screen over time.
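A minimal sketch of that accumulation pass in THREE.js, assuming a “ping-pong” pair of render targets (the structure is our assumption; only the fading behaviour is described above):

```ts
import * as THREE from 'three';

declare const particleScene: THREE.Scene; // the particles drawn each frame
declare const particleCamera: THREE.OrthographicCamera;

const width = window.innerWidth;
const height = window.innerHeight;

// Two render targets swapped each frame ("ping-pong"): the previous
// accumulated frame is redrawn slightly faded, then new particles go on top.
let previous = new THREE.WebGLRenderTarget(width, height);
let current = new THREE.WebGLRenderTarget(width, height);

// Full-screen quad that copies the previous frame, multiplied by a fade factor
const fadeMaterial = new THREE.ShaderMaterial({
  uniforms: {
    previousFrame: { value: null },
    fade: { value: 0.98 }, // closer to 1.0 = longer, softer trails
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D previousFrame;
    uniform float fade;
    varying vec2 vUv;
    void main() {
      gl_FragColor = texture2D(previousFrame, vUv) * fade;
    }
  `,
});

const fadeCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const fadeScene = new THREE.Scene();
fadeScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), fadeMaterial));

function renderTrails(renderer: THREE.WebGLRenderer): void {
  // 1. Draw the previous frame into the current target, slightly faded
  fadeMaterial.uniforms.previousFrame.value = previous.texture;
  renderer.setRenderTarget(current);
  renderer.render(fadeScene, fadeCamera);

  // 2. Draw this frame's particles on top of the faded history
  renderer.autoClear = false;
  renderer.render(particleScene, particleCamera);
  renderer.autoClear = true;
  renderer.setRenderTarget(null);

  // 3. Swap, so this frame becomes the next frame's history
  [previous, current] = [current, previous];
}
```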
Finally, we add some direct user interaction to control the brightness, speed, and timescale of each particle.
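A small sketch of what that interaction layer could look like; the uniform names and easing values are illustrative, not from the production code.

```ts
// Shader uniforms the artwork reads each frame (names are illustrative)
const uniforms = {
  brightness: { value: 1.0 },
  speed: { value: 1.0 },
  timescale: { value: 1.0 },
};

let targetSpeed = 1.0;

window.addEventListener('pointermove', (event: PointerEvent) => {
  // Horizontal position brightens the artwork; pointer movement speeds it up
  uniforms.brightness.value = 0.5 + event.clientX / window.innerWidth;
  targetSpeed = 1.0 + Math.hypot(event.movementX, event.movementY) * 0.05;
});

function tick(): void {
  // Ease toward the target so interaction feels fluid rather than jumpy
  uniforms.speed.value += (targetSpeed - uniforms.speed.value) * 0.05;
  uniforms.timescale.value = uniforms.speed.value;
  targetSpeed += (1.0 - targetSpeed) * 0.02; // relax back toward idle
  requestAnimationFrame(tick);
}
tick();
```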
Advanced Typography
The canvas context differs from regular HTML in that it simply renders a bitmap image. Text characters don’t follow the same rules as CSS, and can only be drawn using built-in APIs. In WebGL, no such APIs exist out of the box. Historically, this was often solved with predefined images of the required text, used as “textures”. In our case that wouldn’t do: we wanted to use the knockout effect on project names, driven by our CMS content.
SDF (Signed Distance Field) font rendering is a relatively new approach to rendering text in a graphics environment, first created by game developers to render text and UI in-game. It improves the flexibility of font rendering, allowing text to scale (like vector paths) while remaining high quality, and drastically reduces the file size compared to packaging every text instance as an image texture. All that’s needed is a font “atlas” containing the distance fields for each character.
This font atlas is created with software that automatically calculates and draws the “field” for each character. In this case, the colour channels are used as well to increase accuracy and quality (a technique known as MSDF, Multi-channel Signed Distance Fields).
When working with THREE.js, there are some excellent tools for this approach that make rendering text in WebGL much easier.
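For example, one widely used toolchain (our assumption; the specific tools aren’t named here) is mattdesl’s three-bmfont-text with load-bmfont, which ships an MSDF shader:

```ts
import * as THREE from 'three';

// These packages ship as CommonJS without type definitions
const createGeometry = require('three-bmfont-text');
const loadFont = require('load-bmfont');
const MSDFShader = require('three-bmfont-text/shaders/msdf');

const scene = new THREE.Scene();

// The .fnt file holds the glyph metrics; the .png is the MSDF atlas itself
loadFont('fonts/brand-font.fnt', (err: Error | null, font: any) => {
  if (err) throw err;

  // One quad per glyph, laid out from the atlas metrics
  const geometry = createGeometry({ font, text: 'THINKINGBOX' });

  new THREE.TextureLoader().load('fonts/brand-font-atlas.png', (atlas) => {
    // The MSDF shader reconstructs sharp edges from the distance fields,
    // so the text stays crisp at any scale
    const material = new THREE.RawShaderMaterial(
      MSDFShader({ map: atlas, color: 0xffffff, transparent: true })
    );
    scene.add(new THREE.Mesh(geometry, material));
  });
});
```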
To bridge the gap between the DOM and our graphics-only WebGL scene, we add an HTML version of the text to the page as well. This HTML version is measured each time the page resizes, and its CSS rules are applied to the WebGL text in turn. This allowed our developers to style the text with CSS as they normally would, while still providing this advanced effect.
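A sketch of how that bridge might look; the element selector, mesh, and atlas size are hypothetical stand-ins:

```ts
import * as THREE from 'three';

const ATLAS_FONT_SIZE = 42; // assumed: the pixel size the atlas was generated at
const headingElement = document.querySelector<HTMLElement>('.knockout-heading')!;
declare const headingMesh: THREE.Mesh; // the SDF text mesh created earlier

function syncTextWithDOM(element: HTMLElement, textMesh: THREE.Mesh): void {
  const rect = element.getBoundingClientRect();
  const style = window.getComputedStyle(element);

  // Match the WebGL text's position and size to the laid-out HTML element
  // (assumes a pixel-space orthographic camera)
  textMesh.position.set(rect.left, -rect.top, 0);
  const fontSize = parseFloat(style.fontSize);
  textMesh.scale.setScalar(fontSize / ATLAS_FONT_SIZE);

  // Properties like letter-spacing and line-height can be read the same
  // way and fed into the text geometry's layout options
}

window.addEventListener('resize', () => syncTextWithDOM(headingElement, headingMesh));
```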
Blending the layers
With our artwork and typography working, we just needed a way to combine our two layers. The artwork would be revealed through rectangular or circular windows, and through pieces of text. When rendering in WebGL, we’re essentially constantly creating image textures, and these can be used however we see fit.
By rendering the artwork as one texture, and a second texture containing just our “stencil” elements, we could combine the two and make calculations to perform our blend. A core feature of WebGL is the ability to use vertex and fragment shaders: small, low-level programs that run directly on the GPU, responsible for much of the benefit that WebGL offers. The fragment shader specifically calculates the colour of each pixel on the screen (or each pixel in our texture). By separating our stencil into three distinct colour channels (red, green and blue), we’re able to easily calculate our blend between the two textures, as sketched in the code after this list:
- Solid red elements will display the artwork, acting as a window to the background.
- Solid green elements display a washed out, almost white version of the artwork.
- Solid blue elements act as masks, allowing us to apply masking effects to our elements.
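Here’s a sketch of that blend pass; the shader logic is our reading of the channel rules above rather than the production shader, and the background colour is a placeholder:

```ts
import * as THREE from 'three';

const blendMaterial = new THREE.ShaderMaterial({
  uniforms: {
    artwork: { value: null }, // texture from the generative-art pass
    stencil: { value: null }, // texture containing the red/green/blue elements
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D artwork;
    uniform sampler2D stencil;
    varying vec2 vUv;

    void main() {
      vec3 art = texture2D(artwork, vUv).rgb;
      vec4 st = texture2D(stencil, vUv);

      vec3 background = vec3(1.0);             // solid page colour (placeholder)
      vec3 washed = mix(vec3(1.0), art, 0.15); // near-white version of the art

      vec3 colour = background;
      colour = mix(colour, art, st.r);        // red: a window straight to the artwork
      colour = mix(colour, washed, st.g);     // green: washed-out artwork
      colour = mix(colour, background, st.b); // blue: mask the element back out

      gl_FragColor = vec4(colour, 1.0);
    }
  `,
});

// As with the trail pass, this material is drawn on a full-screen quad
const blendScene = new THREE.Scene();
blendScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), blendMaterial));
```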
The result
A flexible, dynamic, real-time rendering setup which extends beyond what we can normally do in-browser.
Beyond the screen
Our final step: bringing our unique artworks from the screen to other mediums. The in-browser Canvas APIs are surprisingly powerful and capable. As we’re essentially just rendering an image, it can be scaled up beyond normal screen resolutions, for example rendering the art at 300dpi for print usage.
A simple tool allows us to choose a rendering size, then slowly paint our artwork at a larger size.
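In spirit, the tool might look something like this sketch. The helper functions are assumptions standing in for the real simulation, and a production version may need to render in tiles to stay under GPU texture-size limits.

```ts
import * as THREE from 'three';

declare function resetTrailTargets(width: number, height: number): void; // assumed helper
declare function stepAndRender(): void; // assumed: one simulation update + draw pass

function renderForPrint(
  renderer: THREE.WebGLRenderer,
  widthInches: number,
  heightInches: number,
  dpi = 300
): void {
  const width = Math.round(widthInches * dpi);
  const height = Math.round(heightInches * dpi);

  renderer.setSize(width, height, false); // resize the drawing buffer, not the CSS size
  resetTrailTargets(width, height);       // recreate the ping-pong targets at full size

  // "Slowly paint": run many frames so the trails accumulate at print density
  for (let frame = 0; frame < 2000; frame++) {
    stepAndRender();
  }

  // Needs a context created with preserveDrawingBuffer: true, or the export
  // must happen in the same task as the final render
  renderer.domElement.toBlob((blob) => {
    if (!blob) return;
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'artwork-print.png';
    link.click();
  }, 'image/png');
}
```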
Zooming in shows the very fine detail we can capture.
Evolution
We consider this iteration of our artwork an MVP, our first cut. We’ve created something completely unique for each person who sees it, each time it’s viewed. We would love for you to experience our new brand artwork for yourself; you can see it on our website at thinkingbox.com.
With lessons learned and tools created, new art pieces are on the way.
Our brand continues to evolve!