Generative art 

- Where code and art meet

Whether you're an animator, a UX designer, or a developer, you might have stumbled across the terms Generative Art, Algorithmic Expression, or maybe Code Poetry. We are in the century of data, and it's becoming more and more common to use relevant data points to generate design for your digital product. We surrender a part of our control to a process in order to create a visual expression. Think about it this way: instead of painting an image, you tell a computer what you want to create, and then the computer paints it for you.

Think about an algorithm as a recipe, let’s say for a pie. The recipe tells you how much sugar, flour, butter, etc. you should add to the mix. An algorithm is simply a detailed recipe for the design, and it may include computer code, functions, expressions, math, or other input which ultimately determines the form the art will take.

If you want lots of dots in your artwork, you tell the computer to draw circle shapes: how big they should be, how many to draw, where to put them in the frame, what color they should be, and so on.
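To make that concrete, here is a minimal sketch in Python that writes a field of random dots out as an SVG file. The canvas size, dot count and palette are arbitrary assumptions, not part of any particular artwork:

```python
import random

# Every visual decision (how many, how big, where, what color)
# is a parameter we hand over to the machine.
WIDTH, HEIGHT = 800, 600                                 # size of the frame
DOT_COUNT = 200                                          # how many dots
PALETTE = ["#f45b69", "#456990", "#49beaa", "#ffd166"]   # what colors (arbitrary choices)

def draw_dots() -> str:
    """Return an SVG string with randomly placed, sized and colored circles."""
    circles = []
    for _ in range(DOT_COUNT):
        x = random.uniform(0, WIDTH)       # where to put it in the frame
        y = random.uniform(0, HEIGHT)
        r = random.uniform(2, 20)          # how big
        color = random.choice(PALETTE)     # what color
        circles.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="{r:.1f}" fill="{color}" />')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{WIDTH}" height="{HEIGHT}">'
            + "".join(circles) + "</svg>")

if __name__ == "__main__":
    with open("dots.svg", "w") as f:
        f.write(draw_dots())
```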

Once the algorithm is written, you hit the render button and the computer starts plotting out your piece of art. The thing is, if you press that button again, the same artwork will appear once more, identical to the first. With algorithms there's also the wonderful opportunity to create a unique image every time.
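In code, that difference usually comes down to the random seed. A small sketch (the render function below is just a stand-in for a real renderer):

```python
import random
import time

def render(seed: int) -> list:
    """Stand-in for the render button: return the dot positions the algorithm chose."""
    rng = random.Random(seed)  # the seed fully determines the outcome
    return [(rng.uniform(0, 800), rng.uniform(0, 600)) for _ in range(5)]

# Pressing "render" twice with the same recipe gives the identical artwork...
assert render(42) == render(42)

# ...while feeding in a fresh seed gives a unique piece every time.
print(render(int(time.time())))
```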

Back to our pie recipe algorithm. If we want more variations of pie, we have to add a dynamic factor. Let’s say that the pie recipe adjusts to the current season and chooses fitting ingredients: strawberry pie in the summer, Västerbotten pie in the autumn, and so on. We have now created a dynamic algorithm that will give a unique result every time we revisit it.
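A minimal sketch of that seasonal recipe, assuming some arbitrary season boundaries and palettes:

```python
from datetime import date

# The algorithm reads an outside signal (the current month) and adjusts
# its own ingredients. The boundaries and palettes are illustrative assumptions.
SEASONAL_PALETTES = {
    "spring": ["#a8e6cf", "#dcedc1", "#ffd3b6"],
    "summer": ["#ff8b94", "#ffaaa5", "#ffd3b6"],   # strawberry-pie reds
    "autumn": ["#d4a373", "#e9c46a", "#bc6c25"],   # Västerbotten-pie yellows
    "winter": ["#cbe7f0", "#8ecae6", "#457b9d"],
}

def season_for(month: int) -> str:
    if month in (3, 4, 5):
        return "spring"
    if month in (6, 7, 8):
        return "summer"
    if month in (9, 10, 11):
        return "autumn"
    return "winter"

palette = SEASONAL_PALETTES[season_for(date.today().month)]
print(palette)   # the same algorithm, a different result each time you revisit it
```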

Your visual expression can change thanks to the computer generating a random number, or thanks to an external piece of data, which can range from recorded heartbeats to the number of startups in Helsingborg, or a Saturday afternoon at Systembolaget 10 minutes before closing.

That random value can be replaced with something meaningful, and then the artwork all of a sudden becomes an infographic that conveys a story. This map was made to clarify the scale of the current refugee crisis and shows the flow of asylum seekers to European countries over time. Each moving point on the map represents 25 people.

You are looking at two very ordinary overlapping dots. But if I tell you that the front dot represents how many text messages you sent this year, and the background dot represents how many you sent last year, it becomes interesting.

So, to explain why this is an interesting topic and why it's trending, we have to take a look at Generation Z. They are 14-21-year-olds who grew up with social media and fast internet access. They expect more out of their digital experiences.

Gen Z are picky: they want the apps and services they use to have a brand strong and clear enough to match their own. The graphics should adjust to them and become unique for each user. If the graphics are based on an algorithm, it's very easy to change a color depending on your age, eye color, favourite food, etc.
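As a hedged illustration of that idea, a few user attributes can be hashed into a stable, personal accent color. The attributes and the mapping below are assumptions for the sake of example, not a real product feature:

```python
import colorsys
import hashlib

def personal_color(age: int, eye_color: str, favourite_food: str) -> str:
    """Derive a stable accent color from a few user attributes (illustrative only)."""
    digest = hashlib.sha256(f"{age}|{eye_color}|{favourite_food}".encode()).digest()
    hue = digest[0] / 255                            # first byte of the hash picks the hue
    r, g, b = colorsys.hsv_to_rgb(hue, 0.6, 0.9)
    return "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))

print(personal_color(27, "green", "tacos"))  # same person, same color, every time
```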

This new generation of users wants an experience whose expression makes them feel emotions and tells a strong story. The art might be generated digitally by a computer, but it can be based on organic data with a story that reflects human values.

They want a living experience. Anything generated can easily animate and change depending on the user's interactions and movement. A static experience is a boring one; today's users want a responsive experience that moves with them.

At my previous workplace, an interaction design studio in Malmö, we used algorithmic art as part of our brand: for backgrounds in presentations, on our business cards, etc. It reflects the core of the company really well, since they are a design AND tech company. 'We are a bunch of strong personalities and individuals, but somehow when you zoom out there is still a red thread.'

So I'll take you through the quick design and thinking process behind a generative piece of art.

This is something my previous colleague Emil and I made; we call it ’Mountains’. We wanted to combine sharp peaks and blurry areas to make it look a bit like water.

Let’s say we made it to be used as a wallpaper on your phone’s home screen. It’s good that it doesn’t have too many distracting details; that makes the apps stand out. Using lots of color also shows off the great screen capabilities of our modern-day smartphones.

We add one of our dynamic factors to the algorithm, the thing that is going to make it unique. Let’s say that late at night, when you haven’t used your phone very frequently, the colors become calmer and more soothing.

If you listen to music with a high number of Beats Per Minute, the colors become more intense and the waves grow more aggressive and energetic to match your mood.
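Sketching both dynamic factors at once: the hour of the day and the music's BPM feed into the palette and the wave shape. Every threshold, range and hue below is an illustrative assumption, not how 'Mountains' actually works:

```python
import colorsys
from datetime import datetime
from typing import Optional

def wallpaper_mood(bpm: float, hour: Optional[int] = None) -> dict:
    """Map two dynamic factors to rendering parameters for the wallpaper.

    Assumed mapping: late-night hours damp the saturation, while a high BPM
    raises intensity and wave amplitude.
    """
    hour = datetime.now().hour if hour is None else hour
    night = hour >= 23 or hour < 6

    intensity = min(max((bpm - 60) / 120, 0.0), 1.0)   # 60 bpm -> 0.0, 180 bpm -> 1.0
    saturation = 0.2 + 0.6 * intensity
    if night:
        saturation *= 0.5                              # calmer, more soothing colors at night
    amplitude = 10 + 60 * intensity                    # more aggressive waves with the beat

    r, g, b = colorsys.hsv_to_rgb(0.58, saturation, 0.9)  # a fixed blue-ish hue
    color = "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))
    return {"color": color, "wave_amplitude": round(amplitude, 1)}

print(wallpaper_mood(bpm=170, hour=14))  # energetic afternoon listening session
print(wallpaper_mood(bpm=65, hour=1))    # quiet night, calm palette
```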

When someone calls you, in this case me, the pattern could change with every vibration. 

Algorithmic expression is the first step towards fully generated digital experiences. My job as a visual designer will look very different in the future: I won't be making buttons in Photoshop, but rather explaining to a nifty artificial intelligence how to do it.

We are learning how to formulate the recipe rather than the final product, and it’s a new way of working. It’s also a challenge to collaborate on and talk about something whose outcome you can’t really predict. But that’s what makes it so exciting!