Hello Reddit

Wednesday, October 19, 2011.

Hello Reddit,


This page was created in response to replies from "probablyreadit", "line10gotoline10", "voyvf", "PencilAbuser" and "ironylocks" to an initial comment of mine, which can be found here.

First, a very quick introduction and disclaimer to shamelessly cover my own ass: These scripts were not originally written for wide (read: any) dissemination, and so I'd caution against using them to learn good Python programming practices.

The scripts that can be found here are licensed under the CC-BY-NC 3.0 license.

A brief note on the output. While it is possible to give each script a predetermined background color which will show up in the output in any region not subsequently overdrawn (by filling the output buffer before drawing), it is much more practical to abstain from doing so. Since the buffer is initialized to 'nothing', i.e. alpha = 0, the output image keeps an alpha channel, which allows for experimentation during post-processing (I use Gimp). As an aside, interesting effects can sometimes be achieved by filling in the background with a (possibly scaled) version of the reference image. Multiple passes can be generated and composited in a large number of ways (though not all of them are aesthetically pleasing).
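To make that concrete, here's a minimal sketch of the transparent-buffer approach (using PIL; the actual scripts may do it differently, and the stroke and file names are just placeholders):

    from PIL import Image, ImageDraw

    W, H = 800, 600
    # Initialize the buffer to 'nothing': every pixel fully transparent.
    out = Image.new("RGBA", (W, H), (0, 0, 0, 0))
    draw = ImageDraw.Draw(out)
    draw.line((100, 100, 700, 500), fill=(200, 80, 40, 255), width=4)
    out.save("pass1.png")  # anything not overdrawn keeps alpha = 0

    # Later, any background can be slipped underneath the pass:
    bg = Image.new("RGBA", (W, H), (245, 245, 240, 255))
    Image.alpha_composite(bg, out).save("composited.png")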

Most of the scripts have a section for constant declarations, where magic numbers are kept for easy tweaking. All of them support rendering the output at arbitrarily high integer multiples of the reference image's size. Thus, output files of enormous size can be generated, which is useful for print media, billboards, murals or whatever else requires very high resolution output.
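A hypothetical example of such a section (the names and values here are illustrative, not lifted from any particular script):

    from PIL import Image

    # --- constants / magic numbers, kept together for easy tweaking ---
    SCALE = 4                     # output at SCALE x the reference size
    STROKES_PER_BATCH = 4000
    BRUSH_RADIUS = 2.5 * SCALE    # brush geometry scales with the output

    ref = Image.open("reference.png").convert("RGB")
    w, h = ref.size
    out = Image.new("RGBA", (w * SCALE, h * SCALE), (0, 0, 0, 0))
    # Sampling happens in reference coordinates; only the drawing is
    # scaled up, so the reference stays small while the output grows.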

That's enough stalling, I think. Let's get down to brass tacks: for easy comparison (and topicality), each of the scripts will be accompanied by a demo rendering using the same source image, namely this photo of a young lady in a reddit t-shirt (Source), which I obviously picked completely at random and for no particular reason at all.

 
(Each headline links to the source of the script used to generate the given example.)

'Colors Natural'

This script is a simple gradient follower and restroker. Since brush impulses are computed from both the reference and the output image, a feedback loop is established which makes the brush behavior increasingly erratic as the rendering progresses. A nice feature, since it means high-frequency movement is rasterized on top of low-frequency movement. This is what the output can look like, in this case with a modest 6,895,416 strokes in 1,725 batches and a rendering time of 11 minutes and 18 seconds (a simplified sketch of the mechanics follows the renders):

 

Output on tainted white at 18.2% size.

 

Output on tainted white (face detail) at 100.0% size.

 
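For the curious, here's the general idea in heavily simplified form; the actual script differs in its impulse weighting and brush model, so treat this as a sketch rather than the real thing:

    import numpy as np
    from PIL import Image, ImageDraw

    rng = np.random.default_rng(1)
    ref = np.asarray(Image.open("reference.png").convert("L"), dtype=float)
    h, w = ref.shape
    out = Image.new("RGBA", (w, h), (0, 0, 0, 0))
    draw = ImageDraw.Draw(out)

    def gradient(img, x, y):
        # Central differences, clamped at the image border.
        return (img[y, min(x + 1, w - 1)] - img[y, max(x - 1, 0)],
                img[min(y + 1, h - 1), x] - img[max(y - 1, 0), x])

    for batch in range(25):
        # Feedback term: re-read the output once per batch of strokes.
        fb = np.asarray(out.convert("L"), dtype=float)
        for _ in range(2000):
            x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
            gx, gy = gradient(ref, x, y)
            fx, fy = gradient(fb, x, y)
            # Blend reference and output impulses, then step along the
            # isophote (perpendicular to the blended gradient). As the
            # canvas fills in, the feedback grows noisier and the strokes
            # more erratic -- the behavior described above.
            dx, dy = (gy + 0.5 * fy), -(gx + 0.5 * fx)
            n = max((dx * dx + dy * dy) ** 0.5, 1e-6)
            v = int(ref[y, x])
            draw.line((x, y, x + 6 * dx / n, y + 6 * dy / n),
                      fill=(v, v, v, 255), width=1)
    out.save("colors_natural_sketch.png")

Note that the feedback image is only re-read once per batch; sampling the whole output per stroke would be prohibitively slow in Python.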

'Colors Spiral'

Many variations can be made over the same general theme. This script allows the brush agent free radial movement, as opposed to the quantized headings of the previous example (a schematic comparison follows the renders):

 

Output on tainted white at 18.2% size.

 

Output on tainted white (face detail) at 100.0% size.

 
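The difference boils down to how the stroke heading evolves between steps; schematically (with made-up parameters, not the actual ones):

    import math, random

    N_DIRECTIONS = 8

    def quantized_heading(theta):
        # 'Colors Natural' style: snap the heading to one of N directions.
        step = 2 * math.pi / N_DIRECTIONS
        return round(theta / step) * step

    def free_heading(theta, curl=0.15, bias=0.05):
        # 'Colors Spiral' style: continuous drift plus a constant angular
        # bias, which is what bends the strokes into spirals.
        return theta + random.uniform(-curl, curl) + bias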

'Quant'

We're not, of course, limited to restroking the reference image. In this case, slanted capsules are rendered with a size based on the average luma of the current cell in a recursively processed quad-tree. Hideously inefficient, since none of the results are cached; I'll leave fixing that as an exercise for the reader. (A sketch of the recursion follows the renders.)

 

Output on dark grey at 18.2% size.

 

Output on dark grey (shoulder detail) at 100.0% size.

 
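A simplified sketch of the recursion; the split criterion and capsule shape here are stand-ins for the real ones:

    import numpy as np
    from PIL import Image, ImageDraw

    ref = np.asarray(Image.open("reference.png").convert("L"), dtype=float)
    h, w = ref.shape
    out = Image.new("RGBA", (w, h), (0, 0, 0, 0))
    draw = ImageDraw.Draw(out)

    def render(x, y, cw, ch, depth):
        cell = ref[y:y + ch, x:x + cw]
        # Split busy cells further; the variance threshold is a guess.
        if depth > 0 and min(cw, ch) > 8 and cell.std() > 12:
            hw, hh = cw // 2, ch // 2
            for ox, oy, sw, sh in ((0, 0, hw, hh), (hw, 0, cw - hw, hh),
                                   (0, hh, hw, ch - hh),
                                   (hw, hh, cw - hw, ch - hh)):
                render(x + ox, y + oy, sw, sh, depth - 1)
        else:
            # Leaf: capsule size follows the cell's mean luma (darker cell
            # => bigger mark), approximated here by a thick 45-degree line.
            r = (1.0 - cell.mean() / 255.0) * min(cw, ch) / 2
            cx, cy = x + cw / 2, y + ch / 2
            draw.line((cx - r * 0.7, cy - r * 0.7,
                       cx + r * 0.7, cy + r * 0.7),
                      fill=(230, 230, 230, 255), width=max(1, int(r)))

    render(0, 0, w, h, depth=6)
    out.save("quant_sketch.png")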

'Bloom'

Or more mundane operations, like this simple additive bloom filter. Rendering took three minutes and 19 seconds with 6,895,416 samples. (A sketch of the filter follows the renders.)

 

Output at 33% size.

 

Output (face detail) at 100.0% size.

 
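A minimal blur-and-add version of such a filter looks something like this (the script above works per sample rather than per plane, but the construction is the same idea):

    import numpy as np
    from PIL import Image, ImageFilter

    src = Image.open("reference.png").convert("RGB")
    halo = src.filter(ImageFilter.GaussianBlur(radius=8))

    a = np.asarray(src, dtype=np.uint16)    # widen to avoid overflow
    b = np.asarray(halo, dtype=np.uint16)
    bloom = np.clip(a + b // 2, 0, 255).astype(np.uint8)  # additive halo
    Image.fromarray(bloom).save("bloom_sketch.png")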

'Regrain'

In a similar vein, stochastic resampling can in some cases be a more aesthetically desirable way of upscaling small reference images. It can also be used as a pre-pass filter to modulate the filters above. Rendering took 13 minutes and 37 seconds with 10,672,000 samples. (A sketch of the resampler follows the renders.)

 

Output at 18.2% size.

 

Output (shoulder detail) at 100.0% size.

 
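Schematically, stochastic resampling as an upscaler can look like this; the scale, sample count and jitter are all illustrative:

    import numpy as np
    from PIL import Image

    SCALE, SAMPLES, JITTER = 4, 2_000_000, 1.5
    ref = np.asarray(Image.open("small_reference.png").convert("RGB"))
    h, w, _ = ref.shape
    out = np.zeros((h * SCALE, w * SCALE, 3), dtype=np.uint8)

    rng = np.random.default_rng(7)
    xs = rng.uniform(0, w, SAMPLES)
    ys = rng.uniform(0, h, SAMPLES)
    # Splat each sample at its scaled position, jittered to give grain.
    ox = np.clip(xs * SCALE + rng.normal(0, JITTER, SAMPLES),
                 0, w * SCALE - 1).astype(int)
    oy = np.clip(ys * SCALE + rng.normal(0, JITTER, SAMPLES),
                 0, h * SCALE - 1).astype(int)
    out[oy, ox] = ref[ys.astype(int), xs.astype(int)]
    # Pixels never hit by a sample stay black; rendering on RGBA instead
    # would leave them transparent, as discussed above.
    Image.fromarray(out).save("regrain_sketch.png")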

'EBS'

Moving in the opposite direction, more complex behaviors can be quite interesting as well. Here, the agents of a multi-agent emergent behavioral system (essentially boids) trace the image while influencing each other. Rendering took three minutes and 28 seconds with 10,000 strokes. (A sketch of the dynamics follows the renders.)

 

Output (point-sampled) at 66.7% size.

 

Output (interpolated) at 66.7% size.

 
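A rough sketch of the dynamics, with invented weights; the actual steering rules are more involved:

    import numpy as np
    from PIL import Image, ImageDraw

    rng = np.random.default_rng(3)
    ref = np.asarray(Image.open("reference.png").convert("L"), dtype=float)
    h, w = ref.shape
    out = Image.new("RGBA", (w, h), (0, 0, 0, 0))
    draw = ImageDraw.Draw(out)

    N, STEPS = 50, 200           # 50 agents x 200 steps = 10,000 strokes
    pos = rng.uniform(0, [w, h], (N, 2))
    vel = rng.normal(0, 1, (N, 2))

    for _ in range(STEPS):
        mean_vel = vel.mean(axis=0)      # the agents influence each other
        for i in range(N):
            x, y = int(pos[i, 0]) % w, int(pos[i, 1]) % h
            # Steer down the luma gradient (toward darker pixels) while
            # partly aligning with the flock's average velocity.
            gx = ref[y, min(x + 1, w - 1)] - ref[y, max(x - 1, 0)]
            gy = ref[min(y + 1, h - 1), x] - ref[max(y - 1, 0), x]
            vel[i] += -0.02 * np.array([gx, gy]) + 0.05 * (mean_vel - vel[i])
            vel[i] *= 2.0 / (np.linalg.norm(vel[i]) + 1e-6)  # constant speed
            old = pos[i].copy()
            pos[i] = (pos[i] + vel[i]) % [w, h]              # toroidal canvas
            v = int(ref[y, x])
            if np.linalg.norm(pos[i] - old) < 5:  # skip strokes across the wrap
                draw.line((old[0], old[1], pos[i][0], pos[i][1]),
                          fill=(v, v, v, 255), width=1)
    out.save("ebs_sketch.png")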

Drawing progress animation videos

To better visualize the way in which the final images are formed, I've made a few videos by framedumping some of the scripts above and assembling the frames into 30-second animations.
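The framedumping itself amounts to saving the canvas after every batch and assembling the numbered frames afterwards, e.g. with ffmpeg; schematically (the drawing here is a stand-in for a real batch of strokes):

    import random
    from PIL import Image, ImageDraw

    W, H, BATCHES, PER_BATCH = 640, 480, 750, 100  # 750 frames = 30 s at 25 fps
    out = Image.new("RGBA", (W, H), (0, 0, 0, 0))
    draw = ImageDraw.Draw(out)

    for frame in range(BATCHES):
        for _ in range(PER_BATCH):   # stand-in for one batch of strokes
            draw.point((random.uniform(0, W), random.uniform(0, H)),
                       fill=(255, 255, 255, 255))
        out.save("frame_%05d.png" % frame)
    # Then e.g.: ffmpeg -framerate 25 -i frame_%05d.png -pix_fmt yuv420p progress.mp4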

 

 

Appleseed-2

 

Enki-Bilal

 

Moss-Trees

 

Tokyo-Lantern