Think of an image of 800 x 600 pixels with 24-bit color (8 bits per RGB component). Its trivial binary representation is a sequence of 11520000 bits (800 x 600 x 24), so we can think of each picture as being a natural number.

Imagine now that we write a computer program that generates all these pictures one by one, incrementing the natural number by one in each round.

Run this algorithm for long enough and you would eventually get:

- a picture of your face

- a picture of you on the Moon

- a picture of you with Marilyn Monroe and James Dean

- pictures of ancient Earth, with dinosaurs

- pictures of all the paintings of Leonardo da Vinci, Van Gogh or Picasso

- pictures of all the pages of Shakespeare’s writings

- pictures of proofs of all relevant mathematical theorems (already proved or not)

- pictures of all great music compositions (already written or not)

- pictures of Microsoft Office and Windows source code

- pictures/screenshots of all pages on the World Wide Web, including all versions of Wikipedia

Warning: don’t try this at home unless you can wait a few billion years between each pair of interesting pictures!
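As a sketch, here is what the enumeration looks like in Python, at a hypothetical tiny 4 x 3 resolution (at 800 x 600 the loop below would outlive the universe many times over):

```python
def nth_picture(n, width=4, height=3):
    """Decode the natural number n into a list of (R, G, B) pixels.
    Pixel k occupies bits 24*k .. 24*k + 23 of n."""
    pixels = []
    for k in range(width * height):
        rgb = (n >> (24 * k)) & 0xFFFFFF
        pixels.append(((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF))
    return pixels

def all_pictures(width=4, height=3):
    """The whole thought experiment: count 0, 1, 2, ... and decode each number."""
    for n in range(2 ** (24 * width * height)):
        yield nth_picture(n, width, height)

print(nth_picture(0))  # picture number 0: twelve all-black (0, 0, 0) pixels
```

The function names and the 4 x 3 size are of course just illustrative; the point is that the entire "generator of all images" is nothing more than a counter plus a fixed decoding rule.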

Still, it’s interesting to realize that you can compress all the world’s information into a short and trivial program; all you have to do is add enough useless data to it!

Except that it won’t actually contain any information. If it generates all possible pictures, it encodes the same information as if it generates none.

I will use only gray-scale to make it quicker. However, we will still wait quite a long time.

Anyhow, it is a cool idea. It is like having enough monkeys typing on typewriters: they would eventually write The Lord of the Rings.

Jose.

I am interested in building this, not to generate every image, but to allow people to interact by turning a wheel and progressing through the billions. It’s all about the trip being just as fun as any real result. Any ideas?

Dear Martin,

Nice to hear about your interest in this post!

However, I really think it’s rather infeasible to make something decent. The probability of picking an image with some interest (meaning, with some pattern) at random is very, very low. Note: this can actually be quantified using Kolmogorov complexity.

In other words, if you navigate freely through the space of images, you will basically always see random noise. Perhaps this is already cool?!

If you want to stay close to images with some structure, I would suggest you ask the user to choose one picture and then start moving away from it by changing some pixels. Still, there are way too many ways you can change some pixels! Perhaps you could let the user add another image and then try superpositions of the two… Or extend this to N pictures…

I don’t know; I leave it to your creativity!

And let me know if you build something fun to try!

If you apply this to code, it's interesting to see that a program p can generate another program q where K(q) > K(p). But I agree with Vladimir: it's impossible for program p to decide which of the generated programs is a specific program. For that we would need K(p) >= K(q).

Another interesting thing is that we can have a program that generates all possible programs and runs them all. It's a really simple scheduler (the technique is known as dovetailing) whose total work grows quadratically with the number of rounds.

For example:

0: INSTRUCTION1.PROGRAM1

1: INS1.P2 INS2.P1

2: INS1.P3 INS2.P2 INS3.P1

3: INS1.P4 INS2.P3 INS3.P2 INS4.P1

Do you see what I mean?

Now, just code this. When Skynet wakes up, we will probably notice it somehow.
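The triangular schedule above can be sketched in Python by modeling each "program" as a generator that yields one instruction at a time; the names and structure below are just illustrative:

```python
import itertools

def make_program(pid):
    """Toy stand-in for the pid-th program: an endless stream of 'instructions'."""
    for step in itertools.count(1):
        yield (pid, step)   # (program id, instruction number)

def dovetail(rounds):
    """Round t admits program t+1, then runs one instruction of every
    admitted program: exactly the triangular schedule sketched above."""
    admitted, trace = [], []
    for t in range(rounds):
        admitted.append(make_program(t + 1))
        trace.append([next(p) for p in admitted])
    return trace

# Round 3 executes INS4.P1, INS3.P2, INS2.P3, INS1.P4 (order within a round is irrelevant):
print(dovetail(4)[3])   # [(1, 4), (2, 3), (3, 2), (4, 1)]
```

Because every program gets infinitely many turns, no single non-terminating program can starve the rest, which is the whole point of the scheme.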

You say billions of years. Why? We could do it in much less time.

Let's say we do 800 x 600 at 24 bits per pixel.

This is 2^24 * (800 * 600) bits in total, or

8053063680000 bits.

This is only

8053063680000 / 1024 / 1024 / 1024 / 1024 = 7.35 terabytes (roughly). So only a few thousand dollars of storage.

Let's assume the NSA has funded us, and we have an array of 1000 8-core computers, each capable of generating 1000 bits worth of images per second (this is probably a very conservative estimate).

Thus it will only take us

8053063680000 / 1000 / 1000000 / 60 / 60 / 24 / 365 = 0.25 years.

Or 3 months!

In fact, this is so fast that we ought to up our resolution. We could use 3-megapixel images, for example, or even larger. If we had, say, a $100 million budget from Google, we could probably complete the project in 10 or 15 years.

The real problem is searching the database; it would have to be stored in some sort of search tree based on a content-aware calculation (shape recognition, perhaps: store all the face shapes separately from the word shapes, etc.). Organizing it into this database might significantly slow down the storage speed, extending the project.
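One concrete (and purely illustrative, not from the comment itself) stand-in for such a content-aware key is an average hash: threshold each pixel against the image mean, so visually similar images produce identical or nearby keys and cluster together in a sorted index. A minimal grayscale sketch:

```python
def average_hash(pixels):
    """Toy content-aware key: one bit per pixel, set when that pixel
    is at least as bright as the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    key = 0
    for k, value in enumerate(pixels):
        if value >= mean:
            key |= 1 << k
    return key

# Two near-identical images collapse to the same key; small pixel noise
# below the mean threshold does not change where they land in the index.
bright_square = [0, 0, 0, 0, 200, 210, 0, 0, 205, 199, 0, 0]
almost_same   = [0, 0, 0, 0, 201, 209, 0, 0, 205, 198, 0, 0]
print(average_hash(bright_square) == average_hash(almost_same))  # True
```

Real systems would use something far more robust (frequency-domain hashes, learned features), but the principle of "store by perceptual key, not by raw bits" is the same.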

But with enough systems working in parallel, and given an input we would like to find things similar to, we could probably discover many interesting things (the schematics of new and unthought-of weapons systems and technologies).

But isn't this a project worth a century? It would contain within it the whole of the future: every frame of a movie of the future, in fact, a movie shot from every viewpoint of every human living and also every human not living, plus a finite yet large number of counterfeit futures, from counterfeit perspectives.

Indeed, “science” could be reduced after this project to a database search. We could employ millions of people in the simple task of searching a single segment of the database for intriguing images, sending them to specialists who would study them for their possible application to increasing the store of human knowledge and human possibilities. Images of the future could be sorted according to their probability, a service of a new class of specialists who would rightly be called seers of the future.

Although the possibilities are endless, the database is finite.

In fact, what's so intriguing to me about this is that it's identical to the story in Borges' Ficciones, “The Library of Babel” (which is based on a fantastic picture from Leibniz's Theodicy). In both Borges and Leibniz, the library is effectively inexhaustible: the search is interminable, and the book of your life can never be found.

Here the search database is not only of finite size; by contemporary standards, it's not even that large.

Honestly, I have no idea what we're waiting for.

Zack

Hi Zack,

Thanks for your comment; it would indeed be a nice entrepreneurial venture, but I guess there is an error in your calculations that, when fixed, makes the whole thing impossible:

the number of different images is:

2^(24 * 800 * 600)

and not

2^24 * 800 * 600

And the (800*600) inside the exponent *really* makes it impractical!
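The difference is easy to appreciate numerically. The correct count has millions of digits, so it cannot be printed directly, but its digit count follows from logarithms:

```python
import math

BITS = 24 * 800 * 600                 # bits in one 800 x 600, 24-bit image

# The figure from the comment above: the exponent applied to 24 only.
wrong = 2 ** 24 * 800 * 600           # about 8 x 10^12

# The correct figure is one image per bit pattern, i.e. 2^BITS images.
# Number of decimal digits of 2^BITS = floor(BITS * log10(2)) + 1.
digits = math.floor(BITS * math.log10(2)) + 1
print(wrong)    # 8053063680000
print(digits)   # 3467866 -- the correct count is a ~3.5-million-digit number
```

So the first figure fits in a 64-bit integer, while the second would not fit in all the storage ever manufactured.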

Except that you could never do this with a pseudo-random number generator like the ones used in programming. The algorithm will repeat itself at some point due to its mathematical nature and the finite amount of state it uses to generate pseudo-random numbers.

The chance of never getting any recognizable picture from such a program is astronomically higher than the chance of getting one.

After all, it took the universe billions of years and uncountable permutations over an enormous data set to randomly position atoms into a structure that is your face.

How could a mere pseudo-random generator, using 128 or 256 or 512 bits (or more) of input and some basic mathematical formula, have a chance of recreating that?

@Pop Catalin: Yes, generating an image at random has an *extremely* high probability of showing nothing of interest. However, in the thought experiment I describe in my post, we generate the set of *all* possible images one by one. But of course, as mentioned several times in this discussion, that is completely infeasible.

The idea cannot work for all your examples.

If we define an 800×600 image as “information”, then we obviously cannot get “all the information in the world”, but only “2^(24 * 800 * 600) pieces of information”.

Information needs to be interpreted in order to have some “meaning”, and interpreting images is very different from interpreting text or music.

So, for images, we could get all images recognizable at 800×600 (that is, almost all 4:3 pictures and paintings in the world, at low resolution), but we would never “recognize” a picture of the whole sky map.

For texts we need to define how we translate “pixels” to “characters”. If we translate images to text as we read it, we would get all texts writable on a 4:3 screen (about anything you could write on an A4 page). Even with the best possible conversion (1 pixel = 24 bits = 3 one-byte characters) we would get all possible texts of at most 3 * 800 * 600 = 1,440,000 characters. I don't know if Dante's works would fit.

To get all the possible books in the universe, you need Terry Pratchett's L-Space (an infinite library), and possibly an Orangutan librarian…

Since there are 2^(24 * 800 * 600) pictures at 800 * 600 resolution, it is possible to give every picture at 800 * 600 a number. It would be cool to make a program where you could type the number and get that picture.

There is a way of inputting a number and getting a particular image, but the number only tells the program when to stop during its cycle, so in order to get that cycle number, yes, you must already have the image. I thought of a system for doing this, but tell me if I made a mistake.

First, I thought that if you had an 800*600 grid where each cell can have 1 of 24 states, the total number of configurations would be 24^(800*600). Are pixels different from a simple grid, or did I miss something else?

Anyway, the program would systematically go through every possible state of the first cell. Once it went full cycle it would reset and then increment the next cell over by 1, and so on, so that every successive cell would go through its full cycle 1/n times as fast as the cell before it, where n = the total number of possible values. In two dimensions, once the last cell of a row completes its cycle, the next increment happens at column 1, one row down.

Now at every cycle the full state of the grid can be assigned a number based on whichever cycle it is currently on, so that the images are organized numerically; but the numbers eventually get huge, so why try to store them all. However, there is a way of taking the state of a pre-existing set (for instance, a set of ASCII characters which make an image) and, based on the value and coordinates of each individual cell, determining at which cycle of the aforementioned method that image appears.

First, assign each possible value of a cell, v, a numerical value from 1 to n (the total number of values).

Then, assign each cell a position number, p.

Now solve for the values v and p in this equation: (v-1)n^(p-1)+1 = c = the cycle at which that value makes its first singular appearance.

Sum all the results, and then subtract n from the total. Now you have the label of that particular image, which you can use to tell a program such as the one described above when to stop cycling, and it would land on that image. This, however, would be cool only because you would actually get to see the computer stop on an image of yourself; of course, it would only be the exact same image you already had to solve for c in the first place.
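The scheme described here is essentially base-n positional notation. A sketch of the standard form, with small hypothetical parameters (note it combines the per-cell terms with a single +1, which differs slightly from the subtract-n step described above):

```python
def grid_to_cycle(values, n):
    """Index of a grid state in the enumeration where cell 1 cycles fastest.
    `values` holds one value (1..n) per cell; the cycle number is
    1 + sum over cells of (v_p - 1) * n^(p-1), with p = 1 for the first cell."""
    return 1 + sum((v - 1) * n ** p for p, v in enumerate(values))

def cycle_to_grid(c, n, cells):
    """Inverse: recover the grid state from its cycle number by base-n digits."""
    c -= 1
    values = []
    for _ in range(cells):
        values.append(c % n + 1)
        c //= n
    return values

# Round trip on a 3-cell grid with n = 4 possible values per cell:
print(grid_to_cycle([3, 1, 2], 4))                       # 19
print(cycle_to_grid(19, 4, 3))                           # [3, 1, 2]
```

For real pixels, n would be 2^24 rather than 24, but the bijection works the same way for any n.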

It could be interesting to take an existing image/grid and calculate c (the cycle) from it. Then we could compare similar images of, say, one person's face, or multiple people's faces, to check for any relation between the numbers…

Yes, but the input number is the image itself!

Sometimes when I think about this (and I've been thinking about it for some time, as my first comment was posted here in 2008), I wonder if we aren't asking the wrong question. Isn't it true that what we are really looking for is a function that generates useful images and nothing more? Perhaps it's not brute force (generating all the images and sorting them) that we should focus on, but instead trying to write code that would never generate noise. But how do we do that? We would have to figure out (or rather, clearly define) what makes a “good” or “worthwhile” image. After all, won't many of the images actually be photos of grains of sand or moon dust? We must first establish a class of images that we find interesting. Let's do this: what are the fields of study working on these kinds of ideas? Are there books, or fields of mathematics, that would help?

Hi there. I recently found a paper that talks about exactly this: the existence (or non-existence) of a computer program that generates all “natural images” (and only those). You can find it here: http://www.cs.auckland.ac.nz/CDMTCS/researchreports/344cris.pdf

It seems interesting, because the author has an understanding of statistics, manifolds, and Kolmogorov complexity.

Maybe you can look at the papers it cites as well. Or just search Google Scholar for “statistics of natural images”. That way you will find more machine learning and neuroscience papers, like this one:

http://www.cns.nyu.edu/~tony/vns/readings/simoncelli-olshausen-2001.pdf

Greetings,

Hugo

There's an even more interesting fact about the set of all possible images at a given resolution: time.

Although the number of images would be incredibly high, they *must* depict every subject *in time*.

Yet the number of images is “finite”, while time is thought to be infinite. Or is it?

The number of possible events which can occur is not infinite, so over the course of infinity there must come a point at which every event that takes place is an event that has already taken place. Cool, because probabilistically it almost guarantees an afterlife (whatever it is that makes you, at some distant point in the future, re-configuring into you again).

Is there a website where something like this has been done? I find it so intriguing to think of the possibilities. And who's to say we need a database to search all these images? Why not just get volunteers to look through all the images generated? I'd be more than willing. If you were to only glance at the images you know are rubbish, like ones with a solid block of colour or nothing more than a mess of it, then surely you'd quickly be able to find the useful images.

Actually, it wouldn't even be that painstaking. If you had a program that cycled the states of each pixel systematically and very fast, so that one pixel changes per image, you wouldn't have to worry about a cohesive image going by too fast. It would take just about as long for the image to be completely reduced to noise as it probably took for it to appear. For quite some time you'd be staring at the image of, say, a face, while the pixels at the upper left flickered at a mad speed without even affecting the rest of the image.

Wow, I've been thinking about this lately.

I'll try to do it just to see how it progresses.

@anon June 11, 2010 6:36 PM

The resulting number of images would be finite because we're only generating all possible images at a fixed, finite resolution.

As you keep incrementing the number, you get all sorts of weird images (like Bill Gates with the body of an alien and a mirror image of the IBM logo on his forehead, selling cannonball-sized peanuts somewhere like Antarctica).

The total number of possible images would be 2^(24 * 800 * 600) ≈ 3.548534756824903605544680295715… × 10^3467865 (a number with exactly 3,467,866 digits).

I think it would take about 1.125 × 10^3467857 years to see all those images at a rate of 10 images per second.
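That estimate can be checked with logarithms, since the number of images is far too large to compute with directly:

```python
import math

BITS = 24 * 800 * 600                    # bits per 800 x 600, 24-bit image
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
RATE = 10                                # images viewed per second

# log10 of (2^BITS images) / (RATE * SECONDS_PER_YEAR images per year)
log_years = BITS * math.log10(2) - math.log10(RATE * SECONDS_PER_YEAR)
exponent = math.floor(log_years)
mantissa = 10 ** (log_years - exponent)
print(f"about {mantissa:.3f} x 10^{exponent} years")  # about 1.125 x 10^3467857 years
```

Working in log space keeps everything inside ordinary floating point while still recovering the leading digits and the exponent.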

Why does it have to be in color?

If you use 256 values for each of red, green and blue, a single pixel already has about 16.8 million states, and just 4 pixels give 2^(24 * 4) ≈ 8 × 10^28 images, which is insane if you plan on doing this for 800*600 images. It's a HUGE number.

My idea is to use only black and white, and images of size 100*100. I've calculated that you could generate a total of 2^(100 * 100) ≈ 2 × 10^3010 images. That's an awful lot, even though 100*100 images are rather small and it's only black and white.

I'm not ready to saturate my computer with a zillion meaningless images, but I'm really interested in programming this, just for fun.

My idea is to just let the computer generate images for about a day, then every night erase the ones that are not interesting.

I've been doing this recently as well. I'm using the GMP library. I can process a single 50x50x16 image in seconds, but a 300x200x256 one took about 10 minutes to generate.

What I'm also doing, given an image (which I'm assuming is grayscale), is converting it to its index value (key) and then trying to plot it on a graph of all possible keys, just to see if there is a specific range of keys that means something. It would also be good to distinguish between true images and fake ones.

I'm also trying to find a formula so I can upscale a key from a smaller set to a larger set. That could be useful for improving a grainy CCTV image, or just for archiving stuff. I was wondering if there is a relationship between keys (say you find the index for a picture of someone, and then a picture of them years later; could the difference between the keys be applied to someone else's picture to see how they would age? And a similar thing with image sequences of lottery tickets?). Although it does feel like looking for a needle in a haystack.

There exist many research papers on the topic of learning the statistics of natural images. However, I recently found one that specifically talks about the existence (or non-existence) of a computer program that would generate only the subset of “perceptually distinguishable” images:

https://researchspace.auckland.ac.nz/bitstream/handle/2292/3851/344cris.pdf?sequence=1

It might be an interesting read as a follow-up to this blog post.

http://code.google.com/p/every-picture