Conway's Game Of Life in BlinkScript (because why not)

I’ve finally started toying around with BlinkScript. After multiple unsuccessful attempts at understanding the Blink Reference Guide, I was faced with a challenge interesting enough to make me go back to it, for real. I’ve also started working with Guillem Ramisa De Soto, which, to be honest, feels like a huge cheat code (i r winner).

I’m not very far along on my journey to mastering the power of the node, but I’m at least confident that I now understand the language well enough to work with it.

So, when I watched Veritasium’s video about math being flawed this weekend, I naturally had to try making Conway’s Game Of Life in BlinkScript.
And so, here it is:

The Setup

[feedback_loop_screenshot.png: the comp, with its render-and-read-back feedback loop]

The main problem of Conway’s Game Of Life is that you need to know the state of every cell at t to generate the state of the cells at t+1. On its own, that’s perfectly reasonable, but you can’t really do that in Nuke. So I had to render each frame and feed it back into the script via a time-offset Read, thus creating a feedback loop.

The comp looks like the screenshot above.

Feedback loop logic

  • At frame == 1

    • Render Frame 1

  • At frame >= 2

    • Load frame-1 via the Read and the FrameHold [frame-1] nodes

    • Go through the BlinkScript node to get the state of the game at frame based on the state of the game at frame-1

    • Render it and start again for frame+1 (a sketch of that render loop follows)
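To drive the loop, the Write node has to be executed one frame at a time, so that on each iteration the Read can pick up the image rendered on the previous one. Here is a minimal Python sketch of that render loop, run from Nuke’s Script Editor (the Write node name and the frame range are assumptions; adapt them to your comp):

import nuke

# Hypothetical node name and frame range; adjust to your comp.
WRITE_NODE = "Write1"
FIRST, LAST = 1, 100

# Render one frame at a time so that, on the next iteration,
# the Read node can load the frame that was just written.
for frame in range(FIRST, LAST + 1):
    nuke.execute(WRITE_NODE, frame, frame)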

The Code

kernel TheGameOfLife : ImageComputationKernel<eComponentWise>
{
    Image<eRead, eAccessRanged2D, eEdgeClamped> src;
    Image<eWrite> dst;

    param:
        // Value above which a pixel will be considered alive
        float whitePoint;

    void define()
    {
        defineParam(whitePoint, "whitePoint", 0.5f);
    }

    void init()
    {
        // Request access to the neighbouring pixels
        src.setRange(-1, -1, 1, 1);
    }

    void process()
    {
        float neighbours[9];
        int neighboursCount = 0;

        // Store every pixel neighbouring the current one, centre included
        int index = 0;
        for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
                neighbours[index] = src(i, j);
                index += 1;
            }
        }
        // Neighbouring pixel coordinates
        // (-1,  1) - (0,  1) - (1,  1)
        // (-1,  0) - (0,  0) - (1,  0)
        // (-1, -1) - (0, -1) - (1, -1)

        // Count the number of neighbouring pixels that are alive.
        // Skip index 4, the current pixel, so it doesn't count itself.
        for (int i = 0; i < 9; i++) {
            if (i != 4) {
                if (neighbours[i] > whitePoint) {
                    neighboursCount += 1;
                }
            }
        }

        // Output based on the rules of the game
        dst() = 0.0f;

        // Current cell is alive
        if (neighbours[4] > whitePoint) {
            if (neighboursCount > 3) {
                // Any live cell with more than three live neighbours dies, as if by overpopulation.
                dst() = 0.0f;
            } else if (neighboursCount >= 2) {
                // Any live cell with two or three live neighbours lives on to the next generation.
                dst() = 1.0f;
            } else {
                // Any live cell with fewer than two live neighbours dies, as if by underpopulation.
                dst() = 0.0f;
            }
        // Current cell is dead
        } else {
            if (neighboursCount == 3) {
                // Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
                dst() = 1.0f;
            }
        }
    }
};
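If you want to sanity-check the kernel outside of Nuke, here is a quick NumPy reference implementation of the same rules (my own sketch, not part of the comp; note that it treats pixels beyond the edges as dead, whereas the kernel above clamps them):

import numpy as np

def life_step(grid):
    # One generation of Conway's Game of Life on a 2D array of 0s and 1s
    h, w = grid.shape
    padded = np.pad(grid, 1)  # surround the grid with dead cells
    # Sum the 8 shifted copies of the grid to count each cell's live neighbours
    neighbours = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Alive next frame: exactly 3 neighbours, or alive now with exactly 2
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)

# Example: run a glider on an 8x8 board
board = np.zeros((8, 8), dtype=int)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
board = life_step(board)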

Have fun with it!

Eyes Ping from World Position Pass

Let's preface this post by stating that I'm fairly confident that the following, used as it is, is a pretty bad idea, but it is interesting nonetheless. At least for the math behind it.

As I was watching Star Wars Rebels a few days ago, I noticed that the ping in the eyes was pointing towards a light source or at least a fixed point in space.

On the left, a gif extracted from the preview of season 3, episode 2 of Star Wars Rebels, "The Antilles Extraction": https://youtu.be/E0M2RC5ENLI

On the shows I worked on, we used two techniques to create the ping: a texture or a mesh. In both cases, the ping followed the eye and not the "light source", which is something that always bugged me.

On the right, a gif extracted from Skylanders Academy's trailer: https://youtu.be/FeMStkCW2LY

Maya has a Closest Point constraint that lets you attach a locator to a mesh surface and moves it to the closest point to the target. On paper, it could be used to move the ping mesh toward the light. In practice, I was somewhat disappointed by the results I got.

The movements produced by that constraint are jittery and favor the vertices over the rest of the surface.


The idea

The pretty bad one, but the interesting one.

What Maya does is basically check whether the coordinates of a point on the surface match the coordinates of the line passing through the center of the sphere and the target. Or at least, that's what it looks like to me. It also sounds like something that could be done with a world position pass and some locators.

I'm not entirely sure my method is the simplest one, but here is how I get the coordinates of a line in 3D space knowing the coordinates of two of its points.

We know that the line \(D\) is described by the parametric equation \(\left\{\begin{matrix}x=at+x_{A}\\y=bt+y_{A}\\z=ct+z_{A}\end{matrix}\right.\quad t\in\mathbb{R}\) where \(A\left(x_{A},y_{A},z_{A}\right)\) is a point of \(D\) and \(\vec{u}\begin{pmatrix}a\\b\\c\end{pmatrix}\) is one of its direction vectors.

What I have at my disposal in Nuke is two Axis nodes, one given by an FBX export of the position of the eye and one of the position of the light source, plus the world position pass.

I can use one of the Axis nodes as point \(A\), but I still need the direction vector, which is pretty easy to get as it is \(\vec{u}\begin{pmatrix}x_B-x_A\\y_B-y_A\\z_B-z_A\end{pmatrix}\), with point \(B\) being the second Axis node.

Now, rather than checking each pixel of the world position pass and outputting a lone white pixel when one lands exactly on the line (which would be pretty rare; we would need to approximate to get more results), I decided to draw the distance between the line and each pixel.

To do so, I need to first find the projection of each point of the world position pass on the line.

With \(C\left(x_{C},y_{C},z_{C}\right)\) a point of the world position pass and \(H\left(x_{H},y_{H},z_{H}\right)\) the projection of \(C\) on \(D\), the segment length \(\overline{AH}\) is $$\overline{AH}=\frac{(x_{C}-x_{A})x_{u}+(y_{C}-y_{A})y_{u}+(z_{C}-z_{A})z_{u}}{\sqrt{x_{u}^{2}+y_{u}^{2}+z_{u}^{2}}}$$ which gives the coordinates of \(H\) as $$\left\{\begin{aligned}x_{H}&=x_{A}+\frac{\overline{AH}}{\sqrt{x_{u}^{2}+y_{u}^{2}+z_{u}^{2}}}x_{u}\\y_{H}&=y_{A}+\frac{\overline{AH}}{\sqrt{x_{u}^{2}+y_{u}^{2}+z_{u}^{2}}}y_{u}\\z_{H}&=z_{A}+\frac{\overline{AH}}{\sqrt{x_{u}^{2}+y_{u}^{2}+z_{u}^{2}}}z_{u}\end{aligned}\right.$$ The distance between the point of the world position pass and its projection is: $$\overline{CH}=\sqrt{(x_{C}-x_{H})^{2}+(y_{C}-y_{H})^{2}+(z_{C}-z_{H})^{2}}$$
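If formulas aren't your thing, the same math fits in a few lines of Python (a standalone sketch, not taken from the comp):

import math

def distance_to_line(A, B, C):
    # Distance from point C to the line through A and B (all 3D tuples)
    # Direction vector u = B - A
    u = (B[0] - A[0], B[1] - A[1], B[2] - A[2])
    norm_u = math.sqrt(u[0]**2 + u[1]**2 + u[2]**2)
    # Signed length AH: projection of the vector AC onto the line
    AH = sum((C[i] - A[i]) * u[i] for i in range(3)) / norm_u
    # H, the projection of C on the line
    H = tuple(A[i] + AH / norm_u * u[i] for i in range(3))
    # Distance CH
    return math.sqrt(sum((C[i] - H[i])**2 for i in range(3)))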

The alpha channel expression is:

clamp(r==0 && g==0 && b==0 ? 0 : 1 - sqrt((r-x)**2 + (g-y)**2 + (b-z)**2) / radius)

Which is one minus the distance above divided by the radius, clamped between 0 and 1: (r, g, b) is the world position of the pixel (point \(C\)), while x, y and z hold the coordinates of its projection \(H\). The ternary masks out pure black pixels, where the world position pass contains no geometry. Dividing by the radius limits the size of the circle, and the inversion sets the whitest point at the shortest distance instead of the farthest.
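In Python terms, the falloff boils down to this (a sketch; dist is the \(\overline{CH}\) distance computed earlier and radius a user knob):

def ping_alpha(dist, radius):
    # 1.0 on the line itself, fading linearly to 0.0 at 'radius' away from it
    return max(0.0, min(1.0, 1.0 - dist / radius))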

Results

Specular pass from the render engine

Calculated specular from the expression

Difference between the two speculars

And that is it. I never put it to the test in production, nor did I create a proper gizmo. I hope you have found some interest in this first blog post!

 

EDIT: Cyril from 2019 here!

Here are the comp files, in case you’re interested.