Wow! I know I know it’s been a long time!

Ok sorry for the massive lack of updates.  I’ve had month after month of crazy amounts of work and then just haven’t had the energy to sit down and write something interesting.

It doesn’t help that I couldn’t actually think of many interesting things to talk about seeing as we haven’t been able to make any films for a while.

I was thinking about maybe talking about animation or VFX techniques, but didn't want to stray too far from the actual core part of making films.  If you feel this might be of some interest then I'm more than happy to oblige.

I have decided one thing over the last few months, and that is to scrap the part of 22 Days Later whereby we have to use a costume, phrase and prop from our viewers.  It's not that I didn't like some of the ideas that we pulled out of the hat; after all, without them we wouldn't have had the insane films we've made so far.  But to be honest we just weren't getting enough ideas coming in, and at the moment I've got quite a few short films I'd really just like to make without random input.

Maybe one day I’ll add that part back in but for now I’m going with just making a film in 22 days. If I get an influx of complaints then I will consider bringing that part back.

Although it has been pretty much work, work, work these past few months (by the way, I can't believe it's been over two months since my last post) we did manage to shoot a short film.

The idea was to enter a short horror in a competition called Short Cuts to Hell.  Unfortunately we left it far too late (the night before) and so had to make up the story, film it and edit it in around eight hours.

It actually came out OK. The only problem was that one of the rules of the competition was that the film couldn't be more than three minutes, and our film, no matter how we tried to cut it, came in at around four minutes.  So we decided to call it quits.

I have since decided that it would be a shame to just leave it, and so over the next few weeks I'll finish it off and see what we get.

Here is a still. It's worth noting that because there were only two of us we didn't bother with any lighting and just used what we had in the room…which is why it looks a little bland.

Screen Shot 2014-09-11 at 23.37.28

Until next time – hopefully not two months.


Ouch…not really



In this blog I thought I’d continue on some of the techniques used in visual effects.

A lot of effects in movies are actually 2D images mapped onto environments, or 2D images used to hide unwanted items.  They're also used to replace things like posters and number plates.

In my last blog I talked about the process of creating the rabbit, which was one of the 3D effects in the film.  There aren't that many 3D effects in The Great Spielron, but I had to use 3D for one of the more gruesome scenes where some of our characters get a knife in their heads.

I'll break down one of these shots: the one where Katie throws the knife and it hits Mike square in the forehead.

When deciding the best approach for an effect I think it’s always best to ask the question “can this be done on set with real practical effects?” If the answer is yes then I always think this is the best approach as it doesn’t matter how good the effect is in CG, it’s never as good as a real prop.

Obviously for this particular shot, which required a knife to be thrown, the answer was most definitely "no", we can't use a real knife…well, not unless we didn't need our actor anymore, but we did, so we decided it was best not to really kill him.  So the choice was to do it in CG (computer graphics).

This actually posed a problem when filming, as I realised we couldn't even use a proxy object when Katie goes to stab Kevin in the head: even something soft like a foam knife would still hurt if it hit you in the eye.  So I had to get them to act and react with nothing at all. It also didn't help that it was about 12:00am, so we had about ten minutes to wrap up the whole end scene. Everything in that last part of the movie was finished in about 15 minutes; we just went handheld and tried to get as many shots as we could. Anyway, I'm digressing.

Now on a big budget film there would be time set aside for the VFX supervisor to take measurements and record lighting info, using things like a big chrome ball and a big grey ball.  You sometimes see these in making-of videos.  Basically, these allow the VFX artist to work out where the light is coming from and how intense it is.  By taking photos at different exposures they can put this information into the computer, and the lighting they get is pretty accurate to what was on set.  It also allows them to have reflections that match too, so the whole CG element fits the environment.
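As an aside, the maths behind those exposure brackets is simpler than it sounds. Here's a hedged Python sketch of the idea (a much-simplified version of real HDR recovery, with made-up pixel values): a recorded value is roughly radiance times exposure time, clipped at the sensor maximum, so dividing each unsaturated sample by its exposure time and averaging recovers a brightness no single shot could hold.

```python
# Simplified multi-exposure radiance recovery. Pixel values and exposure
# times below are invented for illustration, not from any real bracket.

def estimate_radiance(samples, max_value=255):
    """samples: list of (pixel_value, exposure_time) pairs."""
    # keep only samples that are neither clipped nor black
    usable = [v / t for v, t in samples if 0 < v < max_value]
    if not usable:
        raise ValueError("every sample was clipped or black")
    return sum(usable) / len(usable)

# A bright source: clipped at 1/60s, but readable in the shorter exposures.
samples = [(255, 1/60), (200, 1/250), (50, 1/1000)]
print(estimate_radiance(samples))  # ~50000, in arbitrary radiance units
```

Real pipelines fit a proper camera response curve rather than assuming a linear sensor, but the averaging of unsaturated samples is the core trick.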

We didn't have time to do this. I brought the chrome ball, but we were so rushed that it never got used, so I had to rely on my eyes to try and get the knife to match the real footage as best as possible.

One issue with using 3D elements is that you can't use the more basic technique of 2D trackers to match the 3D element to the footage.  If you don't know about 2D tracking you can view one of my latest blogs about it here

Actually that's not completely true.  If the camera isn't moving around the object too much then you can get away with this, which is something we did for our low-budget one-day horror that you can see here.  Because the real footage was being seen from a pretty flat-on view, I knew that I could use a 2D track to basically "tack" the 3D animation to a point on the screen.
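The "tacking" itself is trivial once you have the track. A toy Python sketch (all positions invented, not from the real shot): the tracker gives the feature's pixel position each frame, and the rendered element just sits at a fixed offset from it.

```python
# Pin a rendered element to a 2D-tracked feature. Per-frame positions here
# are made-up numbers standing in for real tracker output.

def tack_element(track, offset):
    """track: list of per-frame (x, y) feature positions; offset: (dx, dy)."""
    return [(x + offset[0], y + offset[1]) for x, y in track]

track = [(100, 200), (102, 201), (105, 199)]  # tracked feature, per frame
print(tack_element(track, (30, -40)))
# the element follows: [(130, 160), (132, 161), (135, 159)]
```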

But in the shot of Matt, his head moves quite a bit and we see it from quite a few angles, so I knew the knife would be seen from many different angles as well.

This is where a different technique has to be used: 3D tracking.  Like 2D tracking it uses points on the screen to work out how things are moving. But unlike 2D trackers it triangulates, using special algorithms to work out things like the Z depth of where things are in the scene. Although a lot of 3D trackers have automated settings, these usually only work on simple scenes. If a scene has a lot going on, with a lot of camera movement, it's sometimes necessary to give the tracker more information, for example the focal length of the camera.  Sometimes you'll see in making-of videos little markers, especially on green screen sets. These markers are a good way of giving the computer points to lock onto.  3D tracking is a real art in itself and something that can take many attempts to get a good result.
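The triangulation at the heart of a 3D tracker can be shown with the simplest possible case: two cameras side by side looking the same way. This Python sketch uses the standard depth-from-disparity relation with hypothetical numbers; real solvers handle arbitrary camera moves and hundreds of points, but the principle is the same.

```python
# Two parallel cameras a known distance apart see the same point at
# different horizontal positions; that shift (disparity) gives depth.

def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Classic two-view triangulation: Z = focal * baseline / disparity."""
    disparity = x_left - x_right  # pixel shift between the two views
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    return focal_px * baseline_m / disparity

# 1000px focal length, cameras 0.5m apart, feature shifts 25px between views
print(depth_from_disparity(1000, 0.5, 525, 500))  # 20.0 metres away
```

Nearby points shift a lot between the views and far points barely move, which is exactly the parallax a moving camera gives a 3D tracker.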

To track Matt's head, I imported a 3D mesh that was similar to his head and scaled it to fit the real footage. The 3D tracker could use this as a way of marking where it needed to be in the footage.  You can see this in the video below.  Excuse the "Demo Mode" – I only have a demo version of the capture software.

Now that I had the information I could map the real footage onto the 3D mesh and add a 3D knife. Below is an image of the 3D knife un-textured.


It was then simply a case of animating the knife going into the 3D head.  I added lights that looked about right for where they would have been on the real set, and also added shaders to the knife so it looked like its real-life counterpart. Shaders are a way of telling the computer what material an object is made of, basically how it will react with light. In this case it was a stainless steel knife, so it needed to be very reflective.
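To give a feel for what a shader actually computes, here's a minimal Blinn-Phong style specular term in Python. The shininess exponent is what separates steel from rubber: a high exponent gives the tight, bright highlight you want on a stainless steel blade. All vectors and values are illustrative, not from any real render.

```python
import math

# One specular sample: how bright the highlight is for a given surface
# normal, light direction and view direction.

def blinn_phong(normal, light_dir, view_dir, shininess, spec_strength):
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l, v = norm(normal), norm(light_dir), norm(view_dir)
    h = norm(tuple(a + b for a, b in zip(l, v)))        # half vector
    n_dot_h = max(0.0, sum(a * b for a, b in zip(n, h)))
    return spec_strength * n_dot_h ** shininess

# View mirrors the light: the highlight peaks at full strength.
aligned = blinn_phong((0, 0, 1), (0, 0.3, 1), (0, -0.3, 1), 200, 1.0)
# View a little off the mirror direction: a shininess of 200 (steel-like)
# makes the highlight collapse almost to nothing.
off = blinn_phong((0, 0, 1), (0, 0.5, 1), (0, 0, 1), 200, 1.0)
print(aligned, off)
```

A low exponent (say 5) would keep the off-angle value high, which is why dull materials have broad, soft highlights.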

The last thing I needed to do was to add a trickle of blood that ran down Matt's forehead. To do this I used some of the fake blood we'd made up for the scenes with Katie and the hat.  In case you're interested, making fake blood is very easy and involves syrup, red food dye and coffee. I might do a blog on that at some point.

I shot various versions of this fake blood pouring down a green screen, as you can see below.


It was then a case of removing the green and cropping out just one trickle of blood.  I used a 2D tracker so that it would stick to the movement of Matt's head. And as a final touch I used a tool in After Effects to bend it slightly so that it looked like it was following the contours of his face.

The last touch was to add a shadow to the area around the knife.

Here is a clip showing the different layers.

And here is the final result

The other knife shots were achieved in a similar way.  The only other thing worth mentioning is I added a slight blood burst when Kevin gets the knife yanked out of his head. This was a mixture of using stock footage and also a dust hit that I tinted red as I wanted to get that faint spray of blood you’d get if it was real.

So that’s that till next time.








It’s all in the eyes…the zombie eyes

Right well now that the film is finished I feel I can share some of the techniques we used in the VFX process.

There was a hell of a lot of VFX in this short, which is why it took so much time to complete. Although I can't say any of the effects were of amazing quality, I'm still proud of how it all came together.

One of the more time-consuming effects was the eye replacement for both Ludwik and Katie.  I did toy with the idea of using 3D eyeballs to create the whited-out look, but decided to try a much simpler approach which worked surprisingly well.

The technique was to use 2D imagery tracked onto their faces.

Here is a close up of one of the more complex eye replacements.


Now I'm going to try and keep the explanation of how to achieve this as simple as possible, but if you want a more in-depth tutorial, Video Copilot have a great one here

The basic idea is you take photo reference of a whited-out eye.  If you need to create one, the best way is to take a few photos of your own eye, aiming to capture as much of the whites as possible.  Take one photo looking as far left as you can, then another looking as far right as you can; you can do looking up and down as well.  You then use an image editing package to cut out the areas of the eye that are white and combine them so that they appear to be one white eye. Like this



The next thing you will have to do is track your footage.  I've explained tracking in one of my earlier posts, but to sum up: tracking is a technique whereby you tell the computer to follow a point on the image.  Once the point or points have been tracked, you can apply this information to another piece of footage or image and it will mimic the movement of the source.

When tracking you have to make sure you track an area near the eye, but it's likely you won't be able to track the eye itself, as people either move their eyes or blink, which will throw the track off. The brow or the nose is usually a good place to track, or the cheeks if they're not talking.
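Under the hood, a 2D tracker is doing something like this: take a small reference patch of pixels from the first frame and, in each new frame, find the spot where the pixels differ from it the least. A toy one-dimensional Python sketch, with invented pixel values (real trackers work in 2D and sub-pixel accuracy, but the search is the same idea):

```python
# Find where a reference patch best matches inside a new frame, by
# minimising the sum of squared differences at each candidate position.

def track_patch(frame, patch):
    best_pos, best_err = None, float("inf")
    for x in range(len(frame) - len(patch) + 1):
        err = sum((frame[x + i] - p) ** 2 for i, p in enumerate(patch))
        if err < best_err:
            best_pos, best_err = x, err
    return best_pos

patch = [10, 90, 10]                   # a distinctive feature (bright spot)
frame = [12, 11, 10, 88, 12, 10, 9]    # next frame: the feature has drifted
print(track_patch(frame, patch))       # 2: best match starts at index 2
```

This is also why you track a distinctive, stable area like the brow: a patch with no contrast matches everywhere equally, and a blinking eye changes the patch itself.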

I would grab a still from the footage I was replacing and load it into Photoshop. Then, adding a new layer, I would place my white eyes over the originals and colour correct them to match the lighting, as shown here



I would then overlay this comped still image over the matching frame in After Effects.  Using the masking tools I could cut out just the eyes.  If I was to then scrub through the timeline I’d get a problem of the eyes getting left behind, like the example below.


This is where the tracking information comes into use.  If you look at the image above you should also notice a red square by his nose.  This is called a Null in After Effects, and I applied the tracking data I had acquired earlier to it.  This means the Null will follow his head movement exactly.  I then parented the still images of the eyes to this Null and voilà! He now has evil zombie eyes.

If you're wondering why I didn't just add the tracking data to the images of his eyes instead of the Null: well, I could have, and it would have worked just as well. The reason I use a Null is that it gives me extra flexibility should I need to move or rotate the eyes slightly on top of the tracked data. I sometimes find that the tracking data, although good, may stray a little in one place or another. By having the Null contain all the tracking data, I can tweak the eyes without messing with that data.
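If you think of it in code, the null is just a transform that the eye layer's own offset is composed on top of. A tiny Python sketch with hypothetical positions: nudging or rotating the child's offset never touches the tracked data itself.

```python
import math

# The null carries the raw per-frame track; the child layer's local offset
# (optionally rotated) is composed on top of it each frame.

def child_world(track, offset, angle_deg=0.0):
    """track: per-frame null positions; offset: the child's local offset."""
    a = math.radians(angle_deg)
    dx = offset[0] * math.cos(a) - offset[1] * math.sin(a)
    dy = offset[0] * math.sin(a) + offset[1] * math.cos(a)
    return [(nx + dx, ny + dy) for nx, ny in track]

null_track = [(400, 300), (403, 298), (401, 305)]  # from the 2D tracker
eyes = child_world(null_track, (-20, -35))         # eyes ride on the null
nudged = child_world(null_track, (-22, -35))       # tweaked; track untouched
print(eyes[0], nudged[0])  # (380.0, 265.0) (378.0, 265.0)
```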

Now the trick isn't quite finished. Although for some of the longer shots this would be fine, for the close-ups the eyes still looked too flat.  They lacked all the moving reflections and specular highlights you get off a real eye, because after all we are just looking at a flat image of the eye.

This was one of the reasons I was thinking of going the 3D route. Using 3D eye replacement would mean I could get the computer to do all the clever stuff with the reflections, refractions and subsurface scattering. But I realised I really didn't need it: if I just added some simple specular highlights, it was enough to sell the illusion.

So all I did was to look at the original highlights in his eyes and add some simple colours that matched the shapes of those original highlights.



By then keyframing them to match the position and shape of the original highlights, I could give the illusion that his eyes were moving. It even made them look wet and slightly translucent. It's amazing how some simple highlights can make things look real.

I did this same technique for the shot where Katie looks across the hall to where Laura has just run.

One final technique I used to really sell the look was on the shadow that falls across his left eye.  The still image of his eye was colour corrected to match the original plate while that eye was in shadow. But the problem was that as he tilted his head up, that eye then had light cast upon it, as you can see in the image below.

Screen Shot 2014-05-19 at 23.43.18


So to recreate this same look over the fake white eye, I simply added a negative mask, matching the movement and shape of this light shaft, that would cut into the image of his eye to reveal a brighter version underneath.
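The reveal itself is just a per-pixel blend between the two grades of the eye, driven by the animated mask: 0 keeps the shadowed grade, 1 shows the lit one. A Python sketch with single grey values standing in for pixels (all toy data):

```python
# Blend a shadow-graded and a light-graded version of the same image,
# weighted per pixel by a mask.

def mask_reveal(dark, bright, mask):
    return [d + (b - d) * m for d, b, m in zip(dark, bright, mask)]

dark   = [40, 40, 40, 40]       # eye graded for shadow
bright = [200, 200, 200, 200]   # eye graded for light
mask   = [0.0, 0.0, 0.5, 1.0]   # the light shaft creeping in from the right
print(mask_reveal(dark, bright, mask))  # [40.0, 40.0, 120.0, 200.0]
```

Animating the mask shape frame by frame is what makes the shaft of light appear to sweep across the fake eye.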


Screen Shot 2014-05-19 at 23.41.44


Screen Shot 2014-05-19 at 23.41.49

So I hope this has been some help and if there’s anything else you’d like to know just drop me a mail or comment below.

Next time I’ll be looking at how I achieved some of the rabbit effects.

See ya soon!








Even big budget movies seem amateur without post-production


I watched these B-roll clips of The Hobbit: The Desolation of Smaug and it struck me how, without sound, music and VFX, it actually seemed almost comical.

Seeing elves, orcs and wizards running around, trying to talk seriously without all the music and effects, really made me realise how much these other elements add to the overall film.

Now of course, having made a couple of short films, I've seen first hand how these elements can take a scene that on set I was worried wouldn't come across as I'd imagined, and make it come alive in post, actually better than I'd hoped.

But it’s nice to see that even the big budget movies have that same problem. Even with great actors they really do lack so much.

Of course music and VFX are a huge part of this but I think this really demonstrates how even the little foley sound effects that you wouldn’t normally think would make much of a difference really can help to tell the story.

For example, at around 2:46 of this video one of the dwarves shuts someone in what looks like a prison (I haven't seen the movie yet). My first thought was: he hasn't locked that, the guy could just get out. But of course, by simply adding the sound of a latch in post, the audience gets a key story element that was never told on set.

Or another example is the fight scenes. Ignoring the fact that a lot of the time they're hitting each other with green batons, these scenes still look and sound funny because there's no sound to back up the power of the weapons they're meant to be using. Just adding metal clashes and thuds would add so much more drama to these pieces.

The other great thing about watching these videos is it really goes to show how much has to be added in post in terms of VFX.

The Hobbit and LOTR were known for shooting a lot on real sets and trying to keep as much stuff in camera as possible. I really believe this is the best approach. One only has to watch things like The Phantom Menace to realise the difference in an actor's performance.

But no matter how much is shot on set, there’s still so much to do in post. There’s the obvious stuff like adding vast vistas to the background. Or adding a dragon to the pile of gold. These all take a hell of a lot of work and a hell of a lot of very talented artists.
But these are actually just a small part of the visual effects. There’s so many bits that people don’t realise need to be added or replaced. So many things that are invisible in the final film (as they should be) but are a vital part of creating the illusion.
For example, when Legolas is being pulled along by wires to make him look like he's sliding, someone has to paint those wires out.
Or the real swords reflecting the green screen. I wouldn’t be surprised if they just ended up replacing those swords with CG versions.
Even removing an extra because the director felt that one was distracting from the main action. It’s these bits that make up 70% of the VFX.

Without these artists these films wouldn’t be half as grand as they are and it’s a real shame that so many great artists are out of work at the moment due to studios closing their doors through bankruptcy.

Till next time.

Episode 3 winners announced


Here we go again!

We’ve just pulled out the suggestions for Episode 3 and what a mixed bag they are.

PHRASE suggested by Eggnogonthebog “Excuse me waiter, there is a human toe in my soup!”

PROP suggested by Dave K A Commodore 64 computer

COSTUME suggested by Christine1948 An old fashioned strong man costume.

All our winners get a bubblegummonster T-shirt so well done all!

We now have a week to write a script based on these ideas.

I can tell you one thing, we won’t be basing it in two eras.

Till next time.

Behind the scenes of Episode 2 part 3

So here we are in the final part of this making of blog.

I finished off in the last blog by showing how we tracked an image to fit with the moving image below it.

So here is the problem we're going to encounter if we were to just stick to the exact same procedure as before.

See pic below (Ignore the coloured squares, they’re basically just visual guides for the tracking)


But this looks fine, right? Yep, it's not bad…apart from the small pipe I forgot at the bottom.  But it does the job pretty well.

Except when the boy gets to the point where he crosses the bits I've hidden.


Ah Crap!! Actually this would work quite well if it was a scene from Harry Potter, he’s even got the right costume…but unfortunately it’s not!

So what can we do?

Well this is where a technique called rotoscoping comes in. What's rotoscoping? Cutting things out in Photoshop would be a close comparison, but I know not everyone uses paint programs, so instead I'll use this example.

Remember having a scrapbook, before the internet days…maybe you still do?  You'd find a photo you loved, or maybe a picture from a magazine.  You'd get a pair of scissors, cut around the photo and place it in your book.

It's the exact same thing with rotoscoping, except you have to do it to a moving image. And the problem with moving images is they have at least 24 frames per second, if you're lucky and working in film.  But they could have 25 or even 30 – actually, nowadays they could even have 48, if you're Peter Jackson.

Anyhow, I digress.  So now not only do you have to cut out your one picture of the boy, you have to cut out 24 for every second he's on screen. Or, more precisely, for every second that he passes over the area you want him to pass over. In this case I think it was about one second, so let's call it 24.

The good news is that you can get the computer to help you here, using the same tracking techniques we used before to make your rough cutout follow where he's going.  But you still need to go in and refine it as key elements change.

For example, in the image below notice how his cloak changes from one frame to the next.


The red lines around the boy are the tools used to cut out the areas I want.  This can look different in various packages; this one is called Mocha Pro.

Once I'd cut out the boy I could then put him on top of the wall.  I think the best way to imagine what's going on is to think of panes of glass.  The very bottom image is your painting; you can't change this, as the paint's set and you don't want to ruin it. But you can make copies of it, or even parts of it, just like we're doing here.

Each part can be placed on a pane of glass that sits above the original painting. So in our example we copied a bit of the wall, which now sits on top.  The pane of glass is the same size as the painting, but because it's see-through we only see the solid part we added, in this case the wall.  So to have the boy pass over the top of this wall we need to add a new pane of glass on top of the last pane, and this pane has the cut-out boy on it.
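For anyone curious, the panes-of-glass stacking is exactly what compositing software computes with the classic "over" operator: each pane has an alpha (0 for clear glass, 1 for solid) and the panes are merged from the top down. A toy Python sketch with single grey values standing in for pixels (all numbers invented):

```python
# The "over" operator: where the top layer is solid (alpha 1) you see it;
# where it's clear (alpha 0) the layer below shows through.

def over(top, top_alpha, bottom):
    return [t * a + b * (1 - a) for t, a, b in zip(top, top_alpha, bottom)]

painting = [50, 50, 50, 50]          # the original plate
wall     = [200, 200, 0, 0]          # copied wall patch on its own pane
wall_a   = [1.0, 1.0, 0.0, 0.0]
boy      = [0, 120, 120, 0]          # rotoscoped boy on the top pane
boy_a    = [0.0, 1.0, 1.0, 0.0]

result = over(boy, boy_a, over(wall, wall_a, painting))
print(result)  # [200.0, 120.0, 120.0, 50.0]
```

You can read the result left to right: wall only, boy over wall, boy over plate, plate only, which is exactly the layering described above.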

So now we have this


Hurray….oh hang on, it still looks weird?

This is the extra problem we have with this particular scene: his shadow, which is also now going behind the wall.  Now, if I'd been very organised I might have thought ahead and shot this so that no shadow went across areas that had things that needed to be removed. But I wasn't.

So now we have to do the same thing all over again for the shadow. Luckily this isn't real time, so we can skip ahead and show you one I prepared earlier.


Now we're almost done.  There was another issue here that I won't go into too much. I'd removed all the unwanted items from the wall, but remember this was a static image that I tracked to follow the original background.  The problem with the shadow was that it crossed a lot of these items, so even when I cut the shadow out onto its own layer (pane of glass), it still had some of those offending items within it.

Sorry these examples aren't great, but if you look at the images below you can see how the shadow still has the pipe running through the middle of it.

The first image is just the shadow cut out on its own layer (pane of glass), so you can see where the borders of the cutout are.

The second image is the cutout overlaid onto the background.



So to solve this I used the trusty stamp tool. Remember, the tool I mentioned in the first part, whereby you can copy bits of an image and paint them over another area.  It's not the tidiest way of doing things; you sometimes get flickers where the images don't quite blend so well. On a static photo this isn't so bad, but because you're seeing lots of frames very quickly it can sometimes be jarring.  But because this was in the shadow I managed to get away with it…I hope?


There were some other technical difficulties with this, regarding trying to match the lighting of the static images to the moving ones, but I won't bore you any more.

The final touches were to add some colour correction and vignetting.

Vignetting, in case you're wondering, is the darkening around the edges of a picture, like you see a lot in old photography.  It's actually a flaw, usually from the lens, but because it frames the image so well (and hides rushed VFX) it's used a lot.
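A synthetic vignette is nothing more than a darkening factor that grows with distance from the image centre. A rough Python sketch of one possible falloff (real tools add softness and roundness controls on top of this):

```python
import math

# Darkening multiplier for a pixel: 1.0 at the centre of the frame,
# falling off quadratically towards the corners.

def vignette_factor(x, y, width, height, strength=0.5):
    cx, cy = width / 2, height / 2
    # normalised distance: 0 at the centre, 1 at a corner
    r = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    return 1.0 - strength * r * r

print(vignette_factor(960, 540, 1920, 1080))  # 1.0  (centre untouched)
print(vignette_factor(0, 0, 1920, 1080))      # 0.5  (corner darkened)
```

Multiplying every pixel by its factor gives the familiar darkened-edges look in one pass.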


Well, I hope you enjoyed this brief glimpse into some of the techniques of the VFX world, and I hope it wasn't too boring or confusing.  If so, just send me a message or comment below.

Till next time!

It’s time for Episode 3


Ok so Christmas is over, we’ve seen in the New Year and it’s time to get busy again.

Although to be fair I haven't really stopped through the Christmas period, trying to complete all the shots for Episode 2, "The Great Spielron".

I’m slowly getting there but as I explain in this blog, it’s taking a lot longer than we originally planned.

The secrets of CG movies

I wrote this brief overview on computer graphics for a presentation I did last month.

Over the years I've found there are a lot of people who seem interested in how computer graphics, and more precisely movies like Toy Story, are made.

So I decided it might be of interest to some of you as well.  It's only a very rough overview, and obviously there's a hell of a lot more to the art of computer graphics, but hopefully it will give you some idea of what goes into making these amazing movies.



I created this image in 2005. The face was based on my Grandad.

CG stands for computer graphics.

Just like with traditional movies, after the script has been finalised a CG movie should start with a storyboard.

Once a storyboard has been drawn, animated movies usually have what's called an animatic, or pre-vis. This is basically a moving storyboard; it gives everyone a much better sense of what the final movie will be like.

The link below shows the rough version of the short animation I created, Baggage. This started off as very basic blocked-out animation (the animatic); as I completed the animation I'd replace the rough shots with the completed ones.  In the video you'll see that some of the animations are still very basic – these are from the animatic.

You'll also notice that none of the shots look very pretty; this is because they still need to go through a process called rendering, which I cover further down in this article.


Once everyone is happy with how the story flows, production can start.

Unlike live action films, in CG you don't get anything for free: everything has to be made from scratch, from huge skyscrapers right down to plug sockets on a wall. If it needs to be in the film then it has to be made.

Although, just like sets on movies, you can cheat and make facades. Below is a link to another blog post I did a while back on the creation of the street scene featured in my short Baggage.

Making a CG street

Most computer graphics models are made up of shapes called polygons. These are basically flat surfaces, usually triangles or four-sided faces. The more polygons you have, the more detail you can have in your models, but the longer it takes the computer to process them.  It's always wise to use real world reference to aid in the building of your models, and you can also use photos as a basis to texture (colour) your model.


Once you have built your models you need to add colour and materials to make them look real.  These can be taken from photos or created using algorithms within the computer. The term for this process is texturing and shading: texturing refers to the colours, while shading refers to how light will interact with the material.  Just like in the real world, where light bounces off materials in different ways, the same can be achieved in the computer.  For example, think about the difference between a tennis ball and a pool ball.

Pool balls are shiny and reflective


Whereas tennis balls have no reflections, and thus no shine either.


In the following video I take a simple brick texture and apply it to a surface.  Using separate black and white images I can tell the computer how shiny the object is, as well as how bumpy it is.   These separate images are usually replicas of the original colour image, but with various greyscale intensities that let the computer know how shiny or bumpy you want your surface to be.

NOTE: Please excuse the "Demo Mode" across the middle. I was using capture software that I downloaded, and they stick that across the screen until you decide to pay for it.

Texture shading demonstration
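Those greyscale maps are essentially per-texel multipliers. Here's a hedged little sketch in Python with toy values (this is not how any particular package implements it): the spec map scales how much of a highlight each texel receives, so dark mortar stays dull while a glazed brick face shines.

```python
# One texel of a simplified shading step: the colour map gives the base
# brightness, and the spec map (0-255 greyscale) scales the highlight.

def shade_texel(base, spec_map, highlight):
    """Brighten the base colour by the highlight, scaled by the spec map."""
    return min(255.0, base + highlight * (spec_map / 255))

print(shade_texel(base=100, spec_map=20, highlight=120))   # mortar: ~109
print(shade_texel(base=100, spec_map=230, highlight=120))  # glaze: ~208
```

A bump map works the same way, except the greyscale value is read as a height whose local slope perturbs the shading instead of scaling it.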

If the object being created is a character, or something that needs to deform in an organic way, then it needs to have bones, very much like we do.  This allows the object to bend.

Where the bones are placed dictates how the object can bend.  And just like a puppet, these characters need to have controls so that the puppeteer/animator can control the character.  This process is called rigging.


Once the character has been rigged it can be given to the animator to animate.

There's a common misconception about computer animation.  A lot of people I speak to believe that computer animation takes a lot less time than hand-drawn animation.  The reality is that although the computer does make some things easier, there are other parts that take longer, so both methods take a similar amount of time.

In the link below I show a brief demonstration of how an animator uses the controls on a character to put it into a pose. Again, please excuse the big "Demo Mode" across the middle.

Posing a character

The animator will have been given a shot to animate. They sometimes get the animatic as a template for where the character needs to move to and how long it should take.  If there are a number of characters in the one shot, it's usually the responsibility of one animator to animate all of them.

Usually an animator will shoot live action reference of them acting out the actions that the character needs to do.  They may also sketch some thumbnails to get an idea of how to add appeal to the poses.


They then need to create all the main key poses that express the movements and feelings of the character.  Once they've done that they add in all of the "in-betweens", which is the action between the main poses. Below is a link to a shot from the short Baggage where Sam pulls his bag from some Tube train doors.  This stage is called blocking, where all the main poses of the animation have been put in place.

Key poses
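The "in-betweens" the computer fills in are, at their simplest, interpolation between key poses. A Python sketch with hypothetical joint angles (real animation software interpolates along editable curves rather than straight lines, which is how animators control the timing and feel):

```python
# Fill an intermediate frame between two blocked key poses by linear
# interpolation of each joint's value. Pose data here is invented.

def inbetween(pose_a, pose_b, t):
    """t = 0 gives pose_a, t = 1 gives pose_b."""
    return {j: a + (pose_b[j] - a) * t for j, a in pose_a.items()}

key_10 = {"shoulder": 0.0, "elbow": 90.0}   # blocked pose at frame 10
key_20 = {"shoulder": 40.0, "elbow": 10.0}  # blocked pose at frame 20
print(inbetween(key_10, key_20, 0.5))       # the frame-15 in-between
# {'shoulder': 20.0, 'elbow': 50.0}
```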

It's also worth noting how in some of the poses I've stretched his body more than would usually happen. This is a principle of animation called squash and stretch. There are 12 key principles of animation, originally thought up by the Nine Old Men at Disney. You can find out more information about them here Principles

The next part of the process is to pass the animation on to the rendering and lighting team.  This is where the sets and characters are lit, and finally go through a process called rendering.

Rendering is where the computer takes all of the information that has been created: where the polygons sit in the virtual world, what moves where, how the lights affect the shading and textures, what's solid, what's transparent, what's reflective.  All this information is computed so that we, the viewer, get to see the final result.

And that is a brief overview of what goes into computer graphics. Like I mentioned at the start, this is a very basic overview; if you're interested in learning more then give me a shout, I'm more than happy to talk about it or point you to some good websites.

Until next time!