I’ve long been intrigued by Amazon Web Services and the fervor surrounding “cloud computing”. Frankly, I didn’t really understand what a bunch of servers has to do with puffy, white, ethereal objects floating in the sky, but I knew it was reputed to be cost effective and on demand. A few weeks ago the Day Job sent me to an all-day AWS 101 training session. After 8 hours of pure geek/tech/engineering talk I left salivating at the promise of virtually unlimited computing instances. The part of the session that really got my attention was when the trainer discussed using AWS for running batch jobs: “Since we only charge for total compute time, there’s no reason not to run your batch jobs in parallel. If a batch job takes one server six hours to run, there’s no reason not to use two servers for three hours. Since it’s the same amount of compute time, it costs the same.”
With a little PowerShell script I can render each frame of a Maya scene file in parallel. That means my total render time for all frames equals the time it takes to render ONE frame (albeit the slowest-rendering frame) plus the overhead of starting up a render instance. Sounds pretty nice, right? Once I’m finished hacking out the script and refactoring it I’ll post it on SourceForge, along with setup instructions, the PowerShell source, and a command line executable. Launching a render farm job from your laptop will look something like this:
renderzon --file 'myscene.ma' --frames '1-300'
And BOOM! The script commits your assets, starts up 300 render instances, syncs the assets, renders the frames, and copies them back to your lowly laptop. Sounds ethereal, don’t it?
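For the curious, here’s a rough sketch of that fan-out pattern: one independent task per frame, dispatched in parallel, with results gathered at the end. This is an illustration, not my actual PowerShell script; the function names (`render_frame`, `dispatch_render_job`) are hypothetical, and the real thing would be booting EC2 instances and invoking Maya’s renderer where this sketch just fakes it.

```python
# Illustrative sketch of the per-frame fan-out, assuming one worker per
# frame. In the real script each task would launch an AWS instance,
# sync assets, and run Maya's renderer; here the render is simulated.
from concurrent.futures import ThreadPoolExecutor


def render_frame(scene, frame):
    # Stand-in for: boot instance, sync assets, render one frame,
    # copy the result back.
    return f"{scene}.{frame:04d}.png"


def dispatch_render_job(scene, first, last, workers=8):
    frames = range(first, last + 1)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Frames render independently, so total wall-clock time
        # approaches the slowest single frame plus startup overhead.
        return list(pool.map(lambda f: render_frame(scene, f), frames))


if __name__ == "__main__":
    outputs = dispatch_render_job("myscene.ma", 1, 300)
    print(len(outputs), outputs[0])  # 300 frames, named per frame number
```

The key point is that nothing about one frame depends on another, which is exactly why the “two servers for three hours” pricing argument applies.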