Been a while since I’ve posted… As mentioned in my last post, I’ve been editing a web series and have been quite busy with that, as well as a brief interlude on a TV MoW, which will be the subject of another blog post. The web show is called ‘The Sword and Laser’, and it’s produced and hosted by my friends, veteran tech journalists Veronica Belmont and Tom Merritt.
The Sword and Laser is a Science Fiction and Fantasy book club that has been around since 2008 as an audio podcast. It has developed a significant following, and includes a very large, and active, community on the Goodreads.com book site. In 2012, The Sword and Laser became a video show as part of the newly-launched Geek & Sundry YouTube Premium Channel. This year, Tom and Veronica successfully ran a Kickstarter campaign to fund an independently-produced second season of the show.
This is where I come in… Well, sort of. I met Veronica and Tom a few years ago through a mutual friend, sat in on the shoot of one of the early episodes of the video show, and became hooked on the podcast—I’ve been a reader of SciFi and Fantasy since grade school, and the podcast really brought me back into reading something other than tech and VFX books on a regular basis. Earlier this year Tom and I were emailing back-and-forth about helping out on some of his new projects when the opportunity to edit Season 2 came up—and of course I jumped at it.
Season 1 used a format that alternated between episodes mirroring the audio podcast, and others in an author interview format. Season 2 is an Author Spotlight format that runs independent of the audio podcast and book club, focusing on in-depth author interviews. It’s a show that can be enjoyed whether or not you have the time to keep up with reading the monthly book, and really offers a lot of great discussion with some very interesting, and diverse, authors.
So that’s the show… The rest of this post will be largely about the technical challenges, and workflow approaches, that came out of editing the 12 weekly half-hours of Season 2. If you’re up for some real Post-Production Wing Nuttery talk, read on!
How it’s shot
Sword and Laser is shot up in Northern California at Alex Lindsay’s Pixelcorps studio in Petaluma, using a live-to-tape(less) multicam switch-feed format with 3-4 cameras and a Skype feed. The set design uses a ‘Castle in Space’ motif, and comes complete with its own cyborg dragon, Lem. (Note: Coincidentally, Alex Lindsay, Fon Davis, the creator of Lem the Dragon, and I are all ex-ILMers.)
After the shoot in late January, Pixelcorps sent me all of the show materials for the twelve episodes on hard drive, including:
- Director’s line cut (the live, in-studio, switch feed)
- Source videos for each camera and Skype feed
- Master audio recordings (including a live mix-down and all source inputs)
- B-roll for the VFX shots simulating a TV screen on the castle wall.
- CF card back-ups for each camera in MXF format, just in case.
All of the video deliverables were recorded in ProRes 422 (LT) at 1080p/23.976 fps, in order to have a more film-like look. That look was further developed in post.
Approaching the show
Although Season 2 is a bit different in focus from Season 1, it retained a lot of visual elements from that first season, including the set, opening and closing animations, lower thirds, and Aaron Potter’s Whiteboard videos. Some new graphic elements were added, and new typography was introduced for the lower thirds and credit sequences. The largest difference was the show format: stepping away from content that duplicated the podcast and focusing on an author and their work.
The format for each episode is:
- Animated Intro
- Scene 1 — Veronica and Tom intro the episode and author
- Scene 2 — ‘The Backstory’: 7 or 8 key facts about each author
- Animated Whiteboard video — Further exploration of the author and their work with a fair bit of whimsy
- Scene 3 — Author interview (with guest author either in-studio or live via Skype call)
One of the fundamental changes in format was for the backstory section. In Season 1 author bios were done in a traditional TV news package format, using motion graphics to give biographical and bibliographical insights into an author. That was replaced with Tom and Veronica doing on-camera reads of author facts, supported by a motion graphic side bar designed by another friend, graphic artist and production designer, Adam Levermore.
Early on in pre-prod it was clear that Adobe Premiere and Creative Cloud were the way I wanted to go with this project, and my producers were very supportive of it (in fact, they have been very supportive across the board, very open to creative input, and wonderful to work with in general.) Coming from a Final Cut 7 and Avid Media Composer background, working in Premiere’s familiar timeline paradigm was very comfortable, and the tool, including the compositing features, worked in a way that made the transition from FCP 7 very easy. After spending about a week working through Rich Harrington’s “Premiere CC Classroom in a Book”, I was able to hit the ground running.
Additionally, the en-suite integration of Creative Cloud, which was another factor in the decision, has proven to be a huge benefit. While I am eagerly anticipating the upcoming release of features announced at NAB, including the ability to truly template and integrate After Effects projects for inclusion in Premiere, the existing After Effects, Photoshop, and Audition integration has been a major time (and, in the case of Audition, booty) saver.
I know there are a lot of editors and directors out there (as well as a good number of production companies) still living in Final Cut 7 land, but I have to state, again, that the inherent stability (in other words, not having the frequent, random crashes associated with FCP 7), the increased flexibility of the en-suite toolset, and the processing power of a modern 64-bit architecture, along with an easy learning curve, make moving to Adobe Premiere a good decision: one that empowers the editor creatively and adds major productivity gains that will make clients and producers happy.
In addition to Adobe Creative Cloud, a couple of outside products have been very useful, including:
- Red Giant DeNoiser II – An excellent tool for noise removal
- Red Giant Knoll Light Factory – I had to add some lens flares to the opening animation because reasons, and while non-essential, this was fun
- Digital Rebellion ProMedia Tools – A suite of very useful tools; Cut Detector proved very helpful for breaking the line cut into separate shots
Also need to call out Premiere’s Warp Stabilizer… It’s the best motion-stabilization tool I’ve worked with yet, and it really earned its keep on a number of episodes.
On the hardware side, everything is done on a 2013 iMac with a quad-core Intel i7, 32GB of RAM, and a 2GB NVIDIA GPU. A CalDigit VR2 RAID with two 4TB drives in a RAID 0 configuration handles the I/O, with a 4TB CalDigit AVPro drive used to hold raw assets and a 256GB SSD used for Premiere and After Effects image caching. Two bare 4TB HGST Deskstar drives in a StarTech SATA dock are used for daily archives (and will be handed off to the client for final delivery.) A 32” LG LED TV, connected via a Thunderbolt-to-HDMI adapter, serves as the outboard monitor, and audio is handled through a Focusrite Scarlett 2i2 USB interface, a Mackie mixer, and Tannoy studio monitors.
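That much storage isn’t arbitrary: a quick back-of-the-envelope calculation shows what five ProRes sources per episode add up to. Here’s a sketch in Python, assuming roughly 82 Mb/s for ProRes 422 LT at 1080p/23.976 (Apple’s published target rate; actual footage varies with content, and the source count and runtime here are round-number approximations):

```python
# Back-of-the-envelope storage math for an online-at-full-res workflow.
# Assumption: ProRes 422 LT at 1080p/23.976 runs roughly 82 Mb/s.

PRORES_LT_MBPS = 82          # megabits per second (assumed target rate)
SOURCES = 5                  # 4 cameras + the Skype feed
MINUTES_PER_EPISODE = 30
EPISODES = 12

def gigabytes(mbps, minutes):
    """Convert a bit rate and a duration into gigabytes on disk."""
    return mbps * minutes * 60 / 8 / 1000  # Mb/s -> MB -> GB

per_episode = gigabytes(PRORES_LT_MBPS, MINUTES_PER_EPISODE) * SOURCES
season = per_episode * EPISODES

print(f"~{per_episode:.0f} GB per episode, ~{season / 1000:.1f} TB per season")
```

Roughly 92GB of camera originals per episode, or a bit over a terabyte for the season before you count renders, caches, and archives, which is why the RAID and the bare archive drives earn their keep.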
Delivering 12 weekly half-hours. How much more challenging can you get? The greatest challenge was figuring out how to deliver quality on time and on budget with a post-production staff of one… Me. In the end, it came down to figuring out an optimal workflow that allowed meeting the production window without compromising on the quality of the deliverables… And isn’t that what every show is about?
By and large, the biggest technical challenges have been image and audio quality…
A big part of nailing down the look for the show has been bringing out the full dynamic range of the images, in both color saturation and contrast. The most important factor in that has been noise removal… The original footage has a lot of digital noise that looks like grain across the images, almost as if the image has a fine mist of grey paint across it. This has the effect of making everything look flat and the colors less vibrant. By using Red Giant’s Denoiser II plugin, along with some basic brightness- and contrast-type curve manipulations, the images snap to life. Compositing a light Gaussian blur across the final video further increases the contrast, while adding a light diffusion that contributes to the filmic look we were shooting for… Probably a little more stylized than is typical for a talk-format show, but since this show happens on a castle in space, it works.
Raw footage before corrections.
After applying Red Giant DeNoiser II, curves, and a light blur.
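Premiere and the plugin do the real work, but the curve half of the correction is simple enough to sketch. Here’s a toy S-curve contrast adjustment over an 8-bit frame in NumPy (the curve shape and values are invented for illustration; they’re nothing like the actual grades used on the show):

```python
import numpy as np

def s_curve_lut(strength=0.15):
    """Build a 256-entry lookup table that darkens shadows and lifts
    highlights around a mid-gray pivot: a basic contrast S-curve."""
    x = np.linspace(0.0, 1.0, 256)
    # Blend the identity curve with a smoothstep curve for a gentle S shape.
    smooth = x * x * (3 - 2 * x)
    curve = (1 - strength) * x + strength * smooth
    return np.clip(curve * 255, 0, 255).astype(np.uint8)

def apply_curve(frame, lut):
    """Apply the LUT to every pixel of an 8-bit frame."""
    return lut[frame]

# A flat, low-contrast gray ramp stands in for a washed-out frame.
frame = np.tile(np.arange(64, 192, dtype=np.uint8), (4, 1))
graded = apply_curve(frame, s_curve_lut())
print(frame.min(), frame.max(), "->", graded.min(), graded.max())
```

The graded frame’s blacks end up darker and its whites brighter than the source, which is the “snap” described above; a real grade would also touch saturation and per-channel curves.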
On the down side, the key noise-reduction plugin is extremely processor-intensive and requires timeline rendering for real-time playback. It became clear very early on that the footage would have to be batch processed, with the noise removal and curves applied prior to editing. With this pre-processing, edit sessions are much faster and tweaking color corrections prior to final transcoding is much easier, producing the most significant time savings in the workflow.
Preliminary color timing of scene one in Adobe Premiere’s Color Correction workspace. The waveform monitor provides key information on the brightness and contrast range in the scene, but visually matching all of the cameras is equally important.
Audio-wise, the biggest challenges were dealing with noise hits from wireless mics, and occasional difficulties with the Skype audio… Every show is different, and while most of the time it was a matter of cleaning up a couple of wireless hits here and there, we had one episode where there were about 20 nasty pops that seemed to come from the Skype feed. This was where Adobe Audition really came in handy.
Audition’s spectral frequency display allows you to go into an audio clip, visually identify pops, clicks, and other noise, and erase them by dragging a selection box around them and hitting the delete key. Because you can see the difference between noise and dialog, it’s incredibly useful for preserving voice quality. Additionally, the background-noise removal tool and, in some cases, the automatic de-clicker were used to pre-process every audio element.
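Audition’s tools are interactive, but the core idea behind an automatic de-clicker is easy to sketch: flag samples whose jump from their neighbors is implausibly large, then interpolate across them. A toy NumPy version (the threshold and the synthetic audio are made up for the demo; real de-clickers model the signal far more carefully):

```python
import numpy as np

def declick(audio, threshold=0.5):
    """Replace isolated spikes with a simple neighbor average.
    A sample is flagged when the jump from its predecessor exceeds
    `threshold`, far larger than any plausible speech transient here."""
    cleaned = audio.copy()
    jumps = np.abs(np.diff(audio, prepend=audio[0]))
    for i in np.where(jumps > threshold)[0]:
        left, right = max(i - 1, 0), min(i + 1, len(audio) - 1)
        cleaned[i] = 0.5 * (cleaned[left] + audio[right])
    return cleaned

# Synthetic dialog: a quiet sine wave with two nasty pops dropped in.
t = np.linspace(0, 1, 48000)
voice = 0.2 * np.sin(2 * np.pi * 220 * t)
popped = voice.copy()
popped[10000] = 1.0
popped[30000] = -1.0
cleaned = declick(popped)
print(np.max(np.abs(popped - voice)), "->", np.max(np.abs(cleaned - voice)))
```

The repaired samples land back on the underlying sine wave to within a fraction of a percent, while the untouched dialog passes through unchanged; Audition’s visual approach lets you make the same call by eye when the automatics guess wrong.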
It took a few episodes to get the workflow nailed down. It’s a bit unconventional by most post-house standards, but it works, and allowed me to get the turnaround time on an episode down from a week to about 3 to 3 1/2 days. Basically, we’re talking about an ‘online all the time’ approach to the work… Instead of doing an ‘offline’ cut at lower resolution and then up-resing for the finish, all the work is done at full 1080p from ingest to final transcode. Between the iMac’s power and Premiere’s ability to take advantage of the GPU with its Mercury Playback Engine, real-time playback was never an issue, even when stacking four or five video tracks in the timeline.
While a full string-out of the elements was used for the first two episodes, it became clear that it would be far more expedient to go with Premiere’s Multi-cam editing feature, which is very good, using only the batch-processed footage in the cut and relying on the line cut as a visual guide. Even with Cut Detector marking all of the cuts in a take, breaking the line cut up and applying noise removal and curves on a shot-by-shot basis just took too much time, so it was faster to edit the pre-processed footage directly.
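Cut Detector works on the rendered line cut, but the underlying idea is straightforward: a hard cut shows up as a frame-to-frame difference that dwarfs normal motion. A toy version over grayscale frames as NumPy arrays (the threshold, frame size, and “shots” are invented for the demo; real detectors also handle dissolves and flash frames):

```python
import numpy as np

def find_cuts(frames, threshold=30.0):
    """Return indices where frame i starts a new shot, judged by the
    mean absolute pixel difference from frame i-1."""
    cuts = []
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) -
                              frames[i - 1].astype(float)))
        if diff > threshold:
            cuts.append(i)
    return cuts

# Fake line cut: 10 frames of "camera A" (dark), then 10 of "camera B"
# (bright), with a little per-frame noise standing in for motion.
rng = np.random.default_rng(0)
shot_a = [np.full((90, 160), 40, np.uint8) +
          rng.integers(0, 5, (90, 160), dtype=np.uint8) for _ in range(10)]
shot_b = [np.full((90, 160), 200, np.uint8) +
          rng.integers(0, 5, (90, 160), dtype=np.uint8) for _ in range(10)]
print(find_cuts(shot_a + shot_b))
```

Within a shot the mean difference stays down at noise level, while the switch between cameras spikes it well past the threshold, which is exactly the signal a cut-detection pass keys off when it drops markers on a line cut.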
Shaping an episode looks like this:
1) Organize footage and review takes
2) Ingest selects, apply the DeNoiser II plugin and basic color timing, add the Sword and Laser logo to the Skype footage, then batch process all the footage (about 10-15 hours of rendering time, which I try to schedule overnight.)
3) Perform basic noise removal on program audio (custom mix tracks if necessary)
4) Noise removal on Whiteboard video, convert to stereo
5) Create multicam groupings for each scene then edit
6) Assemble cut scenes with open, Whiteboard, and closing animations
7) Add lower thirds, update/add credit sequence
8) Assemble photos for sidebars, and prep graphics in Photoshop CC, along with YouTube thumbnail
9) Create and render sidebars in After Effects, along with “Backstory” graphic
10) Insert sidebars and add transitions
11) Add SFX
12) Add audio mixdown tracks, sweeten audio, add EQ and expander/limiter/compressor plugins as needed
13) Final color tweaking, including addition of blur element for ‘filmic’ look
14) QA pass and final viewing
15) Transcode and output ProRes and MP4 videos
16) Upload MP4 to YouTube
17) Iterate based on client notes
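Step 15 comes down to two encodes from the same timeline export. If you were scripting that step outside Premiere, the equivalent ffmpeg invocations might look something like this (the file names and quality settings are hypothetical; `prores_ks` profile 1 is the LT flavor, and CRF 18 is a conservative high-quality H.264 setting):

```python
# Hypothetical ffmpeg command lines for the two deliverables in step 15.

def prores_cmd(src, dst):
    """ProRes 422 LT master with uncompressed 16-bit PCM audio."""
    return ["ffmpeg", "-i", src, "-c:v", "prores_ks", "-profile:v", "1",
            "-c:a", "pcm_s16le", dst]

def mp4_cmd(src, dst):
    """H.264 MP4 for upload: CRF-based quality, broadly compatible 4:2:0."""
    return ["ffmpeg", "-i", src, "-c:v", "libx264", "-crf", "18",
            "-pix_fmt", "yuv420p", "-c:a", "aac", "-b:a", "192k", dst]

master = prores_cmd("sal_ep01_final.mov", "sal_ep01_master.mov")
upload = mp4_cmd("sal_ep01_final.mov", "sal_ep01_youtube.mp4")
print(" ".join(master))
print(" ".join(upload))
```

In practice the transcodes come straight out of Premiere/Adobe Media Encoder, but the shape of the job is the same: one archival master, one delivery encode.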
The old saying is that every show, and every episode, is a science project… Issues that require work-arounds come up on an episode-by-episode basis, forcing minor modifications to the workflow, but that’s pretty much the overall process.
In short, it’s been a project with a lot of moving parts, but made manageable by having a great set of tools, and being open to approaching workflow from different perspectives.
Oh, and yeah… I’ve gotten to spend a lot of time with some really, really interesting authors, resulting in a Goodreads ‘to-read’ list that has grown exponentially. Best editing job, ever!