Case in point, I guess – classic British TV. Nearly every show is a weird clash between over-lit 50-field-per-second video interiors and grainy, unlit 25-frame-per-second film exteriors. It’s so weird when Basil Fawlty runs out the door and everything gets dark and messy, yet suddenly feels more “expensive” for all its production issues.
The Doctor Who Restoration Team did a ton of work in its day restoring old episodes to their original specifications. In the case of episodes that only survived as film recordings, that meant recreating the “video look” for all studio scenes. It was a painstaking process that indeed made the episodes look much the way they were meant to… and yet.
The Tomb of the Cybermen was one of the first serials released on DVD. The VidFIRE process wasn’t ready yet, though they included a processed clip of one scene as an experiment. The original restoration was a bit clumsy; they didn’t have the tools they had later, and had to make compromises in repairing certain kinds of damage. Yet the whole thing feels of a piece. It all seems to have been shot on location, and for all the script issues it’s one of the most atmospheric serials of the original run.
Then they went back and revisited the story. The new version was cleaned up further, repaired better in most cases, had its luma balanced better, had more special features… and was processed to restore the video look, where applicable. And poof goes the atmosphere. In scenes I’d previously presumed were all shot together on location, you can now see where they cut and remounted in the studio. You can see those aren’t real rocks and sand, and you notice everyone huddling together to stay within the same small corner of the set. The restoration may more accurately reflect what the serial looked like at the time, but in so doing it exposes the cheapness of the production.
I suppose this highlights a basic issue with more advanced capture technology: it’s less forgiving, so the production has to be that much more expensive to avoid looking like a local rep company having a lark with a video camera. A higher frame rate exposes the reality behind the illusion just as surely as high-definition video does.
Which gets to the crux of the issue, for me: verisimilitude. The more realistic you try to make a thing look, the less real it tends to become unto itself. Black-and-white silent film, you don’t question its world. It paints its own reality that you either buy into or you don’t. Color film, sound – they add more points of comparison to the world around us, making it that much more of a task to maintain the illusion of a coherent fictional world.
A while back I watched a totally cleaned-up, glossy version of John Carpenter’s Halloween – and it didn’t work. The atmosphere was gone, and it no longer came across as its own reality. Instead it was clearly a quick and cheap production, using a certain grade of actor, a certain kind of camera. For the movie to work, its reality has to fight against something. You need to fuzz up the portal between our world and the one we’re watching. Grain, grit, bad contrast – all of this marks the film as its own thing, something you aren’t going to compare to our own world.
I don’t want to set up rules about what you can and can’t do with a medium, or say that one way of doing things is absolutely better than the rest, but there’s a certain wrongheadedness, I think, to the march of technical progress in terms of what it aims to accomplish. The point of fiction isn’t realism; it’s verisimilitude. It’s in being able to accept the reality of what you’re given, on its own terms. And that just gets harder, the closer you draw to that uncanny valley.
The question then becomes, how do you maintain the necessary illusion when your tools are fighting against you the whole way, threatening to expose the lie? There is maybe a worthwhile challenge in here, but it’s an expensive and totally avoidable problem…