Compression's Achilles heel
Sure, you've got brilliant images on the subnet, but
there's only one way you're going to get the bloody things off it in useful
volumes and that's with a removable hard drive. If you want to monitor that
video over anything but line-of-sight microwave, dedicated fibre LANs or some
hunk of telco or government infrastructure, you're dreaming.
It’s a cul-de-sac but I can’t resist mentioning the
challenges of real-world megapixel installation here. Anyone who has really
tried to run multiple megapixel cameras on a single workstation knows how much
pedalling that machine is doing underwater. All too often the result of the
strain is a view of the big blue, the big blue screen that is.
If a dedicated
dual-core workstation can't shovel four megapixel camera streams down a pipe at
the same time a quizzical operator zooms in on a single recorded scene, what
hope has the wider infrastructure got of handling the load? Zero, that's what.
So here's our issue. MPEG-4 and H.264 give us the
resolutions and frame rates we need, but out in the field they're depriving us
of the ability to move images off our networks in a serious way. Is there an
answer? There may be. Let's talk about scalable video coding, or SVC, an
extension of the H.264/MPEG-4 AVC video compression standard.
What SVC helps resolve are the issues arising from the fact that
MPEG-4 and H.264 create a fixed-quality video stream of a particular size, with
no ability to resize dynamically when a bitstream gets itself tangled
up with cruddy infrastructure. Right now, if that happens, instead of achieving
what has been sweetly described as a 'graceful degradation' in real time, a
signal simply chokes and falls apart.
But SVC allows the encoding of a quality bitstream that
incorporates a number of subset bitstreams that can be decoded at levels all
the way up to the complexity and quality of the full compressed video. The
process involves dropping packets from the larger bitstream to create a series
of layers that together consume a bandwidth budget comparable to that of a
single high-res stream.
The way it works is that each SVC encoder kicks off by processing a low frame rate, low-res
copy of the source video. Then a second layer of information is encoded at a
higher frame rate and a higher resolution, using the first layer as the baseline.
Once this is done, a third layer is encoded using the second layer as a
baseline, and so on, with each new layer using the one before it as a starting
point. You can see how it works.
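The layering described above can be sketched in miniature. What follows is a toy illustration, not real SVC: a 'frame' here is just a list of sample values, the base layer a crude downsample, and the enhancement layer the residual needed to rebuild the original. Real SVC works on motion-compensated, transform-coded video, but the layering idea is the same.

```python
# Toy sketch of SVC-style layered encoding (illustrative only, not a codec).

def downsample(frame, factor=2):
    """Base layer: keep every `factor`-th sample (a low-res copy)."""
    return frame[::factor]

def upsample(frame, factor=2):
    """Crude prediction of the full-res frame: repeat each sample."""
    out = []
    for px in frame:
        out.extend([px] * factor)
    return out

def encode_layers(frame, factor=2):
    """Layer 0 is the low-res copy; layer 1 is the residual needed to
    rebuild the full-res frame from the upsampled base layer."""
    base = downsample(frame, factor)
    predicted = upsample(base, factor)
    residual = [orig - pred for orig, pred in zip(frame, predicted)]
    return base, residual

def decode(base, residual=None, factor=2):
    """A weak device stops after the base layer; a stronger one adds
    the enhancement residual to recover the full-res frame."""
    predicted = upsample(base, factor)
    if residual is None:
        return predicted              # base-only: coarser picture
    return [p + r for p, r in zip(predicted, residual)]

frame = [10, 12, 14, 16, 18, 20, 22, 24]
base, residual = encode_layers(frame)
assert decode(base, residual) == frame    # full decode recovers the original
print(decode(base))                        # base-only decode: low-res view
```

The point of the structure is that a decoder never needs the higher layers to produce a usable picture; each layer it does manage to decode simply refines the one below it.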
Once the encoding is done, all the layers are shaped into
a single stream and off they go. Now, when the stream gets to the receiving end,
devices get down to decoding the stream, beginning with the first low quality
layer and then moving on to the next layer, the ultimate resolution and frame
rate decoded being decreed by the processing power of the device doing the decoding.
Further down the line a powerful machine might resurrect
the entire video stream in all its glory, but even if not, the video stream has
still been available everywhere it was required, locally or at remote monitoring
locations that might be making do with variable bandwidth on 128 or 500Kbps DSL WANs.
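This is where the 'graceful degradation' pays off in practice. Because the subset bitstreams are nested, a server or router can fit the stream to a skinny link just by discarding the higher layers, with no re-encoding at all. A small sketch, assuming a hypothetical packet format where each packet is tagged with its layer and bitrate:

```python
# Sketch of graceful degradation with a layered stream. The packet
# format (a dict with "layer" and "kbps" fields) is an assumption for
# illustration, not a real SVC wire format.

def thin_stream(packets, link_budget_kbps):
    """Drop whole enhancement layers, highest first, until the stream
    fits the link budget. No transcoding: just discard tagged packets."""
    layers = sorted({p["layer"] for p in packets}, reverse=True)
    kept = list(packets)
    for layer in layers:
        if sum(p["kbps"] for p in kept) <= link_budget_kbps:
            break
        kept = [p for p in kept if p["layer"] != layer]
    return kept

stream = [
    {"layer": 0, "kbps": 128},   # base: low frame rate, low res
    {"layer": 1, "kbps": 256},   # enhancement: higher frame rate/res
    {"layer": 2, "kbps": 640},   # enhancement: full quality
]
# A 500Kbps DSL tail keeps the base plus the first enhancement layer:
print([p["layer"] for p in thin_stream(stream, 500)])   # → [0, 1]
```

On the fat pipe back at base, nothing is dropped and the full-quality stream survives; out on the 128Kbps tail, only the base layer gets through, but the operator still gets a picture.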
Of course, you can get around the unintended isolation of
high-quality video using raw horsepower. You'd allocate a dedicated server to
translate a chubby bitstream, thinning it out and re-encoding the video in your
server room before shoving it into a pipe, but this is a clunky way to handle video
surveillance. If we plan on a megapixel future, micromanaging every feeble piece
of private or public infrastructure is not the way ahead.