AMAZINGLY, the camera’s resolution is five times better than 20/20 human vision over a 120-degree horizontal field. The new camera, reported in the journal Nature, has the potential to capture up to 50,000 megapixels. By comparison, most consumer still cameras take photographs of between 8 and 40 megapixels.
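The headline claim can be sanity-checked with a rough back-of-envelope calculation. Assuming the standard figure that 20/20 vision resolves about one arcminute, a one-dimensional sketch of the resolvable points across the stated field looks like this (illustrative only; the article does not give this derivation):

```python
# Rough sanity check of the resolution claim (illustrative assumption:
# 20/20 vision resolves about 1 arcminute of angle).
ARCMIN_20_20 = 1.0             # angular resolution of 20/20 vision, arcminutes
acuity = ARCMIN_20_20 / 5      # "5 times better" -> 0.2 arcminute per point
field_arcmin = 120 * 60        # 120-degree horizontal field, in arcminutes
points_across = field_arcmin / acuity
print(points_across)           # 36000.0 resolvable points across the field
```

Tens of thousands of resolvable points along one axis is consistent with a pixel count in the tens of gigapixels once the vertical dimension is included.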
Obviously it’s a leap from gigapixel still cameras to gigapixel surveillance cameras, but this sort of development shows the enormous potential CCTV technology is likely to offer installers and end users in the future. Processing, comms and compression will all need to improve, but don’t be fooled into thinking cameras will drop anchor at 1080p HD for all time.
The gigapixel camera was developed by a team led by David Brady, Michael J. Fitzpatrick Professor of Electrical Engineering at Duke’s Pratt School of Engineering, along with scientists from the University of Arizona, the University of California, San Diego, and Distant Focus Corp.
“Each one of the microcameras captures information from a specific area of the field of view,” Brady said. “A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see at the time, but which can be detected when the image is viewed later.
“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras,” Brady said. “While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics.”
The software that combines the input from the microcameras was developed by an Arizona team led by Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona.
“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” Gehm said. “This isn’t a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive.”
“Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements,” Gehm said. “A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual workstations. Each gets a different view and works on its little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
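The divide-with-overlap-and-stitch idea Gehm describes can be illustrated with a toy sketch: split a scene into overlapping tiles (standing in for the microcamera views), then recombine them, averaging wherever tiles overlap. This is a minimal illustration of the concept, not the team’s actual stitching software.

```python
import numpy as np

# Toy "scene": a small array standing in for the full field of view.
scene = np.arange(12 * 12, dtype=float).reshape(12, 12)

TILE, OVERLAP = 6, 2   # each "microcamera" sees a 6x6 tile; neighbours overlap by 2 px
STEP = TILE - OVERLAP  # stride between tile origins

# "Capture": each microcamera records its own overlapping patch of the scene.
tiles = {}
for r in range(0, scene.shape[0] - OVERLAP, STEP):
    for c in range(0, scene.shape[1] - OVERLAP, STEP):
        tiles[(r, c)] = scene[r:r + TILE, c:c + TILE].copy()

# "Stitch": accumulate every patch onto a canvas, averaging where tiles overlap.
canvas = np.zeros_like(scene)
weight = np.zeros_like(scene)
for (r, c), patch in tiles.items():
    canvas[r:r + patch.shape[0], c:c + patch.shape[1]] += patch
    weight[r:r + patch.shape[0], c:c + patch.shape[1]] += 1
stitched = canvas / weight

assert np.allclose(stitched, scene)  # the overlapping mosaic reconstructs the scene
```

The overlap is what lets neighbouring views be registered against each other so that, as Gehm puts it, nothing is missed at the seams.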
The prototype camera is about two-thirds of a metre in diameter and about 50 cm deep. Only about 3 per cent of the camera consists of optical elements; the rest is the electronics and processors needed to assemble all the information gathered. This, the researchers said, is where further work to miniaturize the electronics and increase their processing ability will make the camera more practical for everyday photographers.
“The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating,” Brady said. “As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow.”