Feature

The future of digital cameras - memory is the new iris?

How can image sensors - the most complicated and expensive part of a digital camera - be made cheaper and less complex? Easy: take the lid off a memory chip and use that instead.

As simple as it sounds, that sums up a device being developed by a team led by Edoardo Charbon of Delft University of Technology in the Netherlands. In a paper presented at an imaging conference in Kyoto, Japan, this week, the team say that their so-called "gigavision" sensor will pave the way for mobile phones and other inexpensive gadgets that take richer, more pleasing pictures than today's devices. Crucially, Charbon says the device performs better in both very bright and very dim light, conditions that regular digital cameras struggle to cope with.

While Charbon's idea is new and has a patent pending, the principle behind it is not. It has long been known that memory chips are extremely sensitive to light: remove their black plastic packaging to let light in, and the onrush of photons energises electrons, creating a current in each memory cell that overwhelms the tiny stored charge that might have represented digital information. "Light simply destroys the information," says Martin Vetterli, a member of the team based at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

A similar effect occurs aboard spacecraft: when energetic cosmic rays hit a cell in an unprotected memory chip they can "flip" the state of the cell, corrupting the data stored in the chip.

What Charbon and his team have found is that if they carefully focus the light arriving on an exposed memory chip, the charge stored in each cell ends up corresponding to whether that cell lies in a light or a dark area. The chip is, in effect, storing a digital image.
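To picture the effect, here is a minimal sketch in Python of a single exposed memory cell acting as a one-bit light sensor. The leak rate, threshold and photon counts are illustrative assumptions, not figures from the team's paper.

```python
# A purely illustrative model of an exposed memory cell as a one-bit
# light sensor: photons drain the stored charge, and the read-out
# threshold turns the result into dark (0) or light (1).

LEAK_PER_PHOTON = 0.01   # hypothetical fraction of charge lost per photon
READ_THRESHOLD = 0.5     # hypothetical charge level separating 1 from 0

def read_cell(photons_absorbed: int, initial_charge: float = 1.0) -> int:
    """Return 1 if the cell saw enough light to flip its stored bit."""
    charge = max(initial_charge - photons_absorbed * LEAK_PER_PHOTON, 0.0)
    return 1 if charge < READ_THRESHOLD else 0

print(read_cell(10))   # dim area: charge largely intact -> 0
print(read_cell(80))   # bright area: charge destroyed by light -> 1
```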

All very clever, you might say, but why would anyone want to do that? The answer is that the two types of sensor chips used in today's digital cameras store the brightness of each pixel as an analogue signal. To translate this into a form that can be stored digitally, they need complex, bulky, noise-inducing circuitry.

The charge-coupled device (CCD) sensors used in early cameras and camcorders, and the cheaper, more modern complementary metal oxide semiconductor (CMOS) type, both operate on a similar principle. On each, the area that forms an individual pixel can be thought of as a small charge-containing "bucket". The size of the charge that accumulates in one of these buckets depends on the amount of light falling on it during the exposure.

In a CCD, the contents of each bucket of charge are "poured" into the bucket next door, and then the next until the signal reaches the edge of the chip. There, an analogue-to-digital converter (ADC) typically assigns it an 8-bit greyscale value, ranging from 0 to 255. In a CMOS sensor, the charge is converted to a voltage local to each pixel before being shunted off to an ADC at the edge of the chip - where it too is assigned a greyscale value between 0 and 255.
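As a rough illustration of that conversion step, the sketch below quantises an analogue pixel voltage to an 8-bit greyscale value; the full-scale voltage is an assumed parameter for the example, not a figure from either sensor type.

```python
# Illustrative sketch of the quantisation an ADC performs at the chip's
# edge: an analogue voltage is mapped to one of 256 greyscale levels.

def adc_8bit(voltage: float, full_scale: float = 1.0) -> int:
    """Map an analogue voltage in [0, full_scale] to a 0-255 grey level."""
    voltage = min(max(voltage, 0.0), full_scale)  # clip out-of-range input
    return round(voltage / full_scale * 255)

print(adc_8bit(0.25))  # quarter brightness -> 64
print(adc_8bit(1.2))   # over-range input clips to 255
```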

A memory chip needs none of this conversion circuitry, as it produces digital data directly. As a result, says Vetterli, memory cells will always be about 100 times smaller than CMOS sensor cells; they are bound to be, because of the sheer number of signal-conditioning transistors a CMOS sensor needs around each pixel. "Our technology will always be two orders of magnitude smaller," he says.

So for every pixel on one of today's sensors, the memory-based sensor could have 100 pixels. A chip the size of a 10-megapixel camera sensor would therefore carry 100 times as many sensing cells - a billion in all - hence the choice of the gigavision name.

But don't expect a gigapixel camera any time soon. Unlike the pixels in a conventional sensor, which record a greyscale, the cells in Charbon's memory-chip sensor are simple on-off devices: they can only store a digital 0 or 1, for which read either light or dark. To build a sensor that can record shades of grey, EPFL engineer Feng Yang, who presented the Kyoto paper, is developing a software algorithm that looks across an array of 100 pixels to estimate their overall greyscale value.

It's a technique called spatial oversampling - and while it's early days, he's getting somewhere. "It's turning out to be a lot more accurate than the greyscale values you get from regular CMOS sensors," says Vetterli. "Analogue to digital conversion gives only poor estimates of the actual analogue light value."
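A minimal sketch of the idea, assuming the simplest possible estimator (the real EPFL algorithm is more sophisticated): average a block of 100 one-bit cells and scale the fraction of lit cells to an 8-bit grey level.

```python
# Spatial oversampling in its simplest assumed form: estimate one
# greyscale pixel from a block of 100 binary cells by counting how many
# of them fired.

import random

def estimate_grey(cells: list[int]) -> int:
    """Scale the fraction of lit (1) cells to an 8-bit grey level."""
    return round(sum(cells) / len(cells) * 255)

# Simulate 100 one-bit cells under mid-grey illumination: each cell
# fires with probability equal to the normalised light level.
light = 0.5
block = [1 if random.random() < light else 0 for _ in range(100)]
print(estimate_grey(block))  # clusters around 128, the true grey level
```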

The EPFL team have found that the more binary pixels they have, the better their chips perform in rendering deep shadow and bright highlights. "Gigavision cameras do not saturate anywhere near so easily, so we'll be able to use it for high dynamic range applications like medical imaging," says Yang.
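The claim can be illustrated with a toy model - an assumption made for this article, not the paper's analysis: a conventional pixel clips hard once its charge bucket is full, while the expected count of fired one-bit cells keeps rising under random (Poisson) photon arrivals, compressing highlights rather than clipping them.

```python
# Toy comparison of saturation behaviour. The full-well capacity and
# cell count are assumed numbers chosen for illustration.

import math

def analogue_pixel(photons: float, full_well: float = 100.0) -> float:
    """Linear response that clips hard at the full-well capacity."""
    return min(photons, full_well)

def binary_array(photons: float, n_cells: int = 100) -> float:
    """Expected number of fired one-bit cells when photons land on the
    array at random (Poisson statistics), each cell firing on >= 1 hit."""
    per_cell = photons / n_cells
    return n_cells * (1 - math.exp(-per_cell))

for p in (50, 100, 200, 400):
    print(p, analogue_pixel(p), round(binary_array(p), 1))
# The analogue pixel is pinned at 100 beyond full well; the binary count
# keeps climbing (39.3, 63.2, 86.5, 98.2), preserving highlight detail.
```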

"This is not pure academic interest," says Vetterli. "We're hoping to have a big version of a gigavision memory chip fabricated late this year and working early next."

They'll have their work cut out, observers say. A major problem they will have to overcome is the poor sensitivity of their pint-sized pixels: each one is so small that it can scoop up only a handful of photons - and that can make for a very noisy signal.

The prospect of producing image sensors as cheaply and easily as memory chips is bound to attract attention, says Alexis Gerard, chief executive of the consultancy Future Image in San Mateo, California, which organises a conference on imaging technology called 6Sight. "It will be pretty interesting if they can make these sensors using regular memory-chip-making technology."



This was first published in October 2009