CCD. The physical principle of the CCD. CCD-matrix. What is a CCD

What is a CCD?

A bit of history

Previously, photographic materials served as light receivers: photographic plates, photographic film, and photographic paper. Later came television camera tubes and photomultiplier tubes (PMTs).
In the late 1960s and early 1970s, so-called charge-coupled devices, abbreviated CCDs, began to be developed. The underlying principle is that silicon responds to visible light, and this fact led to the idea that the principle could be used to obtain images of luminous objects.

Astronomers were among the first to recognize the extraordinary ability of the CCD to capture images. In 1972, a group of researchers at JPL (Jet Propulsion Laboratory, USA) established a CCD development program for astronomy and space research. Three years later, in collaboration with scientists at the University of Arizona, the team obtained the first astronomical CCD image. In a near-infrared image of Uranus taken with a 1.5-meter telescope, dark spots were discovered near the planet's south pole, indicating the presence of methane there.

Today CCD matrices find wide application in digital cameras and video cameras; it has even become possible to embed them as cameras in mobile phones.

CCD device

A typical CCD device (Fig. 1): on the semiconductor surface there is a thin (0.1-0.15 μm) dielectric layer (usually an oxide), on which strips of conducting electrodes (made of metal or polycrystalline silicon) are located. These electrodes form a linear or matrix regular system, and the distances between them are so small that the effects of mutual influence of neighboring electrodes are significant. The principle of operation of a CCD is based on the emergence, storage, and directional transfer of charge packets in potential wells formed in the near-surface layer of the semiconductor when external electric voltages are applied to the electrodes.



Fig. 1. The basic structure of the CCD matrix.

In Fig. 1, C1, C2 and C3 are MOS (metal-oxide-semiconductor) capacitors.

If a positive voltage U is applied to any electrode, an electric field arises in the MIS structure, under whose action the majority carriers (holes) leave the semiconductor surface very quickly (within a few picoseconds). As a result, a depletion layer forms at the surface, with a thickness of a fraction of a micrometer to a few micrometers. Minority carriers (electrons) generated in the depletion layer by any process (thermal, for example), or arriving there by diffusion from the neutral regions of the semiconductor, move under the action of the field to the semiconductor-insulator interface and localize in a narrow inversion layer. A potential well for electrons thus appears at the surface, into which they roll down from the depletion layer under the action of the field. Majority carriers (holes) generated in the depletion layer are ejected by the field into the neutral part of the semiconductor.
During a given time interval, each pixel gradually fills with electrons in proportion to the amount of light that has reached it. At the end of this time, the electric charges accumulated by each pixel are transferred in turn to the "output" of the device and measured.

The photosensitive pixels of such matrices measure from one or two to several tens of microns. For comparison, the silver halide crystals in the photosensitive layer of photographic film range from 0.1 μm (positive emulsions) to 1 μm (highly sensitive negatives).

One of the main parameters of a matrix is its so-called quantum efficiency. This name reflects the efficiency of conversion of absorbed photons (quanta) into photoelectrons and is analogous to the photographic concept of photosensitivity. Since the energy of light quanta depends on their color (wavelength), it is impossible to say unambiguously how many electrons will be born in a matrix pixel when it absorbs, for example, a stream of one hundred photons of different colors. Therefore, the quantum efficiency is usually given in the matrix's data sheet as a function of wavelength, and in some parts of the spectrum it can reach 80%. This is far more than photographic emulsion or the eye (approximately 1%).
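As an illustration of quantum efficiency, the following sketch converts a photon count into photoelectrons at a few wavelengths; the QE values are invented for the example and are not data for any real matrix.

```python
# Sketch: converting a photon count into photoelectrons via a
# wavelength-dependent quantum efficiency. The QE_CURVE numbers below
# are made-up illustrative values, not a real sensor's data sheet.

QE_CURVE = {  # wavelength (nm) -> quantum efficiency (fraction)
    400: 0.35,
    550: 0.60,
    700: 0.80,
    900: 0.40,
}

def photoelectrons(photons: int, wavelength_nm: int) -> int:
    """Mean number of photoelectrons generated in a pixel."""
    qe = QE_CURVE[wavelength_nm]
    return round(photons * qe)

# The same 100 photons yield different electron counts at different
# wavelengths, which is why QE must be quoted as a function of wavelength.
print(photoelectrons(100, 700))  # 80
print(photoelectrons(100, 400))  # 35
```

This is why a single "sensitivity" number is meaningless without stating the spectral conditions.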

What are the types of CCDs?

If the pixels are lined up in a single row, the receiver is called a linear CCD (CCD line sensor); if they fill an area in even rows, it is called a CCD matrix.

Linear CCDs found wide application in astronomical observations in the 1980s and 1990s: it was enough to sweep the image across the CCD line for it to appear on a computer monitor. But the process involved many difficulties, and at present linear CCDs are increasingly being replaced by CCD matrices.

Unwanted effects

One of the unwanted side effects of charge transfer in a CCD that can interfere with observations is bright vertical stripes (pillars) in place of bright spots occupying a small area of the image. Other possible undesirable effects include high dark noise, "blind" or "hot" pixels, and uneven sensitivity across the matrix field. To reduce dark noise, CCD matrices are cooled autonomously to temperatures of -20 °C and below; alternatively, a dark frame is taken (for example, with the lens capped) with the same duration (exposure) and temperature as the preceding light frame, and a special computer program then subtracts the dark frame from the image.
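Dark-frame subtraction as described above can be sketched in a few lines; plain nested lists stand in for a real image array, and the pixel values are invented for the example.

```python
# Sketch of dark-frame subtraction: a frame taken with the lens capped
# (same exposure and temperature) is subtracted pixel by pixel from the
# light frame to remove the dark-current pattern and "hot" pixels.

def subtract_dark(light_frame, dark_frame):
    """Subtract a dark frame pixel by pixel, clipping at zero."""
    return [
        [max(lp - dp, 0) for lp, dp in zip(light_row, dark_row)]
        for light_row, dark_row in zip(light_frame, dark_frame)
    ]

light = [[110, 205], [98, 300]]   # signal + accumulated dark charge
dark  = [[10,  5],   [8,  250]]   # 250 models a "hot" pixel
print(subtract_dark(light, dark))  # [[100, 200], [90, 50]]
```

Real pipelines average several dark frames to avoid adding the dark frame's own noise to the result.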

CCD television cameras are remarkable in that they can capture images at up to 25 frames per second with a resolution of 752 x 582 pixels. What makes some cameras of this type unsuitable for astronomical observations is that the manufacturer builds in internal image preprocessing (read: distortion) to make the resulting frames more pleasing to the eye. This includes AGC (automatic gain control), the so-called "sharp borders" effect, and others.

Progress…

In general, CCD receivers are much more convenient to use than non-digital light receivers, since the data obtained immediately appear in a form suitable for computer processing and, in addition, individual frames are acquired very quickly (from several frames per second to one frame in several minutes).

At present, CCD production is developing and improving at a rapid pace. The "megapixel" count of matrices is growing, that is, the number of individual pixels per unit area of the matrix. The quality of images obtained with CCDs is improving, and so on.


A CCD, or CCD matrix (from the English Charge-Coupled Device), is an analog integrated circuit containing silicon-based light-sensitive elements and built on charge-coupled device technology.

History of the CCD

The first CCD was developed in 1969 by George Smith and Willard Boyle at AT&T Bell Labs in the United States. The work grew out of research on video telephony (the Picture Phone) and on "semiconductor bubble memory", then a topical subject. Soon, charge-coupled devices were being used as memory devices, in which charge could be placed in the input register of the chip. Later, however, the ability of the device's memory element to acquire a charge through the photoelectric effect made imaging the mainstream application of CCDs.

In 1970, Bell Lab researchers learned how to capture images using the simplest linear devices.

Soon, under the leadership of Kazuo Iwama, Sony began actively developing CCD technology, investing heavily in it, and was able to establish mass production of CCD matrices for its video cameras.

Kazuo Iwama passed away in August 1982. To perpetuate his contribution, a CCD chip was installed on his tombstone.

In 2006, Willard Boyle and George Smith received an award from the US National Academy of Engineering for their work on the CCD.

Later, in 2009, the creators were awarded the Nobel Prize in Physics.

How the CCD works

A CCD matrix consists mainly of polysilicon electrodes separated from the silicon substrate by a thin insulating layer; when supply voltages are applied through the polysilicon gates, the electric potentials near the electrodes change strongly.

Before exposure, a certain combination of voltages is applied to the electrodes, releasing all previously formed charges and resetting all elements to an identical initial state.

Then a combination of voltages on the electrodes creates a potential well in which electrons accumulate, generated in a given pixel of the matrix by the light rays falling on it during exposure. The more intense the luminous flux during the exposure, the more electrons accumulate in the potential well and, accordingly, the higher the final charge of that pixel.
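The dependence just described, charge proportional to light up to the well's capacity, can be sketched as follows; the full-well capacity and flux numbers are illustrative assumptions, not specifications of any real matrix.

```python
# Sketch: electrons pile up in a pixel's potential well in proportion
# to light intensity, up to the well's capacity. All numbers here are
# assumed for illustration.

FULL_WELL = 50_000  # electrons; assumed well capacity for this sketch

def accumulate(flux_e_per_s: float, exposure_s: float) -> int:
    """Electrons collected in one pixel during an exposure."""
    return min(int(flux_e_per_s * exposure_s), FULL_WELL)

print(accumulate(1000, 0.02))        # 20 electrons in a 1/50 s TV frame
print(accumulate(10_000_000, 0.02))  # 50000: clipped at the full well
```

The clipping at FULL_WELL is what causes overexposed highlights (and, on real sensors, blooming into neighboring pixels).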

After exposure, successive changes of the supply voltages on the electrodes form, in and around each individual pixel, a potential distribution that moves the charge in a given direction, toward the output pixels of the CCD matrix.

An example of an n-type CCD pixel

Note: the subpixel architecture is different for each manufacturer.

CCD pixel designations in the diagram:

1 - Particles of light (photons) that passed through the lens of a video camera;
2 - Subpixel microlens;
3 - Red subpixel filter (a fragment of the Bayer filter);
4 - Light-transmitting electrode made of tin oxide or polycrystalline silicon;
5 - Insulator (consists of silicon oxide);
6 - Special n-type silicon channel. Internal photoelectric effect zone (carrier generation zone);
7 - Potential well zone (n-type pocket), where electrons from the carrier generation zone are collected;
8 - p-type silicon substrate.

Full-frame CCD transfer

The video image fully formed by the lens falls on the CCD matrix, that is, the light rays fall on the light-sensitive surface of the CCD elements, the purpose of which is to convert the energy of particles (photons) into an electric charge.
This process proceeds as follows.
A photon hitting a CCD element faces three scenarios: it may be reflected from the surface, absorbed in the bulk of the semiconductor (the matrix material), or pass right through the sensitive layer. The developers are therefore required to create a sensor in which losses to reflection and transmission are minimized. A photon absorbed in the CCD matrix forms an electron-hole pair if it interacts with an atom of the semiconductor's crystal lattice, or a single carrier if it interacts with an atom of a donor or acceptor impurity; both phenomena are referred to as the internal photoelectric effect. The sensor's job does not end with the internal photoelectric effect, however: the main thing is to keep the charge carriers "taken away" from the semiconductor in a specialized storage area and then read them out.
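As a rough illustration of the three photon fates above, here is a toy Monte Carlo sketch; the reflection and absorption probabilities are invented for the example and are not measured values for any real sensor.

```python
# Toy Monte Carlo of the three photon fates: reflected, absorbed
# (useful, producing a photoelectron), or transmitted through the
# sensitive layer. The probabilities are assumptions for illustration.
import random

def photon_fate(rng, p_reflect=0.2, p_absorb=0.6):
    """Draw one photon's fate from assumed probabilities."""
    r = rng.random()
    if r < p_reflect:
        return "reflected"
    if r < p_reflect + p_absorb:
        return "absorbed"
    return "transmitted"

rng = random.Random(0)          # fixed seed for reproducibility
fates = [photon_fate(rng) for _ in range(10_000)]
print(fates.count("absorbed") / len(fates))  # close to the assumed 0.6
```

Only the "absorbed" fraction contributes to the signal, which is why sensor design aims to push that fraction as high as possible.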

The structure of the elements of the CCD-matrix

In general, a CCD element is designed as follows: a p-type silicon substrate is provided with channels of n-type semiconductor. Above these channels lie electrodes of polycrystalline silicon separated by an insulating film of silicon oxide. When an electric potential is applied to such an electrode, a potential well is created in the depletion zone under the n-type channel, whose task is to store electrons. A photon penetrating the silicon generates an electron, which is attracted to the potential well and remains "stuck" in it. More photons (brighter light) deliver more charge to the well. The value of the accumulated charge, also called the photocurrent, must then be read out and amplified.

The photocurrents of the CCD elements are read out by so-called serial shift registers, which convert a row of charges at the input into a train of pulses at the output. This pulse train is an analog signal, which is then fed to the amplifier.

So, using a register, the row of charges from the CCD elements can be converted into an analog signal. In practice, the serial shift register in CCD matrices is implemented with the same CCD elements combined into a single row. The operation of this device is based on the ability of charge-coupled devices to exchange the charges of their potential wells. The exchange takes place through specialized transfer-gate electrodes located between adjacent CCD elements: when an increased potential is applied to the nearest such electrode, the charge "migrates" under it out of the potential well. Between adjacent CCD cells there are usually from two to four transfer electrodes; their number determines the "phase count" of the shift register, which is accordingly called two-phase, three-phase, or four-phase.

The application of different potentials to the transfer electrodes is synchronized so that the charges in the potential wells of all the register's CCD elements move almost simultaneously. In each transfer cycle the CCD cells pass their charges along the chain, from right to left or from left to right, and the end element gives its charge to the amplifier located at the register output.

Thus, a serial shift register has a serial output and a parallel input. After all the charges have been read out of the register, a new row can be fed to its input, then the next, and so on, forming a continuous analog signal from the two-dimensional array of photocurrents. The parallel input stream for the serial shift register is in turn supplied by a set of vertically oriented serial shift registers, collectively called a parallel shift register; the whole assembly is precisely the device called a CCD matrix.
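The readout order described above, serial output with one cell reaching the amplifier per transfer cycle, can be modeled with a minimal sketch; the "amplifier" here is just a Python list collecting the output.

```python
# Minimal model of the serial shift register: each transfer cycle moves
# every charge packet one cell toward the output, and the end cell's
# charge is handed to the amplifier (modeled as an output list).

def read_out(register):
    """Shift all charge packets out of the register, one per cycle."""
    register = list(register)      # copy so the caller's row is untouched
    output = []
    while register:
        output.append(register.pop())  # end cell feeds the "amplifier";
        # the remaining packets have all moved one cell toward the output
    return output

row_of_charges = [5, 12, 7]        # charges accumulated in three CCD cells
print(read_out(row_of_charges))    # [7, 12, 5]: serial, nearest-first order
```

Running this for every row of a two-dimensional charge array is the list-based analog of the parallel-plus-serial register structure described in the text.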

Solid-state photoelectric image converters are the solid-state analog of transmitting television camera tubes.

These converters date back to 1970, to the so-called CCDs, and are built from individual cells that are capacitors of MOS or MIS structure. One plate of such an elementary capacitor is a metal film M, the second is the semiconductor substrate P (of p- or n-type conductivity), and the dielectric D is a thin layer deposited on the substrate P. The substrate P is silicon doped with an acceptor (p-type) or donor (n-type) impurity, and D is silicon oxide SiO2 (see Fig. 8.8).

Fig. 8.8. MOS capacitor

Fig. 8.9. Movement of charges under the action of an electric field

Fig. 8.10. The principle of operation of a three-phase CCD system

Fig. 8.11. Movement of charges in a two-phase CCD system

When a voltage is applied to the metal electrode, a "pocket" (potential well) forms under it, in which minority carriers (in our case, electrons) can accumulate, while the majority carriers, holes, are repelled from M. At some distance from the surface, the concentration of minority carriers may exceed that of the majority carriers: an inversion layer appears in the substrate P near the dielectric D, in which the conductivity type is reversed.

A charge packet can be introduced into the CCD electrically or by light generation. Under light generation, photoelectric processes in the silicon lead to the accumulation of minority carriers in the potential wells. The accumulated charge is proportional to the illumination and the accumulation time. Directional charge transfer in the CCD is ensured by placing the MOS capacitors so close together that their depletion regions overlap and the potential wells merge. The mobile charge of minority carriers then accumulates where the potential well is deeper.

Let a charge accumulate under electrode U1 under the influence of light (see Fig. 8.9). If a voltage U2 > U1 is now applied to the adjacent electrode, another, deeper potential well appears nearby. A region of electric field arises between them, and the minority carriers (electrons) drift into the deeper "pocket" (see Fig. 8.9). To eliminate bidirectionality of charge transfer, a sequence of electrodes combined into groups of three is used (see Fig. 8.10).

If, for example, a charge has accumulated under electrode 4 and must be transferred to the right, a higher voltage (U2 > U1) is applied to the right-hand electrode 5, the charge flows to it, and so on.


Almost the entire set of electrodes is connected to three buses:

I - 1, 4, 7, ...

II - 2, 5, 8, ...

III - 3, 6, 9, ...

In our case, the "receiving" voltage (U2) will be on electrodes 2 and 5, but electrode 2 is separated from electrode 4, where the charge is stored, by electrode 3 (on which U3 = 0), so there will be no leakage to the left.

Three-phase operation of the CCD requires three electrodes (cells) per TV-image element, which reduces the useful area available to the light flux. To reduce the number of CCD cells (electrodes), the metal electrodes and the dielectric layer are given a stepped shape (see Fig. 8.11). When voltage pulses are applied to such an electrode, potential wells of different depth form under its different sections, and most of the charge from the neighboring cell drains into the deeper well.

In a two-phase CCD system, the number of electrodes (cells) in the matrix is reduced by a third, which has a favorable effect on reading out the potential relief.

CCDs were initially proposed for use in computing as storage devices and shift registers. An injection diode at the start of the chain introduced charge into the system, and an output diode at the end of the circuit (usually an n-p or p-n junction forming, together with the first and last electrodes of the CCD chain, field-effect transistors) read it out.

But it soon became clear that CCDs are very sensitive to light, and therefore it is better and more efficient to use them as light detectors, rather than as storage devices.

If a CCD matrix is used as a photodetector, charge accumulation under a given electrode can be carried out optically (light injection). One can say that CCDs are essentially light-sensitive analog shift registers. Today CCDs are no longer used as storage (memory) devices but only as photodetectors. They are used in fax machines, scanners, still cameras, and video cameras; TV cameras usually employ so-called CCD chips.

We have assumed that 100% of the charge is transferred to the adjacent pocket. In practice, however, losses must be reckoned with. One source of loss is "traps" capable of capturing and holding charges for some time; these charges do not have time to flow into the adjacent pocket if the transfer rate is high.

The second reason is the transfer mechanism itself. At the first instant, the charge transfer takes place in a strong electric field, as drift in the field E. But as the charges flow across, the field strength decreases and the drift process dies out, so the last portion moves by diffusion, about 100 times more slowly than by drift. Waiting for this last portion means lower performance. Drift accounts for over 90% of the transfer, but it is precisely the last few percent that determine the losses.

Let the transfer coefficient of a single transfer cycle be k = 0.99 and take the number of cycles to be N = 100; the total transfer ratio is then:

0.99^100 ≈ 0.366

It becomes obvious that with a large number of elements, even insignificant losses on one element become of great importance for the chain as a whole.
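The arithmetic above generalizes directly: the total transfer ratio is k raised to the power N. A small sketch:

```python
# Sketch: overall charge-transfer ratio after N transfer cycles with
# per-cycle transfer efficiency k is simply k ** N.

def total_transfer_ratio(k: float, n_cycles: int) -> float:
    """Fraction of the original charge packet surviving n_cycles transfers."""
    return k ** n_cycles

# The k = 0.99, N = 100 example from the text: barely a third survives.
print(round(total_transfer_ratio(0.99, 100), 3))     # 0.366
# With an (assumed) per-cycle efficiency of 0.99999, losses are tiny.
print(round(total_transfer_ratio(0.99999, 100), 5))  # 0.999
```

This compounding is why even "insignificant" per-element losses dominate for large matrices, and why modern CCDs need per-transfer efficiencies extremely close to 1.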

Therefore, reducing the number of charge transfers in a CCD matrix is especially important. In this respect, a two-phase CCD matrix has a somewhat higher charge transfer coefficient than a three-phase system.

General information about CCDs.

Currently, most imaging systems use CCD (Charge-Coupled Device) matrices as the photosensitive devices.

The principle of operation of a CCD matrix is as follows: on a silicon base, a matrix of photosensitive elements (the accumulation section) is created. Each photosensitive element can accumulate charge in proportion to the number of photons striking it. Thus, over some time (the exposure time), the accumulation section yields a two-dimensional array of charges proportional to the brightness of the original image. The accumulated charges are first transferred to the storage section and then, line by line and pixel by pixel, to the matrix output.

The size of the storage section in relation to the accumulation section is different:

  • per frame (matrices with frame transfer for progressive scan);
  • half-frame (matrices with frame transfer for interlaced scanning);

There are also matrices in which there is no storage section, and then line transfer is carried out directly along the accumulation section. It is obvious that such matrices require an optical shutter to operate.

The quality of modern CCD matrices is such that the charge practically does not change during the transfer.

Despite the apparent variety of television cameras, the CCDs used in them are practically the same, since mass, large-scale CCD production is carried out by only a few companies: SONY, Panasonic, Samsung, Philips, Hitachi, Kodak.

The main parameters of CCD matrices are:

  • dimension in pixels;
  • physical size in inches (2/3, 1/2, 1/3, etc.). In this case, the numbers themselves do not determine the exact size of the sensitive area, but rather determine the class of the device;
  • sensitivity.

Resolution of CCD cameras.

The resolution of CCD cameras is determined mainly by the dimension of the CCD in pixels and by the quality of the lens. The camera electronics can also have some influence: if poorly designed, they can degrade the resolution, though frankly this is rarely the case nowadays.

One important point should be made here. In some cases, high-frequency spatial filters are installed in cameras to improve the apparent resolution. The image of an object from a camera with a smaller sensor dimension may then look even sharper than the image of the same object from an objectively better camera. This is acceptable when the camera is used in visual observation systems, but completely unsuitable for building measuring systems.

CCD resolution and format.

At present, various companies produce CCD matrices covering a very wide range of dimensions, from several hundred to several thousand pixels on a side. A matrix of 10000x10000 has been reported, the announcement stressing not so much the cost of the matrix as the problems of storing, processing, and transferring the resulting images. Matrices with dimensions up to 2000x2000 are now in more or less wide use.

The most widely used CCD matrices certainly include, first of all, those whose resolution is oriented toward the television standard. These matrices come mainly in two formats:

  • 512*576;
  • 768*576.
512 * 576 matrices are usually used in simple and cheap video surveillance systems.

Matrices of 768 * 576 (sometimes a little more, sometimes a little less) give the maximum resolution for a standard television signal. Unlike 512 * 576 matrices, their grid of photosensitive elements is close to square, and they therefore have equal horizontal and vertical resolution.

Manufacturers of TV cameras often state the resolution in TV lines. This means the camera can resolve N/2 dark vertical strokes on a light background, laid out in a square inscribed in the image field, where N is the declared number of television lines. With a standard television test chart, the procedure is as follows: choosing the distance and focusing the chart image, make the top and bottom edges of the chart image on the monitor coincide with the chart's outer contour, marked by the vertices of black and white prisms; then, after final focusing, read the number at the point on the vertical wedge where the vertical strokes first cease to be resolved. This last point is very important, because in the image of the chart's test fields with 600 or more strokes one often sees intermittent stripes, which are in fact moire produced by beating between the spatial frequencies of the chart's strokes and the grid of the CCD's sensitive elements. This effect is especially pronounced in cameras with high-frequency spatial filters (see above)!

Note that, other things being equal (the lens is the main other factor), the resolution of black-and-white cameras is uniquely determined by the dimension of the CCD matrix: a 768 * 576 camera will have a resolution of 576 television lines, although some brochures quote 550 and others 600.

The lens.

The physical size of the CCD cells is the main parameter determining the required lens resolution. Another such parameter may be the requirement to ensure operation of the matrix under light-overload conditions, which will be discussed below.

For the 1/2-inch SONY ICX039 matrix, the pixel size is 8.6 μm * 8.3 μm. Therefore the lens must have a resolution better than:

1 / (8.3 * 10^-3 mm) ≈ 120 lines per mm (60 line pairs per millimeter).
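The same calculation can be sketched for any pixel pitch; the 8.3 μm value is the ICX039 figure quoted above.

```python
# Sketch: required lens resolution from the sensor's pixel pitch.
# One line per pixel; a line *pair* (dark + light) spans two pixels.

def lens_resolution(pixel_pitch_um: float):
    """Return (lines/mm, line pairs/mm) the lens must resolve."""
    lines_per_mm = 1000.0 / pixel_pitch_um   # 1 mm = 1000 um
    return lines_per_mm, lines_per_mm / 2

lines, pairs = lens_resolution(8.3)   # ICX039 pitch from the text
print(round(lines))  # 120 lines per mm
print(round(pairs))  # 60 line pairs per mm
```

Smaller pixels push the required figure up, which is why lenses for 1/3-inch matrices must resolve even more finely than this.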

For lenses designed for 1/3-inch matrices this value must be even higher, although, oddly enough, this affects neither the cost nor a parameter such as aperture, since these lenses are designed to form an image on the matrix's smaller photosensitive field. It also follows that lenses for smaller matrices are unsuitable for larger ones, because their characteristics deteriorate significantly at the edges of a large matrix. Lenses for large sensors, on the other hand, can limit the resolution of images obtained from smaller sensors.

Unfortunately, with all the modern abundance of lenses for television cameras, it is very difficult to obtain information on their resolution.

In general, we do not often deal with lens selection, since almost all of our customers install video systems on existing optics (microscopes, telescopes, etc.), so our information about the lens market is in the nature of notes. We can only say that the resolution of simple, cheap lenses lies in the range of 50-60 line pairs per mm, which is generally insufficient.

On the other hand, we know that special Zeiss lenses with a resolution of 100-120 line pairs per mm cost over $1000.

So, when buying a lens, preliminary testing is essential. It should be said that most Moscow dealers provide lenses for testing. Here it is again appropriate to recall the moire effect, whose presence, as noted above, can be misleading about the resolution of the matrix. Conversely, moire in the image of chart sections with strokes above 600 television lines indicates, for the lens, a certain reserve of resolution, which certainly does no harm.

Another note, important for those interested in geometric measurements. All lenses have greater or lesser distortion (pincushion-type distortion of image geometry), and as a rule, the shorter the focal length, the greater the distortion. In our view, lenses with focal lengths above 8-12 mm have acceptable distortion for 1/3" and 1/2" cameras, although the level of "acceptability" naturally depends on the tasks the camera must solve.

Resolution of image input controllers

The resolution of an image input controller should be understood as the conversion frequency of its analog-to-digital converter (ADC), whose data are then written to the controller's memory. Obviously, there is a reasonable limit to raising the sampling frequency. For devices with a continuous photosensitive layer, such as vidicons, the optimal digitization frequency equals twice the upper frequency of the vidicon's useful signal.

Unlike such light detectors, CCD matrices have a discrete topology, so the optimal sampling frequency for them is defined as the shift frequency of the matrix output register. It is important that the controller's ADC works synchronously with the output register of the CCD matrix. Only in this case can the best conversion quality be achieved both from the point of view of providing "rigid" geometry of the obtained images and from the point of view of minimizing noise from clock pulses and transient processes.

Sensitivity of CCD cameras

Since 1994, we have used SONY board cameras based on the ICX039 CCD matrix in our devices. SONY's description of this device states a sensitivity of 0.25 lux on the object at a lens aperture of 1.4. Several times we have encountered cameras with similar parameters (1/2" size, 752 * 576 resolution) whose declared sensitivity was 10 or even 100 times higher than that of "our" SONY.

We have checked these figures several times. In most cases, we found the very same ICX039 CCD sensor in cameras from different companies; moreover, all the surrounding chips were SONY's as well. And comparative testing showed these cameras to be practically identical. So what is going on?

The whole question is the signal-to-noise ratio (S/N) at which the sensitivity is measured. In our case, SONY conscientiously stated the sensitivity at S/N = 46 dB, while other companies either did not specify it at all or gave figures measured under unclear conditions.

This is, in general, the common scourge of most television camera manufacturers: failing to specify the conditions under which camera parameters were measured.

The fact is that as the required S/N ratio is lowered, the camera's stated sensitivity increases in inverse proportion to the square of the required S/N ratio:

I = K / (S/N)^2

where:
I is the sensitivity;
K is the conversion factor;
S/N is the signal-to-noise ratio in linear units;

therefore, many companies are tempted to quote camera sensitivity at an understated S/N ratio.
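A sketch of this inverse-square relation follows, calibrated (as an assumption for illustration) so that S/N = 46 dB corresponds to the 0.25 lux figure quoted above.

```python
# Sketch: minimum illumination at which a camera reaches a required
# signal-to-noise ratio, scaling as the square of that ratio.
# Calibration to 0.25 lux at 46 dB is an assumption for this example.

def db_to_linear(db: float) -> float:
    """Convert an S/N figure in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

def min_illumination(snr_db: float, ref_lux=0.25, ref_snr_db=46.0) -> float:
    scale = db_to_linear(snr_db) / db_to_linear(ref_snr_db)
    return ref_lux * scale ** 2

print(round(min_illumination(46.0), 2))  # 0.25 lux at the quoted S/N
print(round(min_illumination(26.0), 4))  # 0.0025: a 100x "better" figure
```

Dropping the required S/N by 20 dB (a factor of 10 in linear units) makes the quoted sensitivity look 100 times better, which is exactly the marketing trick described above.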

One can say that a matrix's ability to "see" better or worse is determined by the amount of charge converted from the photons falling on its surface and by the quality of delivery of that charge to the output. The amount of accumulated charge depends on the area of the photosensitive element and the quantum efficiency of the CCD, while the transport quality is determined by many factors that often boil down to one: readout noise. The readout noise of modern matrices is on the order of 10-30 electrons or even less!

CCD element areas vary, but a typical value for 1/2-inch TV-camera matrices is 8.5 μm * 8.5 μm. Increasing the element size means increasing the size of the matrix itself, which raises its cost, not so much through production costs as because such devices are produced in runs several orders of magnitude smaller. In addition, the photosensitive area is affected by the matrix topology: the percentage of the crystal's total surface occupied by the sensitive area (the fill factor). Some special matrices claim a fill factor of 100%.

The quantum efficiency (the average number of electrons added to a sensitive cell's charge per photon incident on its surface) of modern matrices is 0.4-0.6 (for some matrices without anti-blooming it reaches 0.85).

Thus, it can be seen that the sensitivity of CCD cameras, referred to a given S/N value, has come close to the physical limit. By our estimate, typical sensitivities of general-purpose cameras at S/N = 46 dB lie in the range of 0.15-0.25 lux of illumination on the object with an f/1.4 lens.

In this regard, we do not recommend blindly trusting the sensitivity figures given in television camera descriptions, especially when the conditions for determining this parameter are not stated; if you see a sensitivity of 0.01-0.001 lux in television mode in the data sheet of a camera costing under $500, you are looking at, to put it mildly, incorrect information.

On ways to increase the sensitivity of CCD cameras

What if you need to get an image of a very faint object, for example, a distant galaxy?

One solution is to accumulate the image over time. This method can significantly increase the sensitivity of the CCD. Of course, it is applicable only to stationary objects, or to cases where the motion can be compensated for, as is done in astronomy.

Fig. 1. Planetary nebula M57.

Telescope: 60 cm, exposure: 20 s, temperature during exposure: 20°C.
In the center of the nebula is a stellar object of 15th magnitude.
Image obtained by V. Amirkhanyan at SAO RAS.

It can be argued with sufficient accuracy that the sensitivity of CCD cameras is directly proportional to the exposure time.

For example, at a shutter speed of 1 s, compared with the initial 1/50 s, the sensitivity will increase 50-fold, i.e. it will improve from 0.25 to 0.005 lux.
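The proportionality to exposure time can be written as a one-line calculation; the 0.25 lux starting point and 1/50 s field time below are the article's example figures, not measurements.

```python
# Sensitivity improves in direct proportion to exposure time (assuming
# linear charge accumulation). The 0.25 lux at 1/50 s starting point is
# this article's example figure, not a measurement.
base_exposure = 1 / 50   # standard TV field time, seconds
base_lux = 0.25          # minimum illumination at 1/50 s

def lux_at_exposure(t):
    """Minimum illumination after accumulating for t seconds."""
    return base_lux * base_exposure / t

print(round(lux_at_exposure(1.0), 6))   # 0.005: 50x better than at 1/50 s
print(round(lux_at_exposure(20.0), 6))  # 0.00025: a 20 s exposure like Fig. 1
```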

Of course, there are problems along this path, first of all the dark current of the matrix, which contributes charges that accumulate along with the useful signal. The dark current is determined by the crystal manufacturing technology, by the quality of the process, and, to a very large extent, by the operating temperature of the matrix itself.

Usually, to achieve long accumulation times, on the order of minutes or tens of minutes, the matrix is cooled to minus 20-40°C. Cooling matrices to such temperatures is a solved problem, but it cannot be called simple: there are always design and operational issues with fogging of the protective glass and with removing heat from the hot junction of the thermoelectric cooler.

At the same time, technological progress in CCD production has also improved the dark current. The advances here are quite significant, and the dark current of some good modern matrices is very small. In our experience, uncooled cameras allow exposures of tens of seconds at room temperature, and up to several minutes when the dark background is compensated. As an example, here is a photograph of the planetary nebula M57, obtained with the VS-a-tandem-56/2 video system without cooling at an exposure of 20 s.

The second way to increase sensitivity is to use an image intensifier tube (an electro-optical converter), a device that amplifies the luminous flux. Modern image intensifiers can have very large gains; however, without going into details, we can say that an image intensifier can only improve the threshold sensitivity of the camera, and therefore its gain should not be made too high.

Spectral sensitivity of CCD cameras


Fig. 2 Spectral characteristics of various matrices

For some applications, the spectral sensitivity of the CCD is an important factor. Since all CCDs are made of silicon, the spectral sensitivity of a "bare" CCD follows that of silicon (see Fig. 2).

As you can see, for all the variety of characteristics, CCDs reach maximum sensitivity in the red and near-infrared (IR) range and see almost nothing in the blue-violet part of the spectrum. The near-IR sensitivity of CCDs is used in covert surveillance systems with infrared illumination, as well as for measuring the thermal fields of high-temperature objects.


Figure: 3 Typical spectral characteristic of SONY black-and-white matrices.

SONY produces all its black-and-white matrices with the spectral characteristic shown in Fig. 3. As the figure shows, the sensitivity of these CCDs in the near IR is significantly reduced, but the matrices now respond to the blue region of the spectrum.

For various special purposes, matrices that are sensitive in the ultraviolet and even X-ray range are being developed. Usually these devices are unique and their price is quite high.

About progressive and interlaced scanning

The standard television signal was developed for broadcast television and has one big drawback from the point of view of modern image capture and processing systems. Although a TV signal contains 625 lines (of which about 576 carry video information), it is shown as two successive half-frames (fields), one made of the even lines and one of the odd lines. As a result, when a moving scene is captured, analysis cannot use a vertical resolution greater than the number of lines in one field (288). In addition, in modern systems, when the image is displayed on a computer monitor (which uses progressive scan), an image captured from an interlaced TV camera shows an unpleasant visual "comb" effect while the object is moving.

All methods of combating this drawback degrade the vertical resolution. The only way to overcome it and achieve resolution matching that of the CCD matrix is to switch to progressive scan in the CCD. Manufacturers do produce such matrices, but because of the small production runs, the prices of such matrices and cameras are much higher than those of conventional ones. For example, the SONY ICX074 progressive-scan matrix costs 3 times more than the ICX039 (interlaced scan).

Other camera parameters

Among these other parameters is "blooming", i.e. the spreading of charge over the surface of the matrix when individual elements are overexposed. In practice this can happen, for example, when observing objects with glare. It is a rather unpleasant CCD effect, since a few bright points can distort the entire image. Fortunately, many modern matrices contain anti-blooming structures. In the descriptions of some recent SONY matrices we found a figure of 2000, characterizing the permissible light overload of individual cells that does not yet lead to charge spreading. This is a fairly high value, especially since, as our experience has shown, it can only be reached with special adjustment of the drivers that directly control the matrix and of the video preamplifier channel. In addition, the lens also contributes to the "spreading" of bright points, since at such large light overloads even slight scattering outside the main spot noticeably illuminates neighboring elements.

It should also be noted that, according to some reports we have not verified ourselves, matrices with anti-blooming have half the quantum efficiency of matrices without it. Therefore, in systems requiring very high sensitivity, it may make sense to use matrices without anti-blooming (usually in special tasks such as astronomy).
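Blooming and its suppression can be illustrated with a toy one-dimensional model. The full-well capacity and photon counts below are invented for the sketch, and a real anti-blooming drain works quite differently at the device level; the point is only the qualitative behavior: excess charge either spreads to neighbors or is removed.

```python
FULL_WELL = 1000  # invented full-well capacity, in electrons

def expose(photons, anti_blooming=False):
    """Toy 1-D sensor row: each cell collects one electron per photon.
    Charge above the full-well capacity either spills into the two
    neighboring wells (blooming) or is drained away (anti-blooming)."""
    wells = list(photons)
    changed = True
    while changed:                        # settle overflow until stable
        changed = False
        for i, q in enumerate(wells):
            if q > FULL_WELL:
                excess = q - FULL_WELL
                wells[i] = FULL_WELL
                if anti_blooming:
                    continue              # excess is drained, not spread
                for j in (i - 1, i + 1):  # half the excess to each neighbor
                    if 0 <= j < len(wells):
                        wells[j] += excess // 2
                changed = True
    return wells

row = [10, 10, 5000, 10, 10]              # one heavily overexposed cell
print(expose(row))                        # glare spreads over the neighbors
print(expose(row, anti_blooming=True))    # [10, 10, 1000, 10, 10]
```

With anti-blooming the bright point simply saturates; without it the neighboring cells are flooded with charge that has nothing to do with the light that fell on them.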

About color cameras

The material of this section goes somewhat beyond the framework of measuring systems that we have set ourselves; nevertheless, the widespread use of color cameras (even wider than black-and-white ones) forces us to clarify this question, especially since customers often try to use color television cameras with our black-and-white frame grabbers and are very surprised to find streaks in the captured images and insufficient resolution. Let us explain what is going on.

There are two ways to generate a color signal:

  • Using a single-matrix camera.
  • Using a system of three CCDs with a color-separation head to obtain the R, G, B components of the color signal on these matrices.

The second way provides the best quality, and only it is suitable for measuring systems; however, cameras built on this principle are quite expensive (more than $3000).

In most cases, single-array CCD cameras are used. Let's consider how they work.

As is clear from the rather broad spectral characteristic of a CCD matrix, it cannot determine the "color" of a photon hitting its surface. Therefore, to capture a color image, a light filter is placed in front of each element of the CCD matrix, while the total number of matrix elements remains the same. SONY, for example, produces identical CCD matrices in black-and-white and color versions, differing only in the presence of a mosaic of color filters applied directly onto the sensitive areas. There are several filter-mosaic schemes; here is one of them.

It uses four different light filters (see Fig. 4 and Fig. 5).


Fig 4. Distribution of light filters on the elements of the CCD matrix



Fig 5. Spectral sensitivity of CCD elements with different light filters.

The luminance signal is formed from each group of four cells as:

Y = (Cy + G) + (Ye + Mg)

Line A1 receives the "red" color-difference signal:

R-Y = (Mg + Ye) - (G + Cy)

and line A2 receives the "blue" color-difference signal:

-(B-Y) = (G + Ye) - (Mg + Cy)

Hence it is clear that the spatial resolution of a color CCD matrix is usually 1.3-1.5 times worse, both horizontally and vertically, than that of the equivalent black-and-white matrix. Because of the light filters, the sensitivity of a color CCD is also worse than that of a black-and-white one. Thus, with a 1000 × 800 single-matrix receiver, you can really get about 700 × 550 for the luminance signal and 500 × 400 (700 × 400 at best) for the color signal.
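The formulas above can be checked with a small sketch. The filter responses used here (Cy = G + B, Ye = R + G, Mg = R + B) are an idealization of the overlapping curves in Fig. 5, chosen only to show how the complementary mosaic separates luminance from the two color-difference signals.

```python
def cell_responses(r, g, b):
    """Idealized responses of the four complementary filters to an (r, g, b)
    patch: Cy = G + B, Ye = R + G, Mg = R + B. Real curves (Fig. 5) overlap
    far more; these simple sums are an illustration only."""
    return {"Cy": g + b, "Ye": r + g, "Mg": r + b, "G": g}

def decode(c):
    """Luminance and color-difference signals per the article's formulas."""
    y = (c["Cy"] + c["G"]) + (c["Ye"] + c["Mg"])
    r_minus_y = (c["Mg"] + c["Ye"]) - (c["G"] + c["Cy"])
    minus_b_minus_y = (c["G"] + c["Ye"]) - (c["Mg"] + c["Cy"])
    return y, r_minus_y, minus_b_minus_y

print(decode(cell_responses(1, 0, 0)))  # (2, 2, 0): pure red drives only R-Y
print(decode(cell_responses(0, 0, 1)))  # (2, 0, -2): pure blue drives only -(B-Y)
```

Note that each output pixel is computed from four physically distinct cells, which is exactly why neighboring cells with sharply different brightness can produce a "calculated" color that is not present in the scene.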

Leaving technical issues aside, I would like to note that for advertising purposes many manufacturers of electronic cameras report completely baffling figures for their products. For example, Kodak quotes a resolution of 1200 × 1000 for its DC120 electronic camera, which has a matrix of 850 × 984 pixels. But, gentlemen, information does not arise out of thin air, even if the result looks good!

The spatial resolution of a color signal (a signal that carries information about the color of an image) can be said to be at least 2 times worse than the resolution of a black-and-white signal. In addition, the "calculated" color of the output pixel is not the color of the corresponding element of the original image, but only the result of processing the brightness of various elements of the original image. Roughly speaking, due to the sharp difference in the brightness of the neighboring elements of the object, a color can be calculated that is not there at all, while a slight camera shift will lead to a sharp change in the output color. For example: the border of a dark and light gray field will look like it consists of multi-colored squares.

All these considerations concern only the physical principle of obtaining information on color CCD matrices, while it should be borne in mind that usually the video signal at the output of color cameras is presented in one of the standard PAL, NTSC formats, less often S-video.

The PAL and NTSC formats are convenient because they can be displayed directly on standard monitors with a video input, but remember that these standards allocate a much narrower band to the chrominance signal, so it is more correct to speak of a colorized rather than a color image. Another unpleasant feature of cameras whose video signal carries a color component is the appearance of the above-mentioned streaks in images captured with black-and-white frame grabbers. The point is that the chrominance subcarrier sits almost in the middle of the video band and interferes with frame capture. We do not see this interference on a television monitor because the phase of this "interference" reverses after four frames and is averaged out by the eye. Hence the bewilderment of a customer who receives an image with interference he cannot see.

It follows that if you need to make measurements or identify objects by color, the question should be approached with both the above and the other features of your task in mind.

About CMOS sensors

In the world of electronics everything changes very quickly, and although the field of photodetectors is one of the most conservative, new technologies have recently been making their way here too. This refers primarily to the appearance of CMOS image sensors.

Indeed, silicon is photosensitive, and any semiconductor device can be used as a light sensor. Using CMOS technology offers several obvious advantages over traditional technology.

First, CMOS technology is well mastered and allows elements to be produced with a high yield.

Second, CMOS technology allows various supporting circuitry, up to and including an ADC, to be placed on the chip alongside the photosensitive area; previously this circuitry had to be external. This makes it possible to produce single-chip cameras with digital output.

These advantages make it possible to produce much cheaper television cameras. In addition, the circle of firms producing matrices is significantly expanding.

At the moment, production of CMOS sensors and cameras is only just getting established, and information about the parameters of such devices is very scarce. It can only be noted that the parameters of these sensors do not yet surpass what has been achieved with CCDs; as for price, their advantage is undeniable.

As an example, consider the single-chip color camera Photobit PB-159. The camera is made on a single crystal and has the following parameters:

  • resolution: 512 × 384;
  • pixel size: 7.9 µm × 7.9 µm;
  • sensitivity: 1 lux;
  • output: digital, 8-bit SRGB;
  • package: 44-pin PLCC.

Thus, this camera loses a factor of four in sensitivity; in addition, information on other cameras suggests that this technology suffers from a relatively large dark current.

About digital cameras

Recently a new market segment using CCD and CMOS matrices has appeared and is growing rapidly: digital still cameras. Moreover, the quality of these products is rising sharply at the same time as prices fall sharply. Indeed, two years ago a matrix with a resolution of 1024 × 1024 alone cost about $3000-7000, while now cameras with such matrices and a heap of extras (LCD screen, memory, zoom lens, convenient body, etc.) can be bought for under $1000. This can only be explained by the transition to large-scale production of matrices.

Since these cameras are based on CCD and CMOS matrices, all the discussions in this article about sensitivity, about the principles of forming a color signal are valid for them.

Instead of a conclusion

The practical experience we have accumulated allows us to draw the following conclusions:

  • the production technology of CCD matrices in terms of sensitivity and noise is very close to physical limits;
  • cameras of acceptable quality can be found on the television camera market, although adjustments may be required to achieve higher parameters;
  • one should not be deluded by the high sensitivity figures given in the camera brochures;
  • and yet, prices for absolutely the same quality and even for just the same cameras from different sellers can differ more than twice!


I continue the story about the structure of the digital camera begun in a previous publication.

One of the main elements of a digital camera, distinguishing it from film cameras, is its photosensitive element: the image sensor, i.e. the photosensitive matrix of the digital camera. We have already spoken about camera matrices; now let us consider the structure and operating principle of the matrix in more detail, though still rather superficially, so as not to tire the reader too much.

Most digital cameras nowadays are equipped with CCD matrices.

CCD-matrix. Device. Principle of operation.

Let's take a look at how a CCD sensor is constructed.

Semiconductors are known to be divided into n-type and p-type semiconductors. In an n-type semiconductor there is an excess of free electrons, and in a p-type semiconductor there is an excess of positive charges, "holes" (and hence a lack of electrons). All microelectronics is based on the interaction of these two types of semiconductors.

So, a CCD element of a digital camera is arranged as follows (see Fig. 1):

Fig. 1

Without going into details: a CCD element, or charge-coupled device (in English, "charge-coupled device", CCD), is an MIS (metal-insulator-semiconductor) capacitor. It consists of a p-type substrate (a silicon layer), a silicon-dioxide insulator, and electrode plates. When a positive potential is applied to one of the electrodes, a zone depleted of majority carriers (holes) forms beneath it, since the electric field pushes them away from the electrode deep into the substrate. A potential well thus forms under this electrode: an energy zone favorable for minority carriers (electrons) to move into. A negative charge accumulates in this well and can be stored there for a long time, because the absence of holes in the well removes any cause for electron recombination.

In photosensitive matrices the electrodes are films of polycrystalline silicon, transparent in the visible region of the spectrum.

Photons of light incident on the matrix enter the silicon substrate, where each forms an electron-hole pair. The holes, as mentioned above, are displaced into the depth of the substrate, while the electrons accumulate in the potential well.

The accumulated charge is proportional to the number of photons falling on the element, i.e. to the intensity of the light flux. Thus, a charge relief corresponding to the optical image is created on the matrix.

Movement of charges in the CCD matrix.

Each CCD element has several electrodes to which different potentials are applied.

When a potential higher than that on the current electrode is applied to a neighboring electrode (see Fig. 3), a deeper potential well forms under the neighbor, and the charge moves into it from the first well. In this way charge can move from one CCD cell to another. The CCD element shown in Fig. 3 is called three-phase; there are also four-phase elements.
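The three-phase clocking can be illustrated with a toy model. The gate count, the voltages, and the rule "the packet sits under the high gate" are simplifications of the real potential profiles, chosen only to show why three clock steps move a packet exactly one pixel.

```python
NPHASE = 3  # three gates (phases) per pixel in a three-phase CCD

def clock_pattern(step, ngates):
    """Gate voltages at a given clock step: the single 'high' phase
    advances one gate per step, so three steps span one full pixel."""
    return [10 if (i - step) % NPHASE == 0 else 0 for i in range(ngates)]

def run(ngates, nsteps, start):
    """Follow one charge packet: it spills into the neighboring well
    whenever its own gate goes low and the next gate goes high."""
    pos = start  # index of the gate currently holding the packet
    for step in range(1, nsteps + 1):
        v = clock_pattern(step, ngates)
        if v[pos] == 0 and pos + 1 < ngates and v[pos + 1] == 10:
            pos += 1
    return pos

# Starting under gate 0 (high at step 0), three clock steps carry the
# packet exactly one pixel (three gates) to the right.
print(run(ngates=9, nsteps=3, start=0))  # 3
```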

Fig. 4. Operation of a three-phase charge-coupled device: a shift register.

Serial shift registers are used to convert the charges into current pulses (photocurrent) (see Fig. 4). Such a shift register is a row of CCD elements. The amplitude of the current pulses is proportional to the charge transferred and thus to the incident luminous flux. The sequence of current pulses produced by reading out the sequence of charges is then fed to the input of an amplifier.

Rows of closely spaced CCD elements are combined into a CCD matrix. The operation of such a matrix is based on the creation and transfer of local charge in potential wells created by the electric field.

Fig. 5.

The charges of all the CCD elements of the register move synchronously to the adjacent elements. The charge in the last cell leaves the register and is fed to the input of the amplifier.

The serial shift register receives charges from the shift registers arranged perpendicular to it, which are collectively called the parallel shift register. Together the parallel and serial shift registers make up the CCD matrix (see Fig. 4).

Shift registers perpendicular to the serial register are called columns.

The movement of charges in the parallel register is strictly synchronized. All charges of one row are shifted simultaneously to the next. The charges of the last line go into a sequential register. Thus, in one working cycle, the line of charges from the parallel register enters the input of the serial register, freeing up space for newly formed charges.

The operation of the serial and parallel registers is synchronized by a clock generator. The digital camera's matrix assembly also includes a microcircuit that supplies potentials to the transfer electrodes of the registers and controls their operation.
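The joint work of the parallel and serial registers can be sketched as a toy readout simulation. This is pure illustration: a real CCD moves analog charge packets, not numbers, but the order of operations is the same.

```python
def read_out(frame):
    """Toy full-frame CCD readout: each cycle the parallel register shifts
    every row down one line; the bottom row drops into the serial register,
    which then clocks its charge packets one per cycle to the output node."""
    rows, cols = len(frame), len(frame[0])
    parallel = [row[:] for row in frame]   # charge packets in the columns
    output = []                            # what the amplifier sees, in order
    for _ in range(rows):
        serial = parallel.pop()            # a whole row enters the serial register
        for _ in range(cols):
            output.append(serial.pop())    # one packet per clock to the amplifier
    return output

frame = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
# Charges leave one by one, starting from the corner nearest the output node.
print(read_out(frame))  # [9, 8, 7, 6, 5, 4, 3, 2, 1]
```

Note that the image keeps accumulating light while this shifting is in progress, which is exactly the motivation for the mechanical shutter discussed next.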

A sensor of this type is called a full-frame CCD matrix. Its operation requires an opaque cover that first opens the sensor to light for the exposure and then, once enough photons have been received to accumulate sufficient charge in the matrix elements, shuts the light off again. This cover is a mechanical shutter, as in film cameras. Without such a shutter, the cells continue to be illuminated while the charges move through the shift registers, adding to each pixel's charge extra electrons that do not correspond to the luminous flux at that point. This "smears" the charge and distorts the resulting image.
