Not Only Pixel Density: Factors of Camera Effectiveness in CCTV Systems (CCTV Design with VideoCAD)

Spatial resolution, also known as pixel density, is widely used as a criterion of how much detail a camera captures on objects at different distances. Spatial resolution indicates how many camera pixels cover a unit of length in space; it is measured in pixels per meter or pixels per foot. The spatial resolution at a given distance from the camera is obtained by dividing the horizontal (or vertical) pixel count of the camera by the width (or height) of the camera's field of view at that distance. To measure the spatial resolution at the location of an object on an image whose resolution equals the camera resolution, divide the number of pixels covering the object vertically (or horizontally) by the height (or width) of the object, respectively.

The spatial resolution and its derivatives are used in numerous recommendations that divide the camera view area into zones of identification, recognition, detection and so on. For visualization, these zones are painted in different colors on the horizontal projection. When an object enters the appropriate zone, we conclude that detection, recognition or identification of the object is probable. However, despite the simplicity and convenience of this technique, it is useful to know the limits of its applicability and the other factors that significantly affect a camera's ability to solve the tasks facing it.
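The definition above reduces to a one-line formula. A minimal sketch in Python, assuming a simple pinhole geometry; the camera parameters below are example values, not taken from the article:

```python
import math

def pixel_density(h_pixels, hfov_deg, distance_m):
    """Pixels per meter at `distance_m` for a camera with `h_pixels`
    horizontal pixels and a horizontal field of view of `hfov_deg` degrees."""
    # Width of the field of view at the given distance (pinhole model)
    fov_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return h_pixels / fov_width_m

# e.g. a 1920-pixel-wide camera with an assumed 60-degree HFOV at 10 m
# gives roughly 166 pixels per meter
density = pixel_density(1920, 60.0, 10.0)
```

The same function shows why zone boundaries move when the lens or resolution changes: pixel density is inversely proportional to distance, so halving the field-of-view angle roughly doubles the density at every range.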
First of all, let's separate live surveillance from analysis of records. Obviously, the spatial resolution calculated from the camera's pixel count applies only when we work with an image at its full resolution, equal to the resolution of the camera. This condition is usually satisfied when analyzing records, but not during live surveillance. In live surveillance, the number of pixels the camera image occupies on the monitor, the number of images on the monitor, the number of monitors per operator and the distance from the operator to the monitor are all of great importance. For live surveillance, the criterion "part of the monitor or frame occupied by the object" may matter more than the pixel density calculated from the camera resolution.

For detection of anything, scene complexity is of great importance. Obviously, detecting a human is easier on an empty scene; detection on a complex scene is much harder, and the task becomes several times harder again when there is motion on the scene.

Now pay attention to the distribution of spatial resolution in space. Spatial resolution is typically displayed on the horizontal projection, but it depends not only on the horizontal distance from the camera but also on the height above the ground, and the height is not shown on the horizontal projection. The red line indicates the height at which the spatial resolution is measured. The picture shows the human head displayed at a spatial resolution of about 240 pixels per meter, but the feet at about 110 pixels per meter, although the head and feet are at the same point on the horizontal projection.
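The head/feet difference can be illustrated with simple geometry. In this rough sketch, linear pixel density at a point is assumed to fall off with the slant (straight-line) distance from the camera to that point; real cameras add projection and distortion effects, and all numeric values here are assumptions, not figures from the article:

```python
import math

def density_at_point(h_pixels, hfov_deg, cam_height_m,
                     horiz_dist_m, point_height_m):
    """Approximate pixels per meter at a 3D point, for a camera mounted
    at `cam_height_m`, looking at a point `horiz_dist_m` away horizontally
    and `point_height_m` above the ground."""
    slant = math.hypot(horiz_dist_m, cam_height_m - point_height_m)
    # Angular pixel density (pixels per radian), converted to linear
    # density at the slant distance
    px_per_rad = h_pixels / math.radians(hfov_deg)
    return px_per_rad / slant

# Camera assumed at 8 m height, target 5 m away horizontally:
head = density_at_point(1920, 60.0, 8.0, 5.0, 1.7)  # head at 1.7 m
feet = density_at_point(1920, 60.0, 8.0, 5.0, 0.0)  # feet at ground level
```

Because the head is closer to an elevated camera than the feet are, `head` comes out noticeably larger than `feet` even though both points share the same spot on the horizontal projection.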
The dependence on height is more pronounced when the camera is steeply tilted. The distribution of spatial resolution of real cameras can take a more complex form because of lens distortion, which is more pronounced with wide-angle lenses. Under barrel distortion, the spatial resolution decreases from the center to the edges of the field of view. For complete visualization of the distribution, a three-dimensional representation is required.

The number of pixels involved in displaying an object also depends on the angle at which the object appears in the frame. This angle is usually regulated in license plate and face recognition systems. The man's face is in the same crimson 240-pixels-per-meter region, but the number of pixels and the possibility of recognition differ greatly depending on the vertical angle. The horizontal angle has an influence too.

Now consider the factors that limit camera capability.

Scene illumination. The overall level of illumination is important, and the effect of insufficient illumination depends on the camera sensitivity. The direction of lighting also matters, and the uniformity of scene illumination has a great influence. Let's model the situation: we place an illuminator to create a large illumination difference. As we increase the power of the illuminator, the illumination it creates grows, but the possibility of identifying the shaded object decreases. The influence of illumination unevenness depends on the dynamic range of the camera. Illumination is closely associated with contrast and noise, which we will now discuss in more detail.

Contrast. The contrast of an image is the range of brightness of the entire image, of a separate area of the image, or of an object on the image, or the brightness difference between an object and the background.
On a black-and-white image only luminance contrast matters; on a color image, color contrast matters as well. For detection, the contrast between object and background is important; for identification, the contrast of details of the object itself matters most. With insufficient illumination, the contrast of the entire image decreases sharply; with uneven illumination of the scene, the contrast is also distributed unevenly.

We place an illuminator on the scene to create uneven illumination. The areas closest to the illuminator, lit the most, have too high brightness and clipped contrast; the least-lit areas have low brightness and insufficient contrast; models with intermediate illumination have optimum contrast. If we reduce the camera exposure to 1/200 of a second, the brightest areas are no longer clipped, but the low-light areas become even less contrasty. If we increase the maximum exposure to half a second, the dark areas gain contrast, but the bright areas become more clipped. If we turn off the illuminator, the contrast of all faces evens out, since the background illumination is uniform and sufficient. But if we go back to an exposure of 1/50 of a second, the contrast decreases, because the illumination is not enough for that exposure.

Low-contrast regions are the first to suffer from compression, noise and noise reduction. The contrast of a target affects the probability of its detection or identification no less than the spatial resolution does. Unlike spatial resolution, contrast is less predictable, but we can control it by means of lighting and camera settings.

Noise. Under low-light conditions on the scene, contrast on the image decreases and noise increases. Noise hinders detection or identification of a target once it becomes comparable to the contrast of the target. A little noise does not greatly interfere with detection of high-contrast targets, but it hides low-contrast ones.
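The rule of thumb above, that noise matters once it is comparable to target contrast, can be sketched numerically. The threshold factor and the brightness values below are assumptions for illustration, not values from the article:

```python
def target_visible(object_luma, background_luma, noise_sigma, k=3.0):
    """Crude visibility check: the object/background brightness difference
    must exceed the noise standard deviation by a factor k (an assumed
    signal-to-noise criterion, here k = 3)."""
    contrast = abs(object_luma - background_luma)
    return contrast > k * noise_sigma

# With the same noise level (sigma = 10 gray levels), a high-contrast
# target passes the check while a low-contrast one fails:
high_contrast_seen = target_visible(180, 60, 10)   # contrast 120
low_contrast_seen = target_visible(80, 70, 10)     # contrast 10
```

This is of course a simplification: real detectability also depends on target size, texture and the observer, but it captures why the same noise level is harmless by day and destructive in a dim, low-contrast scene.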
With modern cameras, noise is visible only when the noise reduction system is switched off and the gain is increased; in normal mode, noise is effectively suppressed by the noise reduction system. But remember that 3D noise reduction improves the image quality of motionless objects only: images of moving objects do not gain quality and become blurred. These distortions are less noticeable than noise, but moving objects are usually of the greatest interest.

Lens resolution. The actual resolution of megapixel cameras is often limited by the lens. We reduce the resolution of the lens model, in line pairs per millimeter, and the image becomes blurred. When the lens resolution is insufficient, the camera does not realize the resolution provided by its pixels.

Loss of resolution during analog transmission. Horizontal resolution is lost during transmission of an analog signal over a long cable, and as a result of conversion into an analog signal and back into digital. These losses occur with analog cameras of previous generations as well as with modern high-definition analog cameras. During analog transmission only the horizontal resolution degrades; vertical resolution is determined by the number of lines and does not degrade.

Compression. All video data is subjected to compression. The effect of compression depends not only on its level but also on the presence of other distortions: the contrast of the image, noise and blur. Compression has little effect on the high-contrast lines of a test chart, which can create the illusion that compression does not degrade resolution; in fact, compression greatly distorts low-contrast areas of the frame. One and the same compression level may be acceptable in the daytime yet significantly reduce image quality at night.

Distortions due to movement of the target or the camera itself. Let's model the situation: set the camera's exposure time and the vehicle speed.
The speed and direction of movement are specified as a vector. The greater the speed, the greater the blur. The effect of movement depends on the exposure time, which is usually set automatically by the camera depending on the illumination and camera sensitivity. Moving objects are also distorted by interframe compression and 3D noise reduction.

Depth of field. Let's model the situation. We increase the focal length: depth of field is smaller with long-focus lenses. Set up the camera position, copy the 3D model, enable modeling of depth of field, open the Depth of field box, change the focus distance and watch the 3D image. The image shows that the pixel density in fact limits resolution only at the focus distance; at other distances the resolution may be worse than it could have been. The smaller the physical pixel size, the larger the aperture and the longer the focal length, the more the resolution degrades as we move away from the focus distance.

Limited visibility. In bad weather, visibility deteriorates with increasing distance from the camera because of snow, rain, fog and dust. The limitation of visibility also depends on the position of the light source and the direction of illumination.

Let's summarize. We have briefly reviewed important factors that are often overlooked in design. First, the factors limiting the applicability of pixel density: the need for full-resolution images and the influence of scene complexity. We then examined the principles of pixel density distribution in space: the dependence on height, the influence of lens distortion, and the dependence on the angle at which the target appears in the frame. Finally, the factors limiting camera capability: scene illumination, contrast, noise, lens resolution, loss of resolution during analog transmission, compression, distortions due to movement of the target or the camera itself, depth of field, and limited visibility. Each of these factors is more pronounced in certain circumstances.
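Two of the factors above lend themselves to quick numeric estimates: motion blur and the lens resolution needed to match the sensor. A minimal sketch, assuming a simple linear-blur model and the Nyquist sampling criterion; all example values are assumptions, not figures from the article:

```python
def motion_blur_px(speed_m_s, exposure_s, density_px_per_m):
    """Blur length in pixels: distance the target travels during the
    exposure, multiplied by the pixel density at the target's location."""
    return speed_m_s * exposure_s * density_px_per_m

def required_lens_lp_mm(pixel_pitch_um):
    """Lens resolution (line pairs per mm) needed to match the sensor:
    one line pair spans two pixels (Nyquist criterion)."""
    return 1000.0 / (2.0 * pixel_pitch_um)

# A vehicle at 60 km/h (~16.7 m/s), 1/100 s exposure, 100 px/m density
# smears across roughly 17 pixels:
blur = motion_blur_px(60 / 3.6, 1 / 100, 100)

# A sensor with assumed 2.9 um pixels needs a lens resolving ~172 lp/mm:
lp = required_lens_lp_mm(2.9)
```

Estimates like these show why the factors interact: a slower exposure chosen to fight noise at night directly increases the blur length, and a lens adequate for a coarse-pixel sensor may bottleneck a megapixel one.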
To study and simulate these factors, there are special tools in VideoCAD Professional. But even with special tools, taking all these factors into account is difficult and time-consuming; it is justified in sufficiently serious projects and with a highly professional approach to design. In normal cases, spatial resolution remains the basic criterion of camera capability in space, mainly because of its simplicity and ease of application. However, where significant influence of the other factors is expected, it is necessary to amend the calculation results accordingly.

Thank you for your attention.
