Machine Vision Camera Technology Center

Choosing the best camera for your project doesn't have to be complicated. If you'd like a professional's input, please contact us.

Machine Vision Cameras
Illustration: Bayer filter

If you will be using the camera for automated inspection, odds are you want a monochrome camera.

Monochrome cameras inherently have better resolution. Since camera pixels cannot sense color, most color cameras use a clever trick — something called a Bayer filter.

A Bayer filter clusters pixels in groups of four in a 2 x 2 matrix. Each pixel has a microscopic bandpass filter in front of it. Two pixels in each cluster will detect only green, one pixel detects only blue, and one pixel detects only red. The result is that, if imaging a pure blue or red object, only 25% of the pixels detect it. When imaging green objects, 50% of the pixels detect it.

To determine how much red (or green, or blue) was received from the object in the space between the red-sensing (or green-sensing, or blue-sensing) pixels, the value is interpolated from nearby pixels. This software process is called “demosaicing” or “debayering.”
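As a minimal sketch of what that demosaicing step looks like in software (assuming OpenCV and a made-up raw frame; the correct COLOR_Bayer* constant depends on the sensor's actual pixel layout):

    import cv2
    import numpy as np

    # Stand-in for an 8-bit raw Bayer frame as delivered by a color camera.
    raw = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

    # Interpolate the missing color values at every pixel (demosaicing / debayering).
    bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)
    print(bgr.shape)  # (480, 640, 3): three channels, but no new spatial detail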

Because of the interpolated values, the end result is far less true resolution than that of a comparable monochrome camera. Furthermore, demosaicing results in color artifacts along color transition edges. These issues make monochrome cameras a better choice for most measurement, OCV, and flaw detection tasks. Note that monochrome cameras can also be used to differentiate between two colors by carefully selecting a monochromatic source of light.

On the other hand, if you need to determine an object’s color from a wider range of possibilities, then a color camera is the way to go. Color cameras are also often used when imaging objects simply for display or archival purposes. One exception is low-light environments, where monochrome cameras, lacking the Bayer filter, are inherently more sensitive.

Assuming you're working on automated inspection, it is usually the smallest feature to be detected, combined with the required field of view, that determines the necessary image resolution. Let’s look at a few examples.

Defect Detection

If your task is to image small defects that appear as specks in the image, then in general the resolution must be such that a defect maps to an area at least three pixels square. If imaging an area 100 mm square, and the defect is 0.2 mm in size, then the necessary image resolution might be 100 / 0.2 x 3, or an image 1500 pixels square.
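Here is that arithmetic as a small sketch (Python, using the three-pixel guideline above; the helper name is just for illustration):

    def required_pixels(fov_mm, feature_mm, pixels_per_feature):
        """Pixels needed along one axis so the smallest feature spans enough pixels."""
        return fov_mm / feature_mm * pixels_per_feature

    # 100 mm field of view, 0.2 mm defect, 3 pixels across the defect.
    print(required_pixels(100, 0.2, 3))  # 1500.0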

1D and 2D Codes

If reading 1D barcodes, many software libraries suggest having the width of the thinnest bars map to at least two pixels. So, if the thinnest bar is 0.5 mm wide, and you need to image an area 1000 x 500 mm, then you’ll need an image resolution of at least 4000 x 2000 pixels. If reading 2D codes, many software libraries suggest having each cell map to at least three pixels.
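The same arithmetic applies to the barcode example above (values taken from the text; a sketch, not a library call):

    min_bar_mm, pixels_per_bar = 0.5, 2
    width_px = 1000 / min_bar_mm * pixels_per_bar   # horizontal field of view: 1000 mm
    height_px = 500 / min_bar_mm * pixels_per_bar   # vertical field of view: 500 mm
    print(f"{width_px:.0f} x {height_px:.0f} pixels")  # 4000 x 2000 pixels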

Reading Characters

If reading or verifying characters, many software libraries recommend having characters be at least 30 to 40 pixels tall. So, given a font size 20 mm tall in an area 1500 mm high x 1000 mm wide, consider an image resolution between 2250 x 1500 and 3000 x 2000 pixels. (That’s 1500 / 20, multiplied by either 30 or 40.)
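And the character-reading example, worked the same way (guideline values from the text):

    char_mm, fov_h_mm, fov_w_mm = 20, 1500, 1000
    for pixels_per_char in (30, 40):
        scale = pixels_per_char / char_mm            # pixels per millimetre
        print(f"{fov_h_mm * scale:.0f} x {fov_w_mm * scale:.0f} pixels")
    # 2250 x 1500 pixels and 3000 x 2000 pixels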

The exact requirements will vary with the software you’re using and the overall image quality. These guidelines also assume use of a lens well-matched to the camera’s sensor and, as always, appropriate lighting.

You can, however, have too much resolution. Too much resolution increases the processing load, possibly slowing the inspection rate. It also increases the storage needed when saving images.

A common mistake is to assume a camera having more resolution is better. A camera having lots of tiny (e.g. 1.5 micron) pixels is not necessarily better than a camera having fewer but larger (e.g. 3.5 micron) pixels. One reason is that it may be impossible to source a lens having sufficient performance to deliver sharp contrast to the tiny pixels, at least at a reasonable cost. (As you’ll read below, larger pixels have their advantages.) The net result may be a greater processing and storage load, combined with a fuzzy image.

See Basler’s white paper “Are More Pixels Better?” for further information.

Pixel size also matters. Larger pixels capture more photons, making them more sensitive to light. In general, larger pixels can also hold a greater electrical charge, improving the resulting image’s dynamic range. And finally, larger pixels demand less performance from a lens, saving you money when buying a lens and/or improving image sharpness.

There are a couple of ways to get larger pixels. You could buy a camera having a larger sensor, such that each pixel is larger for a given resolution. You could buy a camera having less resolution, so that each pixel can be larger for a sensor of a given size. Or you could bin pixels together, typically in a 2 x 2 matrix, effectively making each “bin” act like a single pixel four times the actual pixel size.
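As a rough illustration of what 2 x 2 binning does (a NumPy sketch; real cameras usually bin on the sensor or in the camera, but the arithmetic is the same):

    import numpy as np

    # Hypothetical 12-bit monochrome frame, 1600 x 1200 pixels.
    frame = np.random.randint(0, 4096, (1200, 1600), dtype=np.uint16)

    # Sum each 2 x 2 neighborhood into one "super pixel": a quarter of the
    # resolution, roughly four times the collected signal per output pixel.
    binned = frame.reshape(600, 2, 800, 2).sum(axis=(1, 3))
    print(binned.shape)  # (600, 800)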

A rolling shutter exposes the image sensor one row at a time, from top to bottom. The first row is exposed, then the second row is also exposed, then the third, and so forth. Depending on the exposure time, there may be a short interval when all pixels are exposed at once, before the first row's exposure is complete. Or there may not.

The advantage of a rolling shutter is that the pixels can continue to gather photons during the acquisition process, increasing overall sensitivity. The disadvantage is that not all rows are exposed at the same instant in time. This can result in unexpected effects when imaging objects in motion. You’ve probably seen a movie where a helicopter’s spinning blades appear curved; that’s an artifact of filming with a rolling shutter. Or consider square objects moving down a conveyor, where the objects appear as skewed parallelograms.
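A quick sketch of how much skew to expect, using assumed (purely illustrative) readout and motion figures:

    rows = 1080                 # image rows spanned by the moving object
    line_time_s = 10e-6         # readout interval between successive rows (assumed)
    speed_px_per_s = 2000.0     # object motion across the image (assumed)

    # The last row starts exposing (rows * line_time_s) later than the first, so the
    # object shifts sideways by this many pixels between the top and bottom of the frame.
    skew_px = speed_px_per_s * rows * line_time_s
    print(f"Apparent skew: {skew_px:.1f} pixels")  # about 21.6 pixels of shear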

The effects of a rolling shutter can be minimized by careful coordination with a strobe light. The light would need to be on only while all pixels are exposed. However, such coordination also reduces the benefit of improved sensitivity.

The advantage of a global shutter is that all pixels are exposed at the same instant in time. Global shutter cameras are therefore preferred when imaging objects in motion.

The most expensive part of a camera is the image sensor itself. All else being equal, the price of a camera tends to be proportional to the size of its image sensor. Basically you’re paying for the processing of more silicon. And some cameras use higher-quality image sensors that cost more.

That said, there are other differences between cameras that explain their cost. Precise alignment of the image sensor relative to the lens mount is critical to image quality. Camera firmware (FPGA programs), if not developed carefully, can substantially degrade image quality and create problems for users. And of course some manufacturers spend more on test, marketing, and support activities than others.

Quantum efficiency (“QE”) is the percentage of photons entering a pixel that are converted into an electrical charge. If 70 out of 100 photons that enter a pixel create an electrical charge, then the QE is 70%.

QE is specified at the light wavelength to which a given monochrome camera is most sensitive. Color cameras generally specify the QE for each color band, measured at each band’s maximum efficiency.

QE is especially important when illumination is poor, when parts are moving quickly, or when using high resolution.

Consider two image sensors, one having larger pixels than the other, but both having the same QE. Note that the sensor with the larger pixels will capture more photons. So, if you have a low light installation, you’ll probably want the sensor with the larger pixels, even though they both have the same QE.
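A small sketch of that comparison, with made-up numbers (same QE, different pixel sizes; the collected signal scales with pixel area):

    qe = 0.70        # 70% quantum efficiency for both sensors
    flux = 1000.0    # photons per square micron reaching the sensor (assumed)

    for pixel_um in (1.5, 3.5):
        photons = flux * pixel_um ** 2   # photons collected by one pixel
        electrons = photons * qe         # charge generated in that pixel
        print(f"{pixel_um} um pixel: {electrons:.0f} electrons")
    # The 3.5 um pixel collects about (3.5 / 1.5)^2, or roughly 5.4x, the signal.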

Dynamic range expresses the ability of a camera to capture both light and dark areas of the subject. It is the ratio of the maximum to the minimum meaningful signal. The ability to accurately sense dark areas is limited by what’s called temporal dark noise. The ability to sense bright areas is limited by each pixel’s saturation capacity.

Dynamic range is specified in either decibels (dB) or bits. It is especially critical for outdoor cameras and when imaging reflective materials.

Signal-to-Noise ratio, often abbreviated SNR, is the ratio between useful image data and image noise. A good SNR is critical to many automated inspection tasks, but especially when imaging subtle features. SNR may be specified in either decibels (dB) or bits.
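Both figures are simple ratios, so converting between dB and bits is straightforward. A sketch with made-up sensor numbers:

    import math

    saturation_e = 10000.0   # saturation capacity, electrons (assumed)
    dark_noise_e = 2.5       # temporal dark noise, electrons (assumed)
    dr = saturation_e / dark_noise_e
    print(f"Dynamic range: {20 * math.log10(dr):.1f} dB, {math.log2(dr):.1f} bits")

    signal_e = 5000.0        # useful signal, electrons (assumed)
    noise_e = 75.0           # total noise at that signal level, electrons (assumed)
    snr = signal_e / noise_e
    print(f"SNR: {20 * math.log10(snr):.1f} dB, {math.log2(snr):.1f} bits")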

There was controversy over CMOS image sensors some years ago, but no longer. CMOS is the way to go. The newer CMOS sensors not only cost less, but also outperform all but the very best CCD sensors.

Reflecting this reality, Sony discontinued manufacture of their CCD sensors in 2015. Some cameras can still be purchased with a Sony CCD, built from stockpiled inventory. But these cameras will soon become unavailable from all sources.

See Basler’s white paper “Modern CMOS Cameras as Replacements for CCD Cameras” for a detailed comparison.

Most larger camera manufacturers use test methods compliant with EMVA 1288. This standard specifies how camera performance is tested, and how the resulting data is published.

Before EMVA 1288, there was no accepted standard for testing camera performance. It was therefore difficult to compare one camera to another, especially if from different manufacturers. In many cases, camera manufacturers would simply publish the performance data from the camera sensor manufacturer. This data showed a best-case scenario.

When comparing camera specifications, data collected using EMVA 1288 test methods often looks inferior to data collected using a different (often unspecified) method. That’s comparing apples and oranges: specifications gathered and published per EMVA 1288 cannot fairly be compared to data collected using other methods.

One note about color cameras: Initially, camera manufacturers were only testing monochrome cameras to EMVA 1288. Performance of color cameras was assumed to be roughly comparable, if the Bayer filter could be ignored. More recently, camera manufacturers have also been testing their color models. Some of the older color camera models on our website still show the original test data of the associated monochrome model, in which case we indicate the test method was “EMVA 1288 for Mono.”

You can learn more about EMVA 1288 on the European Machine Vision Association website.

We often get asked which digital interface is better for transferring images to a computer. Several technologies exist because they each have advantages. Note we’re intentionally limiting this comparison to machine vision standards recognized by the Automated Imaging Association.

Ethernet

Gigabit Ethernet enables long cable runs. It is a popular choice, especially if the camera(s) will be located far from the computer. The main disadvantage of Gigabit Ethernet is limited bandwidth. However, bandwidth is improving: a few 5 gigabit and 10 gigabit cameras have been introduced. We have additional information on the Ethernet interface in our GigE Vision Technology Center.

USB

USB 3 features much greater bandwidth than gigabit Ethernet. Setup is also a bit simpler. The main limitation is short cable runs, at least without converting to fiber-optic media. See our USB3 Vision Technology Center for details.
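To see what that bandwidth difference means in practice, here is a rough sketch estimating the frame-rate ceiling of each interface (the usable-throughput figures are approximate assumptions, not vendor specifications):

    # Approximate usable throughput, megabytes per second (assumed, not measured).
    interfaces = {"Gigabit Ethernet": 110, "USB 3": 380}

    # Hypothetical 3-megapixel, 8-bit monochrome camera.
    width, height, bytes_per_pixel = 2048, 1536, 1
    frame_mb = width * height * bytes_per_pixel / 1e6

    for name, mb_per_s in interfaces.items():
        print(f"{name}: about {mb_per_s / frame_mb:.0f} frames per second")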

Camera Link

Camera Link has been the de facto standard for higher-bandwidth installations. The main disadvantages are the cost of cables and interface cards, and limited cable lengths.

CoaXPress

The newest standard is CoaXPress. It combines high bandwidth with inexpensive cables that support long runs. The cost of interface cards is expected to drop. We expect CoaXPress to supplant Camera Link in many installations.

Firewire

What about the IEEE-1394 IIDC "Firewire" interface? Well, it's about dead. Supporting an installed base of Firewire cameras is becoming increasingly difficult. USB 3 is the natural replacement.

Need help? Contact us now. Our experienced, A3 Certified Vision Professionals are ready.