LEDs are replacing more and more traditional light sources as efficient, durable, cost-effective alternatives. But should we measure them the same way?
In a way, this is a lot like the introduction of automobiles, replacing the traditional alternative: horse-drawn carriages.
When people switched to driving cars, they quickly realized that it makes no sense to measure them by the same qualities used to assess horses. No one said: “Well, that Model-T looks impressive, but how often must it rest? And do you think I need to groom it daily?”
The same is true for LEDs, at least to a certain extent. LEDs serve a similar function to earlier light sources, but they are not exactly the same. The difficulty with measuring traditional light sources is that they have a broad spectrum, which is not always known to the user.
So there are two choices:
- Use a photodiode detector that has a broad, uneven response over a certain part of the spectrum (UVA, UVB, visible, etc.) and measure relative power.
- Invest in a spectroradiometer to measure the precise optical power (and spectral power) of the source.
Because of the high price of spectroradiometers, many users prefer photodiodes.
However, LEDs are not traditional light sources.
LEDs have the distinct advantage of a narrow, known wavelength spectrum. This means that the high uncertainty from measuring with an uncalibrated photodiode is easily avoided.
Use a calibrated photodiode!
If we know the response of the photodiode (which we do) and we know the wavelength of the LED (which we do, maybe to ±3 nm), why not calibrate the photodiode to measure the actual power at a given wavelength instead of just the relative power?
This is the solution of choice for lasers, where the source is even narrower and even better defined, but it works quite well for LEDs too.
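As a minimal sketch of the idea: if the photodiode's responsivity curve is known from calibration, the measured photocurrent can be converted to absolute optical power at the LED's peak wavelength. The calibration numbers and the helper function below are hypothetical, for illustration only; real values come from the sensor's calibration certificate.

```python
import numpy as np

# Hypothetical calibration data for a silicon photodiode:
# wavelength (nm) vs. responsivity (A/W). Illustrative values only.
cal_wavelengths = np.array([400.0, 450.0, 500.0, 550.0, 600.0, 650.0, 700.0])
cal_responsivity = np.array([0.18, 0.24, 0.30, 0.36, 0.41, 0.46, 0.50])

def led_power_watts(photocurrent_a, led_wavelength_nm):
    """Convert measured photocurrent to optical power using the
    calibrated responsivity interpolated at the LED's wavelength."""
    r = np.interp(led_wavelength_nm, cal_wavelengths, cal_responsivity)
    return photocurrent_a / r

# Example: 10 uA of photocurrent from a 525 nm green LED.
power = led_power_watts(10e-6, 525.0)

# The +/-3 nm wavelength uncertainty propagates into a small
# power uncertainty, since the responsivity curve is smooth.
p_minus = led_power_watts(10e-6, 522.0)
p_plus = led_power_watts(10e-6, 528.0)
print(f"power ~= {power * 1e6:.1f} uW "
      f"(range {p_plus * 1e6:.2f} to {p_minus * 1e6:.2f} uW)")
```

Note that a few nanometers of wavelength uncertainty changes the result only slightly, which is why the method tolerates the typical spread in LED peak wavelengths.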
How accurate is this method? Good question. Dr. Efi Rotem here at Ophir did a nice analysis of the accuracy of different sensors for LED measurement.
Find out more in his white paper.
Flickr creative commons image via Windell Oskay