Current Challenges in High Energy Laser Power Measurements

by: Slav Ligai

1/24/2019


1. Fundamentals (may affect test validity)

One of the basic ideas behind laser power measurement is using temperature as an indication of power. When a laser beam hits a solid surface, that surface's temperature rises as high as needed to support the heat flux from the laser. Using a thermocouple or thermopile, we read that surface temperature and compare it with the NIST probe's reading under the same conditions. By adjusting a calibration coefficient, Rv (responsivity), we bring the customer's probe and the NIST probe to the same reading. This is the essence of our calibration process: we transfer the NIST calibration to our probes.
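As a rough illustration, the transfer amounts to scaling the unit under test so that it reports the same power as the NIST-traceable reference under nominally identical conditions. Here is a minimal sketch; the readings, the coefficient value, and the simple ratio method are illustrative assumptions, not our actual procedure:

```python
# Minimal sketch of a calibration transfer: scale the responsivity of the
# unit under test (UUT) so its reading matches the NIST-traceable reference.
# All numbers are made up for illustration.

ref_reading_w = 10.02     # power reported by the NIST-traceable probe, W
uut_raw_reading_w = 9.71  # power reported by the UUT with its old coefficient, W
uut_rv_old = 0.135        # UUT's old responsivity coefficient (arbitrary units)

# Correction factor that brings the UUT reading to the reference reading.
correction = ref_reading_w / uut_raw_reading_w
uut_rv_new = uut_rv_old * correction

print(f"correction factor: {correction:.4f}")
print(f"new Rv: {uut_rv_new:.4f}")
# After this, uut_raw_reading_w * correction equals ref_reading_w -- but only
# under the same conditions in which the comparison was made.
```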

The key words here are "same conditions". Calibration conditions at NIST are unknown to us or differ from our Calibration Lab conditions, and both in turn are unknown or differ from production or field conditions. Molectron, and possibly Coherent now, developed a way to bypass this problem, which can be considered not very reliable; see the Exhibit at the bottom of this document. As a result, we may have introduced systematic errors of unknown magnitude into our power measurements. Some estimates put the low end of the magnitude at 8%, with the high end unknown, possibly in the 50% range.

Even deviating from the NIST probe's shape introduces additional uncertainty into our results. On the other hand, if we remove the housing or tubing from some probes to bring them on par with our NIST probe, we introduce systematic errors due to the different cooling conditions in the field once the housing is put back on. In practical terms, and as an extreme measure, it would be advisable to get rid of both the standard and the reference NIST probes and replace them with a calorimeter/bolometer. At the very least, if we cannot completely remove the NIST probes from our current setup, we can cross-check them against a bolometer and try to apply a correction to our calibration process. Of course, "NIST-traceable calibration" sounds neat, but in our case it is a bit of an overstatement.

2. Addressing cooling conditions. To close the gap between the different and multiple cooling conditions, we may consider several options:

a. In situ calibration, which tackles the problem by calibrating at the customer's site

b. Unifying cooling conditions by using, say, only water-cooled probes. This option brings our case closer to that of low-power thermal sensor probes, where heating and cooling effects are relatively smaller and less critical

c. Getting rid of thermal power meters altogether. Semiconductor sensors may bring another set of issues, though, such as filter calibration.

3. Low-level signal assessment. Since we use thermocouples for temperature measurement, we had better follow established practice in order to minimize the errors. The measured voltages are in the range of millivolts and fractions of a millivolt, so the thermocouple voltage competes with other bimetal junction voltages at the system's metal-to-metal connections, such as the steel/copper terminals at the DAU or elsewhere. The cables are quite long, and the reference temperature is not known and drifts with ambient (thermocouples usually achieve better accuracy with a known reference temperature, such as a melting ice bath). We should make sure none of these metrology considerations are missed, and that all are properly addressed. Thermocouple accuracy is worse than that of RTDs and thermistors: from 0.5 to 5 degrees Celsius. How does that translate into the power sensors' accuracy?
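To get a feel for the translation, here is a back-of-the-envelope sketch. All numbers are assumed for illustration, and it presumes a simple linear temperature-to-power model, which, as argued below, is itself questionable:

```python
# Back-of-the-envelope: translate thermocouple temperature error into power
# error, assuming a linear model P = delta_T / R_th (illustrative values only).

r_th_c_per_w = 2.0    # assumed effective thermal resistance, degC per W
true_power_w = 10.0   # assumed laser power, W
delta_t_true = true_power_w * r_th_c_per_w   # 20 degC rise above ambient

for tc_error_c in (0.5, 2.0, 5.0):  # typical thermocouple accuracy range
    measured_delta_t = delta_t_true + tc_error_c
    inferred_power_w = measured_delta_t / r_th_c_per_w
    err_pct = 100.0 * (inferred_power_w - true_power_w) / true_power_w
    print(f"TC error {tc_error_c} degC -> power error {err_pct:+.1f}%")
# 0.5 degC -> +2.5%, 2 degC -> +10%, 5 degC -> +25% on a 20 degC rise:
# the smaller the temperature rise, the worse a fixed TC error hurts.
```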

4. Beam parameter assessment. Some beam parameters, such as duty cycle, absolute power, repetition rate, and wavelength, may or may not affect our power reading. For instance, vendors state power meter linearity as a small number; however, those numbers need to be verified by us for our applications. We calibrate our probes at 10 W with a stated linearity of 1-2% per Coherent. What happens if we use those probes at 100 W or at 1,000 W? Can we rely on the stated linearity once the cooling conditions change? The cooling conditions could change because intense heat transfer forms a turbulent boundary layer with different, higher, cooling capability.
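To gauge how much the cooling can change when the boundary layer goes turbulent, here is a sketch using the textbook flat-plate average Nusselt number correlations; the Reynolds and Prandtl numbers below are assumed values, not measurements on our probes:

```python
# Sketch: how much the convective heat transfer coefficient changes when a
# flat-plate boundary layer goes from laminar to turbulent, using the
# textbook average-Nusselt correlations. Re and Pr values are assumptions.

pr = 0.71    # Prandtl number of air
re = 5.0e5   # Reynolds number near the laminar/turbulent transition

nu_laminar = 0.664 * re**0.5 * pr**(1.0 / 3.0)    # Nu = 0.664 Re^1/2 Pr^1/3
nu_turbulent = 0.037 * re**0.8 * pr**(1.0 / 3.0)  # Nu = 0.037 Re^4/5 Pr^1/3

# h is proportional to Nu for a fixed fluid and plate length (h = Nu * k / L).
print(f"laminar Nu:   {nu_laminar:.0f}")
print(f"turbulent Nu: {nu_turbulent:.0f}")
print(f"ratio: {nu_turbulent / nu_laminar:.1f}x")
# Near Re ~ 5e5 the turbulent correlation gives roughly 3x the laminar h,
# so the same probe at the same power can stabilize at a very different
# temperature once the boundary layer trips.
```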

5. Probe accuracy assessment. If the software implements a 1% pass/fail criterion, it may seem to imply that the probe accuracy is 1%. It is not our probe's accuracy or uncertainty; it is a parameter that says how closely we transfer the NIST reading to our test probe's reading. In many cases that transfer could be much better than 1%, which does not make the power meters themselves better than 1% accurate.

Exhibit

Timing

As long as the measurement is short (within 2 minutes for our particular probes and 10 W of laser power), the calibration transfer from NIST to our probe is mostly defined by the probe itself: mass, shape, material, internal construction, etc. This 2-minute time frame is when the probe is heating itself up; the temperature rises due to the increase of the internal energy of the probe's material. Once the temperature is high enough to drive heat transfer between the probe and the surrounding air, it stabilizes, and then starts to drop after the air boundary layer forms and provides enough probe cooling, i.e., heat transfer to the air.

Therefore, during the first 2 minutes we may hope to have the closest temperature-rise conditions between two similar-looking probes, conditions that are more or less independent of the surroundings and therefore repeatable in any environment. This is quite a shaky statement, but so far it is what our calibration procedure is based on. If we compare two probes under these 2-minute conditions, we can make them read the same numbers by applying a calibration coefficient to one of them. This is the idea of our probe calibration, and the "2 minutes" rule is implemented in the Work Instruction and in the software itself (it only captures the very first maximum).
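For illustration, a minimal sketch of what "capture the very first maximum" could look like; the trace values, sampling, and threshold are invented, and the production software surely does more:

```python
# Sketch: capture the first local maximum of a sampled temperature trace,
# as a stand-in for the "very first maximum" rule. Trace values are invented.

def first_maximum(samples, min_rise=0.05):
    """Return (index, value) of the running peak once the trace falls more
    than min_rise below it; None if the trace never turns over."""
    peak_i, peak_v = 0, samples[0]
    for i, v in enumerate(samples):
        if v > peak_v:
            peak_i, peak_v = i, v
        elif peak_v - v > min_rise:  # dropped clearly below the running peak
            return peak_i, peak_v
    return None

trace = [22.0, 23.1, 24.9, 26.0, 26.4, 26.3, 26.1, 25.8, 25.9]  # degC
print(first_maximum(trace))  # -> (4, 26.4)
```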

However, 2 minutes are not always 2 minutes. At 5 W of power it can be 5 minutes, and at 20 W or 100 W it can be 20 or 5 seconds. What happens beyond those 2 minutes is what makes the calibration very limited and questionable. We also do not know whether the NIST probes were within the same 2-minute time frame during their calibration, nor do we know for how short or long a time our probes are used in the real world.

Thermodynamics

The simplest relationship between heat energy and temperature is expressed below:

Q = mcΔT

Q = heat energy, in joules

m = mass, in grams

c = specific heat, in J/(g·°C)

ΔT = change in temperature, in °C
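As a quick worked example with assumed numbers (a 100 g aluminum absorber and a 10 W beam over the 2-minute window):

```python
# Worked example of Q = m*c*dT with assumed numbers: a 100 g aluminum
# absorber (c ~ 0.90 J/(g*degC)) hit by a 10 W beam, with no heat losses.

m_g = 100.0     # mass, g (assumed)
c = 0.90        # specific heat of aluminum, J/(g*degC)
power_w = 10.0  # absorbed power, W (assumed)
t_s = 120.0     # the "2 minutes" window, s

q_j = power_w * t_s        # energy deposited: 1200 J
delta_t = q_j / (m_g * c)  # temperature rise if nothing escapes
print(f"Q = {q_j:.0f} J, dT = {delta_t:.1f} degC")
# -> Q = 1200 J, dT = 13.3 degC, valid only while heat losses are negligible.
```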

At first glance the equation looks very simple and easy: energy and temperature are related by a linear equation and can be mapped straightforwardly onto each other. However, despite the perceived simplicity, this equation is not directly applicable to our case, because our case involves continuous energy transfer, i.e., power flow. When energy is continuously going into your system, energy must also be continuously going out of your system; otherwise the situation is permanently out of equilibrium and you cannot make any use of it. So the flux of heat transferred from the laser beam to the air is an essential part of our calibration, especially beyond the 2-minute time frames mentioned above.

Heat Transfer

The Q = mcΔT equation is only good when there is no heat transfer. In other words, only if we put our power sensor/probe inside a thermos may we rely on that formula. To some degree the 2-minute rule is applicable too, because it effectively assumes no heat transfer to the air.

In actuality we have uncontrolled heat transfer, and the flux of heat through the system is unknown to us. As a result, every single component in that formula may change due to the heat transfer that goes on within the power sensor and eventually from the power sensor to the air. To be precise, the incoming flux of heat is always known to us (well, almost, as usual), since we set it on the paddle, minus some reflected power. What is fundamentally unknown to us are the intrinsic parameters of the heat transfer, such as thermal resistance/conductivity, specific heat, heat transfer coefficients, etc., which together define the stabilization temperature for the given heat flux conditions.
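To make the stabilization temperature concrete, here is a lumped-capacitance sketch of the energy balance m·c·dT/dt = P − h·A·(T − T_amb) and its analytic solution; every parameter value below is an illustrative assumption, not a measured probe parameter:

```python
# Lumped-capacitance sketch: m*c*dT/dt = P - h*A*(T - T_amb).
# Analytic solution: T(t) = T_amb + (P/(h*A)) * (1 - exp(-t/tau)),
# with tau = m*c/(h*A). All parameter values are assumptions.
import math

m_g, c = 100.0, 0.90     # mass (g) and specific heat (J/(g*degC))
h, a = 15.0, 0.005       # convective coefficient W/(m^2*degC), area m^2
p_w, t_amb = 10.0, 22.0  # absorbed power (W), ambient temperature (degC)

ha = h * a               # effective conductance to air, W/degC
tau_s = m_g * c / ha     # thermal time constant, s
t_ss = t_amb + p_w / ha  # stabilization temperature, degC

for t in (30, 120, 600, 3600):
    temp = t_amb + (p_w / ha) * (1.0 - math.exp(-t / tau_s))
    print(f"t = {t:>5} s -> T = {temp:6.1f} degC")
print(f"tau = {tau_s:.0f} s, steady-state T = {t_ss:.1f} degC")
# The stabilization temperature depends entirely on h*A, the very parameters
# we do not control -- the whole problem with reading power off a temperature.
```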

Electric Circuit Analogy

Think of temperature as the voltage you need to apply in order to pass a certain current through wires of different resistances: the voltage must be higher for a higher-resistance conductor. Does this resemble a current source from basic electrical engineering? You set the current (heat flux), and the system develops whatever voltage (temperature) it takes to drive that current (heat transfer), depending on the impedance (thermal resistance) of the load. The same holds for heat transfer: the higher the thermal resistance, the higher the temperature needed to support the same flux of heat.
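In symbols, the analogy is ΔT = P × R_th, a thermal Ohm's law. A tiny sketch with assumed resistance values:

```python
# Thermal Ohm's law sketch: delta_T = P * R_th, by analogy with V = I * R.
# Resistance values are assumptions for illustration.

p_w = 10.0                    # heat flux "current", W
for r_th in (0.5, 2.0, 8.0):  # thermal resistance, degC/W
    delta_t = p_w * r_th      # temperature rise "voltage", degC
    print(f"R_th = {r_th} degC/W -> dT = {delta_t} degC at {p_w} W")
# The same 10 W gives a 5, 20, or 80 degC rise depending only on the
# thermal resistance of the path to ambient.
```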

Due to heat transfer, air cooling, and multiple heat-transfer stages with different characteristic curves, all of those coefficients and variables, including temperature, can change in uncontrollable and complicated ways. For instance, if you treat energy as the controlled and measured parameter and one of the other parameters changes, say the effective specific heat due to some additional "air heat capacity", then the temperature must also change in order to keep the equation in balance. Note that this temperature change happens without any change in energy or power. So when you see a drop in power, it may simply mean that your power probe has received an extra cooling kick: say, its air boundary layer transformed from laminar to turbulent and increased the heat transfer rate. That would be a false laser power drop.
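A sketch of such a false drop, with assumed numbers; it presumes the meter infers power linearly from the temperature rise:

```python
# Sketch of a false power drop: the true power is constant, but the effective
# conductance to air jumps (laminar -> turbulent boundary layer), the surface
# temperature falls, and a meter that maps temperature rise to power reports
# a drop. All numbers are assumptions.

true_power_w = 100.0
ha_calibrated = 0.50  # conductance at calibration time, W/degC
ha_turbulent = 0.65   # conductance after the boundary layer trips, +30%

dt_calibrated = true_power_w / ha_calibrated  # rise the meter was trained on
dt_actual = true_power_w / ha_turbulent       # rise after cooling improved

apparent_power_w = true_power_w * dt_actual / dt_calibrated
print(f"apparent power: {apparent_power_w:.1f} W")  # ~76.9 W
# A ~23% "power drop" with no change in the laser at all.
```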

Extreme Cases

As a result, we can get any imaginable temperature reading if we do not pay attention to the test conditions. Just check the limits of two opposite cases: imagine that our probe is cooled by liquid nitrogen, or that the incident beam spot is infinitely small, has zero mass, and sits in a vacuum with no air cooling. In the first scenario the measured temperature/power will always be zero, no matter how hard the laser is firing. In the second scenario the temperature will climb to, say, 1,000 degrees, at which point the spot glows bright red and starts shedding heat by radiation. If you prevent that radiative cooling by placing a spherical mirror around the incident beam spot, the temperature will keep rising, to, say, 1,000,000 degrees, until the mirror blows, and so on. Bottom line: temperature is not a very good parameter for measuring power unless you know what you are doing. So we should consider ourselves lucky if we get the same temperature/power readings at the same laser power under different cooling conditions. This is especially and obviously the case when we calibrate probes whose shapes and masses differ from the NIST probe's.

Cooling Considerations

Therefore, when we measure temperature we should also include the cooling, i.e., heat transfer, conditions in our consideration. This is not trivial, given the variety of, and uncertainty about, customers' locations and actual measurement conditions. However, we can estimate the magnitude of the problem by performing some on-site measurements. If the deviations are within what manufacturing agrees upon (my guess is 5-10%), then we can probably just add this type of uncertainty to our final uncertainty budget, which in turn should not exceed a certain amount, say 15-20%. Personally, I would prefer a total budget within 5%, which would leave us with about 2-3% for the heat transfer allowance. That is probably tough to meet, but we can at least try.
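A sketch of how such a budget could be combined, using a root-sum-square of independent components; every number below is an assumed placeholder, not a measured uncertainty:

```python
# Sketch of a root-sum-square (RSS) uncertainty budget. Every component
# value is an assumed placeholder, not a measured uncertainty.
import math

components_pct = {
    "NIST transfer": 1.0,
    "thermocouple/readout": 2.0,
    "linearity vs power": 1.5,
    "heat transfer / cooling": 3.0,  # the allowance discussed above
}

total_pct = math.sqrt(sum(u**2 for u in components_pct.values()))
for name, u in components_pct.items():
    print(f"{name:<25} {u:>4.1f}%")
print(f"{'RSS total':<25} {total_pct:>4.1f}%")
# -> about 4.0%. With a 5-10% cooling term instead, the total blows past
#    the 5% target no matter how good everything else is.
```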
