Wednesday, September 21, 2011

Estimating laser pit depth using the "Beam_Seconds" channel

We've had a few questions about how Iolite estimates down-hole fractionation (in the U-Pb DRS), and several people have also had problems reducing their data because the algorithms didn't work as expected.
So, I figured it might be useful to describe a bit about how the hole depth is estimated, and, more practically, to explain how you can tinker with the settings if you find that the defaults aren't working for you...


So, first of all, to correct for down-hole fractionation we need some estimate of how far down the hole we are, and for this we use the time in seconds since the laser shutter opened as a proxy for hole depth. We store this time in a wave we call "Beam_Seconds", and in the U-Pb DRS it's added to the list of intermediate channels, so it's easy to view it and check that it has been calculated correctly. Here's an example from the U-Pb DRS of how multiple analyses are compared using this approach.



To do the heavy lifting, we have an Iolite function called "DRS_MakeBeamSecondsWave()", which has a few different options built into it (and is of course available to use in any other DRS if desired).


The Beam_Seconds wave is actually pretty simple – it's reset to zero every time the laser shutter opens, then it increases at 1 second per second until the next shutter open event. In a normal session, it ends up looking like a sawtooth with pretty regularly sized teeth, as the duration between each laser-on event tends to be pretty constant.
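For concreteness, here's a minimal Python/numpy sketch of how a sawtooth like this can be built once the laser-on times are known. This is purely an illustration of the idea, not Iolite's actual Igor code, and the function name and arguments are made up:

    import numpy as np

    def make_beam_seconds(time_s, laser_on_times):
        """Seconds elapsed since the most recent laser-on event (a sawtooth)."""
        time_s = np.asarray(time_s, dtype=float)
        ons = np.sort(np.asarray(laser_on_times, dtype=float))
        # Index of the most recent laser-on event at or before each time stamp.
        idx = np.searchsorted(ons, time_s, side="right") - 1
        # Points before the first event map onto the first event, giving negative
        # values, which is harmless because those points are never used (see the
        # next paragraph).
        return time_s - ons[np.clip(idx, 0, None)]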
 
You may be wondering whether we have to figure out when the laser shutter closes again, and the short answer is "no". This is because we only care about the hole depth while the laser is firing, so the Beam_Seconds value at other times (e.g. during baseline measurement) is never used for anything (likewise, values will actually be negative prior to the first laser-on event, but we don't really mind that either, as those values will never be used for anything...).

There is one tricky bit about the Beam_Seconds wave, and that's figuring out when the laser shutter opens just by looking at the data. There are a few different options built into the function for this: "Rate of Change", "Cutoff Threshold", and "Gaps in data".

Rate of Change is the fancy one – it looks at one of your waves (the default is the baseline-subtracted Index Channel) and smooths it in a few different ways to get rid of normal scatter in the data (i.e., noise), spikes, etc. It then locates the regions in the smoothed result where the derivative (i.e. the change in intensity) is above a certain threshold (the actual value is just an arbitrary number), and takes the peak of each such region as a laser-on event (i.e., resets Beam_Seconds to zero there). One bonus of using the rate of change is that it is quite insensitive to differences in signal intensity between spots, so you should get a reproducible result despite big differences in U concentration, for example.
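Here's a rough Python sketch of that idea. The smoothing choices, window sizes and names below are my assumptions for the sake of illustration; they are not the actual algorithm inside DRS_MakeBeamSecondsWave():

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter1d

    def laser_on_rate_of_change(signal, time_s, deriv_threshold):
        """Estimate laser-on times from how quickly a channel's intensity rises."""
        signal = np.asarray(signal, dtype=float)
        time_s = np.asarray(time_s, dtype=float)
        smoothed = median_filter(signal, size=5)        # knock out spikes
        smoothed = uniform_filter1d(smoothed, size=9)   # smooth normal noise
        deriv = np.gradient(smoothed, time_s)           # change in intensity
        above = deriv > deriv_threshold                 # an arbitrary cutoff
        # Find each contiguous region where the derivative exceeds the threshold
        # and take its steepest point as the laser-on time.
        padded = np.concatenate(([False], above, [False]))
        starts = np.flatnonzero(np.diff(padded.astype(int)) == 1)
        ends = np.flatnonzero(np.diff(padded.astype(int)) == -1)
        return np.array([time_s[s + np.argmax(deriv[s:e])]
                         for s, e in zip(starts, ends)])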

Cutoff Threshold is much simpler – it treats every point where the intensity of your channel rises above the selected threshold (in either CPS or volts, depending on your detector) as a shutter-opening event. In many ways this approach is more robust, but it has some downsides: firstly, depending on the threshold you select, you may see systematic differences in the laser-on time with signal intensity (i.e., a spot with a much higher intensity will reach the threshold relatively sooner than one with a lower intensity). Secondly, you may be susceptible to spikes in your data, although in reality this is a pretty minor risk.
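Again, a hedged Python sketch of the idea (the name and details are illustrative only, not Iolite's implementation):

    import numpy as np

    def laser_on_cutoff_threshold(signal, time_s, threshold):
        """Every upward crossing of the intensity threshold counts as an event."""
        signal = np.asarray(signal, dtype=float)
        above = signal > threshold
        # An event is a point above the threshold whose previous point was below it.
        crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
        return np.asarray(time_s, dtype=float)[crossings]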

Gaps in data is the third option. It treats gaps (of at least x seconds) between analyses as laser-on events, and is useful if you have a high degree of reproducibility in how your analyses are collected. Note that by using the sensitivity factor to set the minimum duration of the gap, you can filter spikes or dropouts from your data without affecting the result.
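And a sketch of this last option, with the same caveats as above (illustrative only):

    import numpy as np

    def laser_on_gaps(time_s, min_gap_s):
        """The first point after each gap of at least min_gap_s is a laser-on event."""
        time_s = np.asarray(time_s, dtype=float)
        after_gap = np.flatnonzero(np.diff(time_s) > min_gap_s) + 1
        # Whether the very first point of the session also counts as a laser-on
        # event is an assumption made for this sketch.
        return time_s[np.concatenate(([0], after_gap))]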

So, now onto the practical side of things – what it looks like if it breaks, and how to fix it.

Below is a perfect example of a broken Beam_Seconds wave: it has worked fine for the higher intensity analyses, but has failed to detect the laser-on events of the low uranium zircons.



If this happens to you it can cause a few different symptoms – if there is a problem for analyses of your reference standard there will more than likely be errors in the curve-fitting windows, and even if you have the good fortune to make it to the fit window, the fit will look pretty horrible, e.g.:



Otherwise, your reference standard may behave itself while some or all of your unknowns do not. Such cases are more treacherous, as they can cause an offset in the ages you produce, so it's important to rigorously check each stage of your data processing to make sure that things are working properly. Here, all that's required is a quick look at the Beam_Seconds wave after baseline subtraction to confirm that all of your laser-on events have been correctly identified.
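Beyond eyeballing the sawtooth, a trivial helper along these lines (purely illustrative, not part of Iolite) can flag a mismatch between the number of detected laser-on events and the number of analyses you know you ran:

    import numpy as np

    def check_laser_on_events(on_times, n_expected):
        """Print a quick summary of detected laser-on events (sketch only)."""
        on_times = np.sort(np.asarray(on_times, dtype=float))
        print(f"Detected {len(on_times)} laser-on events (expected {n_expected})")
        if len(on_times) > 1:
            spacing = np.diff(on_times)
            # In a normal session the teeth of the sawtooth are regularly spaced,
            # so a big spread here usually means missed or spurious events.
            print(f"Spacing between events: {spacing.min():.1f} to {spacing.max():.1f} s")
        return len(on_times) == n_expected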

So, if you've got a problem with your Beam_Seconds result you'll obviously want to fix it, and there are a few easy ways to do that:
– If you've collected data for a more abundant element (e.g. Hf in zircons), you may benefit from switching your index channel over to that.
– Try tinkering with the BeamSeconds Sensitivity variable. Often this will be enough to fix your problem.
– Switch to the "Cutoff Threshold" method, and set your BeamSeconds Sensitivity to an appropriate threshold (in CPS or V, depending on your data). Keep in mind that you want it to be triggered at the same stage of each analysis, so try to select a value that corresponds to the stage of very rapid increase for all of your analyses.
– If you're using Cutoff Threshold, keep in mind that if another channel has fewer spikes, it may be a better choice, even if its signal intensities are lower.


And finally, as always, if you run into trouble then contact us on the forum and we'll try to help.


2 comments:

  1. Chad,

    this is a really useful explanation. I've sometimes found weird "patterns" similar to your last figure here, in which the down-hole calculation for standard zircons has some analyses displaced. In some cases, out of 10-12 standard runs, I could have three or four off the average. I had misread the X axis and thought the problem was something related to the analysis timing (maybe related to the csv importing). Now I understand and can re-run some of those analytical sessions and correct the issue.

    Luigi Solari

  2. Hi Luigi, it's great to hear you found this explanation useful - I hadn't even realised until recently that people were having trouble with these algorithms...

    Cheers, Chad
