Tuesday, November 12, 2013
Here is a quick demo of the new laser log file importer:
Saturday, May 4, 2013
Determining the start time of spot analyses for downhole fractionation correction in the UPb DRS
As with the last post, this is a follow-up to a question posted on the Iolite forum, this time by Jiri Slama (the forum thread can be found here). To summarise, he has previously used a raster pattern when measuring U-Pb ages in zircons, and wanted to know whether there is a best practice when using spot analyses. In particular, he asked how the start time of each analysis (i.e., when the laser shutter opens and ablation commences) is determined, and whether it is necessary to strictly maintain the same timing for baselines and analyses within a session in order for this "time=0" to be consistent between each spot analysis.
First of all, I think this is a great question, as the correct determination of time=0 is critical to properly treating the downhole fractionation in each analysis, regardless of the method used. If a spot analysis is corrected based on a start time that is inaccurate, this will introduce a bias in the calculated ages that may well be significant.
The way we do this in Iolite is quite different from many other data reduction strategies, so I think it's worth clarifying those differences first. The most common approaches are a linear fit or a regression of the data back to the y-intercept (both assume that downhole fractionation is linear). In both cases a line is fitted through the data (either through each spot analysis individually, or by assuming that all analyses are identical and thus share the same slope). This is then used either to subtract the effects of downhole fractionation and produce a "flattened" analysis, or to infer what the ratio would have been at the moment the laser shutter opened (because there was no hole at that point, downhole fractionation is assumed to have been zero). The former produces corrected ratios for each timeslice of data, whereas the latter yields a single value and an associated uncertainty from the regression. Obviously, for these methods to work it is essential that the time=0 used is consistent between analyses, to avoid either over- or under-correcting ratios. In many cases the easiest way to achieve this consistency is by structuring analyses within a session so that the durations of the different components of each analysis (i.e., baselines, spot analysis, and washout) are always the same. It is then straightforward to select and compare equivalent timeslices from each analysis.
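To make the two linear approaches concrete, here is a minimal sketch (an illustration only, not Iolite's own code - Iolite works on Igor Pro waves - and the function names and toy numbers below are invented):

```python
import numpy as np

def flatten_linear(beam_seconds, ratio, slope):
    """Remove a shared linear downhole trend, leaving the ratio each
    timeslice would have had at time = 0."""
    return ratio - slope * beam_seconds

def intercept_linear(beam_seconds, ratio):
    """Regress a single analysis back to its y-intercept (time = 0),
    returning the intercept and its standard error."""
    coeffs, cov = np.polyfit(beam_seconds, ratio, 1, cov=True)
    return coeffs[1], np.sqrt(cov[1, 1])

# toy spot analysis: true ratio 0.1, fractionating upward with pit depth
t = np.arange(0.0, 30.0, 0.2)                            # beam seconds
obs = 0.1 + 0.0008 * t + np.random.normal(0.0, 0.001, t.size)

flat = flatten_linear(t, obs, 0.0008)   # corrected ratio for every timeslice
y0, y0_err = intercept_linear(t, obs)   # single value + regression uncertainty
```

Both versions rely on the beam-seconds values being measured from the same time=0 in every analysis; shift that zero point between analyses and the flattened ratios (or the intercept) shift with it, which is exactly the bias described above.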
In Iolite there are a couple of differences from the above - the first is that there are methods of correcting downhole fractionation (e.g., an exponential curve) that do not correct the data back to time=0 in the same way as a linear fit. This can have an influence on the apparent ages at intermediate steps of data reduction (there's a blog post about that here), but does not have an impact on final ages. Most relevant to this post, consistent selection of time=0 is still every bit as important as when using linear fits. Having said that, it doesn't matter whether time=0 perfectly coincides with the moment that ablation commenced, provided that it is the same for every single analysis (this is also true for linear fitting where all analyses are assumed to have identical slope).
The second difference is the big one - in Iolite there are different options available for the way in which time=0 is determined. These methods have pros and cons, and it's important to confirm that the method used is producing the correct outcome. The big advantage of this flexibility is that it allows freedom in how data are both acquired and reduced. For example, if analytical conditions are stable it may be preferable to acquire longer baselines every five analyses, instead of a short baseline prior to every spot analysis. Likewise, if during data reduction it becomes obvious that the early portion of an analysis is rubbish, there is no problem with selecting only the latter part of the data.
Regardless of what method is used to determine time=0, there is a specific channel in Iolite that stores information about how long the laser shutter has been open. It is called "Beam_Seconds" and can be found in the intermediate channels list once it has been calculated (if you make changes it will be recalculated when you crunch data). Below is an image showing Beam_Seconds (red) overlaid on 238U signal intensity (grey), plotted against time on the x axis.
I realise that at first glance this may look a bit strange, but it shows the time that has elapsed since the laser began ablating, steadily increasing until the beginning of the next analysis, at which point it is reset back to zero. It probably makes a lot more sense once it is converted into an x-y plot of Beam_Seconds (x-axis) versus 238U intensity (y-axis):
Now you can more clearly see each analysis and its subsequent washout down to the baseline. It is hopefully also obvious that by using Beam_Seconds it is easy to compare different analyses in relation to the time since the laser started firing (which we assume directly relates to how deep the laser pit is).
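For a rough picture of what the Beam_Seconds wave contains, here is a sketch that simply assumes the shutter-open times are already known (how Iolite actually finds them is the subject of the methods below; the names here are illustrative):

```python
import numpy as np

def beam_seconds(timestamps, shutter_open_times):
    """Elapsed time since the most recent shutter-open event,
    reset to zero at the start of each new analysis."""
    bs = np.zeros_like(timestamps, dtype=float)
    opens = np.sort(np.asarray(shutter_open_times, dtype=float))
    for i, t in enumerate(timestamps):
        earlier = opens[opens <= t]
        bs[i] = t - earlier[-1] if earlier.size else 0.0
    return bs

t = np.arange(0.0, 300.0, 0.5)               # acquisition time in seconds
bs = beam_seconds(t, [20.0, 120.0, 220.0])   # three spot analyses
# plotting bs (x) against 238U intensity (y) gives the second figure above
```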
So that's how we keep track of Beam_Seconds in Iolite; the next thing is how it is determined. There are three different methods, with a fourth in the pipeline:
"Cutoff Threshold" - This is the easiest one to explain: every time the intensity of the index channel rises above the threshold set using "BeamSeconds Sensitivity", the Beam_Seconds channel is reset to zero. This works really well in cases where there is a sharp wash-in of the signal (a rough sketch of this idea, and of the rate-of-change approach, appears a little further below). It should be noted, though, that the value selected should be as low as possible (different thresholds can be tested until the Beam_Seconds wave produces the correct result), otherwise there may be a significant difference in the trip point between grains with high and low abundances of the index isotope.
"Gaps in data" - In the vast majority of cases this will work off the time since the beginning of each data file. Thus, in cases where each analysis is acquired in a separate file this will allow you to set a specific elapsed time (in seconds, set using the "BeamSeconds Sensitivity" variable) since the start of each file as the trip point for resetting Beam_Seconds.
"Rate of change" - This is the clever one: it uses a snappy little algorithm based on the rate of change of the signal to determine the time at which the laser starts firing (the point where the logarithm of the signal increases most rapidly). It does the best job of consistently finding the same point in each analysis, despite differences in signal intensity, but unfortunately it is also quite susceptible to noisy or spiky analyses, and is thus quite prone to failure. So, as usual, careful checking of results is important.
"Laser log file" - This one is still in the pipeline, but as the name suggests, it will use the laser shutter open events stored in the laser log file to determine when to reset Beam_Seconds.
One thing that is important to clarify is that (regardless of which of the above methods is used) the determination of Beam_Seconds is entirely independent of the masking of low signals. So even if the signal is masked up to the point at which the laser began to fire, this does not necessarily mean that the Beam_Seconds reset will coincide with that point. Similarly, the integration periods selected for each analysis are also entirely independent of Beam_Seconds. As such, editing an integration period to exclude the beginning of an analysis will have no impact on the calculation of time=0 or on how the downhole fractionation correction is performed.
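Conceptually, the "Cutoff Threshold" and "Rate of change" methods boil down to something like the sketch below (an illustration only, not Iolite's algorithms; the gradient-based peak picking and the n_analyses/min_gap parameters are simplifications of my own):

```python
import numpy as np

def threshold_starts(time, signal, cutoff):
    """'Cutoff Threshold': reset wherever the index channel first rises
    above the chosen sensitivity value."""
    above = signal > cutoff
    rising = np.where(above[1:] & ~above[:-1])[0] + 1
    return time[rising]

def rate_of_change_starts(time, signal, n_analyses, min_gap=10.0):
    """'Rate of change': take the n steepest rises in log(signal),
    at least min_gap seconds apart, as the shutter-open times."""
    slope = np.gradient(np.log(np.clip(signal, 1.0, None)), time)
    starts = []
    for i in np.argsort(slope)[::-1]:          # steepest rise first
        if all(abs(time[i] - s) > min_gap for s in starts):
            starts.append(time[i])
        if len(starts) == n_analyses:
            break
    return np.sort(starts)
```

Start times from either function are exactly what a Beam_Seconds calculation like the one sketched earlier needs, and the contrast between them reflects the trade-offs above: the threshold version trips later on low-abundance grains unless the cutoff is kept low, while the rate-of-change version is indifferent to intensity but more easily fooled by spikes.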
Hopefully this provides some more detail to those not entirely sure of how these calculations are performed in Iolite, and as always if you have any questions feel free to post on the forum. Also, if you want to know more about making sure that Beam_Seconds is calculated correctly there is a blog post about that here.
Saturday, April 6, 2013
Downhole corrected UPb ages and general workflow of the U-(Th)-Pb Geochronology DRS
I was recently asked on the forum to explain the downhole-corrected ages produced by the UPb DRS, and Luigi suggested that I turn my reply into a blog post – so here it is…
To give a bit of context (the original forum thread can also be found here), Luigi noticed that his downhole-corrected ratios (e.g., "DC_206_238") and their related ages were quite inaccurate, and was naturally concerned that this was affecting his results. He observed that the raw ratios were reasonably close to the accepted values, but that his downhole-corrected ratios, despite having significantly smaller uncertainties, were about double what they should be - and yet his final ratios and the resulting ages were coming out with the right numbers.
And here's my reply (augmented a bit and with some pictures added):
What you're observing is perfectly OK - it's a natural consequence of the different steps of processing that the DRS uses.
The channels for ratio calculations are divided into three groups - "raw", "DC" (down-hole corrected), and "final".
Raw ratios
The raw ratios are hopefully pretty obvious - they're the observed ratio, and are generated by simply dividing one baseline-subtracted intensity (e.g., 206Pb) by another (238U). Here's an example of the raw 206/238 ratio from a single spot analysis, showing clear down-hole fractionation in the ratio as the analysis progresses:
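In code terms that amounts to something like the following sketch (with invented channel names, and a flat mean baseline where Iolite would use its session-wide baseline splines):

```python
import numpy as np

def raw_ratio(pb206, u238, baseline_mask):
    """Baseline-subtract each channel, then divide timeslice by
    timeslice to give the raw (observed) ratio."""
    pb = pb206 - pb206[baseline_mask].mean()
    u = u238 - u238[baseline_mask].mean()
    return pb / u
```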
Down-hole corrected ratios
The next step is obviously the one that's causing the confusion - down-hole corrected ratios are corrected for fractionation induced by drilling an increasingly deep laser pit into the sample (also referred to as LIEF, which stands for laser-induced elemental fractionation).
In the Iolite DRS, this correction is made using a model of downhole fractionation that is produced interactively by the user using the reference standard analyses (I'm assuming that if you're reading this you'll know the general concept - if not then I'd suggest reading our 2010 G-cubed paper). Now here's the punchline - the correction is typically made using an equation (in Iolite it's an exponential curve by default), and depending on the type of equation used, either the y-intercept or the asymptote will be the reference point from which the correction is made. So in the case of a linear fit the correction will be zero at the y-intercept, and increase linearly with hole depth.
A slight twist on this is the y-intercept method, which regresses the data back to its intercept with the y-axis (where downhole fractionation is assumed to be zero, or at the very least constant between the reference standard and the sample). This obviously also results in ratios being corrected down to their starting value.
The result of both of these is that the slope of the raw ratio gets flattened down to the value that was measured at the start of the ablation. This typically results in down-hole corrected ratios that are reasonably close to the accepted values.
In contrast, with an exponential equation the data are flattened up to the asymptote, meaning that the observed ratios will change quite a lot as they're shifted up towards the values measured at the end of the analysis, or likely even higher.
Of course this then means the down-hole corrected ratios will be much higher than the accepted values. I know that as you read that you're probably screaming in horror, but it's ok, it doesn't make any difference to the end result. The down-hole corrected ratios have been flattened, and whether they're accurate or not is not yet relevant.
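To make the asymptote behaviour concrete, here is a sketch of the exponential case (not the DRS code itself; the exact model form, the use of scipy, and all of the names here are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def downhole_model(t, a, b, c):
    """Exponential downhole model: ratio = a + b * exp(-c * t),
    where a is the asymptote approached late in the analysis."""
    return a + b * np.exp(-c * t)

def fit_downhole(beam_seconds, std_ratio):
    """Fit the model to pooled reference standard analyses."""
    p0 = (std_ratio[-1], std_ratio[0] - std_ratio[-1], 0.1)
    popt, _ = curve_fit(downhole_model, beam_seconds, std_ratio, p0=p0)
    return popt

def downhole_correct(beam_seconds, raw_ratio, popt):
    """Remove the time-dependent part of the model, flattening every
    timeslice up to the asymptote a (hence DC ratios that sit above
    the raw values measured at the start of each ablation)."""
    a, b, c = popt
    return raw_ratio - b * np.exp(-c * beam_seconds)
```

Whether the fitted asymptote itself is accurate doesn't matter at this point; that offset is dealt with in the next step.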
Final ratios
This is where the reference standard spline is used to normalise down-hole corrected ratios to the accepted values of the reference standard. If for example the 206/238 ratio of the reference standard is twice as high as it should be then all 206/238 ratios in the session are halved to correct for this bias. Of course by using a spline any variability within the session can also be accounted for. It is this correction that also absorbs the high values potentially produced by using an exponential equation - if all of the flattened DC ratios were corrected 15% too high, then this step will bring them back down to accurate values.
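A stripped-down version of that normalisation might look something like this (a sketch only; the single cubic spline and the function and argument names are mine, not the DRS's):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def normalise_to_standard(session_time, dc_ratio,
                          std_times, std_dc_ratios, accepted_ratio):
    """Spline the reference standard's DC ratio through the session
    (needs at least a handful of standards), then scale every DC ratio
    so the standard sits on its accepted value. This also absorbs any
    offset introduced by flattening to the asymptote."""
    spline = UnivariateSpline(std_times, std_dc_ratios, k=3)
    bias = spline(session_time) / accepted_ratio
    return dc_ratio / bias
```

If the standard's DC ratios run, say, 15% high across the whole session, the bias spline evaluates to about 1.15 everywhere and the final ratios come back down to accurate values.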
So to bring it back to your specific questions - the DC ratios are "flattened", and that's all. The intention of this step of data reduction is to remove the down-hole effect that systematically affects every analysis, so the end result should be ratios that do not vary with hole depth (unless they really varied in the sample of course!).
The reason that you noticed a decrease in uncertainties relative to the raw ratios is that the effects of downhole fractionation have been removed, and that the resulting analysis has less variability (i.e., it's flat, not sloped). So it's a very good thing that the uncertainties are smaller, and a sign that the downhole correction is beneficial.
And finally, just to address the mention you made about dispersion - if a good downhole correction model is used then each analysis should be flattened, which is great. However, scatter between analyses is only minimally affected by this correction, so if you're seeing a lot of scatter in your DC ratios (or final ratios) then this is most likely real, and not something that you will be able to fix by playing with the downhole fractionation model. The variability may be due to differences in ablation characteristics between zircons (e.g., the effect you're seeing of systematically older/younger ages between 91500 and Plešovice, which is very commonly observed). Or it may be due to changes in the conditions of ablation (e.g., different gas flows in different parts of the cell, different heights of the sample relative to the laser focus depth, etc.).
Note also that at least some of these causes of scatter will not be identified by the uncertainty propagation, and the only way to really get a grip on your overall uncertainty is by extended tests using a range of reference zircons.
If you have any questions about the above, feel free to post them either here or on the forum.
And we're open to suggestions for future blog posts, so if there are any other topics you'd like to know more about feel free to make a request!
Thursday, February 28, 2013
More Iolite 2013 Workshop details
We've started preparing the Iolite 2013 Workshop, to be held in Florence, Italy, the weekend before the Goldschmidt Conference. Here is a brief list of some of the topics we will be covering:
- An introduction to the Iolite data reduction flow
- How to install and use Iolite
- Loading and checking mass spec data in Iolite
- Various data reduction examples (including trace elements and U-Pb geochronology)
- How to use Iolite for solution analyses
- Creating laser ablation images in Iolite
- Error propagation and estimation
- Creating and editing reference material files (i.e., how to use your own values for reference materials like NIST SRM 612, etc.)
- Creating and editing your own Data Reduction Schemes
If there's anything you'd like covered, or would like more information about the topics outlined above, head over to the forum and ask away!
The Iolite Team
Thursday, April 26, 2012
CellSpace images tutorial
The Iolite team and Ash Norris from Resonetics have been working on a new way of creating images. We take the spatial information recorded in laser log files and combine that with mass spec data to produce images of composition versus sampling position in the laser cell. This has some great advantages:
-images don't need to be of regular shape;
-image scans don't have to be sequential (so you can measure your reference materials between scans, or several times over the course of collecting an image);
-and, you can plot your laser ablation image over mapped images of your sample collected by other means, such as photomicrographs, SEM images etc.
Here's a tutorial, using an otolith (part of a fish's auditory system) to illustrate how to create a CellSpace image, and how it can be displayed over another image. You can download the example files from this page (at the bottom) to follow along. And if you're not familiar with using laser log files, you might want to visit this page first.
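Before getting into the tutorial, here is a heavily simplified sketch of the core step of combining the two data streams (this is not CellSpace itself; timestamp synchronisation, selections and smoothing are all glossed over, and the function and parameter names are invented):

```python
import numpy as np
from scipy.interpolate import griddata

def cellspace_image(log_time, log_x, log_y, ms_time, ms_intensity,
                    resolution=2.0):
    """Interpolate the stage position recorded in the laser log onto the
    mass spec timestamps, then grid intensity by position in the cell."""
    x = np.interp(ms_time, log_time, log_x)     # stage x at each timeslice
    y = np.interp(ms_time, log_time, log_y)     # stage y at each timeslice
    xi = np.arange(x.min(), x.max(), resolution)
    yi = np.arange(y.min(), y.max(), resolution)
    grid_x, grid_y = np.meshgrid(xi, yi)
    image = griddata((x, y), ms_intensity, (grid_x, grid_y),
                     method='linear')           # NaN where nothing was ablated
    return grid_x, grid_y, image
```

Because everything is referenced to real stage coordinates, the scan lines can be any shape and acquired in any order, and the resulting grid can be drawn directly over a photomicrograph or SEM image of the same area.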
[Note: Agilent users may have issues syncing the example laser log file and Agilent data. If so, please delete the AgilentDateFormat.txt file in the "Other Files" subfolder within your Iolite folder and try importing the example data again.]
And just to illustrate how this approach can save time, here's a figure of how the image scan lines were arranged (green lines in right-most image) over the otolith (shown without scan lines on left and center). You can see we mostly avoid the epoxy mount, saving time and reducing the amount of gunk put into the mass spec:
Unfortunately the mouse position was not captured very well by the recording software. If you have any questions, feel free to post on the forum.
Wednesday, September 21, 2011
Estimating laser pit depth using the "Beam_Seconds" channel
We've had a few questions regarding how Iolite estimates down-hole fractionation (in the U-Pb DRS), and several people have also had problems reducing their data because the algorithms didn't work as they should.
So, I figured it might be useful to describe a bit about how the depth is estimated, and more practically to explain how you can tinker with it if you find that the default settings aren't working for you...
Thursday, April 7, 2011
Selecting integration periods automatically (Part 2)
As promised, this post will cover the third option of automatic integration selection, called "detect from beam intensity". This option allows you to select portions of your data that satisfy up to three different criteria (e.g., where U238 is greater than 10000 counts per second). Because such criteria are independent of how your data were collected (e.g., numerous small files vs. a single large file), the method is particularly useful for sessions that don't include some sort of labelling for each analysis. This first short video demonstrates the basics of the interface using a U-Pb dataset as an example:
You will have noticed the extra level of feedback in this option, with the graph showing not only the location of each integration period, but also which ones you've selected, and the location of any pre-existing integration periods for the selected integration type.
But there's one other thing about this new option that we think makes it really powerful: in addition to your input channels, you can also use intermediate and output channels as selection criteria. This means you can use an isotopic ratio, or even the calculated 206/238 age of an analysis, to distinguish between analyses, as this next video demonstrates:
The other benefit of being able to use intermediate and output channels is that anyone who's really keen can make waves in their DRS specifically for assisting in selecting integrations. For example, you could make an intermediate wave that is given the value 0 for all parts of your data that you don't want to select, and the value 1 for time intervals of interest, then use this wave as your selection criterion in the "detect from beam intensity" interface. In this way, you could either come up with discriminating criteria that are more complex than what's provided in the current interface, or just make your life a little easier by, for example, rolling three different criteria into a single wave, so that you don't need to set everything up in the interface every time you use it.
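As a rough illustration of what a single "detect from beam intensity" criterion does (a sketch only; the real interface supports up to three criteria, trimming of the selection edges, and intermediate or output channels as inputs):

```python
import numpy as np

def detect_integrations(time, channel, threshold, min_duration=5.0):
    """Return (start, end) times of contiguous runs where the chosen
    channel exceeds the threshold (e.g. U238 > 10000 cps), keeping
    only runs longer than min_duration seconds."""
    mask = channel > threshold
    edges = np.diff(mask.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if mask[0]:
        starts = np.insert(starts, 0, 0)
    if mask[-1]:
        ends = np.append(ends, mask.size)
    return [(time[s], time[e - 1]) for s, e in zip(starts, ends)
            if time[e - 1] - time[s] >= min_duration]
```

A custom 0/1 intermediate wave of the kind described above would simply be passed in as the channel argument with a threshold of, say, 0.5.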