Good morning everyone. As Sam mentioned, this is the second part of the atom probe talks. Originally I had planned on covering both atom probe data reconstruction and analysis in this webinar; in practice I found that covering just the reconstruction is enough for one webinar, so we'll possibly go over analysis of reconstructed data at a future date. It's the reconstruction we'll focus on today. I'd like to start by reviewing an atom probe experiment, showing where data reconstruction lies within the flow of the experiment from start to finish, and then get into the details: atom probe reconstruction in terms of spectral calibration and ranging, spatial reconstruction, and a bit of discussion of artifacts that can occur during atom probe reconstruction, as well as some of the limitations we have. Then I'd like to summarize everything by going through a fairly straightforward example of a reconstruction, using a steel sample with a heterophase interface, done in the commercially available software. So, starting with our atom probe overview. To refresh, an atom probe experiment has multiple steps. First we have our starting material, which we need to form into an atom probe specimen: those nanoscale needle samples, shaped that way because we need a fine point to concentrate the electric field at the tip so that field evaporation will occur. This is the process that happens during the next part of the atom probe experiment.
Data acquisition is where the material is picked apart atom by atom through successive field evaporation, using either a pulsed voltage or a pulsed laser in the presence of a high electric field. This is the part of the experiment that actually uses the atom probe tool itself, and so far in the process flow, from starting material to data acquisition, this is what we covered in the last webinar; if you want a refresher, you can look that up. The next part of the experiment is the data reconstruction, which is what we'll cover today. That's when you take the output from the acquisition stage, which is not immediately readable for scientific observations, and use it to build a 3D model of your material that you can then analyze. The goal of a reconstruction, broadly, is to end up with a complete atom probe data set: a 3D model with the spatial positions of all the atoms in our reconstruction, and a mass spectrum (a mass-to-charge-ratio spectrum, to be precise) in which all of the peaks are identified as belonging to an element or a combination of elements. That's the point where you can start your analysis, look into the microstructure of your sample, and make your observations. But always keep in mind that you're making those observations on the model you built in the reconstruction, so every step of the process matters, and the reconstruction is quite critical. So let's go into the details of a reconstruction: how you make it, what factors feed into it, and what separates a good reconstruction from a bad one.
As I mentioned, data reconstruction is the process by which we take the output data from the atom probe and convert it into the 3D model that, as best as we can manage, represents the original material. Sometimes the reconstruction is fairly straightforward and can be done quite easily in the commercial software, but I don't want anyone to underestimate this step. Reconstruction can also be a difficult and quite involved process, often requiring different approaches depending on your sample, how it ran, and what other types of information you have available. Something to always keep in mind is that before we evaporate the sample in the atom probe there is one true configuration of the original material; at one point all the atoms were locked in place, and that's what we want to get back to. But we might not get there right away, and especially not on the first try. Sometimes reconstruction is an iterative process where you build a reconstruction,

you do a bit of analysis, you see areas where the reconstruction isn't perfect or something needs to be changed, and you go back and try to improve it; bringing in more data sometimes helps as well. And sometimes you can never really get to that perfect model of your material, both because of limitations in the resolution of the atom probe and because of limitations in the field evaporation and reconstruction process. Keep in mind that an adequate reconstruction may be globally accurate, that is, a good representation of the overall shape and distribution of phases in your sample, while still having local inaccuracies, especially around small phases and defects such as interfaces. In all cases, reconstruction involves two main steps: first the spectral calibration and ranging, which is the mass spectrometry side of the atom probe, and then the spatial reconstruction. So let's go through those now.
First, spectral calibration and ranging. Keep in mind that part of the information we get from the atom probe experiment is time-of-flight data: the time between the incident pulse (voltage or laser) onto the specimen and the detection of the ionized atom leaving the sample. The first thing we do is apply corrections to the time-of-flight data to improve the overall signal: correcting for the voltage the sample was at when the ion departed, and correcting for positional effects due to where on the tip the atom departed from, which is known as the bowl correction. The map shown illustrates what the bowl correction looks like: different flight paths arise from ions leaving different parts of the tip, following the projection of the electrostatic field. After these time-of-flight corrections, we convert the spectrum to a mass-to-charge-ratio spectrum. We do that by aligning to known values. This is usually done first with a parametric fit from the instrument calibration on a known material, say a sample of pure silicon, and then the user typically applies a linear fit on top of that, based on known peaks in the exact sample and data set being reconstructed. Then we look at the remainder of the spectrum, now a mass-to-charge-ratio spectrum rather than a time-of-flight spectrum, and assign the different peaks ionic identities. This is known as ranging, or ranged ion assignment, because we assign a range of mass-to-charge values.
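As a brief aside on the conversion step just described, the time-of-flight to mass-to-charge relation can be sketched in a few lines. This is only an illustration under simple assumptions: the energy-balance formula is standard, but the function name and all numbers are placeholders, and a real instrument additionally folds in the voltage and bowl corrections plus the final user linear calibration against known peaks.

```python
# Sketch of time-of-flight -> mass-to-charge conversion (names/values illustrative).
E_CHARGE = 1.602176634e-19    # elementary charge, C
DALTON   = 1.66053906660e-27  # atomic mass unit, kg

def tof_to_mass_to_charge(t_s, voltage_v, flight_path_m):
    """m/n in Da from flight time (s), specimen voltage (V), flight path (m).

    Energy balance: n*e*V = 0.5*m*v^2 with v = L/t, so m/n = 2*e*V*(t/L)^2.
    Real instruments also apply voltage and bowl (flight-path) corrections
    and a final linear calibration against known peaks.
    """
    return 2.0 * E_CHARGE * voltage_v * (t_s / flight_path_m) ** 2 / DALTON
```

For example, at a 5 kV standing voltage and a 1 m flight path, a flight time of roughly 5.4 microseconds lands near 28 Da, which is the peak we'll discuss next.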
Essentially, along the x-axis in the images at the bottom, each range is defined by a peak in the spectrum. We go through all the peaks and say that each one corresponds to an element or a combination of elements. To help identify them, we take advantage of the fact that, since the atom probe gives a mass-to-charge-ratio spectrum and we can see the different masses of the different elements, we can resolve different isotopes. For terrestrial elements, at least those that haven't undergone radioactive decay, the isotope ratios should match well-known natural abundances. So we can look at our mass spectrum and compare the peaks we measure to what we would expect for the different isotopes of a given element in a given charge state. As an example, say we have the mass spectrum shown below and we're interested in identifying the peak at a mass-to-charge ratio of 28, so 28 Da. If this is a steel with a fair amount of silicon in it, we know that both iron and silicon are present. The peak at 28 could belong to the major isotope of silicon in the 1+ charge state, which would sit at 28, but it could also belong to the major isotope of iron, iron-56, in the 2+ charge state, because in a mass-to-charge-ratio spectrum the mass is divided by the charge state,

and 2+ would take that 56 down to a position at 28. So how do we know which element that peak corresponds to? We compare against the other isotopes. Looking at silicon, we have isotopes at 28, 29, and 30, and the heights of the blue lines represent the amounts we would expect if the natural abundance ratios were satisfied. In other words, if the peak at 28 Da were silicon, and we were hitting the natural abundance ratios, we would expect peaks at 29 and 30 as high as those blue lines on the graph. What we see in the actual mass spectrum is that we don't get there: there isn't enough signal at those positions for the 28 Da peak to be entirely silicon. If we compare that to iron on the right, the other isotopes of iron in the 2+ state, which appear at 27, 28.5, and 29 Da, all have peaks that match the expected values based on the natural abundance ratios, so we can be much more confident in assigning those peaks to iron. This is the process we repeat throughout the entire mass spectrum until we've identified all the different peaks. That brings us to the spatial reconstruction stage. One of the main aspects of this is defining how the radius of your sample evolves: the radius profile, the radius with respect to z. There are a few different ways to do that. One of the most common is to use the voltage profile.
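Before moving on to the spatial side, the isotope-abundance check just described can be sketched in code. The abundance values below are the standard natural ones, but the peak positions, counts, and tolerance are hypothetical, and a real analysis would also account for backgrounds and peaks shared between species.

```python
# Illustrative isotope-abundance check for ranging (peak counts hypothetical).
SI_ABUND  = {28.0: 0.9223, 29.0: 0.0468, 30.0: 0.0309}                  # Si 1+, Da
FE2_ABUND = {27.0: 0.0585, 28.0: 0.9175, 28.5: 0.0212, 29.0: 0.0028}    # Fe 2+, mass/2

def expected_siblings(candidate_abund, anchor_mz, anchor_counts):
    """If the anchor peak were entirely this species, predict the counts
    every sibling isotope peak should show from natural abundance."""
    scale = anchor_counts / candidate_abund[anchor_mz]
    return {mz: scale * frac for mz, frac in candidate_abund.items() if mz != anchor_mz}

def consistent(candidate_abund, anchor_mz, measured, tol=0.5):
    """True if every sibling peak reaches at least (1 - tol) of its predicted
    height; a strong shortfall rules the assignment out."""
    preds = expected_siblings(candidate_abund, anchor_mz, measured[anchor_mz])
    return all(measured.get(mz, 0.0) >= (1.0 - tol) * c for mz, c in preds.items())

# Hypothetical measured counts for an Fe-Si steel: Fe 2+ siblings (27, 28.5, 29)
# appear at roughly the right ratios, while the Si 1+ siblings (29, 30) fall
# far short of what a pure-Si 28 Da peak would require.
peaks = {27.0: 6200, 28.0: 100000, 28.5: 2300, 29.0: 500, 30.0: 120}
```

With these counts, the check accepts the iron assignment for the 28 Da peak and rejects silicon, mirroring the reasoning on the slide.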
This is the profile of the specimen voltage versus detected ion count, and it works because we can often assume that during an atom probe experiment evaporation occurs at a relatively constant evaporation field. According to the equation shown on the left, that means the voltage and the radius are proportional to each other, so as we watch the voltage vary (it's adjusted to maintain evaporation), we can assume it's proportional to the changing radius of the sample, which starts very small at the tip of the specimen and gets larger as we successively evaporate more material. This is a fairly good assumption if you have a homogeneous sample, for example a single phase or dilute solid solution that is running well and doesn't run through many defects, and if you're not making a lot of changes to the running conditions, for example imparting a large laser energy or varying parameters too much. In that case you can assume you stay at a constant field, and the proportionality between voltage and radius remains constant. Another way to define the radius profile is more geometric: using a shank angle and an initial radius as inputs, and building the reconstruction purely from the geometry given by those factors. This is good when you have a more inhomogeneous material, in other words when you won't satisfy that constant relation between voltage and radius, or when you are varying running parameters. An extension of that is the tip-profile reconstruction, which is essentially a variable shank angle.
Where normally we would measure the shank angle from an input TEM or SEM image, in a tip profile you use the exact profile from that image. This is good for inhomogeneous materials, particularly ones that don't mill uniformly in the FIB; you might have different phases that lead to an overall sample shape without a uniform shank angle, which you can match more closely by building a tip profile.

To show how these different radius evolutions can affect your reconstruction, we can take an example, this one from Cameca, where they've taken a known standard of nickel and chromium in alternating layers. On the bottom left you see the voltage profile of such an experiment; it varies quite a bit, going up and down, not constant or uniform, because the different layers, the chromium and the nickel, have different evaporation fields. If we try to build a reconstruction using the voltage to define our radius evolution, we end up with what's shown in the middle: that pagoda-shaped structure, which is non-physical, and the layer thicknesses don't match what we would expect for this known standard. That's because the different evaporation fields of these materials violate the assumption behind a voltage profile, namely a constant proportionality between voltage and radius. Here the different materials mean there isn't one. So in this case they switched to a tip-profile reconstruction: they took an image of the sample, it looks like one acquired after FIB milling, and defined the radius evolution from that. The resulting reconstruction, shown on the top right, is much more representative of the original material: not only do we lose the strange zigzag structure at the edges, but the layer thicknesses come out very close to what was expected from the standard.
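The two radius-evolution models being compared here can be written down compactly. This is a sketch under simple assumptions: the field factor, evaporation field, initial radius, and shank angle are illustrative placeholders that in practice come from your instrument and from imaging of your sample.

```python
import math

# Two common radius-evolution models (all parameter values illustrative).

def radius_from_voltage(voltage_v, evap_field_v_per_nm, field_factor=3.3):
    """Voltage model: constant evaporation field F = V / (k_f * R),
    so R = V / (k_f * F). Valid for homogeneous, well-behaved samples
    run at steady conditions."""
    return voltage_v / (field_factor * evap_field_v_per_nm)  # nm

def radius_from_geometry(depth_nm, initial_radius_nm, shank_angle_deg):
    """Geometric model: R(z) = R0 + z * tan(alpha), from an initial radius
    and a shank angle measured on e.g. an SEM/TEM image of the tip.
    A tip-profile reconstruction generalizes this to a variable angle."""
    return initial_radius_nm + depth_nm * math.tan(math.radians(shank_angle_deg))
```

For the NiCr standard, the voltage model fails because each layer's different evaporation field breaks the fixed V-to-R proportionality, while the geometric and tip-profile models sidestep the voltage entirely.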
Of course there's a lot more involved in the reconstruction; that's just one of the initial factors to account for. Overall, in the spatial reconstruction we're trying to get 3D coordinates for each ion that best represent the positions in the sample. We get these from the x-y detector positions, while the sequence of evaporation is what we use for the z increment. When determining where in x-y an ion should be positioned, we have to factor in the magnification of the tip at the moment of that ion's evaporation. This includes factors such as the sample geometry, as given by the radius evolution model we've chosen, and, depending on that model, different inputs. If we chose a voltage profile, we need the voltage at the moment of evaporation as well as a value for the proportionality constant; if we're using another method, say the shank angle, then we need those inputs instead. The x-y positions are also affected by something known as the image compression factor, which arises from the deviation of the actual flight paths of the ions from a perfect hemispherical projection. The image compression factor has its origins in the fact that our sample isn't a perfect sphere floating in space: there is the shank of the needle specimen behind the apex, which affects the shape of the electric field, so the ICF will vary somewhat from sample to sample. In addition to the x-y position, we also have to increment the z position for each successive ion that is evaporated and reconstructed. To do this we need to make some assumptions about the atomic volume.
In other words, the physical space that atom occupied, which gives us the difference in z between it and the ions behind it. To increment properly we also have to know the sample geometry, because the increment is affected by the radius as well, and we have to know our detection efficiency. Remember that in atom probe we evaporate every atom, but we don't necessarily detect every atom, so depending on the detection efficiency of our instrument we have to account for the missing atoms when we build the reconstruction; otherwise the data set will be falsely compressed. To show this schematically, we have the figure on the right, which may be a little confusing at first glance, so I'll talk through it. Here we're showing a crystalline material that clearly has atoms aligned on crystal planes separated by a given d-spacing, and we assume evaporation occurs in the order given by the sequence of those numbers,

where the surface atoms always evaporate first, and successive ions behind them evaporate as each new surface is revealed. Considering the reconstruction just in x, since we're only looking at a 2D case, the middle figure shows the detected order from top to bottom, as it would come in from the detector. We can identify the x coordinate of each atom and get it into the right position, but to fully build the reconstruction we also have to increment the z coordinate. You can see that even with a constant z-spacing, as long as it's applied appropriately with respect to the sample radius, we get a good approximation to the original plane spacings and positions of those atoms. That's a good illustration of how these reconstructions get built. Now I'd like to show a figure of how the reconstruction can vary if we play around with some of the different input factors. In the center is a well-made reconstruction, with input parameters that build a good reconstruction, but for the sake of demonstration three factors, the detection efficiency, the field factor, and the image compression factor, are changed, so we can see their effect on the reconstruction. You might look at this figure and think: how can I trust any atom probe data now? Look at how much the user can put their fingerprints on it, changing it into reconstructions that look nothing alike. So how can we actually be sure that any atom probe reconstruction is valid for analysis?
To that I'll say that to a certain degree it's true; you should consider an atom probe data set guilty until proven innocent. But at the same time there's a lot we can do to make sure we can trust our atom probe data. First, as a general caveat, in this figure the factors are clearly taken to extremes to show their effects, so your material won't always be that sensitive. And there's a lot we can do to constrain the reconstruction so we know it's not completely off in terms of its veracity. For example, say we knew from prior TEM that the precipitates in this material were spherical or spheroidal. Right away, from visual inspection, we can see that the really high or really low field factors won't give a representative material, because we'd end up with precipitates that look like needles or precipitates that look pancake-shaped. So a bit of a priori information immediately helps narrow us toward a reconstruction that looks more valid. We also know more about our material and our atom probe than the extremes suggest: with detection efficiency, for example, rather than somewhere between roughly 0.3 and 0.9, which is a very wide range, we can estimate our instrument's efficiency much more closely.
Finally, there are other ways we can calibrate on the data set itself, which I'll talk about in a moment. But as a way of saying how much we can trust atom probe data, one thing I want everyone to notice in this figure is that no matter how much the data set is stretched and squished and taken through the fun-house mirror, one thing never happens: the atoms are never shuffled. You never have an atom completely rearranged with respect to the other atoms. Because of that, if you wanted to know, for example, the composition of the precipitates in this material, you could get it from any one of these reconstructions, because the composition, which is essentially binning a phase and counting the atoms within it, isn't changed by spatial distortions. And you can extend that concept: if I just want composition, maybe I don't need a perfectly accurate atomic-scale spatial reconstruction, because even if I'm a little off, the composition

will remain the same. On the other hand, if you're interested in spatial quantities, then you need a better reconstruction; if you wanted the radius of those precipitates, you'd need a reconstruction you can trust spatially much more. So part of the whole process of reconstructing atom probe data and doing the subsequent analysis is knowing how far you can take the analysis, given how well you've been able to do the reconstruction and how much you can trust the final material model you build. As part of building that trust in terms of spatial calibration, there are other pieces of information we can bring in. I already talked about bringing in TEM or SEM images to help build tip profiles of the specimen. You can also take information from other aspects of the atom probe output data. If you have a metallic or otherwise well-behaved crystalline sample, you can often observe crystallographic poles in the detector event histogram, seen as a sort of hemispherical projection of the tip; these arise from very slight variations in the evaporation field based on the positions of the atoms within the crystal. We can use those poles to get an idea of how the tip is projected, which helps define an image compression factor. We can also look at the reconstruction directly at those poles, where we expect the resolution for the planes corresponding to that pole to be highest, map the d-spacing at those poles, and compare it to known values for that material.
In this case, with aluminum and the 111 pole, we can look at the d-spacing we get at 111, compare it to the known d-spacing for aluminum, and adjust our reconstruction parameters until we get a d-spacing that works. We can do that for the other poles as well, to help spatially calibrate the reconstruction. So there is other information you can bring in, but it won't always be available. I'd like to talk a little now about atom probe artifacts and limitations, and how they affect the final reconstruction we create. First, a word about assumptions: different reconstruction protocols make different assumptions, and some are common to nearly all reconstructions and samples. We almost always assume a hemispherical tip shape during evaporation; that evaporation is uniform across the field of view; that the radius evolution during evaporation can be mapped either by the voltage curve or by the initial sample shape, in other words the voltage-profile mode or a mode defined by initial imaging of the sample; and that the reconstruction parameters we determine, for example the image compression factor or the detection efficiency, remain constant throughout the data set, when in reality they may change a little, which can introduce limitations in the data.
Finally, we also assume that the position of an atom on the surface when it evaporated was its original position within the material. In other words, as we evaporate material and expose a new surface, wherever the atom departed from is the position we reconstruct to, so we assume that departure position represents its initial position within the material. In actuality, especially for very mobile elements, atoms exposed at the surface by the evaporation of the atoms above them can migrate, often rolling up onto high-field areas, so we can end up with slight deviations between where we put an atom in the reconstruction and where it originally sat before evaporation. The reason to keep these assumptions in mind is that when they are violated, artifacts and aberrations appear in the reconstructed data, or the spectral and spatial resolution is degraded. We saw, for example, with the nickel-chromium alternating layers, that we violated the voltage-profile assumption that the ratio between voltage and radius stays constantly proportional,

and we ended up with that non-physical reconstruction. We have to keep this in mind when building our reconstruction, so we know how far to take the analysis. Another common artifact in atom probe, particularly on the spectral side of reconstruction, is overlapping ions in the mass-to-charge-ratio spectrum. This is usually caused by ions whose masses and charges are integer multiples of one another, which makes them indistinguishable, as far as the atom probe is concerned, by time of flight, since both factors affect the position in the spectrum. We saw, for example, when we talked about ranging, the peak at 28 Da and whether it was silicon or iron. That was a fairly straightforward case, because iron was the solvent element in that sample and it was very clear that the silicon didn't hit the other isotope peaks. Sometimes it's more complicated, especially for low-abundance elements in the material, where we won't have sufficient counts of the rarer isotopes to fully determine whether a peak belongs to one element or another. We can also run into this problem with elements that have no other isotopes, because then we lose that ability to guide the assignment. For example, aluminum in the 1+ charge state appears at 27 Da, which overlaps with an isotope of iron in the 2+ charge state. Aluminum is essentially monoisotopic, so we can't use other isotopes to test whether that peak contains aluminum. We'd only be looking at the iron peaks, so we might say a small fraction of that large peak could be aluminum, but we can't really tell. The best way we have of working through this problem
is by deconvoluting the mass spectrum to quantify how much overlap we have, and this is again a situation where we lean on the natural abundance ratios. We ask: looking at the heights of the peaks, how much do they vary from the natural abundance ratios, and can that variation be explained by the presence of another element? In this example of iron and aluminum, if we look at the iron peaks and see that, say, the peak at 27 Da is five percent higher than it should be, then even though the majority of that peak is still iron, we can say that roughly five percent of it is probably aluminum, use that in determining the composition, and essentially quantify the overlap. The difficulty is that we still have no way of saying which of those atoms was aluminum and which was iron; we can only say that about five percent of them in this volume are likely aluminum, but not whether they sat here, there, at the interface, or in the bulk. Another common artifact in atom probe data sets is known as local magnification, or trajectory aberrations. We get these when the evaporation behavior varies because different materials evaporate more easily, or with more difficulty, than the surrounding material.
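Backing up to the peak-overlap discussion for a moment, that deconvolution can be sketched for the aluminum/iron case. The iron abundances are the standard natural values, but the counts are hypothetical, and a real analysis would propagate counting statistics, subtract background, and check that no third species shares the anchor peak.

```python
# Sketch of natural-abundance peak deconvolution (counts hypothetical).
FE = {54: 0.0585, 56: 0.9175, 57: 0.0212, 58: 0.0028}  # Fe isotopic abundances

def deconvolve_27da(counts_27, counts_28, fe=FE):
    """At 27 Da, Al 1+ overlaps 54Fe 2+. Assuming the 28 Da peak (56Fe 2+)
    is overlap-free, it fixes the total Fe; the 54Fe share it implies is
    subtracted from the 27 Da peak and the excess is attributed to Al."""
    fe_total = counts_28 / fe[56]          # total Fe implied by the 56Fe peak
    fe_at_27 = fe_total * fe[54]           # Fe contribution expected at 27 Da
    return max(counts_27 - fe_at_27, 0.0)  # estimated Al counts
```

Note that, as in the talk, this only quantifies how many ions in the range were likely aluminum; it cannot say which individual atoms they were or where they sat.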
Looking at the figure at the bottom: say we have a precipitate that is either low field or high field, in other words it evaporates at a lower field (more easily) or at a higher field (with more difficulty) than the surrounding matrix. What happens is a deviation from the ideal hemispherical tip shape: in the low-field case the surface begins to flatten and go concave over the precipitate, and in the high-field case it bulges out, convex. This can also be caused by other structural defects. Because in atom probe the tip is essentially our optics, shaping the electric field, this local alteration of the field around the feature affects the ion trajectories, and therefore magnifies or demagnifies that feature. Depending on the exact trajectory deviations, we might even end up with overlap of ions that

appear you know say, having matrix ions appear within the region that should be the precipitate or vice versa. And that can actually affect our composition measurements. This also goes back to what I mentioned earlier about the idea that we could build a globally accurate reconstruction and still have local inaccuracies. So let’s say we had an aluminum alloy, you know with precipitates in it, we might use our you know spatial plane spacings to properly calibrate that aluminum you know by looking at the matrix, and say yes we’ve got all the aluminum peaks, the d spacings are right overall, we have the exact shape of our sample, but when you look at the individual precipitates, if their field varies significantly from the matrix we could end up with them you know magnifying or de-magnified. We could end up with ion overlap or ion overlap, and this basically means we have essentially local inaccuracies in a globally accurate reconstruction So always something to keep in mind. Now these are just a couple of the kind of main limitations and artifacts we can see in atom probe. I’d like to go over a lot more in more detail, probably what we need to go to talk about data analysis. For now I’d like to then just kind of summarize everything that we’ve seen you know in the reconstruction by going over an example of an atom probe reconstruction, using the commercial software of IMS. And again you know I can’t say that CCEM you know condones or supports or is any way you know benefiting or linked with Cameca, which is the company that makes the software, it just happens to be the brand of our atom probe, our elite 4000 XHR and then the software that we’ll use accordingly with that. It’s also the most common you know appropriate construction software there is here. 
So this sample is a mixture of ferrite, which is BCC iron with some solid solution elements, and nanoscale pearlite, and pearlite is a mixture of ferrite and iron carbide, Fe3C, or cementite. What we want to do in this sample is look at the interface between the ferrite and the nanoscale pearlite, especially at those carbides. Since we were seeking this very fine feature of interest, we made our sample using the FIB; we created our lift-out such that we had our ferrite at the top and our nanoscale pearlite at the bottom. We ran our acquisition, collected a good amount of data, and now we're ready to do our reconstruction. Just to comment on data acquisition and file transfer: using the Cameca system, with our LEAP 4000X HR and IVAS, following data acquisition the atom probe writes the acquired data into a .RHIT file. This proprietary file format contains information on the instrument and the acquisition, as well as all the data obtained by the atom probe, and that .RHIT file is what you read with IVAS to be able to form your data reconstruction. What we're looking to make when we finish our reconstruction is essentially two files. One is the .pos, or position, file, which for every reconstructed atom, or ion rather, contains the x, y, z spatial positions as well as the corresponding mass-to-charge ratio. This will be unique to each reconstruction. But we also look to obtain the .rrng, our range file, and this is a simple file that basically contains our ranging info: it tells us what peaks we want to assign to different ions, elements or combinations of elements, based on ranges of mass-to-charge.
So when you're reading a POS file with the software, it needs a corresponding range file in order to interpret what to make of the different peaks within it, and a range file can be used with multiple reconstructions as well, because the peaks are going to be common between them. Like I said, though, the POS file will be unique to a given reconstruction, but you can also still have multiple reconstructions, and so multiple POS files, made for a given data set. When we go to start our reconstruction, the first thing we're looking at is a summary screen that gives a lot of information about what came off the atom probe and how the experiment went, as well as comments put in by the operator. The first thing that we want to do when we start to build a reconstruction is select the ion sequence that we're going to reconstruct,
and this is done by looking at the voltage curve, which is a plot of the ion sequence, essentially from the first to the last evaporated ion, or detected ion rather, against its corresponding evaporation voltage, which of course is varied throughout the experiment. At this point we use the selection box that you see there to pick the part of the data that we want to reconstruct. In this particular case we're going to reconstruct essentially all of the acquired data, so that box is selecting from the beginning right to the end of that curve. Then we look at the detector event histogram, which is essentially a map of hits on the detector, and here what we want to do is choose the x, y detector coordinates that we will be reconstructing. So we have that inner circle that you can see drawn there, within the area where we're receiving hits on the detector, and that defines which atoms, those within that area of the detector, we'll use to build our reconstruction. The next stage takes us to the time-of-flight spectra. In this case we start by running some iterative corrections to try to improve the overall peak shape and mass resolving power, basically by correcting for differences due to the voltage at which the ion departed, as well as the bowl correction, the correction for the ion's position across the tip. This is done iteratively, measuring the full width at half maximum of a given peak to try to improve, like I said, the mass resolution. The peak that you choose to run these corrections on should be a major peak in your system, something that is present at all stages of the evaporation, so that it's present at all different voltages, as well as present uniformly across the tip so you can work that bowl correction on it as well.
In this case we didn't really see much of a change; we did get an improvement in the mass resolving power, from about 900 to about a thousand, but it just goes to show we already had pretty good data, and we weren't able to improve the mass resolving power much more in this case. Finally, we need to do the mass calibration, so at this point, like I said, we're applying essentially a linear fit using known peaks to improve the positions of our peaks. Good practice for this is usually to choose three peaks: one at a very low dalton value with respect to your sample, one at a relatively high dalton value, and one somewhere in the middle, which should hopefully be a major peak in your system. For this particular case we can go through and select these peaks, and we have to identify what they are, so we have to use elements we know are present. Now, in this steel sample I know I have iron, and I know I have carbon, so those should be really easy to pick out. So what we've done here is, for the low end we've used carbon in the two-plus charge state, which is at 6 dalton; in the middle, for the major peak, we've chosen iron-56 in the two-plus state at 28; and for the high end we've used iron-56 in the one-plus state, which is at 56.
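That three-peak linear calibration can be sketched as follows, assuming time of flight obeys sqrt(m/q) = a·t + b (the usual time-of-flight relation) and using hypothetical flight-time values for the three peaks just mentioned:

```python
import math

def fit_tof_calibration(known):
    """Least-squares fit of sqrt(m/q) = a*t + b over identified peaks.

    `known` maps a measured time of flight (hypothetical units here)
    to the true mass-to-charge (Da) of the peak it was identified as.
    """
    ts = list(known)
    ys = [math.sqrt(m) for m in known.values()]
    n = len(ts)
    t_bar, y_bar = sum(ts) / n, sum(ys) / n
    a = (sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
         / sum((t - t_bar) ** 2 for t in ts))
    b = y_bar - a * t_bar
    return a, b

def calibrated_mass(t, a, b):
    """Convert a time of flight to a calibrated mass-to-charge value."""
    return (a * t + b) ** 2

# Hypothetical calibration peaks: C2+ (6 Da), 56Fe2+ (28 Da), 56Fe+ (56 Da)
a, b = fit_tof_calibration({100.8: 6.0, 216.8: 28.0, 306.3: 56.0})
```

The check described next, looking at a peak that was not used in the fit (nickel, here), is then just `calibrated_mass` applied to that peak's flight time.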
And you can see the difference between before and after the mass calibration, a slight improvement in the location of our peaks. A good way to check this is to compare to a third element, one that you know is in your sample but that you didn't actually use in defining the mass calibration. In this case I know there's some nickel in the sample, so I can look at one of the nickel one-plus peaks, and I can see it's shifted a little after the mass calibration to be exactly where it should be. Even if you only have small shifts like this, it's important to do it; that way, if you are looking at multiple data sets, especially with the same range file, everything should be exactly where it should be, and you don't have to treat everything on a case-by-case basis. Then we go to the step of ranged ion assignment. I've already talked about how we go about assigning the different peaks to the different elements, so I won't go into this in more detail other than to show it before and after. Before, we only have the peaks identified that we used for our mass calibration, because of course if we used those peaks to calibrate, they should be what we said they were. And afterward we have everything else filled in, which includes a series of ions that are just elements, aluminum, iron, and so on, as well as molecular ions, combinations of carbon such as C2, C3, C4, and various other combinations, until we have basically all the major peaks of the mass spectrum identified. Now we get into the spatial reconstruction aspect, so I'm just going to walk through some of the different factors we have in here, what the inputs are, and so on. So first of
all, we do have some material parameters that go into our reconstruction. These are based on the atomic volume of the specimen and also some assumptions about the evaporation field, given the primary element and the temperature of the evaporation. We also see some instrument parameters. Here we have our input detection efficiency, which in this case shouldn't vary much from the nominal value for this LEAP 4000X HR, which is 0.36. We can also set our specimen parameters, inputs describing what the specimen shape was like. By default we're looking at radius evolution using a voltage profile; it shows us what the initial reconstruction voltage would be and the resulting initial tip radius, which is calculated using the k factor, the field factor, and is also affected by the image compression factor. Those k factors and image compression factors, as it currently stands, are at their default values for IVAS. So now let's say we built our reconstruction using all those default parameters, the default field factor, the default image compression factor, and a voltage profile for our radius. We can take a look at our sample and see that we've got an iron-rich area on top, our ferrite, and a carbon-rich area underneath, our cementite. We know these are very different materials, one a metal, one a non-metal, with very different evaporation fields, and we can see this when we look at the voltage profile as well: we get a flat part and then a curve going up, and if you look at that voltage curve and then at the sample, you can see how the two of them are related.
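The voltage-based radius evolution rests on the proportionality R = V / (k_f · F). A quick sketch, treating the evaporation field of iron (roughly 33 V/nm, a commonly cited value) and a field factor of about 3.3 as illustrative assumptions:

```python
def tip_radius_nm(voltage_v, evap_field_v_per_nm, k_factor=3.3):
    """Tip radius from the voltage profile model, R = V / (k_f * F).

    evap_field_v_per_nm: assumed evaporation field of the dominant
    element (V/nm); k_factor: geometric field factor (assumed ~3.3).
    """
    return voltage_v / (k_factor * evap_field_v_per_nm)

# e.g. an iron tip running at 5 kV:
r0 = tip_radius_nm(5000.0, 33.0)   # roughly 46 nm
```

This is exactly why a constant evaporation field matters: if F changes as a carbide arrives at the surface, the voltage-to-radius proportionality breaks down.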
But we know that moving between these different phases violates one of the assumptions of using the voltage profile, which is that we have a homogeneous, or rather relatively constant, evaporation field, such that we can expect constant proportionality between the voltage and the radius. The result is that even though this reconstruction still looks pretty good, some of the features are a little misshapen; we can see that the interface is a bit curved, where we would expect it to be much more planar. So we can go in and try to improve this, and one of the things we can do is stop using the voltage curve to define our radius. Instead, especially since we made these samples on the FIB and we have images of them taken right after sharpening, let's try using a tip profile: we can calibrate an image, tilt-corrected and so on, from the SEM image taken following FIB sharpening, and build the radius profile from it. Now, this isn't too detailed, and we can't see the carbide within it directly from the SEM image. Also, I wouldn't always recommend extensive imaging on a tip after you finish sharpening, given the risk of depositing some carbon on it with the e-beam. But we do have the overall shape that we can see here, and we can build from that.
So now we can see if there are other factors within this radius evolution profile that we can use to calibrate the reconstruction. For a tip profile reconstruction, the primary parameters we need to calibrate are the initial tip radius, where we begin building this profile, and the image compression factor. To look at this we can go back, as I mentioned before, and try to use a little of the crystallographic information from the sample to spatially calibrate our reconstruction. If we look at the early part of our data, which we know is the ferrite, we can isolate that portion, and on the detector event histogram we can see all these shapes that are representative of the crystallography of the sample. So we can identify poles known for BCC materials, in this case BCC iron, based on established patterns. You can see an example of that on the bottom left; this is using the pole indexing tool in IVAS, but you don't necessarily need that tool if you can readily identify what the poles are. One of the poles we can use, for example, is the 011 ferrite pole in that main cross. Then we can go to the reconstruction explorer and begin to adjust our reconstruction parameters until we get d spacings that satisfy this material. What we want to do in this case is look directly at those poles, because that's where essentially we'll
have the highest spatial resolution for that plane spacing. So we define it on the detector: we take a small region, only about one millimeter in detector-space radius, and place it at the detector coordinates where we see that pole. Another thing we have to do is define the portion of the ion sequence that we're going to sample when looking at this reconstruction. Here we've taken an ion sequence range of only 20 percent, so only a fifth of the data, and we're centering it at 25 percent, so we know it will be placed right within that ferrite region. The reason, of course, that we're talking in terms of percentages is that at this stage we don't have z coordinates yet; this is before the reconstruction produces them. We take this small amount of data, and we also build the reconstruction projected about that point, centered on it for the projection. What this does is it essentially sets up our z axis to be the direction of that pole, such that we can expect the planes to be normal to our z axis. And we can see, when we just look at the reconstruction volume there, that in this small subset of volume we can already begin to see little planes appearing, these kind of horizontal lines, when we look at the distribution of the ions. Then, when we want to measure the d spacing, it's very hard to measure it directly in atom probe, but what you can do is measure it in aggregate, and we do this with a spatial distribution map. What this is doing is essentially building a histogram of measured distances along a given direction, in this case our z, between every pair of ions in the sub-volume that we're sampling.
So what this means is that if we have two atoms on the same plane, the z distance between them will be essentially zero, which is why we have a very high spike at a z distance of zero. If two atoms are on adjacent, successive planes in z, the z distance between them will be roughly equal to one d, one plane spacing. Each of the other peaks occurs because, when we measure two atoms that are two planes or three planes apart, we get distances of 2d, 3d, and so on. And it's symmetrical because, of course, the distance from the atom on the top to the one on the bottom is the negative of the distance from the one on the bottom to the one on the top. So now we can take this measurement of our d spacing, and use it to find an image compression factor that gives us a good representation of the crystal based on the known parameters for this material, and then we can use that to build a much better reconstruction. So this is the reconstruction that we would make from the tip profile, calibrated using the poles we've seen here, based on those plane spacings, by adjusting the image compression factor and the r0 value, the initial radius. You can see it's not too different from what we saw with the voltage profile, but it is a better representation of the material, especially if we look at that interface; we can see it's very planar now. So this is our refined reconstruction, and it's this reconstruction that we can take and really begin to run our analysis on.
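The z-SDM idea described above can be sketched as a pairwise-distance histogram. Here the ion z coordinates are synthetic, generated on planes with the ferrite {011} spacing of about 0.203 nm plus a little positional noise:

```python
import random
from collections import Counter

def z_sdm(z_coords, bin_width=0.02, max_dist=1.0):
    """1-D spatial distribution map: histogram (by bin index) of the z
    separation between every pair of ions in a small sub-volume (nm).
    Peaks appear at integer multiples of the plane spacing d."""
    hist = Counter()
    zs = sorted(z_coords)
    for i, zi in enumerate(zs):
        for zj in zs[i + 1:]:
            d = zj - zi
            if d > max_dist:
                break            # sorted, so no later pair is closer
            hist[round(d / bin_width)] += 1
    return hist

# Synthetic data: 20 ions on each of 5 planes spaced 0.203 nm apart
random.seed(0)
zs = [n * 0.203 + random.gauss(0, 0.008) for n in range(5) for _ in range(20)]
hist = z_sdm(zs)
# The first off-zero peak sits near 0.203 nm, i.e. one d spacing
```

Adjusting the image compression factor and r0 in the real reconstruction amounts to shifting where that first peak lands until it matches the known d spacing.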
In this case, let's say we were interested in the segregation profile at the interface: we could take a little sub-volume, begin to map the concentration over that interface, and begin to do all the other analysis we have available, because for this case we're pretty confident that we have a good spectral and spatial calibration for this material. With that, I'll leave the atom probe data analysis to a future webinar. So, to summarize a little of what we've looked at today on reconstruction: keep in mind that acquiring the APT data is only the first half of the experiment. In order to be analyzed, the data has to first be reconstructed. The reconstruction requires processing both the spectral and spatial data to form the full 3D model of the original material with all the elements identified. Most methods of spatial reconstruction require assumptions to be made about the material and its evaporation behavior, and if a material violates those assumptions, either on a large scale or even at a small scale, as most materials of engineering and scientific interest do,
it will require that the reconstruction be adjusted, or done in a way that accounts for this. We can look at different reconstruction methods, even just changing the radius profile method, or bring in other information to help calibrate, such as SEM or TEM images or other material knowledge that you have ahead of time, while also keeping in mind that not every limitation can be overcome. In other words, a perfect reconstruction that is 100 percent accurate both globally and locally might not be achievable, or at least not achievable on the first try; you might have to go back and change things again and again, or in some cases you might even have to apply some post-processing to small areas of the reconstruction once you've made it, to better calibrate for your feature of interest. So overall, atom probe data requires that the artifacts and limitations within it, originating from both the acquisition and the reconstruction, be understood in order to be analyzed properly. Always remember, when you're analyzing atom probe data, that we're really analyzing the 3D model that we built, and so if the evaporation goes well and the reconstruction goes well, then we can take that a little bit further.
But if we don't have a very good evaporation, or we are not very confident in our reconstruction, then it's important that, as scientists, we know how far we can trust this data in terms of the conclusions that we can draw from it. With that I'd like to conclude the talk, and I'm sorry for rushing in the beginning; I just get really excited about atom probe, I guess. I'd like to finish off by pointing you toward some references and sources. These are three texts that are all very good; I've been using a lot of the images from them earlier, so here's my credit to them, as well as some sources I presented today from Cameca. I'd like to thank you very much for your attention, and I'll do my best to answer any questions you might have at this point. Great, thanks Brian. So currently I have no questions in the chat; if you have a question, feel free to put it in the chat, or since we're a smaller group, if you want to just turn your mic on, you're more than welcome to ask that way as well. Yeah, I'll also add that if anybody has any other questions after, or if anybody is watching a video of this later on and has questions for me, you can always contact me; my info is on the CCEM website and I'd be happy to discuss. Great, so there's one question on the slide about assigning the main peak for the bowl and voltage correction: should a peak originating from overlapping elements be used as the main peak? So I believe the question is asking, if we're running the time-of-flight correction on a given peak, is there an issue if we have overlaps on that peak.
And basically no, not really, because the issue on our end when we have overlaps is that we have different types of ions that all have essentially the identical time of flight. What we're doing when we run those voltage and bowl corrections is just accounting for other variations that can affect that time of flight, so an ion at a given mass-to-charge ratio will have its time of flight affected similarly to the others it overlaps with. So it's relatively safe, but overall you should be using the largest, most dominant element in your data to run that correction, so there shouldn't really be an issue; even if there is overlap, it should be minimal. Say I'm running a steel and I'm using that iron peak: even if I had significant overlap from something else, it's still going to be small compared to the amount of iron in there. So yeah, it should be safe; I hope that answers the question. Great, thanks Brian. All right: your sample worked well with two stacked phases; how would the correct procedure vary if the phases were next to each other? So I guess the idea would be if the two phases were side by side. If I could just go back a couple of slides, so if instead of this sample where I have the phases basically, you
know, on top of each other, they were side by side with each other, how would that vary? Well, it would vary in a number of ways. First of all, it would probably vary when I FIB-ed it: you'd expect the sputter yield to be quite different between the two, which would probably lead to not a very good tip shape from the very beginning, and like I said, getting a good tip shape is pretty important because it affects so much in terms of the projected electric field, and therefore our ability to properly spatially resolve everything. If it varied within the tip and we had a good shape to begin with, it would depend a lot on how well the two phases evaporate relative to each other. If one had a significantly different evaporation field than the other, then we could expect pretty severe aberrations in the data; if I skip all the way back, and I don't know, maybe this is too far to go, we could expect essentially very severe local magnification. That image on the right is actually showing what we get if we have one phase, high field or low field, sandwiched by two identical phases, but you can extrapolate that to imagine just two side-by-side phases of high and low field. We would expect basically a pretty severe deviation from that ideal hemispherical geometry, and probably a very difficult data set to reconstruct, even if you could do it properly. So what I would suggest is, if you had a data set that contained that, you'd want to be very aware of the errors that are probably in there that you would have to correct for, and maybe look to other areas in the data, or to a different data set, if you really need good information, spatial or even compositional.
In that case, at that interface, if it's something that you're doing intentionally, like when you're making your sample using the FIB, I would highly suggest you avoid such an orientation, because you can foresee these problems ahead of time, with the differing evaporation fields side by side, and instead arrange the phases as we did, with one phase stacked on top of the other.