In the first part, I did an exploratory data analysis of the gamma spectroscopy data. We saw that with a modern scintillation detector, we can not only see that an object is radioactive; with a gamma spectrum, we can also tell why it is radioactive and what kind of isotopes the object contains.
In this part, we will go further, and I will show how to make and train a machine learning model for detecting radioactive elements.
Before we begin, an important warning. All data files collected for this article are available on Kaggle, and readers can train and test their ML models without having real hardware. If you want to test real objects, do it at your own risk. I did my tests with sources that can be legally found and purchased, like vintage uranium glass or old watches with radium dial paint. Please check your local laws and read the safety guidelines about handling radioactive materials. The sources used in this test are not seriously dangerous, but they still need to be handled with care!
Now, let's get started! I will show how to collect the data, train the model, and run it using a Radiacode scintillation detector. For those readers who do not have Radiacode hardware, a link to the data source is added at the end of the article.
Methodology
This article consists of several parts:
- I will briefly explain what a gamma spectrum is and how we can use it.
- We will collect the data for our ML model. I will show the code for collecting the spectra using the Radiacode device.
- We will train the model and check its accuracy.
- Finally, I will make an HTMX-based web frontend for the model, and we will see the results in real time.
Let’s get into it!
1. Gamma Spectrum
This is a short recap of the first part; for more details, I highly recommend reading it first.
Why is the gamma spectrum so interesting? Some objects around us can be slightly radioactive. The sources vary from the naturally occurring radiation of granite in buildings to the radium in some vintage watches or the thorium in modern thoriated tungsten rods. A Geiger counter only shows us the number of radioactive particles that were detected. A scintillation detector shows us not only the number of particles but also their energies. This is a crucial difference: it turns out that different radioactive materials emit gamma rays with different energies, and each material has its own "footprint."
As a first example, I bought this pendant in a Chinese store:
It was marketed as "ion-generating," so I already suspected that the pendant could be slightly radioactive (ionizing radiation, as its name suggests, can produce ions). Indeed, as we can see on the meter display, its radioactivity level is about 1.20 µSv/h, which is 12 times higher than the background (0.1 µSv/h). It is not crazy high and is comparable to the level on an airplane during a flight, but it is still statistically significant 😉
However, by observing only this value, we cannot tell why the object is radioactive. A gamma spectrum will show us what isotopes are inside the object:
In this example, the pendant contains thorium-232, and the thorium decay chain produces radium and actinium. As we can see on the graph, the actinium-228 peak is clearly visible in the spectrum.
As a second example, let's say we have found this piece of rock:
This is uraninite, a mineral that contains a lot of uranium dioxide. Such specimens can be found in some areas of Germany, the Czech Republic, or the US. If we get it in a mineral shop, it probably has a label on it. But in the field, that is usually not the case 😉 With a gamma spectrum, we can see an image like this:
By comparing the peaks with known isotopes, we can tell that the rock contains uranium but, for example, not thorium.
A physical explanation of the gamma spectrum is also fascinating. As we can see on the graph below, gamma rays are actually photons and belong to the same spectrum as visible light:
When some people think that radioactive objects glow in the dark, it is actually true! Every radioactive material is indeed glowing with its own unique "color," but in a part of the spectrum that is far beyond what the human eye can see.
A second fascinating thing is that only 10-20 years ago, gamma spectroscopy was available only to institutions and big labs (in the best case, some used crystals of unknown quality could be found on eBay). Nowadays, thanks to advances in electronics, a scintillation detector can be purchased for the price of a mid-range smartphone.
Now, let's return to our project. As we can see from the two examples above, the spectra of different objects are different. Let's create a machine learning model that can automatically detect various elements.
2. Collecting the Data
As readers can guess, our first challenge is collecting the samples. I am not a nuclear institution, and I do not have access to calibrated test sources like cesium or strontium. However, for our task, that is not required, and some materials can be legally found and purchased. For example, americium is still used in smoke detectors; radium was used for painting watch dials before the 1960s; uranium was widely used in glass manufacturing before the 1950s; and thoriated tungsten rods are still produced today and can be bought on Amazon. Even natural uranium ore can be bought in mineral stores; however, it requires a bit more in the way of safety precautions. An advantage of gamma spectroscopy is that we do not need to disassemble or break the items, and the process is generally safe.
The second challenge is collecting the data. If you work in e-commerce, this is usually not a problem, and every SQL request will return millions of records. Alas, in the "real world," it can be much more challenging, especially if you want to make a database of radioactive materials. In our case, collecting every spectrum takes 10-20 minutes. For every test object, it would be good to have at least 10 records. As we can see, the process can take hours, and having millions of records is not a practical option.
For getting the spectrum data, I will be using a Radiacode 103G scintillation detector and an open-source radiacode library.
A gamma spectrum can be exported in XML format using the official Radiacode Android app, but the manual process is too slow and tedious. Instead, I created a Python script that collects the spectra using random time intervals:
import datetime
import json
import logging
import random
import time

from radiacode import RadiaCode, RawData, Spectrum


def read_forever(rc: RadiaCode):
    """ Read data from the device """
    while True:
        interval_sec = random.randint(10*60, 30*60)
        read_spectrum(rc, interval_sec)

def read_spectrum(rc: RadiaCode, interval: int):
    """ Read and save a spectrum """
    rc.spectrum_reset()
    # Read
    dt = datetime.datetime.now()
    filename = dt.strftime("spectrum-%Y%m%d%H%M%S.json")
    logging.debug(f"Making spectrum for {interval // 60} min")
    # Wait
    t_start = time.monotonic()
    while time.monotonic() - t_start < interval:
        show_device_data(rc)
        time.sleep(0.4)
    # Save
    spectrum: Spectrum = rc.spectrum()
    spectrum_save(spectrum, filename)

def show_device_data(rc: RadiaCode):
    """ Get CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RawData):
            log_str = f"CPS: {int(record.count_rate)}"
            logging.debug(log_str)

def spectrum_save(spectrum: Spectrum, filename: str):
    """ Save spectrum data to a file """
    duration_sec = spectrum.duration.total_seconds()
    data = {
        "a0": spectrum.a0,
        "a1": spectrum.a1,
        "a2": spectrum.a2,
        "counts": spectrum.counts,
        "duration": duration_sec,
    }
    with open(filename, "w") as f_out:
        json.dump(data, f_out, indent=4)
    logging.debug(f"File '{filename}' saved")


rc = RadiaCode()
read_forever(rc)
Some error handling is omitted here for clarity. A link to the full source code can be found at the end of the article.
As we can see, I randomly select a time between 10 and 30 minutes, collect the gamma spectrum data, and save it to a JSON file. Now, I only need to place the Radiacode detector near the object and leave the script running for several hours. As a result, 10-20 JSON files will be saved. I also need to repeat the process for every sample I have. As a final output, 100-200 files can be collected. It is still not millions, but as we will see, it is enough for our task.
3. Training the Model
When the data from the previous step is ready, we can start training the model. As a reminder, all data files are available on Kaggle, and readers are welcome to make their own models as well.
First, let's preprocess the data and extract the features we want to use.
3.1 Data Load
When the data is collected, we should have some spectrum files saved in JSON format. An individual file looks like this:
{
    "a0": 24.524023056030273,
    "a1": 2.2699732780456543,
    "a2": 0.0004327862989157,
    "counts": [ 48, 52, ..., 0, 35 ],
    "duration": 1364.0
}
Here, the "counts" array is the actual spectrum data. Different detectors may have different formats; a Radiacode returns the data in the form of a 1024-channel array. The calibration constants [a0, a1, a2] allow us to convert a channel number into energy in keV (kiloelectronvolts) via E = a0 + a1·ch + a2·ch².
First, let's make a method to load the spectrum from a file:
import json
from dataclasses import dataclass

import numpy as np


@dataclass
class Spectrum:
    """ Radiation spectrum measurement data """
    duration: int
    a0: float
    a1: float
    a2: float
    counts: list[int]

    def channel_to_energy(self, ch: int) -> float:
        """ Convert a channel number to the energy level """
        return self.a0 + self.a1 * ch + self.a2 * ch**2

    def energy_to_channel(self, e: float) -> int:
        """ Convert energy to the channel number (inverse of E = a0 + a1*C + a2*C^2) """
        c = self.a0 - e
        return int(
            (np.sqrt(self.a1**2 - 4 * self.a2 * c) - self.a1) / (2 * self.a2)
        )


def load_spectrum_json(filename: str) -> Spectrum:
    """ Load a spectrum from a JSON file """
    with open(filename) as f_in:
        data = json.load(f_in)
    return Spectrum(
        a0=data["a0"], a1=data["a1"], a2=data["a2"],
        counts=data["counts"],
        duration=int(data["duration"]),
    )
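To make the calibration formula tangible, here is a quick check using the constants from the sample JSON shown above (channel 100 is just an illustrative value, not taken from the article):

# Constants from the sample JSON above; the counts are irrelevant for the conversion
sp = Spectrum(duration=1364, a0=24.524023056030273, a1=2.2699732780456543,
              a2=0.0004327862989157, counts=[0] * 1024)

print(sp.channel_to_energy(100))                        # ~255.8 keV
print(sp.energy_to_channel(sp.channel_to_energy(100)))  # 100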
Now, we can draw it with Matplotlib:
from typing import Optional

import matplotlib.pyplot as plt


def draw_simple_spectrum(spectrum: Spectrum, title: Optional[str] = None):
    """ Draw a spectrum obtained from the Radiacode """
    fig, ax = plt.subplots(figsize=(12, 3))
    ax.spines["top"].set_color("lightgray")
    ax.spines["right"].set_color("lightgray")
    counts = spectrum.counts
    energy = [spectrum.channel_to_energy(x) for x in range(len(counts))]
    # Bars
    ax.bar(energy, counts, width=3.0, label="Counts")
    # X ticks
    ticks_x = [
        spectrum.channel_to_energy(ch) for ch in range(0, len(counts), len(counts) // 20)
    ]
    labels_x = [f"{ch:.1f}" for ch in ticks_x]
    ax.set_xticks(ticks_x, labels=labels_x)
    ax.set_xlim(energy[0], energy[-1])
    plt.ylim(0, None)
    title_str = "Gamma-spectrum" if title is None else title
    ax.set_title(title_str)
    ax.set_xlabel("Energy, keV")
    plt.legend()
    fig.tight_layout()


sp = load_spectrum_json("thorium-20250617012217.json")
draw_simple_spectrum(sp)
The output looks like this:
What can we see here?
As mentioned before, from a standard Geiger counter we only get the number of detected particles. It tells us whether the object is radioactive or not, but not much more. From a scintillation detector, we get the number of particles grouped by their energies, which is practically a ready-to-use histogram! Radioactive decay itself is random, so the longer the collection time, the "smoother" the graph.
3.2 Data Transform
3.2.1 Normalization
Let's look at the spectrum again:
Here, the data was collected for about 10 minutes, and the vertical axis contains the number of detected particles. This approach has an obvious drawback: the number of particles is not a constant. It depends on both the collection time and the "strength" of the source. This means we may get not 600 particles like on this graph, but 60 or 6,000. We can also see that the data is a bit noisy. This is especially visible with a "weak" source and a short collection time.
To eliminate these issues, I decided to use a two-step pipeline. First, I applied a Savitzky-Golay filter to reduce the noise:
from scipy.signal import savgol_filter


def smooth_data(data: np.array) -> np.array:
    """ Apply a 1D smoothing filter to the data array """
    window_size = 10
    data_out = savgol_filter(
        data,
        window_length=window_size,
        polyorder=2,
    )
    return np.clip(data_out, a_min=0, a_max=None)
It is especially useful for spectra with short collection times, where the peaks are not so clearly visible.
Second, I normalized the NumPy array to 0..1 by simply dividing its values by the maximum.
The final "normalize" method looks like this:
def normalize(spectrum: Spectrum) -> Spectrum:
    """ Normalize data to the vertical range of 0..1 """
    # Smooth the data
    counts = np.array(spectrum.counts).astype(np.float64)
    counts = smooth_data(counts)
    # Normalize
    val_norm = counts.max()
    return Spectrum(
        duration=spectrum.duration,
        a0=spectrum.a0,
        a1=spectrum.a1,
        a2=spectrum.a2,
        counts=counts / val_norm,
    )
As a result, spectra from different sources now have a similar scale:
As we can also see, the difference between the two samples is clearly visible.
3.2.2 Data Augmentation
Technically, we are ready to train the model. However, as we saw in the "Collecting the Data" part, the dataset is pretty small; I may have only 100-200 files in total. The solution is to augment the data by adding more synthetic samples.
As a simple approach, I decided to add some noise to the original spectra. But how much noise should we add? I selected the 680 keV channel as a reference value, because this part of the spectrum has no interesting isotopes. Then I added noise with 50% of the amplitude of that channel. An np.clip call ensures that the data values are not negative (for a number of detected particles, negative values make no physical sense).
def add_noise(spectrum: Spectrum) -> Spectrum:
    """ Add random noise to the spectrum """
    counts = np.array(spectrum.counts)
    ch_empty = spectrum.energy_to_channel(680.0)
    val_norm = counts[ch_empty]
    ampl = val_norm / 2
    noise = np.random.normal(0, ampl, counts.shape)
    data_out = np.clip(counts + noise, a_min=0, a_max=None)
    return Spectrum(
        duration=spectrum.duration,
        a0=spectrum.a0,
        a1=spectrum.a1,
        a2=spectrum.a2,
        counts=data_out,
    )


sp = load_spectrum_json("thorium-20250617012217.json")
sp = add_noise(normalize(sp))
draw_simple_spectrum(sp)
The output looks like this:
As we can see, the noise level is not that big, so it does not distort the peaks. At the same time, it adds some diversity to the data.
A more sophisticated approach could also be used. For example, some radioactive minerals contain thorium, uranium, or potassium in different proportions. It would be possible to combine the spectra of existing samples to get some "new" ones, as sketched below.
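This mixing is not part of the pipeline used here; it is only a sketch of the idea, assuming both spectra come from the same detector (so their channels are aligned) and have already been normalized:

def mix_spectra(sp_a: Spectrum, sp_b: Spectrum, weight: float) -> Spectrum:
    """ Blend two normalized spectra into a synthetic sample (hypothetical helper) """
    mixed = weight * np.array(sp_a.counts) + (1.0 - weight) * np.array(sp_b.counts)
    mixed /= mixed.max()  # keep the 0..1 scale
    return Spectrum(
        duration=sp_a.duration,
        a0=sp_a.a0, a1=sp_a.a1, a2=sp_a.a2,
        counts=mixed,
    )

# Example: a 70/30 "thorium + uranium" blend
# sp_mixed = mix_spectra(sp_thorium, sp_uranium, weight=0.7)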
3.2.3 Feature Extraction
Technically, we could use all 1024 values "as is" as input for our ML model. However, this approach has two problems:
- First, it is redundant: we are mostly interested only in specific isotopes. For example, on the last graph, there is a clearly visible peak at 238 keV, which belongs to lead-212, and a less visible peak at 338 keV, which belongs to actinium-228.
- Second, it is device-specific. I want the model to be universal. Using only the energies of the selected isotopes as input allows us to use any gamma spectrometer model.
Finally, I created this list of isotopes:
isotopes = [
    # Americium
    ("Am-241", 59.5),
    # Potassium
    ("K-40", 1460.0),
    # Radium
    ("Ra-226", 186.2),
    ("Pb-214", 242.0),
    ("Pb-214", 295.2),
    ("Pb-214", 351.9),
    ("Bi-214", 609.3),
    ("Bi-214", 1120.3),
    ("Bi-214", 1764.5),
    # Thorium
    ("Pb-212", 238.6),
    ("Ac-228", 338.2),
    ("Tl-208", 583.2),
    ("Ac-228", 911.2),
    ("Ac-228", 969.0),
    # Uranium
    ("Th-234", 63.3),
    ("Th-231", 84.2),
    ("Th-234", 92.4),
    ("Th-234", 92.8),
    ("U-235", 143.8),
    ("U-235", 185.7),
    ("U-235", 205.3),
    ("Pa-234m", 766.4),
    ("Pa-234m", 1000.9),
]

def isotopes_save(filename: str):
    """ Save the isotopes list to a file """
    with open(filename, "w") as f_out:
        json.dump(isotopes, f_out)
Only the spectrum values at these isotope energies will be used as input for the model. I also created a method to save the list into a JSON file; it will be needed to load the model later. Some isotopes, like uranium-235, may be present in minuscule amounts and not be practically detectable. Readers are welcome to improve the list on their own.
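The model-loading code later in this article calls an isotopes_load helper that is not shown; here is a minimal sketch, assuming it simply mirrors isotopes_save:

def isotopes_load(filename: str) -> list:
    """ Load the isotopes list from a JSON file (counterpart of isotopes_save) """
    with open(filename) as f_in:
        return [(name, energy) for name, energy in json.load(f_in)]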
Now, let's create a method that converts a Radiacode spectrum into a list of features:

def get_features(spectrum: Spectrum, isotopes: list) -> np.array:
    """ Extract features from the spectrum """
    energies = [energy for _, energy in isotopes]
    data = [spectrum.counts[spectrum.energy_to_channel(energy)] for energy in energies]
    return np.array(data)
Practically, we converted the list of 1024 values into a NumPy array with only 23 elements, which is a nice dimensionality reduction!
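A quick usage example with the thorium file used earlier (the printed values will, of course, depend on your own data):

sp = normalize(load_spectrum_json("thorium-20250617012217.json"))
features = get_features(sp, isotopes)
print(features.shape)  # (23,)
print(features[:3])    # normalized counts at the Am-241, K-40 and Ra-226 energies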
3.3 Training
Finally, we are ready to train the ML model.
First, let's combine all files into one dataset. Practically, it depends on the samples you have and may look like this:
import glob
from typing import Tuple

all_files = [
    ("Americium", glob.glob("../data/train/americium*.json")),
    ("Radium", glob.glob("../data/train/radium*.json")),
    ("Thorium", glob.glob("../data/train/thorium*.json")),
    ("Uranium Glass", glob.glob("../data/train/uraniumGlass*.json")),
    ("Uranium Glaze", glob.glob("../data/train/uraniumGlaze*.json")),
    ("Uraninite", glob.glob("../data/train/uraninite*.json")),
    ("Background", glob.glob("../data/train/background*.json")),
]

def prepare_data(augmentation: int) -> Tuple[np.array, np.array]:
    """ Prepare data for training """
    x, y = [], []
    for name, files in all_files:
        for filename in files:
            print(f"Processing {filename}...")
            sp = normalize(load_spectrum_json(filename))
            for _ in range(augmentation):
                sp_out = add_noise(sp)
                x.append(get_features(sp_out, isotopes))
                y.append(name)
    return np.array(x), np.array(y)

X_train, y_train = prepare_data(augmentation=10)
As we can see, the y-values contain names like "Americium." I will use a LabelEncoder to convert them into numeric values:

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(y_train)
y_train = le.transform(y_train)

print("X_train:", X_train.shape)
#> (1900, 23)
print("y_train:", y_train.shape)
#> (1900,)
I decided to use the open-source XGBoost model, which is based on gradient tree boosting (original paper link). I will also use GridSearchCV to find the optimal parameters:
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV

bst = XGBClassifier(n_estimators=10, max_depth=2, learning_rate=1)
clf = GridSearchCV(
    bst,
    {
        "max_depth": [1, 2, 3, 4],
        "n_estimators": range(2, 20),
        "learning_rate": [0.001, 0.01, 0.1, 1.0, 10.0],
    },
    verbose=1,
    n_jobs=1,
    cv=3,
)
clf.fit(X_train, y_train)

print("best_score:", clf.best_score_)
#> best_score: 0.99474
print("best_params:", clf.best_params_)
#> best_params: {'learning_rate': 1.0, 'max_depth': 1, 'n_estimators': 9}
Last but not least, I need to save the trained model:

isotopes_save("../models/V1/isotopes.json")
# GridSearchCV refits the best estimator on the full training set; save that one
clf.best_estimator_.save_model("../models/V1/XGBClassifier.json")
np.save("../models/V1/LabelEncoder.npy", le.classes_)
Obviously, we need not only the model itself but also the list of isotopes and labels. If we change something, the data will not match anymore, and the model will produce garbage, so model versioning is our friend!
To verify the results, I need data that the model has not "seen" before. I had already collected several XML files using the Radiacode Android app, and just for fun, I decided to use them for testing.
First, I created a method to load the data:
import xmltodict

def load_spectrum_xml(file_path: str) -> Spectrum:
    """ Load a spectrum from a Radiacode Android app file """
    with open(file_path) as f_in:
        doc = xmltodict.parse(f_in.read())
    result = doc["ResultDataFile"]["ResultDataList"]["ResultData"]
    spectrum = result["EnergySpectrum"]
    cal = spectrum["EnergyCalibration"]["Coefficients"]["Coefficient"]
    a0, a1, a2 = float(cal[0]), float(cal[1]), float(cal[2])
    duration = int(spectrum["MeasurementTime"])
    data = spectrum["Spectrum"]["DataPoint"]
    return Spectrum(
        duration=duration,
        a0=a0, a1=a1, a2=a2,
        counts=[int(x) for x in data],
    )
The XML file contains the same spectrum values that I used in the JSON files, plus some extra data that is not required for our task.
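The test code below uses a generic load_spectrum helper whose implementation is not shown in the article; a minimal sketch, assuming it simply picks the loader based on the file extension:

def load_spectrum(filename: str) -> Spectrum:
    """ Load a spectrum from a JSON or XML file (sketch of the helper used below) """
    if filename.lower().endswith(".xml"):
        return load_spectrum_xml(filename)
    return load_spectrum_json(filename)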
Practically, this is what data collection looks like. This Victorian creamer from the 1890s is about 130 years old, and trust me, you cannot get this kind of data with an SQL request 🙂
This uranium glass is only slightly radioactive (the background level is about 0.08 µSv/h), and it is at a safe level that cannot cause any harm.
The test code itself is simple:

# Load the model
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")
isotopes = isotopes_load("../models/V1/isotopes.json")
le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")

# Load the data
test_data = [
    ["../data/test/background1.xml", "../data/test/background2.xml"],
    ["../data/test/thorium1.xml", "../data/test/thorium2.xml"],
    ["../data/test/uraniumGlass1.xml", "../data/test/uraniumGlass2.xml"],
    ...
]

# Predict
for group in test_data:
    data = []
    for filename in group:
        spectrum = load_spectrum(filename)
        features = get_features(normalize(spectrum), isotopes)
        data.append(features)
    X_test = np.array(data)
    preds = bst.predict(X_test)
    preds = le.inverse_transform(preds)
    print(preds)

#> ['Background' 'Background']
#> ['Thorium' 'Thorium']
#> ['Uranium Glass' 'Uranium Glass']
#> ...
Here, I also grouped the values from different samples and used batch prediction.
As we can see, all results are correct. I was also going to make a confusion matrix, but at least for my relatively small number of samples, all objects were detected properly.
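For readers who want to build one anyway, a minimal sketch with scikit-learn (assuming X_test and the true class names y_test are prepared the same way as the training data):

from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# y_test: true class names, X_test: feature vectors built with get_features
preds = le.inverse_transform(bst.predict(X_test))
cm = confusion_matrix(y_test, preds, labels=le.classes_)
ConfusionMatrixDisplay(cm, display_labels=le.classes_).plot(xticks_rotation=45)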
4. Testing
As a final part of this article, let's use the model in real time with a Radiacode device.
The code is almost the same as at the beginning of the article, so I will show only the essential parts. Using the radiacode library, I connect to the device, read the spectrum once per minute, and use these values to predict the isotopes:
import logging
import time

from radiacode import RadiaCode, RealTimeData, Spectrum

le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")
isotopes = isotopes_load("../models/V1/isotopes.json")
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")

def read_spectrum(rc: RadiaCode):
    """ Read spectrum data """
    spectrum: Spectrum = rc.spectrum()
    logging.debug(f"Spectrum: {spectrum.duration} collection time")
    result = predict_spectrum(spectrum)
    logging.debug(f"Predict: {result}")

def predict_spectrum(sp: Spectrum) -> str:
    """ Predict the isotope from a spectrum """
    features = get_features(normalize(sp), isotopes)
    preds = bst.predict([features])
    return le.inverse_transform(preds)[0]

def read_cps(rc: RadiaCode):
    """ Read CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RealTimeData):
            logging.debug(f"CPS: {record.count_rate:.2f}")

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.DEBUG, format="[%(asctime)-15s] %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S"
    )
    rc = RadiaCode()
    logging.debug("ML model loaded")
    fw_version = rc.fw_version()
    logging.debug(f"Device connected, firmware {fw_version[1]}")
    rc.spectrum_reset()
    while True:
        for _ in range(12):
            read_cps(rc)
            time.sleep(5.0)
        read_spectrum(rc)
Here, I read the CPS (counts per second) values from the Radiacode every 5 seconds, just to ensure that the device works. Every minute, I read the spectrum and feed it to the model.
Before running the app, I placed the Radiacode detector near the object:
This vintage watch was made in the 1950s, and it has radium paint on the digits. Its radiation level is about 5 times the background, but it is still within a safe range (and it is actually 2 times lower than what everyone gets on an airplane during a flight).
Now, we can run the code and see the results in real time:
As we can see, the model's prediction is correct.
Readers who do not have Radiacode hardware can use the raw log files to replay the data. The link is added at the end of the article.
Conclusion
In this article, I explained the process of creating a machine learning model for predicting radioactive isotopes. I also tested the model with some radioactive samples that can be legally purchased.
I also made an interactive HTMX frontend for the model, but this article is already too long. If there is public interest in this topic, it will be published in the next part.
As for the model itself, there are several ways to improve it:
- Adding more data samples and isotopes. I am not a nuclear institution, and my choice (not only from financial and legal perspectives but also considering the free space in my apartment) is limited. Readers who have access to other isotopes and minerals are welcome to share their data, and I will try to add it to the model.
- Adding more features. In this model, I normalized all spectra, and it works well. However, with this approach, we lose the information about the radioactivity level of the objects. For example, uranium glass has a much lower radiation level compared to uranium ore. To distinguish such objects more effectively, we could add the radioactivity level as an additional model feature.
- Testing other model types. It looks promising to use a vector search to find the nearest embeddings. It can also be more interpretable, since the model can show several closest isotopes. A library like FAISS can be useful for that (a minimal sketch follows this list). Another approach is to use a deep learning model, which would also be interesting to test.
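Out of curiosity, here is a minimal sketch of the nearest-neighbor idea with FAISS, reusing the 23-element feature vectors built earlier (the flat L2 index and the number of neighbors are just illustrative choices):

import faiss

# Build an L2 index over the training feature vectors (23 features per sample)
index = faiss.IndexFlatL2(X_train.shape[1])
index.add(X_train.astype(np.float32))

# For each test spectrum, look up the 3 most similar training samples
distances, neighbors = index.search(X_test.astype(np.float32), 3)
for row in neighbors:
    print(le.inverse_transform(y_train[row]))  # class names of the nearest training spectra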
In this article, I used a Radiacode radiation detector. It is a nice device that allows for some interesting experiments (disclaimer: I do not have any income or other commercial interest in its sales). For those readers who do not have Radiacode hardware, all collected data is freely available on Kaggle.
The full source code for this article is available on my Patreon page. This support helps me buy equipment and electronics for future tests. Readers are also welcome to connect via LinkedIn, where I periodically publish smaller posts that are not big enough for a full article.
Thanks for reading.