Hybrid Neuro-Symbolic Fraud Detection: Guiding Neural Networks with Domain Rules

March 10, 2026

Summary

Fraud detection datasets are extremely imbalanced, with positive rates below 0.2%. Standard neural networks trained with weighted binary cross-entropy often achieve high ROC-AUC but struggle to identify suspicious transactions under threshold-sensitive metrics. I propose a Hybrid Neuro-Symbolic (HNS) approach that incorporates domain knowledge directly into the training objective as a differentiable rule loss, encouraging the model to assign high fraud probability to transactions with unusually large amounts and atypical PCA signatures. On the Kaggle Credit Card Fraud dataset, the hybrid achieves a ROC-AUC of 0.970 ± 0.005 across five random seeds, compared to 0.967 ± 0.003 for the pure neural baseline under symmetric evaluation. A key practical finding: on imbalanced data, the threshold selection strategy affects F1 as much as model architecture, so both models must be evaluated with the same method for any comparison to be meaningful. Code and reproducibility materials are available on GitHub [5].

The Problem: When ROC-AUC Lies

I had a fraud dataset with a 0.17% positive rate. I trained a weighted BCE network, got a ROC-AUC of 0.96, and someone called it "good". Then I pulled up the score distributions and threshold-dependent metrics. The model had quietly learned that predicting "not fraud" on anything ambiguous was the path of least resistance, and nothing in the loss function disagreed with that decision.

What bothered me wasn't the math. It was that the model had no idea what fraud looks like. A junior analyst on day one could tell you: large transactions are suspicious, transactions with unusual PCA signatures are suspicious, and when both happen together, you should definitely be paying attention. That knowledge just... never makes it into the training loop.

So I ran an experiment. What if I encoded that analyst intuition as a soft constraint directly in the loss function, something the network has to satisfy while also fitting the labels? The result was a Hybrid Neuro-Symbolic (HNS) setup. This article walks through the full experiment: the model, the rule loss, the lambda sweep, and, critically, what a proper multi-seed variance analysis with symmetric threshold evaluation actually shows.

The Setup

I used the Kaggle Credit Card Fraud dataset [2]: 284,807 transactions, 492 of which are fraud (0.172%). The V1–V28 features are PCA components from an anonymized original feature space. Amount and Time are raw. The extreme imbalance is the whole point; this is where standard approaches start to struggle [1].

The split was 70/15/15 train/val/test, stratified. I trained four models and compared them head-to-head:

  • Isolation Forest: contamination=0.001, fit on the full training set
  • One-Class SVM: nu=0.001, fit only on the non-fraud training samples
  • Pure Neural: a three-layer MLP with BCE + class weighting, no domain knowledge
  • Hybrid Neuro-Symbolic: the same MLP, with a differentiable rule penalty added to the loss

Isolation Forest and One-Class SVM serve as a gut check. If a supervised network with 199k training samples can't clear the bar set by an unsupervised method, that's worth knowing before you write up results. A tuned gradient boosting model would likely outperform both neural approaches; this comparison is meant to isolate the effect of the rule loss, not to benchmark against every possible method. Full code for all four is on GitHub [5].
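
For concreteness, here is a minimal sketch of how the split and the two unsupervised baselines might be set up with scikit-learn. The variable names and random_state values are my own assumptions, not necessarily what the repo uses:

import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split
from sklearn.svm import OneClassSVM

df = pd.read_csv("creditcard.csv")
X = df.drop(columns=["Class"]).values   # Time, V1-V28, Amount (Amount is last)
y = df["Class"].values

# 70/15/15 stratified split: carve off 30%, then halve it into val/test
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)

# Isolation Forest: fit on the full training set, labels ignored
iso = IsolationForest(contamination=0.001, random_state=42).fit(X_train)

# One-Class SVM: fit only on the non-fraud training samples
ocsvm = OneClassSVM(nu=0.001).fit(X_train[y_train == 0])

# scikit-learn's score_samples is higher = more normal, so flip the sign
iso_scores   = -iso.score_samples(X_test)
ocsvm_scores = -ocsvm.score_samples(X_test)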

The Model

Nothing exotic. A three-layer MLP with batch normalization after each hidden layer. The batch norm matters more than you might expect: under heavy class imbalance, activations can drift badly without it [3].

import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.BatchNorm1d(128),   # stabilizes activations under heavy imbalance
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.BatchNorm1d(64),
            nn.Linear(64, 1)       # single logit; sigmoid is applied in the loss
        )

    def forward(self, x):
        return self.net(x)

For the loss, BCEWithLogitsLoss with pos_weight, computed as the ratio of non-fraud to fraud counts in the training set. On this dataset that's roughly 577 [4]. A single fraud sample in a batch generates 577 times the gradient of a non-fraud one.

pos_weight = count(y == 0) / count(y == 1) ≈ 577

That weight provides a directional signal when labeled fraud does appear. But the model still has no concept of what "suspicious" looks like in feature space; it only knows that fraud examples, when they do show up, should be heavily weighted. That's different from knowing where to look on batches that happen to contain no labeled fraud at all.
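
As a sketch, assuming y_train holds the 0/1 training labels, the weight and criterion might be built like this:

import torch
import torch.nn as nn

y_train_t = torch.as_tensor(y_train, dtype=torch.float32)
n_neg = (y_train_t == 0).sum()
n_pos = (y_train_t == 1).sum()
pos_weight = n_neg / n_pos               # ≈ 577 on this dataset

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)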

The Rule Loss

Here is the core idea. Fraud analysts know two things empirically: unusually high transaction amounts are suspicious, and transactions that sit far from normal behavior in PCA space are suspicious. I want the model to assign high fraud probabilities to transactions that match both signals, even when a batch contains no labeled fraud examples.

The trick is making the rule differentiable. An if/else threshold (flag any transaction where amount > 1000) is a hard step function. Its gradient is zero everywhere except at the threshold itself, where it is undefined. That means backpropagation has nothing to work with; the rule produces no useful gradient signal and the optimizer ignores it. Instead, I use a steep sigmoid centered at the batch mean. It approximates the same threshold behavior but stays smooth and differentiable everywhere: the gradient is small far from the boundary and peaks near it, which is exactly where you want the optimizer paying attention. The result is a smooth suspicion score between 0 and 1:

def rule_loss(x, probs):
    # x[:, -1]   = Amount  (last column in creditcard.csv after dropping Class)
    # x[:, 1:29] = V1-V28  (PCA components, columns 1-28)
    amount   = x[:, -1]
    pca_norm = torch.norm(x[:, 1:29], dim=1)

    # Steep sigmoids turn "above the batch mean" into a smooth 0-1 score
    suspicious = (
        torch.sigmoid(5 * (amount   - amount.mean())) +
        torch.sigmoid(5 * (pca_norm - pca_norm.mean()))
    ) / 2.0

    # One-sided penalty: fires only when the model is under-confident (< 0.6)
    # on a suspicious transaction
    penalty = suspicious * torch.relu(0.6 - probs.squeeze())
    return penalty.mean()

A note on why PCA norm specifically: the V1–V28 features are the result of a PCA transform applied to the original anonymized transaction data. A transaction that sits far from the origin in this compressed space has unusual variance across multiple original features simultaneously; it is an outlier in the latent representation. The Euclidean norm of the PCA vector captures that distance in a single scalar. This isn't a Kaggle-specific trick. On any dataset where PCA components represent normal behavioral variance, the norm of those components is a reasonable proxy for atypicality. If your features are not PCA-transformed, you would replace this with a domain-appropriate signal: Mahalanobis distance, an isolation score, or a feature-specific z-score.

The relu(0.6 − probs) term is the constraint: it fires only when the model's predicted fraud probability is below 0.6 for a suspicious transaction. For example, a transaction with suspicion score 0.9 predicted at p = 0.3 contributes 0.9 · ReLU(0.6 − 0.3) = 0.27 to the penalty, while the same transaction at p = 0.7 contributes zero. If the model is already confident (prob > 0.6), the penalty is zero. This is intentional: I'm not penalizing the model for being too aggressive on suspicious transactions, only for being too conservative. The asymmetry means the rule can never fight against a correct high-confidence prediction.

Formally, the combined objective is:

L_total = L_BCE + λ · L_rule

L_rule = E[ σ_susp(x) · ReLU(0.6 − p) ]

σ_susp(x) = ½ · [ σ(5·(amount − ā)) + σ(5·(‖V₁₋₂₈‖ − mean‖V‖)) ]

The λ hyperparameter controls how hard the rule pushes. At λ=0 you get the pure neural baseline. The full training loop:

for xb, yb in train_loader:
    xb, yb = xb.to(DEVICE), yb.to(DEVICE)

    logits = model(xb)
    bce    = criterion(logits.squeeze(), yb)   # weighted BCE on the labels
    probs  = torch.sigmoid(logits)
    rl     = rule_loss(xb, probs)              # differentiable domain-rule penalty
    loss   = bce + lambda_rule * rl

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Tuning Lambda

Five values tested: 0.0, 0.1, 0.5, 1.0, 2.0. Each model was trained to its best validation PR-AUC with early stopping at patience=7, seed=42:

Lambda 0.0  →  Val PR-AUC: 0.7580
Lambda 0.1  →  Val PR-AUC: 0.7595
Lambda 0.5  →  Val PR-AUC: 0.7620   ← best
Lambda 1.0  →  Val PR-AUC: 0.7452
Lambda 2.0  →  Val PR-AUC: 0.7504

Best lambda: 0.5

λ=0.5 wins narrowly on validation PR-AUC. The gap between λ=0.0, 0.1, and 0.5 is small, within the range of seed variance, as the multi-seed analysis below shows. The meaningful drop at λ=1.0 and 2.0 suggests that aggressive rule weighting can override the BCE signal rather than complement it. On new data, treat λ=0 as the default and verify that any improvement holds across seeds before trusting it.
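
The sweep itself is a small loop. A sketch, where train_model is a hypothetical helper that wraps the training loop above with early stopping (patience=7) and returns validation-set probabilities:

from sklearn.metrics import average_precision_score

best_lam, best_pr_auc = 0.0, -1.0
for lam in [0.0, 0.1, 0.5, 1.0, 2.0]:
    val_probs = train_model(lambda_rule=lam, seed=42)    # hypothetical helper
    pr_auc = average_precision_score(y_val, val_probs)   # validation PR-AUC
    print(f"Lambda {lam} -> Val PR-AUC: {pr_auc:.4f}")
    if pr_auc > best_pr_auc:
        best_lam, best_pr_auc = lam, pr_auc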

One thing to be careful about with threshold selection: I computed the optimal F1 threshold on the validation set and applied it to the test set, for both models symmetrically. On a 0.17% positive-rate dataset, the optimal decision boundary is nowhere near 0.5. Applying different thresholding strategies to different models means measuring the threshold gap, not the model gap. Both must use the same method:

import numpy as np
from sklearn.metrics import precision_recall_curve

def find_best_threshold(y_true, probs):
    precision, recall, thresholds = precision_recall_curve(y_true, probs)
    f1_scores = 2 * (precision * recall) / (precision + recall + 1e-8)
    best = np.argmax(f1_scores[:-1])   # last PR point has no threshold
    return thresholds[best], f1_scores[best]

# Applied symmetrically to BOTH models, on the val set only
hybrid_thresh, _ = find_best_threshold(y_val, hybrid_val_probs)
pure_thresh,   _ = find_best_threshold(y_val, pure_val_probs)

Results

Model                F1     PR-AUC  ROC-AUC  Recall@1%FPR
Isolation Forest     0.121  0.172   0.941    0.581
One-Class SVM        0.029  0.391   0.930    0.797
Pure Neural (λ=0)    0.776  0.806   0.969    0.878
Hybrid (λ=0.5)       0.767  0.745   0.970    0.878

Table 1: Test-set results, seed=42, both supervised models using val-tuned thresholds. The pure neural baseline is a single retrained run; seed variance is quantified in Table 2 below.

On this seed, the hybrid and pure baseline are competitive on F1 (0.767 vs 0.776) and identical on Recall@1%FPR. The hybrid's PR-AUC is lower on this particular seed (0.745 vs 0.806). The cleanest signal is ROC-AUC: 0.970 for the hybrid vs 0.969 for the pure baseline. ROC-AUC is threshold-independent, measuring ranking quality across all possible cutoffs. That edge is where the rule loss shows up most consistently.
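
Recall@1%FPR in Table 1 is not a stock scikit-learn metric. A minimal helper (my own, not necessarily the repo's) might look like this:

import numpy as np
from sklearn.metrics import roc_curve

def recall_at_fpr(y_true, probs, max_fpr=0.01):
    fpr, tpr, _ = roc_curve(y_true, probs)
    # best recall (TPR) achievable while keeping FPR at or below max_fpr
    return tpr[fpr <= max_fpr].max()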

Precision-Recall Curve

Figure 1: Precision-Recall curve for the Hybrid model (seed=42). PR-AUC = 0.745. Image by Author.

Strong early precision is what you want in a fraud system. The curve holds up well before dropping, meaning the model's top-ranked transactions are genuinely fraud-heavy, not just a lucky threshold. In production you would tune the threshold to your actual cost ratio: the cost of a missed fraud versus the cost of a false alarm. The val-optimized F1 threshold used here is a reasonable middle ground for reporting, not the only valid choice.

Confusion Matrix

Figure 2: Confusion matrix at the validation-tuned threshold (seed=42). Image by Author.

Score Distributions

Figure 3: Predicted probability distributions (seed=42). Non-fraud (blue) clusters near 0; fraud (orange) is pushed higher by the rule penalty. Image by Author.

This histogram is what I look at first after training any classifier on imbalanced data. The non-fraud distribution should spike near zero; the fraud distribution should spread toward 1. The overlap region in the middle is where the model is genuinely uncertain, and that's where your threshold lives.
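
A quick way to produce this plot, as a sketch (test_probs is assumed to be a NumPy array of the model's test-set probabilities; the log scale is my choice, since without it the fraud class is invisible):

import matplotlib.pyplot as plt

plt.hist(test_probs[y_test == 0], bins=50, alpha=0.6, label="non-fraud", log=True)
plt.hist(test_probs[y_test == 1], bins=50, alpha=0.6, label="fraud", log=True)
plt.xlabel("predicted fraud probability")
plt.ylabel("count (log scale)")
plt.legend()
plt.show()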

Variance Analysis: Five Random Seeds

A single-seed result on a dataset this imbalanced is not enough to trust. I ran both models across seeds [42, 0, 7, 123, 2024], applying val-optimized thresholds symmetrically to both in every run:

Seed   42 | Hybrid F1: 0.767  PR-AUC: 0.745 | Pure F1: 0.776  PR-AUC: 0.806
Seed    0 | Hybrid F1: 0.733  PR-AUC: 0.636 | Pure F1: 0.788  PR-AUC: 0.743
Seed    7 | Hybrid F1: 0.809  PR-AUC: 0.817 | Pure F1: 0.767  PR-AUC: 0.755
Seed  123 | Hybrid F1: 0.797  PR-AUC: 0.756 | Pure F1: 0.757  PR-AUC: 0.731
Seed 2024 | Hybrid F1: 0.764  PR-AUC: 0.745 | Pure F1: 0.826  PR-AUC: 0.763
Model            F1 (mean ± std)   PR-AUC (mean ± std)   ROC-AUC (mean ± std)
Pure Neural      0.783 ± 0.024     0.760 ± 0.026         0.967 ± 0.003
Hybrid (λ=0.5)   0.774 ± 0.027     0.740 ± 0.058         0.970 ± 0.005

Table 2: Multi-seed variance across five seeds. Hybrid and pure baseline are statistically indistinguishable on F1 and PR-AUC. The hybrid shows a consistent ROC-AUC advantage across all five seeds.
Figure 4: F1 and PR-AUC mean ± std across five seeds. Differences on threshold-dependent metrics are within the noise range. Image by Author.

Three observations from the variance data. First, the hybrid wins on F1 in 2 of 5 seeds and the pure baseline wins in 3 of 5; neither dominates on threshold-dependent metrics. Second, the hybrid's PR-AUC variance is notably higher (±0.058 vs ±0.026), meaning the rule loss makes some initializations better and some worse: it is a sensitivity, not a guaranteed improvement. Third, the one result that holds without exception: ROC-AUC is higher for the hybrid across all five seeds. That is the cleanest signal from this experiment.
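
The aggregation itself is a few lines. A sketch, assuming a hypothetical run_experiment helper that trains one model for a given seed and returns its test metrics as a dict:

import numpy as np

seeds = [42, 0, 7, 123, 2024]
results = [run_experiment(seed=s, lambda_rule=0.5) for s in seeds]
for metric in ("f1", "pr_auc", "roc_auc"):
    vals = np.array([r[metric] for r in results])
    print(f"{metric}: {vals.mean():.3f} ± {vals.std():.3f}")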

Why Does the Rule Loss Help ROC-AUC?

ROC-AUC is threshold-independent: it measures how well the model ranks fraud above non-fraud across all possible cutoffs. A consistent improvement across five seeds is a real signal. Here is what I think is happening.

With 0.172% fraud prevalence, most 2048-sample batches contain only 3–4 labeled fraud examples (0.00172 × 2048 ≈ 3.5). The BCE loss receives almost no fraud-relevant gradient on the majority of batches. The rule loss fires on every suspicious transaction regardless of label; it generates gradient signals on batches that would otherwise tell the optimizer almost nothing about fraud. This gives the model consistent direction throughout training, not just on the rare batches where labeled fraud happens to appear.

The penalty is also feature-selective. By pointing the model specifically toward amount and PCA norm, the rule reduces the chance that the model latches onto irrelevant correlations in the other 28 dimensions. It functions as soft regularization over the feature space, not just the output space.

The one-sided relu matters too. I'm not penalizing the model for being too aggressive on suspicious transactions, only for being too conservative. The rule can't fight against a correct high-confidence prediction; it can only push up underconfident ones. That asymmetry is deliberate.

The lesson is not that rules replace learning. It's that rules can guide it, especially when labeled examples are scarce and you already know something about what you're looking for.

On Threshold Evaluation in Imbalanced Classification

One finding from this experiment is worth its own section because it applies to any imbalanced classification problem, not just fraud.

On a dataset with a 0.17% positive rate, the optimal F1 threshold is nowhere near 0.5. A model can rank fraud almost perfectly and still score poorly on F1 at a default threshold, simply because the decision boundary must be calibrated to the class imbalance. This means that if two models are evaluated with different thresholding strategies (one at a fixed cutoff, the other with a val-optimized cutoff), you aren't comparing models. You're measuring the threshold gap.

The practical checklist for a clean comparison on imbalanced data:

  • Both models evaluated with the same thresholding strategy
  • Threshold chosen on validation data, never on test data
  • PR-AUC and ROC-AUC reported alongside F1; both are threshold-independent
  • Variance across multiple seeds to separate real differences from lucky initialization

Things to Watch Out For

Batch-relative statistics. The rule computes "high amount" and "high PCA norm" relative to the batch mean, not to a fixed population statistic. During training with large batches (2048) and stratified sampling, batch means are stable enough. In online inference, where transactions are scored individually, freeze these statistics to training-set values; otherwise the "suspicious" boundary shifts with every call.
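
A sketch of what that freeze might look like at inference time; the tensor names and module-level constants are my own assumptions:

# Computed once on the training split, then fixed for every inference call
AMOUNT_MEAN   = X_train_t[:, -1].mean()
PCA_NORM_MEAN = torch.norm(X_train_t[:, 1:29], dim=1).mean()

def suspicion_score(x):
    amount   = x[:, -1]
    pca_norm = torch.norm(x[:, 1:29], dim=1)
    return (
        torch.sigmoid(5 * (amount   - AMOUNT_MEAN)) +
        torch.sigmoid(5 * (pca_norm - PCA_NORM_MEAN))
    ) / 2.0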

PR-AUC variance increases with the rule loss. Hybrid PR-AUC ranges from 0.636 to 0.817 across seeds, versus 0.731 to 0.806 for the pure baseline. A rule that helps on some initializations and hurts on others requires multi-seed validation before drawing conclusions. Single-seed results are not enough.

High λ degrades performance. λ=1.0 and 2.0 show a meaningful drop in validation PR-AUC. Aggressive rule weighting can override the BCE signal rather than complement it. Start at λ=0.5 and verify on your own data before going higher.

A natural extension would make the rule weights learnable rather than fixed at 0.5/0.5:

# Learnable mixture weights, registered on the model so the optimizer sees them
self.rule_w = nn.Parameter(torch.tensor([0.5, 0.5]))

# Inside rule_loss: softmax keeps the two weights positive and summing to 1
w = torch.softmax(self.rule_w, dim=0)
suspicious = (
    w[0] * torch.sigmoid(5 * (amount   - amount.mean())) +
    w[1] * torch.sigmoid(5 * (pca_norm - pca_norm.mean()))
)

This lets the model decide whether amount or PCA norm is more predictive for the data at hand, rather than hard-coding equal weights. This variant has not been run yet; it's the next thing on the list.

Closing Thoughts

The rule loss does something real: the ROC-AUC improvement is consistent and threshold-independent across all five seeds. The improvement on threshold-dependent metrics like F1 and PR-AUC is within the noise range and depends on initialization. The honest summary: domain rules injected into the loss function can improve a model's underlying score distributions on rare-event data, but the magnitude depends heavily on how you measure it and how stable the improvement is across seeds.

If you work in fraud detection, anomaly detection, or any domain where labeled positives are rare and domain knowledge is rich, this pattern is worth experimenting with. The implementation is simple: a handful of lines on top of a standard training loop. The more important discipline is measurement: use symmetric threshold evaluation, report threshold-independent metrics, and always run multiple seeds before trusting a result.

The repo has the full training loop, lambda sweep, variance analysis, and eval code [5]. Download the CSV from Kaggle [2], drop it in the same directory, and run app.py. The numbers above should reproduce; if they don't on your machine, open an issue and I'll take a look.

References

[1] A. Dal Pozzolo, O. Caelen, R. A. Johnson and G. Bontempi, Calibrating Probability with Undersampling for Unbalanced Classification (2015), IEEE SSCI. https://dalpozz.github.io/static/pdf/SSCI_calib_final_noCC.pdf

[2] ULB Machine Learning Group, Credit Card Fraud Detection Dataset (Kaggle). https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud (Open Database License)

[3] S. Ioffe and C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (2015), arXiv:1502.03167. https://arxiv.org/abs/1502.03167

[4] PyTorch Documentation, BCEWithLogitsLoss. https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html

[5] Experiment code and reproducibility materials. https://github.com/Emmimal/neuro-symbolic-fraud-pytorch/

Disclosure

This article is based on independent experiments using publicly available data (the Kaggle Credit Card Fraud dataset) and open-source tools (PyTorch). No proprietary datasets, company resources, or confidential information were used. The results and code are fully reproducible as described, and the GitHub repository contains the complete implementation. The views and conclusions expressed here are my own and do not represent any employer or organization.
