Method
Scrutinix turns link triage into one evidence surface instead of a pile of disconnected lookups.
A scan combines high-confidence reputation checks with resilient local signals so the output stays useful even when one provider is degraded. Each signal resolves independently, and the verdict's confidence reflects how much clean or risky coverage actually backed the final score.
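Independent resolution can be sketched with per-signal timeouts, so one degraded provider only costs its own coverage rather than stalling the scan. This is a minimal sketch under assumed names and payloads, not Scrutinix's real API:

```python
import asyncio

# Hypothetical signal checks; the names and payload fields are
# illustrative assumptions, not Scrutinix's actual schema.
async def check_reputation(url: str) -> dict:
    await asyncio.sleep(5)  # simulate a degraded, slow provider
    return {"signal": "reputation", "covered": True, "hit": False}

async def check_tls(url: str) -> dict:
    await asyncio.sleep(0.01)  # fast local signal
    return {"signal": "tls", "covered": True, "valid": True}

async def resolve_signal(name: str, coro, timeout: float) -> dict:
    # Each signal gets its own timeout, so one slow provider cannot
    # stall the rest of the scan; the miss is recorded as lost coverage.
    try:
        return await asyncio.wait_for(coro, timeout)
    except asyncio.TimeoutError:
        return {"signal": name, "covered": False}

async def scan(url: str) -> list[dict]:
    # Signals run concurrently and resolve independently.
    return await asyncio.gather(
        resolve_signal("reputation", check_reputation(url), timeout=0.1),
        resolve_signal("tls", check_tls(url), timeout=0.1),
    )

results = asyncio.run(scan("https://example.com"))
```

The reputation check times out here, but the scan still returns a usable result set with the coverage gap marked explicitly.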
Evidence model
Weighted first
Browser-protection lists, threat feeds, and multi-engine detections outweigh softer context like domain age or redirect complexity.
Delivery
Live stream
Single and batch scans emit NDJSON events so the UI can show partial results, coverage caveats, and progress in real time.
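The streaming shape can be sketched as one JSON object per line, which is what lets the UI render events incrementally. Event names and fields here are assumptions for illustration, not the documented Scrutinix event schema:

```python
import io
import json

# Sketch of an NDJSON event stream; "signal", "coverage", and
# "verdict" are hypothetical event types, not a published schema.
def emit(stream, event: str, **payload) -> None:
    # One JSON object per line: each event is parseable the
    # moment the newline arrives, with no framing protocol.
    stream.write(json.dumps({"event": event, **payload}) + "\n")

buf = io.StringIO()  # stands in for an HTTP response body
emit(buf, "signal", name="safe_browsing", status="clean")
emit(buf, "coverage", missing=["virustotal"])        # mid-scan caveat
emit(buf, "verdict", label="safe", confidence=0.72)  # final event

lines = buf.getvalue().splitlines()
```

A consumer just splits on newlines and `json.loads` each line as it arrives, so partial results and caveats surface before the final verdict lands.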
Confidence
Coverage-aware
Safe verdicts lose confidence when primary reputation sources time out, while risky verdicts gain confidence when categories agree.
Scoring approach
High-confidence evidence moves the verdict most.
Safe Browsing matches, community feed hits, and stronger multi-engine detections outweigh softer context. DNS posture, TLS quality, WHOIS age, and redirect behavior still matter, but they are supporting evidence rather than the primary driver.
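The weighting can be sketched as a simple linear score. The weights below are illustrative assumptions chosen so that any single reputation hit outweighs all soft context combined; Scrutinix's actual calibration is not shown here:

```python
# Illustrative weights only, not Scrutinix's real calibration.
WEIGHTS = {
    "safe_browsing": 0.40,  # high-confidence reputation
    "threat_feed":   0.25,
    "multi_engine":  0.20,
    "domain_age":    0.05,  # supporting context only
    "redirects":     0.05,
    "tls":           0.05,
}

def risk_score(signals: dict[str, float]) -> float:
    # Each signal contributes its weight times a 0..1 risk value.
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# A Safe Browsing hit plus a young domain: the reputation match
# dominates, the soft signal only nudges the total.
score = risk_score({"safe_browsing": 1.0, "domain_age": 1.0})
```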
Risk-moving signals
- Google Safe Browsing
- Threat feeds: URLhaus and OpenPhish — matches use the exact listed URL string, not directory or hub pages.
- VirusTotal multi-engine detections
- Local ensemble consensus
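The exact-URL rule for feed matches can be sketched as plain set membership: only the listed string triggers, never a parent path or hub page on the same host. The feed contents below are made up for illustration:

```python
# Hypothetical feed snapshot; real URLhaus/OpenPhish data is fetched
# from the feeds themselves.
FEED = {
    "https://evil.example/payload.exe",
    "https://evil.example/kits/login.html",
}

def feed_match(url: str) -> bool:
    # Exact-string semantics: a match means this precise URL was
    # listed, not that something under the same directory was.
    return url in FEED

feed_match("https://evil.example/payload.exe")  # → True
feed_match("https://evil.example/")             # hub page → False
feed_match("https://evil.example/kits/")        # directory → False
```

This keeps a listing for one payload URL from smearing risk onto every page the same host serves.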
Resilience signals
- TLS validation and certificate metadata
- WHOIS age, registrar, and country
- DNS anomalies and passive observations
- Redirect-chain hops and terminal reachability
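Counting redirect-chain hops can be sketched as a bounded walk over observed `Location` responses. This version works on a pre-fetched redirect map so it stays self-contained; the hop limit and map shape are assumptions:

```python
# Walk a pre-fetched map of URL -> redirect target (no live network).
def chain_hops(start: str, redirects: dict[str, str], limit: int = 10) -> list[str]:
    # Follow redirects until a terminal URL or the hop limit,
    # which guards against redirect loops.
    chain = [start]
    while chain[-1] in redirects and len(chain) <= limit:
        chain.append(redirects[chain[-1]])
    return chain

hops = chain_hops(
    "http://a.example",
    {"http://a.example": "https://b.example",
     "https://b.example": "https://c.example"},
)
# chain ends at the terminal URL https://c.example after two hops
```

The chain length and the reachability of its terminal URL then feed into the supporting-evidence side of the score.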
Confidence behavior
Verdicts are coverage-aware, not just score bands.
Partial provider failures reduce confidence even when the headline verdict remains safe. Multiple agreeing risk signals raise confidence because the evidence comes from independent categories.
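Both directions of this behavior can be sketched in one small function. The primary-source set, base values, and step sizes below are illustrative assumptions, not Scrutinix's published thresholds:

```python
# Illustrative coverage-aware confidence; constants are assumptions.
PRIMARY = {"safe_browsing", "threat_feed", "multi_engine"}

def confidence(verdict: str, covered: set[str], risky_categories: set[str]) -> float:
    if verdict == "safe":
        # A safe verdict is only as strong as its clean coverage:
        # each missing primary source costs confidence.
        base = 0.9 - 0.15 * len(PRIMARY - covered)
    else:
        # Agreement across distinct signal categories strengthens
        # a risky verdict.
        base = min(0.99, 0.6 + 0.1 * len(risky_categories))
    return max(0.0, round(base, 2))

# "Safe" with two primary sources timed out: verdict holds, trust drops.
degraded_safe = confidence("safe", {"safe_browsing"}, set())

# "Risky" confirmed by three independent categories: trust rises.
confirmed_risky = confidence("risky", PRIMARY, {"reputation", "feed", "engines"})
```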
Score bands