Identity scoring for every user, in real time.

Dregs computes continuous identity scores for every user in your application — actually four of them, across Humanity, Authenticity, Uniqueness, and Behavior. Each is a 0–100 value derived from a pipeline of AI-assisted analyzers running over device, identity, and event data, updated as new events arrive.

The result: a single, current fraud-risk view per user identity. Surface the scores in your dashboard, route them to alerts and escalations, or send them straight to your application via webhook to drive automated abuse prevention.

Four Identity Scores, Not One

Most fraud detection products give a user a single risk score. Dregs gives every identity four, because "risky" usually means one of four very different things — and conflating them makes false positives and false negatives both worse. A user with a clean device but a fake-looking profile is a different problem from a user with a real profile but a shared device. Identity scoring on a single dimension can't tell those apart; identity scoring on four can.
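To make the contrast concrete, here is an illustrative sketch (the score shapes below are hypothetical examples, not actual Dregs API output) of two identities that a single risk score would conflate:

```javascript
// Illustrative only: two identities a single combined score would conflate.
const cleanDeviceFakeProfile = { humanity: 88, authenticity: 12, uniqueness: 81, behavior: 74 };
const realProfileSharedDevice = { humanity: 91, authenticity: 85, uniqueness: 9, behavior: 70 };

// Pick the dimension that best explains why an identity looks risky.
function weakestDimension(scores) {
  return Object.entries(scores).reduce((min, entry) =>
    entry[1] < min[1] ? entry : min
  )[0];
}
```

With four dimensions, `weakestDimension` points at "authenticity" for the first identity and "uniqueness" for the second — two different problems that a single averaged score would blur together.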

Humanity Score

The probability that a real human is controlling the browser. Low Humanity means bot, scraper, or automation framework — driven by signals like canvas rendering, WebGL fingerprints, automation framework signatures, and inhuman timing patterns.

Authenticity Score

Whether the user's identity data looks real. Low Authenticity means disposable email domains, keyboard-mashed names, fake-looking profile data, and other tells that the user didn't bother to look legitimate.

Uniqueness Score

Whether this is the user's only account. Low Uniqueness means shared devices, shared IPs, behavioral overlap with other identities, and other evidence that multiple accounts trace back to the same person.

Behavior Score

Whether usage patterns match a legitimate user journey. Low Behavior means unnaturally efficient navigation, repetitive automation-like patterns, or activity that diverges from the profile of your real customers.

How Identity Scoring Works in Dregs

Dregs ships with a pipeline of analyzers — small, focused pieces of logic, each looking for one specific signal. Some are written in Java for performance, most in JavaScript for transparency and customizability. Each analyzer runs over an identity's events and devices and produces a list of observations.

Each observation has:

A category: which of the four scores it feeds into.
A value: 0.0 to 1.0.
A confidence: how strongly the analyzer stands behind the observation.
An explanation: a human-readable string describing what the analyzer found.

Dregs aggregates observations within each category using weighted averages of value × confidence, producing the four 0–100 scores. The full list of observations is preserved, so you can always trace a score back to the analyzers that produced it.
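The exact weighting is internal to Dregs, but one plausible reading of "weighted average of value × confidence" is a confidence-weighted mean, sketched here:

```javascript
// Sketch of per-category aggregation, assuming a confidence-weighted mean.
// The real Dregs weighting is internal; this is one plausible reading of
// "weighted averages of value × confidence".
function aggregateCategory(observations, category) {
  const relevant = observations.filter((o) => o.category === category);
  if (relevant.length === 0) return null; // no signal for this category yet

  let weighted = 0;
  let totalConfidence = 0;
  for (const o of relevant) {
    weighted += o.value * o.confidence;
    totalConfidence += o.confidence;
  }
  // Scale the 0.0-1.0 weighted mean up to the 0-100 score range.
  return Math.round((weighted / totalConfidence) * 100);
}
```

Under this scheme, a high-confidence observation moves the score far more than a tentative one, and categories with no observations yet simply have no score.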

See Identity Scoring in Action

Each identity card in the dashboard shows the four scores, recent observations, and the badges and escalations that would fire based on your rules.

Real-Time and Continuous

Identity scoring isn't a one-time check at signup. Every event a user generates can change their scores — and often will. A user who looked legitimate on day one and starts cycling through sessions on day three will see their Behavior score drop. A user whose device suddenly appears under a second account will see their Uniqueness score crater.

All scoring runs asynchronously after events arrive. The REST API always returns the most recent persisted scores per identity, so your application logic can decide based on the current view of risk rather than a cached snapshot.
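A sketch of what "decide on the current view" can look like in practice. The endpoint path and response shape here are assumptions for illustration, not the documented Dregs API:

```javascript
// Fetch the most recent persisted scores and decide from them.
// NOTE: the URL and response shape are hypothetical placeholders.
async function currentRisk(identityId, apiKey) {
  const res = await fetch(`https://api.dregs.example/identities/${identityId}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const { scores } = await res.json();
  return requiresReview(scores);
}

// Pure decision logic, separated so it can be tested without the network.
// Thresholds are illustrative; you choose your own.
function requiresReview(scores) {
  return scores.humanity < 30 || scores.uniqueness < 30;
}
```

Keeping the decision logic pure and fetching fresh scores at decision time means your app reacts to the latest risk picture rather than a stale snapshot.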

From Identity Score to Action

An identity score by itself is diagnostic; it earns its keep when it drives action.

Dashboard surfacing: Sort, filter, and triage users by score. Investigate suspicious accounts; ignore the obviously legitimate ones.
Badges: Auto-label identities based on score thresholds. "Likely Bot," "Freeloader," "Trusted Customer" — your rules, your labels.
Alerts and escalations: Notify your team via Slack, email, or webhook when a score crosses a threshold. Track the resolution lifecycle inside Dregs.
Application webhooks: Send the score back to your own app and act on it directly: shadow ban, gate a feature, require extra verification, or block outright.
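The webhook path is the most direct of the four. A minimal sketch of a consumer, assuming a payload that carries the identity's scores (the payload shape and thresholds here are illustrative assumptions, not the documented webhook format):

```javascript
// Map incoming scores to an application-side action.
// Thresholds and action names are examples; tune them to your product.
function actionForScores(scores) {
  if (scores.humanity < 30) return 'block';                  // almost certainly automated
  if (scores.uniqueness < 30) return 'require-verification'; // likely duplicate account
  if (scores.authenticity < 40) return 'shadow-ban';         // fake-looking profile data
  return 'allow';
}

// Example wiring inside your webhook HTTP handler (pseudocode):
// const { identityId, scores } = JSON.parse(requestBody);
// applyAction(identityId, actionForScores(scores));
```

Ordering the checks from hardest to softest response keeps the most severe signal in control when several scores are low at once.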

Customizing the Scoring

The four score categories are fixed, but what feeds into them is extensible. On the Advanced plan, you can write custom analyzers in JavaScript that produce observations against any category, using the same APIs as the built-in analyzers. This lets you encode domain-specific fraud signals — things only your team knows about your users — and have them flow through the same scoring pipeline as everything else.
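A sketch of what such an analyzer might look like. The contract assumed here — a function over an identity's events that returns a list of observations — is inferred from this page; consult the Advanced-plan analyzer documentation for the real API:

```javascript
// Hypothetical custom analyzer: flag heavy coupon redemption as abusive behavior.
// The function signature and event shape are assumptions for illustration.
function couponStackingAnalyzer(identity) {
  const couponEvents = identity.events.filter((e) => e.type === 'coupon_redeemed');
  if (couponEvents.length < 5) return []; // nothing suspicious to report

  return [{
    category: 'behavior',
    value: 0.1, // low value: this pattern looks abusive
    confidence: Math.min(1, couponEvents.length / 10), // more redemptions, more certainty
    explanation: `Redeemed ${couponEvents.length} coupons; typical customers redeem far fewer.`,
  }];
}
```

Because the analyzer emits ordinary observations, its findings flow through the same aggregation as the built-in signals — no special casing needed downstream.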

You can also configure custom datasets — lists or mappings stored in Dregs and queryable from your analyzers. Banned domains, known-good IPs, internal employee identifiers — anything you want your custom logic to reference.
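A sketch of an analyzer consulting such a dataset. A plain Map stands in for the Dregs-hosted dataset here, since the real lookup API is not shown on this page; treat the shapes as assumptions:

```javascript
// A Map standing in for a Dregs-hosted "banned domains" custom dataset.
const bannedDomains = new Map([
  ['mailinator.com', true],
  ['tempmail.example', true],
]);

// Hypothetical analyzer: flag identities registered with a banned email domain.
function bannedDomainAnalyzer(identity) {
  const domain = identity.email.split('@')[1];
  if (!bannedDomains.has(domain)) return [];

  return [{
    category: 'authenticity',
    value: 0.0,     // as suspicious as it gets
    confidence: 1.0, // list membership is a hard fact, not a heuristic
    explanation: `Email domain ${domain} is on the banned-domains dataset.`,
  }];
}
```

List-membership checks like this are a good fit for confidence 1.0: unlike behavioral heuristics, the signal is binary and certain.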

Where Identity Scoring Helps

Identity scoring is the central mechanism behind every abuse pattern Dregs handles. Each pattern shows up as a distinctive combination of low scores across the four dimensions.

Included With Every Dregs Plan

Identity scoring is the core of Dregs. It's part of every plan, billed against the active-identity meter — plans start at $17/month. Custom analyzers and datasets are available on the Advanced plan. See the pricing page for the details.

Frequently Asked Questions

Q: What's the difference between identity scoring, user fraud scoring, and risk scoring?

A: The terms overlap heavily. Identity scoring emphasizes that the score belongs to a specific user identity in your application — a per-user view rather than a per-event or per-IP one. User fraud scoring and risk scoring describe what the score measures (fraud likelihood, risk level). Dregs's identity scoring is all of these at once: a continuous fraud risk score per user identity, computed across four dimensions (Humanity, Authenticity, Uniqueness, Behavior). Some products score IPs, transactions, or sessions instead — Dregs scores the user, because that's the entity you actually want to make decisions about.

Q: How does Dregs score new users with no event history?

A: Scoring runs on whatever data is available. A user with one event still gets a score — the Authenticity score evaluates the email and profile data they submitted, the Uniqueness score looks at their device and IP, and the Humanity score checks for automation signatures. Confidence increases as more events arrive, but Dregs doesn't need a training period or a minimum event count to produce useful scores. The Behavior score is the one that genuinely benefits from more data, since it compares activity patterns over time.

Q: Are scores deterministic, or do they change over time?

A: Scores update continuously as new events arrive. A user's Uniqueness score can drop the moment Dregs detects their device shared with another account. A Behavior score can climb as the user demonstrates legitimate activity patterns, or fall if usage starts looking automated. Scores are a current view of risk, not a one-time verdict — you should design your application logic to react to the latest score rather than caching it.

Q: Can I customize the scoring weights or thresholds?

A: The four score categories themselves are fixed: Humanity, Authenticity, Uniqueness, Behavior. Within each category, Dregs aggregates observations from multiple analyzers using weighted averages. On the Advanced plan you can write custom analyzers in JavaScript that contribute their own observations to any category, which effectively lets you customize what feeds into each score. For thresholds — when scores trigger alerts or badges — you set those yourself through badge rules and alert rules in the dashboard.

Q: What's the difference between a score, a badge, and an escalation?

A: A score is a continuous 0–100 value. A badge is a label automatically applied to an identity when it matches your rules — for example, 'Likely Bot' when Humanity drops below 30. An escalation is a stateful incident opened when an identity crosses a threshold you've configured, with an open/acknowledged/closed lifecycle for your team to work through. Scores describe risk, badges classify it, escalations are how your team actions it.

Q: How does Dregs handle false positives?

A: Two main mechanisms. First, the four-score system means a single signal rarely tanks an identity on its own — a user with a low Uniqueness score because they share a household device still gets credit for high Authenticity and Humanity. Second, you can mark specific identities or devices as 'disregarded' to exclude them from analysis (useful for your own team accounts, QA bots, and known good users) and Dregs automatically re-scores anyone affected by that decision.

Q: Can I see why a particular user got a particular score?

A: Yes. Every score is computed from a list of observations, each with a category, a value (0.0 to 1.0), a confidence, and an explanation string from the analyzer that produced it. The dashboard shows the full observation list per identity, including which analyzers contributed and what they found. The same data is available through the API on the identity endpoints, so you can surface the reasoning in your own admin tools if needed.

An identity score for every user, automatically.

Drop the Dregs tracking script into your application and start seeing Humanity, Authenticity, Uniqueness, and Behavior identity scores from the very first event.

Schedule a Demo