Manual

The Scoring System

Dregs evaluates every user across four independent dimensions, producing a multidimensional risk profile that catches what single-signal fraud systems miss. Each dimension scores from 0 to 100, and together they tell you not just whether a user is risky — but how they are risky.

Why Multidimensional Scoring

A single fraud signal is easy to spoof. A bot can mimic human mouse movements. A fraudster can use a real email address. A duplicate account can use a different device. Any individual signal, on its own, can be defeated.

Four independent dimensions are much harder to game simultaneously. A sophisticated bot might score well on Behavior but fail on Humanity. A multi-accounter might look authentic but score poorly on Uniqueness. A real person using disposable data might pass the Humanity check but fail Authenticity.

Abuse and fraud reveal themselves differently depending on what you measure. By measuring four distinct things, you see the full picture.

Humanity

Scale: 0-100. High = real human. Low = bot, scraper, or automation.

The Humanity score estimates the probability that a real person is behind the keyboard. It examines device-level signals and behavioral timing patterns that are difficult for automated tools to replicate convincingly.

Signals that feed into Humanity include:

  • User agent characteristics — does the browser string look like a real browser, or a headless automation tool?
  • Event timing patterns — are actions spaced like a human (variable, with pauses) or a script (uniform, instant)?
  • Device capabilities — does the hardware profile match a real consumer device?

Humanity is accurate from day one. Even a single page load provides enough device and timing data to distinguish most bots from humans. Confidence increases as more events arrive.
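The timing signal above can be sketched with a simple heuristic: measure how variable the gaps between events are. This is an illustrative example, not Dregs's actual model; the function name and thresholds are made up for demonstration.

```python
from statistics import mean, pstdev

def timing_regularity(event_times: list[float]) -> float:
    """Coefficient of variation of inter-event gaps.

    Humans produce variable gaps with pauses (high value); scripts tend
    to fire at near-uniform intervals (value near zero). Illustrative
    heuristic only, not the analyzer Dregs actually runs.
    """
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return 0.0  # not enough data to judge
    return pstdev(gaps) / mean(gaps)

# A script firing every 100 ms produces a value near 0;
# irregular human pauses produce a much larger value.
bot_events = [0.0, 0.1, 0.2, 0.3, 0.4]
human_events = [0.0, 0.8, 1.1, 3.9, 4.2]
```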

Authenticity

Scale: 0-100. High = genuine data. Low = fake, disposable, or inconsistent data.

The Authenticity score evaluates the quality and consistency of user-submitted information. It looks for patterns commonly associated with throwaway accounts, fake signups, and identity fabrication.

Signals that feed into Authenticity include:

  • Name structure — does it follow plausible naming patterns, or is it random characters?
  • Email analysis — is the domain disposable? Does the local part match the user's name?
  • User agent consistency — does the claimed browser match the actual device capabilities?
  • Data cross-references — do the various pieces of identity data tell a coherent story?

Authenticity improves with training. Out of the box, Dregs catches obvious red flags like disposable email providers and gibberish names. Over time, as analyzers learn what legitimate data looks like for your specific application, the score becomes more nuanced.
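The out-of-the-box red flags mentioned above can be sketched as follows. The domain list and flag strings here are hypothetical; Dregs maintains its own, much larger datasets.

```python
# Hypothetical list for illustration; real disposable-domain
# datasets contain thousands of entries.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

def email_red_flags(name: str, email: str) -> list[str]:
    """Return obvious Authenticity red flags for a name/email pair."""
    flags = []
    local, _, domain = email.lower().partition("@")
    if domain in DISPOSABLE_DOMAINS:
        flags.append("disposable domain")
    # Does any meaningful part of the claimed name appear in the local part?
    name_parts = [p for p in name.lower().split() if len(p) > 2]
    if name_parts and not any(p in local for p in name_parts):
        flags.append("local part does not match name")
    return flags
```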

Uniqueness

Scale: 0-100. High = unique visitor. Low = duplicate or repeat account.

The Uniqueness score estimates the probability that this is the user's only account. Multi-accounting is one of the most common forms of abuse — free trial fraud, review manipulation, referral gaming, ban evasion — and Uniqueness is the dimension built to catch it.

Signals that feed into Uniqueness include:

  • Device fingerprint sharing — are other identities using the same device?
  • IP overlap — do multiple accounts originate from the same network?
  • Data similarity — do names, emails, or other attributes resemble those of existing users?
  • Session overlap — do browsing sessions connect different accounts?

Uniqueness is accurate from day one. If a new signup shares a device fingerprint with an existing account, Dregs flags it immediately — no training period needed. The score becomes more confident as more relationship data accumulates.
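The day-one fingerprint check works roughly like this sketch: keep an index from device fingerprint to known accounts and look for collisions at signup time. The store and function names are hypothetical, not Dregs's API.

```python
from collections import defaultdict

# Illustrative in-memory index: device fingerprint -> account ids seen with it.
fingerprint_index: dict[str, set[str]] = defaultdict(set)

def register_signup(account_id: str, fingerprint: str) -> set[str]:
    """Record a signup and return any existing accounts on the same device.

    A non-empty result is an immediate Uniqueness red flag; no training
    period is needed because the check is a direct lookup.
    """
    existing = set(fingerprint_index[fingerprint])
    fingerprint_index[fingerprint].add(account_id)
    return existing
```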

Behavior

Scale: 0-100. High = normal behavior. Low = suspicious patterns.

The Behavior score evaluates whether the user's actions match expected patterns. It examines how users interact with your application over time, looking for anomalies that indicate scripted activity, credential stuffing, scraping, or other abuse.

Signals that feed into Behavior include:

  • Session velocity — how many sessions does this user create in a given timeframe?
  • Time-of-day patterns — does the user follow normal usage rhythms, or operate at unusual hours?
  • IP churn — does the user's IP address change more frequently than expected?
  • Navigation patterns — does the user visit pages in a natural sequence?

Behavior requires some data to work well. A brand-new user with one event will not have a meaningful Behavior score yet. After a handful of sessions, the score stabilizes and becomes highly informative.
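The session-velocity signal can be illustrated with a sliding-window check. The window size and threshold below are invented for the example; real limits would be tuned per application.

```python
from collections import deque

def session_velocity_ok(timestamps: list[float],
                        window_secs: float = 3600.0,
                        max_sessions: int = 10) -> bool:
    """Return False if any sliding window of window_secs contains more
    than max_sessions session-creation timestamps. Thresholds are
    illustrative, not Dregs defaults."""
    window: deque[float] = deque()
    for t in sorted(timestamps):
        window.append(t)
        # Drop timestamps that have fallen out of the window.
        while window and t - window[0] > window_secs:
            window.popleft()
        if len(window) > max_sessions:
            return False
    return True
```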

How Scores Are Calculated

Behind each score is a set of AI-assisted analyzers that each examine a specific signal. When Dregs scores an identity, every relevant analyzer runs and produces one or more observations.

Each observation contains:

  • Value — how favorable the signal is; higher values indicate a positive finding, lower values a negative one.
  • Confidence — how certain the analyzer is in its assessment. More data generally means higher confidence.
  • Explanation — a human-readable description of what the analyzer found.
  • Metadata — supporting data points like raw counts, percentages, and thresholds.

The category score is a weighted average of all observations in that category, where the weights are the confidence levels. An observation with confidence 0.9 influences the score much more than one with confidence 0.3. The final score is mapped to the 0-100 scale you see in the dashboard.
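The confidence-weighted average can be sketched in a few lines. This assumes observation values on a 0-1 scale, as implied by the confidence examples above; the exact internal representation and the neutral fallback are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    value: float       # 0.0 (unfavorable) to 1.0 (favorable); assumed scale
    confidence: float  # 0.0 to 1.0

def category_score(observations: list[Observation]) -> float:
    """Confidence-weighted average, mapped to the 0-100 dashboard scale."""
    total_weight = sum(o.confidence for o in observations)
    if total_weight == 0:
        return 50.0  # no signal: fall back to neutral (illustrative choice)
    weighted_sum = sum(o.value * o.confidence for o in observations)
    return round(100 * weighted_sum / total_weight, 1)

# A high-confidence favorable observation dominates a
# low-confidence unfavorable one:
obs = [Observation(value=0.9, confidence=0.9),
       Observation(value=0.2, confidence=0.3)]
```

Note how the 0.9-confidence observation pulls the score toward its value three times as strongly as the 0.3-confidence one, matching the weighting described above.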

You can inspect individual observations for any identity in the dashboard. This transparency is deliberate. You should always be able to understand why Dregs scored someone the way it did.

What Score Ranges Mean

While the right thresholds depend on your application, here are general guidelines for interpreting scores:

  • 90-100 — Very trustworthy. Strong positive signals across the board. Safe to extend full privileges.
  • 70-89 — Normal. Typical of legitimate users. No action needed.
  • 50-69 — Warrants attention. Some signals are weak or ambiguous. May deserve a closer look depending on context.
  • 25-49 — Suspicious. Multiple negative signals. Consider restricting access or flagging for manual review.
  • Below 25 — Likely fraud or abuse. Strong negative signals across multiple analyzers. Act decisively.
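The guideline bands above translate directly into application logic. A minimal sketch, using the manual's thresholds and labels (the function name is hypothetical):

```python
def score_label(score: float) -> str:
    """Map a 0-100 score to the guideline bands from the manual."""
    if score >= 90:
        return "Very trustworthy"
    if score >= 70:
        return "Normal"
    if score >= 50:
        return "Warrants attention"
    if score >= 25:
        return "Suspicious"
    return "Likely fraud or abuse"
```

In practice you would tune these cutoffs to your own application rather than using the guideline numbers verbatim.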

Remember that each dimension tells a different story. An identity with Humanity 95 and Uniqueness 20 is a real person with multiple accounts — very different from Humanity 15 and Uniqueness 90, which is a bot with a unique device.

Continuous Scoring

Scores are not static snapshots. Every time new events arrive for an identity, Dregs re-runs the analyzers and updates the scores. This means:

  • A suspicious user can improve. Someone who starts with a low Authenticity score (disposable email) but exhibits consistently human behavior over weeks will see other scores reflect that legitimacy.
  • A good user can deteriorate. A long-trusted account that suddenly starts churning IP addresses or creating sessions at inhuman speed will see its Behavior score drop.
  • Confidence grows over time. Early scores may be based on limited data. As events accumulate, analyzer confidence increases and scores become more reliable.

This continuous recalculation is automatic. You do not need to trigger it — Dregs re-scores identities within seconds of receiving new events.

Once you understand what scores mean, the next step is acting on them. Alerts let you define thresholds that notify your team automatically. Badges turn score ranges into human-readable labels like "Trusted" or "Suspicious" that you can use in your own application logic.