This page is the public, answer-ready explanation of the SniperIntel scoring framework. It is not the full model, and it is not a feature tour. It exists so a skeptical operator can land here cold and leave with a clear idea of what the score means and what it does not.
What it is
The measurable, public framework.
The signal families we score, how they combine in principle, and how to interpret a resulting score responsibly.
What it is
The limits and edge cases.
Where the model can be wrong, where coverage thins, and the conditions under which a high score still fails.
What it is not
The full internal model.
Weights, thresholds, and the complete rule stack are not published — deliberately. Public methodology, private internals.
What it is not
Financial or trading advice.
The score is a research aid. It never replaces your own live context check, entry rule, or risk discipline.
SniperIntel is not a universal chain explorer. It is narrow by design — a scoring surface for Pump.fun developer wallets. Everything below is measured continuously across tracked creators and compiled into a relative score.
- 01 / How a creator behaves across launches: cadence, consistency, and pattern of repeated deployments across the developer's wallet history.
- 02 / How prior tokens resolved: outcome quality on earlier launches — graduation, durability, and post-launch resolution across tracked windows.
- 03 / Whether the setup is crowded: bot presence, sniper density, and buyer-side compression around the creator's current or expected launches.
- 04 / Whether fee behavior is degrading quality: fee patterns across launches, and whether drift or deterioration is measurably weakening the setup.
- 05 / Whether wallet context looks stronger or weaker than the surface feed: how risk markers, recency, and trend context reshape what the visible feed would otherwise suggest.
The score is a joined read across those families — not a single metric, not a leaderboard of one axis.
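As a sketch only, the "joined read" idea can be expressed as a weighted blend of normalized family scores, where no single axis decides the outcome. The weights, field names, and inversion logic below are invented for illustration — the real combination logic is part of the unpublished internals.

```python
from dataclasses import dataclass

# Hypothetical signal-family scores, each normalized to 0..1.
# Names mirror the public vocabulary; none of this is the real model.
@dataclass
class FamilyScores:
    launch_behavior: float   # cadence and consistency across launches
    outcome_history: float   # how prior tokens resolved
    crowding: float          # bot / sniper density (higher = more crowded)
    fee_quality: float       # fee behavior (higher = cleaner)
    wallet_context: float    # risk markers, recency, trend

# Illustrative weights only — chosen for this example, not published values.
WEIGHTS = {
    "launch_behavior": 0.2,
    "outcome_history": 0.3,
    "crowding": 0.2,
    "fee_quality": 0.15,
    "wallet_context": 0.15,
}

def joined_score(s: FamilyScores) -> float:
    """Blend all families into one relative score; no single metric dominates."""
    parts = {
        "launch_behavior": s.launch_behavior,
        "outcome_history": s.outcome_history,
        "crowding": 1.0 - s.crowding,  # crowding works against the score
        "fee_quality": s.fee_quality,
        "wallet_context": s.wallet_context,
    }
    return sum(WEIGHTS[k] * v for k, v in parts.items())
```

The point of the sketch is structural: a flattering outcome history (weight 0.3 here) can still be dragged down by crowding or weak wallet context, which is exactly why the score is not a leaderboard of one axis.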
A clean scoring system for Pump.fun creators is not a sorting problem — it is a context problem. Any single metric can look flattering until the surrounding signals move against it.
Noise / 01
Creator wallets are noisy. Deployment volume, reused addresses, and repeated throwaway launches drown out the actual patterns that matter.
Isolation / 02
One isolated metric is not enough. Outcome history without crowding is misleading. Fees without trend context are a half read.
Fragments / 03
Generic dashboards show fragments, not a joined decision model. You get rows of numbers, not a framework that weighs them against each other.
Drift / 04
A creator can look clean until crowding, fees, trend, or risk change the picture. Strong history doesn't survive a bad current setup.
04 / The six signal families.
These are the public components of the score. They are explainable, but their weights, thresholds, and interaction logic are not published. Treat this section as the vocabulary of the score, not its recipe.
01 / Family
MIG · GRAD
What it measures
Migration & graduation behavior.
How often a creator moves beyond initial launch noise into stronger post-launch outcomes. A creator that reliably clears early thresholds scores higher on this family than one that churns through launches without resolution.
02 / Family
OUT · HIST
What it measures
Historical outcome quality.
How prior tokens behaved across tracked windows and quality thresholds. We read outcomes as distributions — consistency and depth matter more than any single spike.
03 / Family
FEE · DRIFT
What it measures
Fee patterns.
How fee behavior compares across a creator's launches and whether deterioration is measurably weakening the setup. A wallet degrading on fees tends to degrade on outcomes.
04 / Family
COMP · CROWD
What it measures
Competition & crowding.
Whether too many bots or snipers are already compressing the opportunity. A high-quality creator in a fully crowded setup is a different trade than the same creator early.
05 / Family
RISK · FLAG
What it measures
Scam & risk signals.
Bundle-style, suspicious, insider-style, or clearly low-quality patterns that can invalidate a setup before momentum matters. These markers can suppress an otherwise attractive score.
06 / Family
REC · TREND
What it measures
Recency & trend context.
Whether the developer is strengthening, fading, or too stale to trust. A strong historical score attached to a cold wallet is not the same signal as a strong score trending upward.
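The six families above form a fixed vocabulary. A minimal sketch of that vocabulary as an enum — the codes and descriptions are taken from the labels on this page, but the enum itself is illustrative, not a product API:

```python
from enum import Enum

class SignalFamily(Enum):
    """Public vocabulary of the score, as documented above."""
    MIG_GRAD = "Migration & graduation behavior"
    OUT_HIST = "Historical outcome quality"
    FEE_DRIFT = "Fee patterns"
    COMP_CROWD = "Competition & crowding"
    RISK_FLAG = "Scam & risk signals"
    REC_TREND = "Recency & trend context"
```

Treating the families as a closed set is the useful habit: any claim about the score should name which of these six it leans on.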
05 / What stays proprietary.
The signal families above are public because you need them to interpret the score. Everything that turns those signals into a ranking is deliberately not published. The public framework is explainable enough to audit; the internals stay private so they cannot be commoditized.
Public · Documented here
The vocabulary.
- The six signal families
- What each family measures
- How they are combined in principle
- How to interpret a score responsibly
- Where the model's limits are
Private · Never published
The recipe.
- Exact weights per family
- Threshold values and boundaries
- The full internal rule stack
- Signal interaction logic
- Final promotion / suppression rules
Every scoring system is a compression of reality. Here is where SniperIntel's compression can mislead you, and the conditions you should keep in your head whenever you read a score.
Research system
SniperIntel is a research surface, not financial advice. Nothing on this page or inside the product constitutes an instruction to buy or sell.
No guarantees
Historical quality does not guarantee future token behavior. A strong past distribution is not a promise about the next launch.
Model error
Any scoring system can make mistakes. A score is a probabilistic summary; it will be wrong on some creators at some points in time.
False positives / negatives
They are normal edge conditions, not bugs. Expect them, and design your workflow so a single score is never the only filter.
Coverage & freshness
Coverage is broad but not magical. Freshness depends on upstream data, chain behavior, and processing windows. A score can lag a fast-moving setup.
07 / Reading a score correctly.
A high score means the creator looks stronger relative to measurable public factors. It does not mean the next token is automatically tradable. Here is the same wallet read two different ways — first by a naive scanner, then correctly against live context.
sample · wallet 8Qz…Pmp
readout · illustrative
Score
High relative rank. Historical outcome quality and recent trend both strong.
A · Strong
Competition
Sniper density elevated on recent launches. Buyer crowding compressing available edge.
Crowded
Fees
Fee behavior drifting upward over the last three launches. Not yet broken, but degrading.
Drifting
Recency
Active within current window. Trend context still intact.
Fresh
Conclusion
A strong score is not a green light on its own. Here, the wallet is genuinely strong — but crowding and fee drift mean the setup is no longer clean. The correct read is "watch and wait for a better entry," not "buy the next launch." Crowding, fees, and stale recency each have independent power to kill a high-score setup.
// Note: this readout is illustrative, not a live wallet state. Real workflow happens inside the product surface, against live data.
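The correct read in the conclusion can be sketched as a gating check: a strong score is necessary but never sufficient, and each context flag has independent veto power. The function name, labels, and verdicts below are invented for this sketch and do not reflect any internal rule:

```python
def read_score(score_rank: str, crowding: str, fees: str, recency: str) -> str:
    """Turn a raw score plus live context into an action label.

    Labels mirror the illustrative readout above; the gating order is
    an assumption made for this example, not a published rule.
    """
    if score_rank != "strong":
        return "skip"        # weak score: no edge worth protecting
    if recency == "stale":
        return "skip"        # a cold wallet invalidates trend context
    if crowding == "crowded" or fees == "drifting":
        return "watch"       # strong wallet, dirty setup: wait for a better entry
    return "candidate"       # still requires your own live context check
```

Run against the sample wallet above (strong score, crowded, drifting fees, fresh), this returns "watch" — the same "watch and wait for a better entry" conclusion, reached mechanically.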
Methodology answers the framework question. The other public surfaces answer different ones — workflow, proof, and the actual product.