How we test & rank services
Full transparency into our independent testing process. No black boxes, no hidden criteria.
Service Sign-Up
We sign up for every service with a standard account at the tier most relevant to agencies (usually mid-tier). No special arrangements, no press accounts, no freebies.
Controlled Testing
We submit identical test content to 15+ platforms across tube sites, social media, and forums. Each service gets the same set of URLs to detect and remove, tested at 5-, 15-, and 25-creator scale.
Data Collection
We measure detection speed, takedown filing time, removal success rate, Google de-indexing success, and platform coverage. Every data point is timestamped and verifiable.
Weighted Scoring
Our 42-criteria scoring model weights agency-relevant factors more heavily: multi-creator management (15%), scalability (12%), takedown speed (10%), bulk operations (10%), and white-label capabilities (8%).
Editorial Review
A second reviewer cross-checks every score against the raw benchmark data before publication. Outliers are retested. Corrections are published with a changelog note.
Quarterly Re-Testing
Every quarter, we re-test all services with fresh content. Pricing changes and major feature updates are reflected within 48 hours. Our data is never stale.
42-Criteria Scoring Model
Weighted for agency use cases. Here's what we measure and how much it counts.
Agency Operations
25%
- Multi-creator dashboard
- Team roles & permissions
- Client portal
- Bulk operations
- Per-creator reporting
Takedown Performance
20%
- Detection speed
- Filing speed
- Removal success rate
- Platform coverage
- Persistence tracking
Scalability
15%
- Creator limit flexibility
- Per-creator cost at scale
- Performance under load
- Dashboard usability at volume
Platform Coverage
15%
- Tube site monitoring
- Social media coverage
- Telegram and Reddit removal
- Google de-indexing speed
Value & Pricing
15%
- Per-creator cost
- Pricing transparency
- Contract flexibility
- Feature-to-price ratio
Support & Service
10%
- Response time
- Dedicated account manager
- Escalation process
- Onboarding quality
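Combining the six category weights above into one overall score is straightforward weighted averaging. A minimal sketch (the per-category scores are made-up examples; only the weights come from the table above):

```python
# Category weights from the scoring model above; they sum to 1.0.
WEIGHTS = {
    "agency_operations": 0.25,
    "takedown_performance": 0.20,
    "scalability": 0.15,
    "platform_coverage": 0.15,
    "value_pricing": 0.15,
    "support_service": 0.10,
}


def weighted_score(category_scores: dict[str, float]) -> float:
    """Combine per-category scores (0-100) into one overall 0-100 score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(WEIGHTS[cat] * category_scores[cat] for cat in WEIGHTS)


# Hypothetical service: strong on takedowns, weaker on support.
example = {
    "agency_operations": 82,
    "takedown_performance": 91,
    "scalability": 75,
    "platform_coverage": 88,
    "value_pricing": 70,
    "support_service": 60,
}
print(round(weighted_score(example), 1))
```

Because the weights sum to 100%, a service cannot buy its way up the ranking on one dimension: even a perfect takedown-performance score moves the overall number by at most 20 points.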
How we stay honest
Scores are generated from benchmark data collected during quarterly testing cycles. Referral links on this site generate revenue that funds test accounts and infrastructure, but the scoring pipeline is separate from the commercial side. A provider that performs poorly in our benchmarks will rank low regardless of any referral relationship. Every methodology detail is published here so you can audit the process yourself.