How Criticaster Works
Our approach to turning aggregated professional reviews into one trustworthy critic score per product.
What sources we use
We pull reviews from established, independent publications that employ professional reviewers: outlets like CNET, TechRadar, Tom's Guide, RTINGS, Wirecutter, and many more.
Big names don't get special treatment, and neither do small ones. A deeply technical review from a niche audio publication carries the same weight as one from a major tech site; we care about the quality of the review, not the size of the logo. We're constantly discovering and adding new sources as we find reputable reviewers doing real, hands-on testing. For a deeper look at our sourcing criteria, see where the reviews come from.
What we exclude
Not all reviews are created equal. We apply strict filters to keep the data clean:
- No e-commerce reviews. Amazon, Best Buy, Walmart, and similar retailer reviews are excluded. Those are customer opinions, not professional analysis.
- No user-generated review platforms. Sites like Trustpilot, Yelp, and G2 are filtered out.
- No manufacturer or brand content. Official product pages and brand blogs are not reviews—they're marketing.
- No social media or video platforms. Reddit threads, YouTube videos, and social posts don't meet our quality bar for structured review data.
- No thin content. Reviews must contain at least 500 characters of substantive analysis. Quick takes and headline summaries are discarded.
Every review is also checked for relevance: if a review isn't actually about the product in question (e.g., a roundup that barely mentions it), it gets removed.
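To make the filtering concrete, here is a rough sketch of how these rules might look in code. The domain lists, the `is_manufacturer_content` flag, and the naive substring relevance check are illustrative assumptions, not our production pipeline:

```python
# Illustrative sketch of the review filters described above.
# Domain lists, field names, and the relevance check are simplified
# stand-ins, not the actual Criticaster pipeline.

EXCLUDED_DOMAINS = {
    "amazon.com", "bestbuy.com", "walmart.com",   # e-commerce reviews
    "trustpilot.com", "yelp.com", "g2.com",       # user-generated platforms
    "reddit.com", "youtube.com",                  # social / video platforms
}

MIN_REVIEW_LENGTH = 500  # characters of substantive analysis

def passes_filters(review: dict, product_name: str) -> bool:
    """Return True if a review survives the exclusion rules."""
    if review["domain"] in EXCLUDED_DOMAINS:
        return False
    if review["is_manufacturer_content"]:          # brand pages, official blogs
        return False
    if len(review["text"]) < MIN_REVIEW_LENGTH:    # thin content
        return False
    # Relevance check: the review must actually be about this product.
    # (A real implementation would be smarter than a substring match.)
    if product_name.lower() not in review["text"].lower():
        return False
    return True
```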
How scores are normalized
Different publications use different scoring systems—some rate out of 5, others out of 10, some use letter grades, and some don't give a score at all. We normalize everything to a consistent 0–100 scale.
Conversion examples
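A minimal sketch of the rescaling, assuming a straightforward linear mapping for numeric scales and an illustrative lookup for letter grades (the grade values shown are assumptions, not a published mapping):

```python
def normalize_score(value: float, scale_max: float) -> float:
    """Linearly rescale a numeric score to the 0-100 scale."""
    return round(value / scale_max * 100, 1)

# Illustrative letter-grade mapping; the exact values are an assumption.
LETTER_GRADES = {"A+": 97, "A": 93, "A-": 90, "B+": 87, "B": 83, "B-": 80}

print(normalize_score(4, 5))     # 4/5   -> 80.0
print(normalize_score(8.5, 10))  # 8.5/10 -> 85.0
print(LETTER_GRADES["B+"])       # B+    -> 87
```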
When a review doesn't include an explicit score, we analyze the full text—looking at the reviewer's conclusion, the balance of praise vs. criticism, and the severity of any issues mentioned—to infer a fair normalized score.
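The text analysis itself is more involved than anything that fits on this page, but as a toy illustration of the underlying idea of weighing praise against criticism (the word lists, weighting, and midpoint fallback below are entirely made up):

```python
# Toy illustration only: estimate a rough 0-100 score from the balance of
# praise vs. criticism in a review's text. The real analysis also weighs
# the reviewer's conclusion and the severity of the issues mentioned.
PRAISE = {"excellent", "great", "impressive", "superb", "outstanding"}
CRITICISM = {"disappointing", "poor", "flawed", "mediocre", "frustrating"}

def infer_score(text: str) -> int:
    words = [w.strip(".,!?") for w in text.lower().split()]
    praise = sum(w in PRAISE for w in words)
    criticism = sum(w in CRITICISM for w in words)
    total = praise + criticism
    if total == 0:
        return 50                       # no signal: sit at the midpoint
    return round(100 * praise / total)  # tilt toward whichever side dominates
```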
How the final score is calculated
The product's Critic Score is the average of all valid normalized review scores, rounded to the nearest whole number. Simple and transparent.
We don't apply hidden weighting, editorial overrides, or adjustments based on advertiser relationships. The score is purely a reflection of what professional critics think.
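In code, the whole calculation is as small as it sounds. A sketch, assuming the normalized scores have already been computed:

```python
def critic_score(normalized_scores: list[float]) -> int:
    """Average of all valid normalized scores, rounded to the nearest whole number."""
    return round(sum(normalized_scores) / len(normalized_scores))

# Example: three reviews normalized to 80, 85, and 91 give a Critic Score of 85.
print(critic_score([80, 85, 91]))  # 85
```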
Pros and cons
We extract pros and cons from every review and then consolidate them across sources. When multiple reviewers mention the same strength or weakness, it gets a higher count—so you can see at a glance what the consensus is, not just one reviewer's opinion.
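A minimal sketch of the consolidation step, assuming the pros have already been extracted and mapped to shared canonical labels (the labels below are invented for illustration):

```python
from collections import Counter

# Pros extracted per review, already mapped to shared canonical labels.
pros_by_review = [
    ["battery life", "display quality"],
    ["battery life", "build quality"],
    ["battery life", "display quality", "speakers"],
]

# Count how many independent reviewers mention each strength,
# so the consensus items rise to the top.
consensus = Counter(pro for review in pros_by_review for pro in review)
for pro, count in consensus.most_common():
    print(f"{pro}: mentioned by {count} reviewers")
```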
Why this is more reliable
Most “best of” articles you find online are written by a single author, often influenced by affiliate revenue and based on limited testing. Here's how Criticaster is different:
- Consensus over opinion. A single review can be an outlier. Aggregating across multiple independent reviewers surfaces the true picture.
- No pay-to-play. We don't accept payment from brands to feature or boost products. The scores are what they are.
- Transparent sourcing. Every product page links directly to the original reviews so you can verify our data yourself.
- Systematic, not editorial. Our pipeline processes every product the same way. There's no editorial hand on the scale picking favorites.
No methodology is perfect. We've written openly about the limitations of aggregated scores—including score compression, normalization trade-offs, and uneven review coverage—because we think transparency matters as much as the scores themselves.
Questions about how it works? Reach out at future@criticaster.com