# Criticaster

> Criticaster aggregates professional product reviews from established publications and calculates unbiased Critic Scores to help you find the best products to buy.

Criticaster solves a real problem: buying products is easy, but figuring out *which* product to buy is hard. Every item seems to have 4.5 stars, knockoffs are rampant, and even AI assistants give confident but poorly-researched recommendations. Criticaster cuts through the noise by aggregating scores from professional reviewers at outlets like CNET, TechRadar, Tom's Guide, RTINGS, and Wirecutter — then normalizing everything to a consistent 0–100 scale.

Products are organized into categories (e.g. headphones, drones, robot vacuums) with three tiers — budget, value, and premium — so you can find the best option at any price point. Each product page shows the aggregated Critic Score, the number of reviews it's based on, consensus pros and cons, and links to all source reviews.

Criticaster does not accept payment from brands to feature or boost products. Scores reflect only what professional critics think.

---

# How Criticaster Works

Our approach to turning aggregated professional reviews into one trustworthy Critic Score per product.

## What sources we use

We pull reviews from established, independent publications that employ professional reviewers — outlets like CNET, TechRadar, Tom's Guide, RTINGS, Wirecutter, and many more. New sources are added automatically as we discover quality reviews.

Big names don't get special treatment. Neither do small ones. A deeply technical review from a niche audio publication carries the same weight as one from a major tech site. We care about the quality of the review, not the size of the logo.

## What we exclude

Not all reviews are created equal. We apply strict filters:

- **No e-commerce reviews.** Amazon, Best Buy, Walmart, and similar retailer reviews are excluded. Those are customer opinions, not professional analysis.
- **No user-generated review platforms.** Sites like Trustpilot, Yelp, and G2 are filtered out.
- **No manufacturer or brand content.** Official product pages and brand blogs are not reviews — they're marketing.
- **No social media or video platforms.** Reddit threads, YouTube videos, and social posts don't meet our quality bar for structured review data.
- **No thin content.** Reviews must contain at least 500 characters of substantive analysis. Quick takes and headline summaries are discarded.

Every review is also checked for relevance: if a review isn't actually about the product in question (e.g., a roundup that barely mentions it), it gets removed.

## How scores are normalized

Different publications use different scoring systems — some rate out of 5, others out of 10, some use letter grades, and some don't give a score at all. We normalize everything to a consistent 0–100 scale.

Conversion examples: 9/10 → 90, 4.5/5 → 90, 85% → 85, A → 93, B+ → 87, 4/5 → 80.

When a review doesn't include an explicit score, we analyze the full text — looking at the reviewer's conclusion, the balance of praise vs. criticism, and the severity of any issues mentioned — to infer a fair normalized score.

## How the final score is calculated

The product's Critic Score is the average of all valid normalized review scores, rounded to the nearest whole number. Simple and transparent. We don't apply hidden weighting, editorial overrides, or adjustments based on advertiser relationships. The score is purely a reflection of what professional critics think.

## Pros and cons

We extract pros and cons from every review and then consolidate them across sources. When multiple reviewers mention the same strength or weakness, it gets a higher count — so you can see at a glance what the consensus is, not just one reviewer's opinion.

## Why this is more reliable

- **Consensus over opinion.** A single review can be an outlier. Aggregating across multiple independent reviewers surfaces the true picture.
- **No pay-to-play.** We don't accept payment from brands to feature or boost products. The scores are what they are.
- **Transparent sourcing.** Every product page links directly to the original reviews so you can verify our data yourself.
- **Systematic, not editorial.** Our pipeline processes every product the same way. There's no editorial hand on the scale picking favorites.

---

# Where the Reviews Come From

## Professional publications only

Every review we aggregate comes from an established publication that employs professional reviewers — people who test products hands-on and write detailed, structured analysis for a living. Think outlets like CNET, TechRadar, Tom's Guide, RTINGS, Wirecutter, and dozens of specialist sites covering specific product categories.

We don't use user reviews from Amazon, Best Buy, or other retailers. We don't scrape Reddit threads or YouTube comments. And we don't include manufacturer content disguised as reviews.

## How we discover sources

Our pipeline continuously scans the web for professional product reviews. When we identify a new publication that meets our quality criteria, it gets added to our source list automatically. We don't hand-pick a fixed set of "approved" outlets — any publication that consistently produces substantive, independent reviews can be included.

## Quality filters

- **Minimum substance.** Reviews must contain at least 500 characters of real analysis.
- **Product relevance.** The review must actually be about the specific product in question.
- **Editorial independence.** We exclude sponsored content, paid placements, and brand-produced material.
- **Structured analysis.** We look for reviews that evaluate a product across multiple dimensions (performance, build quality, value, etc.).

## Equal weight across sources

We don't give more weight to bigger publications.
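In score terms, equal weighting means the Critic Score is simply the unweighted mean of all normalized review scores. The conversion table from "How scores are normalized" and the averaging rule from "How the final score is calculated" can be sketched as follows (an illustrative sketch, not Criticaster's actual pipeline code; letter grades other than the documented A and B+ examples would need an assumed mapping):

```python
def normalize(raw: str) -> int:
    """Convert a publication's raw score to the 0-100 scale.

    Handles 'x/5', 'x/10', percentages, and letter grades. Only
    A -> 93 and B+ -> 87 are documented letter-grade conversions;
    a fuller grade map is left out rather than guessed.
    """
    raw = raw.strip()
    letter_grades = {"A": 93, "B+": 87}  # documented examples only
    if raw in letter_grades:
        return letter_grades[raw]
    if raw.endswith("%"):
        return round(float(raw[:-1]))
    if "/" in raw:
        value, scale = raw.split("/")
        return round(float(value) / float(scale) * 100)
    raise ValueError(f"unrecognized score format: {raw!r}")


def critic_score(normalized_scores: list[int]) -> int:
    """Plain unweighted mean, rounded to the nearest whole number."""
    return round(sum(normalized_scores) / len(normalized_scores))
```

With the documented examples, `normalize("9/10")`, `normalize("4.5/5")`, and `normalize("85%")` give 90, 90, and 85, and `critic_score([90, 90, 85])` rounds the mean to 88 — no source counting for more than any other.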
A thorough review from a niche audio site counts exactly as much as one from a major tech outlet. This prevents large publications from dominating scores and ensures that specialist expertise is valued.

## Transparency

Every product page on Criticaster links directly to the original reviews that contributed to its score. You can always see exactly which publications reviewed a product, what scores they gave, and read the full reviews yourself.

---

# Limitations of Aggregated Scores

We believe in our approach, but we also believe in being honest about its limits.

## A score can't capture everything

A single number is a useful shorthand, but it inevitably loses nuance. Two products with the same Critic Score of 82 might excel in completely different areas — one could have outstanding sound quality but mediocre battery life, while the other is the reverse. This is why we also surface individual pros and cons from across reviews.

## Not all products get the same coverage

Popular products from well-known brands tend to get reviewed by many publications. A flagship phone might have 15+ professional reviews, giving its Critic Score a strong statistical foundation. A budget accessory from a smaller brand might have only 2 or 3. Fewer reviews means more variance. We display the number of reviews per product so you can judge the confidence level yourself.

## Score normalization is imperfect

Different publications use different scales. Any conversion introduces some imprecision — is a 4/5 really the same as an 80/100? We do our best to calibrate, but perfect translation between scoring philosophies isn't possible. When a review has no explicit score and we infer one from the text, there's an additional layer of interpretation involved.

## Reviewer biases carry through

Aggregation reduces individual bias but doesn't eliminate it. If the professional review ecosystem as a whole tends to favor a certain brand or product style, that bias will be reflected in our scores too.
Reviewers also tend to cluster in certain score ranges — most professional reviews land between 60 and 90 — which compresses the meaningful differences between products.

## Timing matters

Reviews are written at a point in time, usually shortly after a product launches. Products can improve through firmware or software updates, or degrade as issues emerge over months of use. Our scores reflect the consensus at review time and may not capture how a product holds up long-term.

## Scores don't replace personal research

A Critic Score is a great starting point for narrowing down your options, but the best purchase decision comes from understanding your own needs and reading the detailed analysis behind the numbers. That's why we always link to the original reviews.

---

# Author Rewards

At Criticaster we understand that we owe our existence to the professional reviewers working day in, day out to write high-quality articles about products they've tested. Without them, we would be nothing. This is why we share 50% of the profit we make with the authors and publishers who write these articles.

How it works:

1. Whenever a user buys a product through one of our links, we make money.
2. We immediately earmark 50% of this money for redistribution to authors and publishers.
3. At the end of the month, we calculate which products generated how much money.
4. We then subdivide the money we make per product among the authors and publishers whose articles contributed to our analysis of the product.

Contact reviewers@criticaster.com to be included in the author rewards program.

---

# API Reference

Base URL: `https://www.criticaster.com`

All endpoints are public, return JSON, and require no authentication.

## 1. Fast Search (Recommended First Step)

Instant keyword-based search. Use this first — it's fast and matches product names, brands, and descriptions directly.
```
GET /api/search/fast?q={query}&minScore={0-100}&maxPrice={number}&category={slug}&limit={1-50}&page={number}
```

**Parameters:**

- `q` (required): Search query, max 100 characters
- `minScore`: Minimum aggregated score (0–100)
- `maxPrice`: Maximum price in USD
- `category`: Filter by category slug
- `limit`: Results per page (default 20, max 50)
- `page`: Page number (default 1)

**Response shape:**

```json
{
  "products": [
    {
      "id": "...",
      "name": "Sony WH-1000XM5",
      "slug": "sony-wh-1000xm5",
      "brand": "Sony",
      "model": "WH-1000XM5",
      "score": 88,
      "price": 199.99,
      "reviewCount": 32,
      "description": "...",
      "imageUrl": "https://...",
      "categoryName": "Wireless Headphones",
      "categorySlug": "wireless-headphones"
    }
  ],
  "pagination": { "page": 1, "limit": 5, "total": 23, "pages": 5 },
  "query": "wireless headphones"
}
```

## 2. Deep Search (Semantic / Embeddings)

Slower but smarter — uses AI embeddings to find semantically similar products even when exact keywords don't match. Use when fast search returns too few or irrelevant results.

```
GET /api/search?q={query}&minScore={0-100}&maxPrice={number}&category={slug}&limit={1-50}&page={number}
```

Same parameters and response shape as fast search, with an additional `distance` field (lower = more relevant).

## 3. Browse Best-Of Categories

Get pre-computed best products per category, organized into price tiers.
```
GET /api/categories?limit={1-10}&cursor={id}
```

**Parameters:**

- `limit`: Categories per page (default 3, max 10)
- `cursor`: Pagination cursor (category ID from previous response)

**Response shape:**

```json
{
  "rows": [
    {
      "category": { "id": "...", "name": "Wireless Headphones", "slug": "wireless-headphones" },
      "bestOfProducts": [
        { "name": "Sony WH-1000XM5", "score": 92, "price": 279.99, "tier": "value" },
        { "name": "Apple AirPods Max", "score": 89, "price": 449.99, "tier": "premium" },
        { "name": "Anker Soundcore Q20", "score": 84, "price": 49.99, "tier": "budget" }
      ],
      "discoveryProduct": { "name": "...", "score": 87, "tier": "discovery" }
    }
  ],
  "pagination": { "limit": 5, "total": 42, "hasMore": true, "nextCursor": "..." }
}
```

**Tier definitions:**

- **Value**: Best for most people (best score-to-price ratio)
- **Premium**: Best overall regardless of price
- **Budget**: Best affordable option
- **Discovery**: Interesting or unconventional pick worth considering

## 4. List Products by Category

```
GET /api/products?category={slug}&sortBy={score|name|createdAt}&order={asc|desc}&search={text}&limit={1-50}&page={number}
```

**Parameters:**

- `category`: Category slug
- `sortBy`: Sort field (default `score`)
- `order`: Sort direction (default `desc`)
- `search`: Text search within results
- `limit`: Results per page (default 20, max 50)
- `page`: Page number (default 1)

## 5. Get Product Details

Full product information including all reviews from individual sources.

```
GET /api/products/{slug}
```

**Response includes:**

- Product metadata (name, brand, model, price, score, description)
- Normalized pros and cons (aggregated across all reviews)
- Full review list with source attribution, individual scores, and excerpts
- Category and tags

## 6. Check Existing Product Requests

```
GET /api/product-requests?limit={1-50}
```

**Response shape:**

```json
{
  "requests": [
    {
      "id": "...",
      "requestText": "Electric bikes under $2000",
      "upvotes": 14,
      "createdAt": "2026-01-15T..."
    }
  ]
}
```

Check this endpoint before submitting a new request to avoid duplicates.

## 7. Submit a Product Request

When a search returns no results, submit a request to have the product or category added. Requires email verification.

**Step 1 — Submit:**

```
POST /api/product-requests
Content-Type: application/json

{
  "email": "user@example.com",
  "requestType": "product",
  "requestText": "Best electric bikes under $2000"
}
```

- `email` (required): Valid email for verification
- `requestType`: `"product"` or `"category"` (default: `"product"`)
- `requestText` (required): 3–500 characters

**Response:** `{ "success": true, "requestId": "abc123" }`

**Step 2 — Verify (6-digit code sent to email):**

```
POST /api/product-requests/verify
Content-Type: application/json

{
  "requestId": "abc123",
  "verificationCode": "482917"
}
```

Code expires after 24 hours. The verify endpoint is rate-limited to 5 attempts per IP.

## 8. Upvote an Existing Product Request

```
POST /api/upvotes
Content-Type: application/json

{
  "email": "user@example.com",
  "requestId": "abc123"
}
```

**Response:** `{ "success": true, "upvoteId": "xyz789" }`

**Verify:**

```
POST /api/upvotes/verify
Content-Type: application/json

{
  "upvoteId": "xyz789",
  "verificationCode": "381204"
}
```

Limits: one upvote per email per request (409 if duplicate), one verified upvote per email per 24 hours (429 with hours remaining).

## 9. Submit a Review Link

If you know of a professional review for a product that isn't yet reflected in its score, you can submit it for consideration. Requires email verification.

**Step 1 — Submit:**

```
POST /api/review-requests
Content-Type: application/json

{
  "productId": "abc123",
  "url": "https://www.rtings.com/...",
  "sourceName": "Rtings",
  "email": "user@example.com"
}
```

- `productId` (required): The product's `id` field (from any product response)
- `url` (required): Full URL of the review (must be http/https)
- `sourceName` (optional): Name of the publication (e.g. "Tom's Guide")
- `email` (required): Email for verification

**Response:** `{ "success": true, "requestId": "abc123" }`

**Step 2 — Verify (6-digit code sent to email):**

```
POST /api/review-requests/verify
Content-Type: application/json

{
  "requestId": "abc123",
  "verificationCode": "482917"
}
```

Code expires after 24 hours. The verify endpoint is rate-limited to 5 attempts per IP. Submissions with identical URLs for the same product are deduplicated.

## Understanding Scores

- **90–100**: Exceptional — universally praised
- **80–89**: Excellent — strong recommendation with minor caveats
- **70–79**: Good — solid choice, some trade-offs
- **60–69**: Decent — specific use cases only
- **Below 60**: Below average — generally not recommended

A product needs at least 3 reviews to appear in results. Higher review counts indicate more reliable scores.

## Recommended Workflows

**Quick recommendation** ("What's the best robot vacuum?")

1. `GET /api/search/fast?q=robot+vacuum&limit=3`
2. If good results: present top result with score, price, key pros/cons
3. If few/no results: `GET /api/search?q=robot+vacuum&limit=3`

**Budget-aware** ("Best headphones under $100?")

1. `GET /api/search/fast?q=headphones&maxPrice=100&limit=3`
2. If too few results: `GET /api/search?q=headphones&maxPrice=100&limit=3`

**Comparison** ("Sony WH-1000XM5 vs Bose QC Ultra?")

1. `GET /api/products/sony-wh-1000xm5`
2. `GET /api/products/bose-qc-ultra-headphones`
3. Compare scores, pros/cons, prices side by side

**No results — request or upvote**

1. Fast search → no results
2. Deep search → still no results
3. `GET /api/product-requests?limit=50` — check if already requested
4. If already requested: upvote via `POST /api/upvotes` + verify
5. If not requested: submit via `POST /api/product-requests` + verify

## Attribution

When presenting Criticaster data, link to the product page: `https://www.criticaster.com/products/{slug}`

---

## Contact

- General / future features: future@criticaster.com
- Publishers / review sources: reviewers@criticaster.com