
How we test

Every review on AgencyAIStack is built from at least 14 days of hands-on usage on real client work — not vendor demos, not press releases, not feature checklists. This page documents the methodology so you can decide how much weight to give our verdicts.

The minimum bar for a published review

  • 14 days minimum of active usage by at least one senior strategist on our team.
  • 3 real client engagements minimum — not personal projects, not test accounts, not sample data.
  • Paid plan — we pay for our own seats. No "press access" or comp'd accounts. Our subscriptions are billable to AgencyAIStack the company, not to vendors.
  • Documented test plan — what we used the tool for, what we measured, and how we cross-checked results.

If a tool can't clear those bars, we don't publish. Tools that we tried and abandoned mid-test get a one-paragraph mention on the relevant category page, not a full review.

The scoring rubric

Every review is scored on a 0–10 scale across five dimensions. The published headline rating is a weighted average:

  • Output quality (30%) — does the tool produce work we'd ship to a paying client without significant rework? Measured against our internal "shippable / needs-edit / rewrite" rubric.
  • Agency fit (25%) — multi-seat pricing math, client permission models, white-labeling, account isolation, integration with our existing stack (ClickUp, Slack, Looker, etc.).
  • Time-to-value (20%) — onboarding speed, learning curve for a junior strategist, time-to-first-useful-output.
  • Pricing scaling (15%) — cost economics at 1, 5, 10, and 25 seats. Per-seat upcharges, tier jumps, hidden enterprise gates.
  • Support & reliability (10%) — uptime in our test window, support response time on a paid plan, public roadmap transparency.
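The headline rating math above can be sketched in a few lines. The weights are the ones listed; the dimension scores below are purely hypothetical, for an imaginary tool:

```python
# Rubric weights for the five dimensions above (must sum to 1.0).
WEIGHTS = {
    "output_quality": 0.30,
    "agency_fit": 0.25,
    "time_to_value": 0.20,
    "pricing_scaling": 0.15,
    "support_reliability": 0.10,
}

def headline_rating(scores: dict[str, float]) -> float:
    """Weighted average of 0-10 dimension scores, rounded to one decimal."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 1)

# Hypothetical dimension scores (not from any published review):
example = {
    "output_quality": 8.0,
    "agency_fit": 9.0,
    "time_to_value": 7.5,
    "pricing_scaling": 6.0,
    "support_reliability": 8.5,
}
print(headline_rating(example))  # 7.9
```

Note how the 30% weight on output quality means a tool that produces mediocre work can't buy its way to a high headline score with cheap seats and fast onboarding.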

Account isolation

We test in isolated environments to prevent cross-contamination. Each tool gets:

  • A dedicated Google Workspace account with an anonymized identity, so vendor-side personalization doesn't bias our results.
  • A separate billing card, so vendors can't link our test accounts back to AgencyAIStack and offer special treatment.
  • A clean browser profile per tool — no shared session state, no cookies leaking between tests.

This means our test results reflect what a normal new customer would experience, not what a tooling reviewer with vendor relationships would see.

What "Editor's Pick" means

A small subset of reviews carry an Editor's Pick flag. To earn it, a tool must:

  • Score 8.5+ on the headline rating.
  • Be the tool we'd recommend to a peer agency in the same category, unprompted.
  • Have survived at least one full retest cycle (6 months) without its score dropping.
  • Have transparent pricing and a fair refund policy. Tools with predatory billing practices are disqualified regardless of score.

Retesting cadence

Tools change. Pricing tiers shift, features land, AI models are swapped underneath us. We revisit every published review on a fixed cadence:

  • Every 6 months — a full retest with fresh client accounts and the current pricing tiers. The original review is updated in place; the previous review's exact text is archived in our internal CMS for accountability.
  • Within 30 days of any major vendor change — new pricing tier, ownership change, public outage longer than 24 hours, or a security incident. We add a dated note at the top of the review.
  • On reader request — if multiple readers email us about something we got wrong, we re-test that specific aspect and publish the result.

What disqualifies a tool

We won't publish a review of a tool that:

  • Refuses to allow refunds on annual plans, even within a reasonable trial window.
  • Has a documented history of selling user data to advertisers beyond standard cookie-based analytics.
  • Has been the subject of an FTC enforcement action or active class action in the past 24 months.
  • Uses dark patterns to prevent cancellation. We test the cancel flow on every paid tool, and tools that fail this test are flagged publicly.

How to flag a methodology issue

Our methodology is a living document. If you spot a flaw — something we should be measuring that we aren't, a weighting that doesn't match agency reality, a category we under-cover — email editor@agencyaistack.com with the subject line "Methodology". We read every one and revise this page when the feedback holds up.

Last updated: May 5, 2026. Methodology changes are dated in the article and noted in our Editorial Policy.