Falcoscan.
An AI market intelligence platform for builders.
Founder + PM
Solo build, live product
Mar 2025 to present
Live at falcoscan.com

A thirty-second read.
Problem.
Legacy AI tool directories optimized for ad impressions, not discovery. Builders had no accessible, accurate, up-to-date way to see what exists and where there is still room to build.
Role.
Founder and Product Manager. I own strategy, research, PRDs, user stories, prioritization, design, build, and release.
Approach.
I scoped an MVP, wrote the PRD, prioritized a backlog, and shipped quickly. Data hygiene and saturation scoring became the product's core wedge.
Outcome.
Live at falcoscan.com. 6,400+ AI tools tracked across 29 markets. 134 active users. 2,000+ tracked events. 3:39 average engagement. Audiences in English and Spanish.
The landscape lacked a clean place for builders.
The AI tool landscape moves faster than any single analyst can track. Thousands of new tools launch every month. For builders, founders, and operators trying to answer a simple question (what exists, what is rising, and where is there still room to build), the options were all bad.
Three kinds of bad.
01
Click-bait directories.
Optimized for ad impressions, not discovery. Listings were shallow, unsourced, and often months out of date.
02
Fragmented industry reports.
Answering any single question meant stitching together PDFs from different analyst firms.
03
Enterprise subscriptions.
Priced for Fortune 500 strategy teams. Out of reach for the independent builder who needed the insight most.
Every builder needed a clear picture of the market. There was no accessible, accurate, up-to-date place to find it.
I set out to build that.
One primary persona. Two adjacent segments.
I scoped the MVP against a single primary user. Narrow focus up front made every product decision easier.
Maya.
Technical founder, 2 to 8 years into her career, evaluating what to build next.
Goals.
- Spot open markets before they fill up.
- Validate a product idea against real competition.
- Understand the landscape in hours, not weeks.
Frustrations.
- Drowning in ad-driven directories.
- Can't afford Gartner.
- Spends Saturdays piecing together Twitter threads and launch pages.
- Trusts nothing she reads on legacy AI directories because half the tools are dead or duplicated.
Reads API docs for fun. Expects software to load fast and be honest about what it doesn’t know.
“I need to know which markets are still open. A directory of 10,000 tools doesn’t help. Saturation scoring does.”
Evaluating AI tools for internal use at their company. Care about accuracy and sourcing more than breadth. Secondary audience in year one.
Scanning the landscape for investment patterns. Tertiary audience. The MVP does not optimize for them, but they surface naturally through organic search.
Falcoscan becomes the trusted public layer for understanding the AI market. If a builder wants to know what exists and where to build, Falcoscan is the first place they open.
For builders, founders, and operators who need to understand the AI tool landscape,
Falcoscan is an AI market intelligence platform that delivers saturation scores and data-hygiene-verified listings across 29 markets.
Unlike legacy AI tool directories, Falcoscan is sourced, time-stamped, and built to answer "where to build next," not "how many tools can we list."
Month 0 to 3
Earn trust as the cleanest, most sourced AI tool dataset in the public web. Ship saturation scoring across 29 markets.
Month 3 to 6
Turn the catalog into a decision tool. Add opportunity signals, saved queries, and builder-facing alerts.
Month 6 to 12
Power the market-intelligence layer for other tools via API. Become infrastructure, not just a destination.
The PRD answered four questions.
Who is the user, what can they do, how well does it have to perform, and what is out of scope. Below is the excerpt that shaped the v1 build.
Functional requirements
What the product does.
Non-functional requirements
How well it performs.
Out of scope
User accounts beyond basic sign-in. API access. Paid tiers. Team features. Real-time alerts. These move to the roadmap.
Prioritized with RICE, scored against INVEST.
The MVP backlog held 23 stories. Five representative ones below show the shape of the work and how acceptance criteria were written.
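RICE itself is just arithmetic: (Reach × Impact × Confidence) ÷ Effort, ranked highest first. A minimal sketch of that scoring, where the story fields and values are illustrative rather than taken from the real backlog:

```typescript
// RICE = (Reach × Impact × Confidence) / Effort — the standard formula.
// Field scales follow common convention: impact 0.25–3, confidence 0–1.
interface Story {
  title: string;
  reach: number;      // users affected per quarter
  impact: number;     // 0.25 minimal … 3 massive
  confidence: number; // 0–1
  effort: number;     // person-weeks
}

const rice = (s: Story): number =>
  (s.reach * s.impact * s.confidence) / s.effort;

// Rank a backlog highest-score first without mutating the input.
const prioritize = (backlog: Story[]): Story[] =>
  [...backlog].sort((a, b) => rice(b) - rice(a));
```

INVEST then acts as a gate on each story's shape (independent, negotiable, valuable, estimable, small, testable) rather than on its score.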
As a builder evaluating a new product idea,
I need to browse AI tools grouped by market with a clear saturation score,
So that I can quickly see which categories are crowded and which still have room.
Acceptance criteria
Given I land on the browse page,
When I filter by a market (for example “AI writing tools”),
Then I see a list of tools in that market with a saturation indicator at the top.
As a builder burned by duplicate and dead listings on other sites,
I need every tool card to carry its source, a confidence score, and a last-verified timestamp,
So that I can trust what I am reading without cross-checking five other tabs.
Acceptance criteria
Given I view any tool listing,
When I look at the card,
Then source, confidence score, and “last verified” date are visible without clicking in.
As a Spanish-speaking builder in Latin America,
I need to browse Falcoscan in Spanish,
So that the product is useful without mental translation cost.
Acceptance criteria
Given I arrive from a Spanish locale,
When the page loads,
Then content renders in Spanish with a toggle to switch to English.
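The locale criterion above can be sketched as a small negotiation function. This is an illustrative standalone version, not Falcoscan's actual routing; in practice Next.js i18n middleware would own this decision:

```typescript
// Pick EN or ES from the Accept-Language header, letting an explicit
// user toggle override. Standalone sketch; real routing lives in the framework.
const SUPPORTED = ["en", "es"] as const;
type Locale = (typeof SUPPORTED)[number];

function pickLocale(acceptLanguage: string, override?: Locale): Locale {
  if (override) return override; // the in-page toggle always wins
  const tags = acceptLanguage
    .split(",")
    .map((part) => part.split(";")[0].trim().toLowerCase().slice(0, 2));
  for (const tag of tags) {
    if ((SUPPORTED as readonly string[]).includes(tag)) return tag as Locale;
  }
  return "en"; // default when no supported language matches
}
```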
As a founder deciding where to build next,
I need to see the saturation landscape across all 29 markets at a glance,
So that I can identify open territory without reading 29 separate pages.
Acceptance criteria
Given I visit the market overview,
When the page loads,
Then all 29 markets render with their saturation state (crowded, open, rising) visible in one view.
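The three-state rollup in that criterion can be sketched as a pure function. The inputs and thresholds below are invented for illustration; the real saturation model is not published here:

```typescript
type SaturationState = "crowded" | "open" | "rising";

// Hypothetical inputs: total tools in the market and recent launch velocity.
// Thresholds are illustrative, not Falcoscan's actual scoring model.
function saturationState(toolCount: number, recentLaunches: number): SaturationState {
  if (recentLaunches >= 10) return "rising"; // momentum trumps raw size
  if (toolCount >= 300) return "crowded";
  return "open";
}
```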
As a builder tracking momentum in a specific market,
I need to see which tools are rising recently,
So that I can identify competitive threats before they become category leaders.
Acceptance criteria
Given I view a market detail page,
When I scroll to the “Rising” section,
Then I see tools tagged as rising based on recency and external signal.
Rolling Wave. Near-term committed, far-term directional.
Per the PMBOK recommendation for startup-stage products. Near-term is committed and detailed. Far-term is honestly uncertain. The roadmap is revisited at the end of every two-week cycle.
Now
- SEO content layer for top 10 market queries
- Directory submission and backlink outreach
- Persistent Spanish locale improvements
- Telemetry dashboard for weekly review
Next
- Saved watchlists (F7 from the PRD)
- Opportunity signal surface (rising and under-served markets)
- Weekly email digest for subscribed users
- Expanded market coverage beyond the initial 29
Later
- Public API for third-party integration
- Paid tier with team features
- Multi-region content personalization
- Agentic research assistant for custom market queries
Stack and the ingestion loop.
Seven layers, each chosen for a specific reason. The ingestion loop runs continuously so every listing is sourced, verified, and time-stamped.
Stack.
The ingestion loop.
01
Discover.
Apify and Firecrawl pull listings, pricing pages, and launch signals from known sources.
02
Deduplicate.
Incoming records matched against existing listings by normalized name, domain, and feature signature.
03
Enrich.
Claude-powered passes classify each tool, extract metadata, and generate a short summary.
04
Score.
Confidence score attached based on source quality and corroboration. Saturation score updated per market.
05
Publish.
Listing persisted to Supabase, surfaced on the public browse layer, and revalidated in the Next.js cache.
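The five steps can be compressed into a sketch of one cycle. Connector, model, and persistence details are stubbed and all names are illustrative; only the control flow mirrors the loop above:

```typescript
// Illustrative shape of one ingestion pass: dedupe by normalized key,
// enrich and score new records, publish the survivors.
interface RawListing { name: string; domain: string }
interface Listing extends RawListing {
  summary: string;
  confidence: number;
  verifiedAt: string;
}

// Step 02: normalized name + domain form the dedupe key.
function normalizeKey(r: RawListing): string {
  return `${r.name.trim().toLowerCase()}|${r.domain.replace(/^www\./, "").toLowerCase()}`;
}

function ingest(
  incoming: RawListing[],                                        // step 01 output
  existing: Map<string, Listing>,                                // stand-in for the store
  enrich: (r: RawListing) => { summary: string; confidence: number }, // steps 03–04, stubbed
): Listing[] {
  const published: Listing[] = [];
  for (const raw of incoming) {
    const key = normalizeKey(raw);
    if (existing.has(key)) continue;                             // duplicate: skip
    const { summary, confidence } = enrich(raw);
    const listing: Listing = { ...raw, summary, confidence, verifiedAt: new Date().toISOString() };
    existing.set(key, listing);                                  // step 05: publish
    published.push(listing);
  }
  return published;
}
```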
Data hygiene is the product. Every listing carries a source, a confidence score, and a last-verified timestamp. If a listing cannot be verified in the current ingestion cycle, it is flagged as provisional.
The product’s job is not to show the most tools. It is to show tools that can be trusted.
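A minimal sketch of that provisional rule, with field names assumed rather than taken from the real schema:

```typescript
// A listing not re-verified in the current cycle is flagged, not silently kept.
interface VerifiedListing {
  name: string;
  verifiedAt: string;    // ISO date of last successful verification
  provisional?: boolean;
}

function flagStale(listing: VerifiedListing, cycleStart: Date): VerifiedListing {
  const stale = new Date(listing.verifiedAt) < cycleStart;
  return { ...listing, provisional: stale };
}
```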
Fig. 01. The browse surface. Every listing carries sourcing, confidence, and a timestamp.

What went live at falcoscan.com.
Below is what shipped in v1, and what was intentionally deferred to the backlog.
Shipped
- Public browse page across 29 markets
- Per-market saturation scoring (live)
- Tool detail pages with source, confidence, and timestamp
- Search across the full catalog of 6,400+ tools
- English and Spanish language support
- PostHog and Google Analytics telemetry instrumented on every critical event
- SEO foundation with structured data on every listing
Deferred
- User accounts and watchlists
- Public API
- Email digests
- Dual-model reasoning routing (currently single-model for MVP cost discipline)
Live telemetry.
Week-over-week as of the current review cycle.
134 active users. Real humans returning, not bounce traffic.
2,000+ tracked events. Real interaction depth across browse, search, and detail.
3:39 average session. Long enough to indicate reading, not skimming.
6,400+ tools. The dataset breadth users search.
29 markets. Full saturation coverage at MVP launch.
EN · ES. Product reaches beyond one geography at MVP.
The growth motion right now is SEO. Every listing is a potential landing page. Every market is a potential search anchor. The next 90 days are measured against organic traffic growth and repeat-visitor rate.
Ninety days of running lessons.
I kept a running list from day one. These are the calls I’d make the same way, and the ones I’m adjusting for the next ninety.
Got right
- Shipped data hygiene before polish. Every listing sourced, deduplicated, and confidence-scored from day one. Trust compounds.
- Saturation scoring as the wedge. Turning the catalog into a decision tool is what separates Falcoscan from every click-bait directory.
- Supabase-first infrastructure. Storage, auth, realtime, one hop. Less surface area. Faster iteration.
- Bilingual from MVP. English plus Spanish opened the audience earlier than waiting for v2 localization.
- PostHog and Google Analytics instrumented on day one. Every hypothesis is wired to an event before it ships. No flying blind.
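Wiring a hypothesis to an event can be as light as a typed wrapper around the analytics call. The event names below are hypothetical, and `capture` is stubbed so the sketch is self-contained; the stub's signature mirrors posthog-js's `posthog.capture(name, properties)`:

```typescript
// Typed event union: the compiler rejects unknown events or malformed props,
// so every shipped hypothesis has exactly one well-formed event shape.
type AnalyticsEvent =
  | { name: "market_viewed"; props: { market: string } }
  | { name: "tool_detail_opened"; props: { toolId: string } }
  | { name: "search_performed"; props: { query: string; results: number } };

// In-memory sink standing in for the real analytics client.
const sent: { name: string; props: Record<string, unknown> }[] = [];
const capture = (name: string, props: Record<string, unknown>): void => {
  sent.push({ name, props }); // stand-in for posthog.capture(name, props)
};

function track(event: AnalyticsEvent): void {
  capture(event.name, event.props); // single call site for all instrumentation
}
```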
Adjusting next
- SEO should have been sprint one. I front-loaded product surface area and am now catching up on content and backlinks; compounding channels are worth starting early.
- The eval harness was a week-three investment. It should have been week one. The next model change will be measured from minute zero.
- Skipping user accounts at MVP was a cost-discipline call that bought ship speed but made retention measurement harder than it needed to be. Watchlists are coming in the next release.