This is the prospecting flow behind /prospecting.
It is not a generic scraper or a single API call. The output is a short list of prospects worth calling, with enough context to explain why each one was included.
/prospecting starts by reading what is already in the system, then decides whether to analyze existing coverage, run a targeted search, enrich new prospects, or add lot intel via aerials.
Process
Step 1: Read the existing pipeline first
Before searching for anything new, the skill reads the CRM and recent prospecting artifacts.
That does two things:
- shows where coverage is thin by city, category, and lead type
- avoids running the same search again if it already produced junk, dupes, or dead ends
Prospecting volume is cheap. Cleanup is not.
Step 2: Run a targeted business search
Discovery starts with Google Places text search:
- host: places.googleapis.com
- output: business name, phone, formatted address, category, place ID, coordinates when available
The search is intentionally narrow. Instead of “all businesses in a city,” it is usually a category plus city, like:
- independent auto repair shop Wylie TX
- veterinary clinic Plano TX
- church Allen TX
The search is not optimized for maximum volume. It is optimized for the next realistic call list.
That usually means narrower search terms and fewer results.
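The discovery call can be sketched against the Places API (New) text search endpoint. The field mask below is an assumption about which fields this flow requests; it mirrors the output list above, and the request is only built here, not sent.

```python
import json
import urllib.request

PLACES_URL = "https://places.googleapis.com/v1/places:searchText"

# Field mask limits the response to just the prospect fields we normalize later.
FIELD_MASK = ",".join([
    "places.displayName", "places.nationalPhoneNumber",
    "places.formattedAddress", "places.primaryType",
    "places.id", "places.location",
])

def build_search_request(query: str, api_key: str) -> urllib.request.Request:
    """Build a narrow text-search request (category + city, not 'all businesses')."""
    return urllib.request.Request(
        PLACES_URL,
        data=json.dumps({"textQuery": query}).encode(),
        headers={
            "Content-Type": "application/json",
            "X-Goog-Api-Key": api_key,
            "X-Goog-FieldMask": FIELD_MASK,
        },
    )

# urllib.request.urlopen(build_search_request("church Allen TX", key)) returns
# JSON with a top-level "places" array.
```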
Step 3: Normalize and deduplicate
Each Places result is normalized into a prospect shape:
- company name
- phone
- street address
- city / zip
- source
- business category
The dedup pass checks phone numbers against existing CRM contacts, normalized to last 10 digits for format-agnostic comparison. If the contact already exists, it gets skipped as a dupe.
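The last-10-digits normalization described above can be sketched as a small helper; the function names are illustrative, not the skill's actual code.

```python
import re

def normalize_phone(raw: str) -> str:
    """Keep only digits, then the last 10 (drops country code and formatting)."""
    digits = re.sub(r"\D", "", raw)
    return digits[-10:]

def is_dupe(candidate: str, existing: set[str]) -> bool:
    """Format-agnostic comparison against already-normalized CRM phones."""
    return normalize_phone(candidate) in existing

# Example: two CRM contacts, one candidate in a different format.
existing = {normalize_phone(p) for p in ["+1 (972) 555-0114", "214-555-0198"]}
is_dupe("972.555.0114", existing)   # True — same number, different format
is_dupe("(469) 555-0101", existing) # False — new prospect
```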
Step 4: Parcel enrichment
If the prospect survives dedup, the next question is who likely controls the lot.
Parcel data is looked up through the CRM parcel endpoint:
- host: CRM worker
- route: /api/parcels/search
This adds ownership context, land use, and property clues that help separate:
- owner-operator
- tenant
- property manager
That classification changes the call angle. Talking to an owner is different from asking a front desk who manages the pavement.
Step 5: Classification and note generation
After parcel lookup, the prospect is classified and gets an initial CRM note.
The note provides the first call brief:
- why the lead was added
- what type of decision-maker is likely involved
- what angle makes sense on the first call
- whether it looks like a small realistic first-job target or a weaker lead
This makes the CRM usable before the first outbound call.
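The four-part call brief above can be assembled as a simple template. The field names are illustrative, not a fixed schema from the source.

```python
def first_call_note(p: dict) -> str:
    """Render the initial CRM note from fields gathered upstream (hypothetical keys)."""
    return "\n".join([
        f"Source: {p['source']} ({p['category']})",
        f"Why added: {p['why']}",
        f"Likely decision-maker: {p['control']}",
        f"Call angle: {p['angle']}",
        f"First-job fit: {p['fit']}",
    ])
```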
Step 6: Write to the CRM
Prospects, notes, parcel links, and later aerial uploads all go through the CRM worker:
- host: CRM worker
Typical writes include:
- POST /api/contacts
- PATCH /api/contacts/:id
- POST /api/contacts/:id/notes
- POST /api/contacts/:id/images
The worker is the operational system of record for the pipeline. Local files are processing artifacts.
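A minimal sketch of the write path, assuming a JSON-over-HTTP CRM worker; the base URL, payload shapes, and contact id are all placeholders, and the requests are only constructed here, not sent.

```python
import json
import urllib.request

CRM_BASE = "https://crm-worker.example.com"  # assumption: CRM worker base URL

def crm_post(path: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST against the CRM worker; caller passes it to urlopen."""
    return urllib.request.Request(
        CRM_BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Typical sequence for one prospect (the id 42 is illustrative):
contact_req = crm_post("/api/contacts", {"company": "Wylie Auto Repair",
                                         "phone": "9725550114"})
note_req = crm_post("/api/contacts/42/notes",
                    {"body": "[AERIAL] ~30 spaces, faded striping"})
```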
Step 7: Optional aerial recon
If a prospect looks promising but the lot needs visual confirmation, the aerial pipeline runs.
Hosts involved:
- nominatim.openstreetmap.org as fallback geocoder when coordinates are not already stored
- mt1.google.com for raw satellite tiles
- CRM worker for image upload and CRM notes
The aerial flow is:
- Resolve coordinates
- Fetch a 3x3 tile grid
- Stitch into a JPEG
- Upload the image to the CRM
- Review the lot and post an [AERIAL] note
The aerial note captures things like:
- estimated space count
- surface type
- striping visibility
- ADA / fire lane visibility
- whether the lot actually looks like a practical target
A business can look good in Places and still be a bad striping lead once the lot is visible.
Hosts in play
This flow touches a small set of external hosts:
| Host | Role |
|---|---|
| places.googleapis.com | business discovery, coordinates stored at prospect creation |
| nominatim.openstreetmap.org | fallback geocoding when stored coordinates are unavailable |
| mt1.google.com | satellite tiles for aerials |
| CRM worker | contacts, parcels, notes, image upload |
End state
Possible outputs:
- a new call-ready prospect with context
- a dupe that gets skipped cleanly
- a logged dead end that helps the next run avoid wasting time
The system filters and prepares leads. It does not replace judgment.
