beginner · 15 min · agntdata Lead APIs

GTM Pipeline Daily Digest — read-only morning brief for your agent fleet

Every weekday at 8am, one read-only agent queries your pipeline tables for the last 24 hours, computes find/verify rates, and drops a clean stat block into Slack. You can't tune what you can't see — this is the observability layer for a fleet of GTM agents.

One-click build

Build this with agnt_

Skip the copy-paste. We'll spin up a builder session prepopulated with this blueprint's spec — providers, schedule, database schema, and the questions the agent should ask you to personalize it for your product.

Build with agnt_

Sign up free · no credit card

The motion

Once you have a scraper, an enricher, and a sequencer all running overnight on different cadences, the natural next question is "did it all work?" — and you don't want to answer that by opening five tables in a SQL client every morning. This agent is the read-only observability layer. It runs at 8am on weekdays, queries the pipeline tables for the last 24 hours of activity, computes derived rates (emails-found rate, verify rate), and posts a single formatted message to Slack. workspace_db is read_only=true so the agent literally cannot write to anything — that's the whole guarantee. It's the smallest agent in the pipeline by design: one model (haiku), one connector tool (Slack), six SQL queries, one message. Pair this with your scraper, enricher, and sequencer to close the observability loop.

Pipeline health in one Slack message

New jobs, new leads, emails-found rate, verify rate, leads pushed to sequencer — the six numbers that actually matter, refreshed every morning before your team logs in.

Zero-data days are visible

If the scraper failed overnight, all counts come back zero — and the agent still posts. You spot the failure at 8am instead of three days later when a customer asks where their leads are.

Read-only, by configuration

workspace_db is set to `read_only: true`. The agent literally cannot UPDATE or INSERT. The safest agent in your fleet.

Cheapest agent in the pipeline

Six SQL queries + one Slack call + haiku. Costs cents per month to run, even on a daily cron.

You don't know how your overnight pipeline is doing until you go look. This agent means you never have to: every weekday morning, the stat block lands in Slack with last-24-hour counts and derived rates. Read-only, so the agent can't corrupt anything. One Slack message per weekday morning. That's the whole job.

Pipeline diagram: read-only Mon–Fri morning digest. Six queries, one Slack message.

Or copy a prompt into another platform

Prefer to build with OpenClaw, Hermes, or Claude Code? Drop this prompt into your agent of choice — it seeds the goal, the agntdata endpoints to use, and a step-by-step plan.

Prefer the manual walkthrough? ↓
You are helping me build a GTM Pipeline Daily Digest agent. Every weekday morning, it queries my workspace DB for the last 24 hours of pipeline activity across all my GTM agents (scraper, enricher, sequencer) and posts a clean stat block to a Slack channel. Read-only — never writes to the DB, never pushes leads anywhere.

This is the observability layer for a multi-agent GTM pipeline. You can't tune a fleet you can't see.

REFERENCE DOCS
- Full agntdata API documentation: https://agnt.mintlify.app/apis/overview
- agntdata Slack connector — `slack_post_message` is the only tool this agent uses.
- Workspace DB read-only access via agnt.db.

ABOUT MY PIPELINE
- Slack channel ID for the digest: <SLACK_CHANNEL_ID> (e.g. C0XXXXXXXXX — grab it from the Slack URL when viewing the channel)
- Local timezone: <YOUR_TZ> (e.g. America/New_York)
- Tables to query: <YOUR_TABLES> — the tables your scraper / enricher / sequencer agents write to. The default assumes the canonical pipeline shape: `job_posts`, `hiring_leads`, and `agent_config` (some pipelines collapse the first two into a single `leads` table). Adjust to your schema.

WHAT TO BUILD
- A scheduled agent that runs Mon–Fri at 8am local (claude-haiku-4-5 — formatting + simple aggregation, no judgment needed).
- Per run: execute 6 read-only SQL queries, compute find/verify rates, post a structured Slack message.
- Even on a zero-data day, ALWAYS post — that's how you know the scraper failed.

QUERIES
For a hiring-signal pipeline like the one this blueprint ships for, the canonical queries are:

1. Jobs scraped in last 24h
2. Total jobs all-time
3. Hiring leads added in last 24h
4. Total hiring leads all-time
5. Enrichment funnel (group by enrichment_status, where updated_at >= now() - 24h)
6. Leads pushed to sequencer in last 24h

If your pipeline is shaped differently, adapt the queries — but keep them all read-only.
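
For the default schema, here's roughly what those six queries look like in Postgres-flavored SQL. Column names (`created_at`, `updated_at`, `enrichment_status`, `pushed_to_sequencer_at`) and the status values are assumptions; swap in whatever my tables actually use:

```sql
-- 1. Jobs scraped in the last 24h
SELECT count(*) FROM job_posts WHERE created_at >= now() - interval '24 hours';

-- 2. Total jobs, all-time
SELECT count(*) FROM job_posts;

-- 3. Hiring leads added in the last 24h
SELECT count(*) FROM hiring_leads WHERE created_at >= now() - interval '24 hours';

-- 4. Total hiring leads, all-time
SELECT count(*) FROM hiring_leads;

-- 5. Enrichment funnel for the last 24h
-- (status values assumed: found / verified / risky / not_found)
SELECT enrichment_status, count(*) AS n
FROM hiring_leads
WHERE updated_at >= now() - interval '24 hours'
GROUP BY enrichment_status;

-- 6. Leads pushed to sequencer in the last 24h (column name assumed)
SELECT count(*)
FROM hiring_leads
WHERE pushed_to_sequencer_at >= now() - interval '24 hours';
```

Derive the rates from the funnel counts rather than storing them; one reasonable definition is find rate = emails found / leads enriched in the window, and verify rate = verified / emails found. Guard both denominators (NULLIF or a zero check) so a zero-data day can't divide by zero.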

DELIVERY
A single formatted Slack message with:
- 📋 Jobs scraped: <new>  (<total> total)
- 👥 Hiring leads added: <new>  (<total> total)
- 📧 Emails found: <count>  (<find_rate>% find rate)
- ✅ Verified: <count>  (<verify_rate>% verify rate)
- ⚠️ Risky: <count>
- ❌ Not found: <count>
- 🚀 Pushed to sequencer: <count>

GUARDRAILS
- Always post, even if counts are zero (zero = "the scraper didn't run, investigate").
- Never fabricate numbers — every value comes from a live DB query.
- Read-only DB access. Don't try to update anything.
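
Example: on a zero-data morning the message should still go out, rendering roughly as:

- 📋 Jobs scraped: 0  (<total> total)
- 👥 Hiring leads added: 0  (<total> total)
- 📧 Emails found: 0  (n/a find rate)
- ✅ Verified: 0  (n/a verify rate)
- ⚠️ Risky: 0
- ❌ Not found: 0
- 🚀 Pushed to sequencer: 0

All-time totals still render; "n/a" (not a divide-by-zero error) is the right rate when nothing was processed.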

When you're ready, ask me for the Slack channel ID, timezone, and the table names.

Paste into OpenClaw to scaffold this agent. Tweak the inputs and goal at the top of the prompt.

How to build it

6 steps. Each one links to the underlying agntdata endpoints — open them in a new tab to inspect parameters and pricing as you build.

1. One key gives you the Slack connector, workspace DB access, and the meta-agent builder. Credit-based pricing — this agent runs on cents per month.

2. On the agntdata dashboard, install the Slack connector. Pick the channel where the digest should land. Note the channel ID (C0XXXXXXXXX from the URL) — you'll paste it into the prompt.

3. This agent reads from the tables your scraper + enricher write to. Default expects `job_posts` and `hiring_leads`. If yours are named differently, adapt the queries during setup; a minimal schema sketch follows these steps.

Click "Build with agnt_". The meta-agent asks for the Slack channel ID + timezone + table names, then deploys the agent with workspace_db read_only=true so it physically cannot write.

5. Default: `0 8 * * 1-5` (8am weekdays). Dry-run once to confirm the formatted message looks right and the channel ID resolves. Then turn the schedule on.

6. Most useful when paired with a scraper (e.g. `linkedin-hiring-signal-scraper`), an enricher (e.g. `linkedin-hiring-lead-enricher`), and a sequencer (e.g. `gtm-email-sequencer`). The digest tells you whether the other three are doing their job overnight.
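
For reference, a minimal sketch of the table shape those steps (and the prompt's queries) assume. Names and types here are assumptions; your actual tables from the scraper and enricher blueprints will carry more columns:

```sql
-- Minimal assumed shape; adjust names/types to match your real pipeline tables.
CREATE TABLE job_posts (
  id         bigserial PRIMARY KEY,
  created_at timestamptz NOT NULL DEFAULT now()   -- set when the scraper inserts the row
);

CREATE TABLE hiring_leads (
  id                     bigserial PRIMARY KEY,
  job_post_id            bigint REFERENCES job_posts (id),
  enrichment_status      text,         -- e.g. found / verified / risky / not_found
  created_at             timestamptz NOT NULL DEFAULT now(),
  updated_at             timestamptz,  -- bumped by the enricher on each pass
  pushed_to_sequencer_at timestamptz   -- set by the sequencer (column name assumed)
);
```

As long as the digest can count rows by timestamp and group by status, the exact shape doesn't matter.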

Ship this blueprint today

One click spins up a builder session prefilled with this blueprint's spec. We'll ask you a handful of personalization questions, then generate the agent.

Related blueprints

Browse all →
LinkedIn · agntdata Lead APIs · beginner · 20 min

A champion moving to a new company is one of the highest-conversion outbound signals in B2B. They already trust your product; their new employer doesn't. This agent walks your champion + customer list daily, detects company changes from LinkedIn, and pings Slack the morning it happens.

Signal Detection · Cold Outbound · Lifecycle Marketing · Founder · Account Executive
X (Twitter) · LinkedIn · agntdata Lead APIs · advanced · 25 min

Hand any X username to this agent and get back a qualified, ICP-scored lead with a verified email and a resolved LinkedIn profile.

Inbound Enrichment · Lead Scoring · Data Enrichment · Founder · RevOps
LinkedIn · agntdata Lead APIs · advanced · 25 min

Hand any LinkedIn profile URL to this agent and get back a qualified, ICP-scored lead with a verified email and a website summary attached.

Inbound Enrichment · Lead Scoring · Data Enrichment · Founder · RevOps