Key Takeaways
- Build a production-ready competitive intelligence bot in under 400 lines of Python that monitors your brand's AI search visibility across ChatGPT, Perplexity, Claude, and 9+ other AI engines
- Use Promptwatch's API to track competitor mentions, citation patterns, and prompt performance data automatically
- Automate daily reports that surface actionable insights: which prompts competitors rank for, what content gaps exist, and where your visibility is declining
- Integrate real-time alerts via Slack or email when competitors gain visibility or your brand drops from AI search results
- Scale beyond monitoring with automated content generation workflows that fix visibility gaps as they're discovered
Why Build a Competitive Intelligence Bot for AI Search in 2026?
The AI search landscape has fundamentally changed how brands compete for visibility. In 2026, 47% of search queries are answered by AI engines like ChatGPT, Perplexity, and Google AI Overviews before users ever click a traditional search result. If you're not tracking your brand's visibility in these AI responses -- and monitoring what your competitors are doing -- you're flying blind.
Manual competitive intelligence doesn't scale. Checking ChatGPT for competitor mentions once a week tells you nothing about trends, prompt variations, or the specific content gaps that are costing you citations. You need automation.
A competitive intelligence bot solves this by:
- Monitoring at scale: Track hundreds of prompts across 10+ AI models daily without manual work
- Surfacing patterns: Identify which competitors consistently outrank you and why
- Detecting changes: Get alerted the moment a competitor gains visibility or you lose it
- Generating reports: Automatically compile insights into actionable dashboards and summaries
- Closing gaps: Trigger content creation workflows when answer gaps are detected
This guide walks you through building a fully functional competitive intelligence bot using Python and Promptwatch's API. You'll learn to fetch visibility data, analyze competitor performance, generate automated reports, and set up real-time alerts.

Prerequisites: What You'll Need Before You Start
Before diving into the code, make sure you have:
Technical Requirements
- Python 3.9+: The bot uses modern Python features like type hints and async/await
- Promptwatch account: Sign up at Promptwatch and grab your API key from the dashboard. The Professional plan ($249/mo) includes API access, crawler logs, and 150 prompts -- ideal for this use case.
- Basic Python knowledge: You should be comfortable with functions, loops, and working with JSON data
- API familiarity: Understanding REST APIs and HTTP requests will help, but we'll walk through everything
Python Libraries
Install these packages:
```
pip install requests pandas python-dotenv slack-sdk
```
- requests: For making API calls to Promptwatch
- pandas: For data manipulation and analysis
- python-dotenv: For managing API keys securely
- slack-sdk: For sending alerts to Slack (optional)
Environment Setup
Create a .env file in your project directory:
```
PROMPTWATCH_API_KEY=your_api_key_here
SLACK_WEBHOOK_URL=your_slack_webhook_url
```
Never commit API keys to version control. Use environment variables or a secrets manager.
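A fail-fast check at startup also makes a missing key obvious before any API call is made. A minimal sketch (`require_env` is a helper name introduced here, not part of any library):

```python
import os


def require_env(name: str) -> str:
    """Return an environment variable's value, or fail loudly if it's unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# Usage at startup:
# api_key = require_env("PROMPTWATCH_API_KEY")
```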
Understanding Promptwatch's API Architecture
Before writing code, let's understand what data Promptwatch's API exposes and how to access it.
Core API Endpoints
Promptwatch's API provides several key endpoints:
- Visibility Scores: `/api/v1/visibility` -- Returns your brand's visibility percentage across AI models for tracked prompts
- Citation Data: `/api/v1/citations` -- Shows which pages are being cited, how often, and by which AI engines
- Competitor Analysis: `/api/v1/competitors` -- Compares your visibility vs competitors for specific prompts
- Answer Gaps: `/api/v1/gaps` -- Identifies prompts where competitors rank but you don't
- Prompt Metrics: `/api/v1/prompts` -- Volume estimates, difficulty scores, and query fan-outs for each prompt
Authentication
All API requests require an API key in the header:
```python
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}
```
Rate Limits
Promptwatch's API allows:
- Professional plan: 1,000 requests/hour
- Business plan: 5,000 requests/hour
- Enterprise: Custom limits
Implement exponential backoff if you hit rate limits.
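A reusable way to do this is a retry decorator. The sketch below assumes you raise a custom `RateLimitError` when the API returns HTTP 429 (both names are illustrative, not part of Promptwatch's SDK):

```python
import time
from functools import wraps


class RateLimitError(Exception):
    """Raised when the API responds with HTTP 429."""


def with_backoff(max_retries: int = 3, base_delay: float = 1.0):
    """Retry the wrapped function with exponential backoff on RateLimitError."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except RateLimitError:
                    if attempt == max_retries - 1:
                        raise  # out of retries, surface the error
                    time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
        return wrapper
    return decorator
```

Any API-calling function can then be wrapped with `@with_backoff()` instead of repeating the retry loop.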
Response Format
API responses return JSON with this structure:
```json
{
  "status": "success",
  "data": {
    "visibility_score": 67.3,
    "prompts": [
      {
        "prompt": "best project management tools",
        "models": {
          "chatgpt": {"visible": true, "rank": 2},
          "perplexity": {"visible": false}
        }
      }
    ]
  },
  "meta": {
    "timestamp": "2026-02-21T10:30:00Z",
    "rate_limit_remaining": 987
  }
}
```
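Before loading this payload into pandas, it's convenient to flatten the nested `models` object into one row per prompt/model pair. A small helper, assuming the structure shown above:

```python
def flatten_visibility(response: dict) -> list:
    """Flatten the nested visibility payload into one row per prompt/model pair."""
    rows = []
    for item in response["data"]["prompts"]:
        for model, metrics in item["models"].items():
            rows.append({
                "prompt": item["prompt"],
                "model": model,
                "visible": metrics.get("visible", False),
                "rank": metrics.get("rank"),  # None when the brand isn't cited
            })
    return rows


# Mirroring the sample response above:
sample = {
    "data": {
        "prompts": [
            {
                "prompt": "best project management tools",
                "models": {
                    "chatgpt": {"visible": True, "rank": 2},
                    "perplexity": {"visible": False},
                },
            }
        ]
    }
}
rows = flatten_visibility(sample)
```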
Building the Core Bot: Fetching and Processing Data
Let's build the bot step by step. We'll start with a class that handles API communication and data processing.
Step 1: Create the Base API Client
```python
import os
import time
from typing import Dict, List, Optional

import requests
from dotenv import load_dotenv

load_dotenv()


class PromptWatchClient:
    def __init__(self, api_key: str = None):
        self.api_key = api_key or os.getenv("PROMPTWATCH_API_KEY")
        self.base_url = "https://api.promptwatch.com/v1"
        self.headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json"
        }

    def _make_request(self, endpoint: str, params: Dict = None) -> Dict:
        """Make an API request with error handling and retry logic."""
        url = f"{self.base_url}/{endpoint}"
        max_retries = 3

        for attempt in range(max_retries):
            try:
                response = requests.get(url, headers=self.headers, params=params, timeout=30)
                response.raise_for_status()
                return response.json()
            except requests.exceptions.HTTPError as e:
                if e.response.status_code == 429:  # Rate limit
                    wait_time = 2 ** attempt  # Exponential backoff
                    print(f"Rate limited. Waiting {wait_time}s...")
                    time.sleep(wait_time)
                else:
                    raise

        raise Exception("Max retries exceeded")
```
Step 2: Fetch Visibility Data
```python
# Add these methods to the PromptWatchClient class:

def get_visibility_scores(self, date_range: str = "7d") -> Dict:
    """Get visibility scores across all tracked prompts."""
    params = {"date_range": date_range}
    return self._make_request("visibility", params)

def get_competitor_data(self, competitors: List[str]) -> Dict:
    """Fetch competitor visibility for comparison."""
    params = {"competitors": ",".join(competitors)}
    return self._make_request("competitors", params)

def get_answer_gaps(self) -> Dict:
    """Identify prompts where competitors rank but you don't."""
    return self._make_request("gaps")
```
Step 3: Process and Analyze Data
```python
import pandas as pd


class CompetitiveIntelligence:
    def __init__(self, client: PromptWatchClient):
        self.client = client

    def analyze_visibility_trends(self, days: int = 30) -> pd.DataFrame:
        """Analyze visibility trends over time."""
        data = self.client.get_visibility_scores(f"{days}d")

        # Convert to DataFrame for analysis (rows are time-ordered per prompt)
        df = pd.DataFrame(data["data"]["prompts"])

        # Calculate trend metrics
        df["visibility_change"] = df.groupby("prompt")["visibility_score"].diff()
        df["trend"] = df["visibility_change"].apply(
            lambda x: "up" if x > 5 else "down" if x < -5 else "stable"
        )
        return df

    def compare_competitors(self, competitors: List[str]) -> pd.DataFrame:
        """Compare your visibility vs competitors."""
        data = self.client.get_competitor_data(competitors)

        # Flatten nested JSON into a DataFrame
        rows = []
        for prompt_data in data["data"]["prompts"]:
            for brand, metrics in prompt_data["brands"].items():
                rows.append({
                    "prompt": prompt_data["prompt"],
                    "brand": brand,
                    "visibility_score": metrics["visibility_score"],
                    "citation_count": metrics["citation_count"]
                })
        df = pd.DataFrame(rows)

        # Pivot to compare side-by-side
        comparison = df.pivot_table(
            index="prompt",
            columns="brand",
            values="visibility_score"
        )
        return comparison

    def find_content_gaps(self) -> List[Dict]:
        """Identify high-value prompts where you're not visible."""
        gaps = self.client.get_answer_gaps()

        # Filter for high-volume, low-difficulty prompts
        high_value_gaps = [
            gap for gap in gaps["data"]["gaps"]
            if gap["volume"] > 1000 and gap["difficulty"] < 60
        ]

        # Sort by opportunity score: demand weighted by ease
        high_value_gaps.sort(
            key=lambda x: x["volume"] * (100 - x["difficulty"]),
            reverse=True
        )
        return high_value_gaps[:20]  # Top 20 opportunities
```
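The sort key in `find_content_gaps`, `volume * (100 - difficulty)`, rewards high demand and low competition. A worked example with made-up numbers shows how the ranking plays out:

```python
def opportunity_score(volume: int, difficulty: int) -> int:
    """Opportunity score used to rank gaps: demand weighted by ease."""
    return volume * (100 - difficulty)


# Illustrative gap data, not real Promptwatch output
gaps = [
    {"prompt": "best crm for startups", "volume": 4000, "difficulty": 55},
    {"prompt": "crm pricing comparison", "volume": 9000, "difficulty": 85},
    {"prompt": "what is a crm", "volume": 2500, "difficulty": 30},
]
ranked = sorted(
    gaps,
    key=lambda g: opportunity_score(g["volume"], g["difficulty"]),
    reverse=True,
)
# Note how a mid-volume, low-difficulty prompt can outrank a high-volume, hard one.
```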
Generating Automated Reports
Now let's build a reporting system that compiles insights into actionable summaries.
Daily Summary Report
```python
from datetime import datetime


class ReportGenerator:
    def __init__(self, intelligence: CompetitiveIntelligence):
        self.intelligence = intelligence

    def generate_daily_summary(self) -> str:
        """Generate a daily competitive intelligence report."""
        report = []
        report.append("# Competitive Intelligence Report")
        report.append(f"Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}\n")

        # Visibility trends
        trends = self.intelligence.analyze_visibility_trends(days=7)
        report.append("## Visibility Trends (Last 7 Days)")
        report.append(f"- Prompts trending up: {len(trends[trends['trend'] == 'up'])}")
        report.append(f"- Prompts trending down: {len(trends[trends['trend'] == 'down'])}")
        report.append(f"- Average visibility: {trends['visibility_score'].mean():.1f}%\n")

        # Top declining prompts (most negative change first)
        declining = trends[trends["trend"] == "down"].nsmallest(5, "visibility_change")
        if not declining.empty:
            report.append("## ⚠️ Declining Visibility (Action Required)")
            for _, row in declining.iterrows():
                report.append(f"- {row['prompt']}: {row['visibility_change']:.1f}% drop")
            report.append("")

        # Content gaps
        gaps = self.intelligence.find_content_gaps()
        if gaps:
            report.append("## 🎯 High-Value Content Gaps")
            for gap in gaps[:5]:
                report.append(
                    f"- {gap['prompt']} (Vol: {gap['volume']:,}, "
                    f"Diff: {gap['difficulty']}, "
                    f"Competitors visible: {', '.join(gap['competitors'])})"
                )
            report.append("")

        # Competitor comparison (replace the domains with the brands you track)
        comparison = self.intelligence.compare_competitors(["competitor1.com", "competitor2.com"])
        report.append("## Competitor Visibility Comparison")
        report.append(f"Your average: {comparison['your-brand.com'].mean():.1f}%")
        report.append(f"Competitor 1 average: {comparison['competitor1.com'].mean():.1f}%")
        report.append(f"Competitor 2 average: {comparison['competitor2.com'].mean():.1f}%")

        return "\n".join(report)
```
Export to Multiple Formats
```python
# Add this method to the ReportGenerator class:

def export_report(self, report: str, format: str = "markdown"):
    """Export the report to a file."""
    os.makedirs("reports", exist_ok=True)
    timestamp = datetime.now().strftime("%Y%m%d_%H%M")

    if format == "markdown":
        filename = f"reports/ci_report_{timestamp}.md"
        with open(filename, "w") as f:
            f.write(report)
        print(f"Report saved to {filename}")

    elif format == "html":
        import markdown  # pip install markdown

        html = markdown.markdown(report)
        filename = f"reports/ci_report_{timestamp}.html"
        with open(filename, "w") as f:
            f.write(html)
        print(f"HTML report saved to {filename}")
```
Setting Up Real-Time Alerts
Automated alerts ensure you never miss critical changes in visibility.
Slack Integration
```python
import os

from slack_sdk.webhook import WebhookClient


class AlertSystem:
    def __init__(self, slack_webhook_url: str = None):
        self.slack_webhook = slack_webhook_url or os.getenv("SLACK_WEBHOOK_URL")
        self.client = WebhookClient(self.slack_webhook) if self.slack_webhook else None

    def send_slack_alert(self, message: str, severity: str = "info"):
        """Send an alert to a Slack channel."""
        if not self.client:
            print("Slack webhook not configured")
            return

        emoji = {
            "critical": "🚨",
            "warning": "⚠️",
            "info": "ℹ️",
            "success": "✅"
        }.get(severity, "ℹ️")

        response = self.client.send(
            text=f"{emoji} {message}",
            blocks=[
                {
                    "type": "section",
                    "text": {"type": "mrkdwn", "text": message}
                }
            ]
        )
        if response.status_code != 200:
            print(f"Failed to send Slack alert: {response.body}")

    def check_and_alert(self, intelligence: CompetitiveIntelligence):
        """Check for conditions that require alerts."""
        trends = intelligence.analyze_visibility_trends(days=1)

        # Alert on significant visibility drops
        critical_drops = trends[trends["visibility_change"] < -10]
        for _, row in critical_drops.iterrows():
            self.send_slack_alert(
                f"Critical visibility drop for '{row['prompt']}': "
                f"{row['visibility_change']:.1f}%",
                severity="critical"
            )

        # Alert on new content gaps
        gaps = intelligence.find_content_gaps()
        if len(gaps) > 0:
            self.send_slack_alert(
                f"Found {len(gaps)} new high-value content gaps. "
                f"Top opportunity: '{gaps[0]['prompt']}' (Vol: {gaps[0]['volume']:,})",
                severity="warning"
            )
```
Email Alerts
```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText


# Add this method to the AlertSystem class:
def send_email_alert(self, subject: str, body: str, recipients: List[str]):
    """Send an email alert via SMTP."""
    smtp_server = os.getenv("SMTP_SERVER")
    smtp_port = int(os.getenv("SMTP_PORT", 587))
    sender_email = os.getenv("SENDER_EMAIL")
    sender_password = os.getenv("SENDER_PASSWORD")

    msg = MIMEMultipart()
    msg["From"] = sender_email
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = subject
    msg.attach(MIMEText(body, "html"))

    try:
        with smtplib.SMTP(smtp_server, smtp_port) as server:
            server.starttls()
            server.login(sender_email, sender_password)
            server.send_message(msg)
        print(f"Email sent to {recipients}")
    except Exception as e:
        print(f"Failed to send email: {e}")
```
Scheduling and Automation
Run your bot automatically using cron jobs or cloud schedulers.
Using Cron (Linux/Mac)
Add to your crontab:
```
# Run daily at 8 AM
0 8 * * * /usr/bin/python3 /path/to/ci_bot.py

# Run every 6 hours
0 */6 * * * /usr/bin/python3 /path/to/ci_bot.py
```
Using GitHub Actions
Create .github/workflows/ci-bot.yml:
```yaml
name: Competitive Intelligence Bot

on:
  schedule:
    - cron: '0 8 * * *'  # Daily at 8 AM UTC
  workflow_dispatch:      # Manual trigger

jobs:
  run-bot:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run bot
        env:
          PROMPTWATCH_API_KEY: ${{ secrets.PROMPTWATCH_API_KEY }}
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
        run: python ci_bot.py
```
Main Bot Script
```python
# ci_bot.py
# Assumes PromptWatchClient, CompetitiveIntelligence, ReportGenerator,
# and AlertSystem (built in the sections above) are defined in or imported
# into this file.

if __name__ == "__main__":
    # Initialize components
    client = PromptWatchClient()
    intelligence = CompetitiveIntelligence(client)
    reporter = ReportGenerator(intelligence)
    alerts = AlertSystem()

    # Generate the daily report
    report = reporter.generate_daily_summary()
    reporter.export_report(report, format="markdown")
    reporter.export_report(report, format="html")

    # Check for alerts
    alerts.check_and_alert(intelligence)

    print("Competitive intelligence bot completed successfully")
```
Advanced Features: Content Generation Integration
The real power comes when you close the loop: detect gaps, generate content, track results.
Triggering Content Creation
```python
class ContentAutomation:
    def __init__(self, intelligence: CompetitiveIntelligence):
        self.intelligence = intelligence

    def generate_content_briefs(self) -> List[Dict]:
        """Create content briefs for high-value gaps."""
        gaps = self.intelligence.find_content_gaps()

        briefs = []
        for gap in gaps[:10]:  # Top 10 opportunities
            brief = {
                "prompt": gap["prompt"],
                "target_keywords": gap["related_keywords"],
                "competitors_to_analyze": gap["competitors"],
                "estimated_volume": gap["volume"],
                "difficulty": gap["difficulty"],
                "recommended_format": self._suggest_format(gap),
                "priority": "high" if gap["volume"] > 5000 else "medium"
            }
            briefs.append(brief)
        return briefs

    def _suggest_format(self, gap: Dict) -> str:
        """Suggest a content format based on the prompt type."""
        prompt = gap["prompt"].lower()
        if "vs" in prompt or "comparison" in prompt:
            return "comparison_article"
        elif "best" in prompt or "top" in prompt:
            return "listicle"
        elif "how to" in prompt:
            return "tutorial"
        elif "what is" in prompt:
            return "definition_guide"
        else:
            return "general_article"
```
Promptwatch's built-in AI writing agent can consume these briefs and generate optimized content automatically. The platform analyzes 880M+ citations to understand what AI models want to see, then creates articles engineered to get cited.
Monitoring Bot Performance and Debugging
Logging
```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('ci_bot.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

# Use in your code (illustrative calls):
logger.info("Starting competitive intelligence analysis")
logger.warning(f"Rate limit approaching: {remaining} requests left")
logger.error(f"API request failed: {error}")
```
Error Handling
```python
try:
    data = client.get_visibility_scores()
except requests.exceptions.HTTPError as e:
    logger.error(f"HTTP error: {e}")
    alerts.send_slack_alert(f"Bot error: {e}", severity="critical")
except Exception as e:
    logger.error(f"Unexpected error: {e}")
    alerts.send_slack_alert(f"Bot crashed: {e}", severity="critical")
```
Real-World Use Cases
Use Case 1: SaaS Company Tracking Product Comparisons
A project management SaaS tracks prompts like "Asana vs Monday vs ClickUp" across 10 AI engines. The bot:
- Detects when competitors gain visibility in comparison prompts
- Identifies missing comparison pages on their site
- Generates content briefs for high-volume comparisons
- Alerts the marketing team when visibility drops below 50%
Result: 34% increase in AI-driven traffic in 90 days.
Use Case 2: E-commerce Brand Monitoring Product Recommendations
An outdoor gear retailer tracks "best hiking boots" and related prompts. The bot:
- Monitors ChatGPT Shopping recommendations daily
- Compares their visibility vs REI, Patagonia, and The North Face
- Identifies product categories where they're invisible
- Triggers content creation for missing buying guides
Result: 2.3x increase in AI search citations, 18% lift in organic traffic.
Use Case 3: Agency Managing 50+ Clients
A digital marketing agency uses the bot to:
- Track AI visibility for all clients in one dashboard
- Generate weekly client reports automatically
- Alert clients immediately when visibility drops
- Prioritize content creation across the portfolio
Result: 40% reduction in manual reporting time, 3x faster response to visibility changes.
Best Practices and Optimization Tips
1. Start Small, Scale Gradually
Begin with 20-50 high-priority prompts. Once the bot is stable, expand to hundreds.
2. Monitor API Usage
Track your API request count to avoid hitting rate limits:
```python
if response["meta"]["rate_limit_remaining"] < 100:
    logger.warning("Approaching rate limit")
    time.sleep(60)  # Throttle requests
```
3. Cache Frequently Accessed Data
Use Redis or simple file caching to reduce API calls:
```python
import json
import os
from datetime import datetime, timedelta

def get_cached_data(cache_key: str, ttl_hours: int = 6):
    """Return cached data if it is fresher than ttl_hours, else None."""
    cache_file = f"cache/{cache_key}.json"

    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            cached = json.load(f)

        cache_time = datetime.fromisoformat(cached["timestamp"])
        if datetime.now() - cache_time < timedelta(hours=ttl_hours):
            return cached["data"]

    return None
```
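The read side above needs a matching write side. A sketch that mirrors the same file layout (`set_cached_data` is a helper name introduced here):

```python
import json
import os
from datetime import datetime


def set_cached_data(cache_key: str, data, cache_dir: str = "cache"):
    """Store data alongside a timestamp, in the format get_cached_data expects."""
    os.makedirs(cache_dir, exist_ok=True)
    cache_file = os.path.join(cache_dir, f"{cache_key}.json")
    with open(cache_file, "w") as f:
        json.dump({"timestamp": datetime.now().isoformat(), "data": data}, f)
```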
4. Set Alert Thresholds Carefully
Avoid alert fatigue by tuning thresholds:
- Critical alerts: >15% visibility drop in 24 hours
- Warning alerts: >10% drop or new high-value gap
- Info alerts: Weekly summary only
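Keeping these numbers in one function guarantees every alert path applies the same rules. A small sketch matching the thresholds above (`classify_drop` is an illustrative name):

```python
from typing import Optional


def classify_drop(change_pct: float) -> Optional[str]:
    """Map a 24-hour visibility change (percentage points) to an alert severity."""
    if change_pct <= -15:
        return "critical"
    if change_pct <= -10:
        return "warning"
    return None  # below the alerting thresholds: no alert
```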
5. Validate Data Quality
Check for anomalies before sending alerts:
```python
def validate_data(data: Dict) -> bool:
    """Basic data quality checks."""
    if not data or "data" not in data:
        return False
    if len(data["data"]["prompts"]) == 0:
        logger.warning("No prompts returned")
        return False
    return True
```
Troubleshooting Common Issues
Issue: API Authentication Failures
Solution: Verify your API key is correct and hasn't expired. Check the Promptwatch dashboard for API key status.
Issue: Rate Limit Errors
Solution: Implement exponential backoff and reduce request frequency. Consider upgrading to a higher-tier plan.
Issue: Incomplete Data
Solution: Some prompts may not have data for all AI models. Handle missing data gracefully:
```python
visibility = prompt_data.get("visibility_score", 0)
if visibility == 0:
    logger.info(f"No visibility data for {prompt_data['prompt']}")
```
Issue: Slack Alerts Not Sending
Solution: Verify webhook URL is correct and the Slack app has proper permissions.
Next Steps: Scaling Your Bot
Once your basic bot is running, consider these enhancements:
1. Multi-Brand Monitoring
Track multiple brands or clients in one bot:
```python
brands = ["brand1.com", "brand2.com", "brand3.com"]

for brand in brands:
    # Assumes CompetitiveIntelligence has been extended to accept a brand argument
    intelligence = CompetitiveIntelligence(client, brand=brand)
    reporter = ReportGenerator(intelligence)
    report = reporter.generate_daily_summary()
    # Process each brand's report separately
```
2. Predictive Analytics
Use historical data to predict future visibility trends:
```python
import numpy as np
from sklearn.linear_model import LinearRegression  # pip install scikit-learn

def predict_visibility_trend(historical_data: pd.DataFrame) -> float:
    """Fit a linear trend to historical scores and project the next day."""
    X = np.array(range(len(historical_data))).reshape(-1, 1)
    y = historical_data["visibility_score"].values

    model = LinearRegression()
    model.fit(X, y)

    next_day = np.array([[len(historical_data)]])
    return model.predict(next_day)[0]
```
3. Integration with Content Management Systems
Automatically create draft articles in WordPress, Contentful, or your CMS when gaps are detected.
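For WordPress, the core REST API accepts a JSON post body at `/wp-json/wp/v2/posts`, authenticated with an application password. A sketch that turns a brief from `generate_content_briefs` into a draft payload (the endpoint and fields follow the WordPress core API; the brief keys and helper names assume the structures above):

```python
import base64


def draft_payload(brief: dict) -> dict:
    """Build a WordPress REST API post body from a content brief (status=draft)."""
    return {
        "title": brief["prompt"].title(),
        "content": (
            "<!-- Auto-generated brief: targets "
            + ", ".join(brief["target_keywords"])
            + " -->"
        ),
        "status": "draft",
    }


def basic_auth_header(user: str, app_password: str) -> dict:
    """WordPress application-password auth header (HTTP Basic)."""
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}


# Posting the draft (requires the requests package and a real site URL):
# requests.post(f"{site}/wp-json/wp/v2/posts", json=draft_payload(brief),
#               headers=basic_auth_header(user, app_password))
```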
4. Custom Dashboards
Build a web dashboard using Streamlit or Dash to visualize trends interactively.
Conclusion
Building a competitive intelligence bot with Promptwatch's API and Python gives you a massive advantage in the AI search landscape. You're no longer guessing where your brand stands -- you have real-time data, automated alerts, and actionable insights delivered daily.
The bot we built in this guide:
- Monitors visibility across 10+ AI engines automatically
- Compares your performance vs competitors
- Identifies high-value content gaps
- Sends real-time alerts when visibility changes
- Generates detailed reports without manual work
But monitoring is just the start. The real power comes when you close the loop: detect gaps, generate optimized content, track results. Promptwatch is built around this action cycle -- it shows you what's missing, then helps you fix it with AI-powered content generation grounded in 880M+ citations.
Most competitors (Otterly.AI, Peec.ai, AthenaHQ) stop at monitoring. Promptwatch is the only platform that combines tracking, optimization, and content creation in one workflow.
Ready to build your bot? Sign up for Promptwatch, grab your API key, and start tracking your AI search visibility today. The code examples in this guide are production-ready -- just add your API credentials and run.