Key takeaways
- Promptwatch's API exposes prompt-level visibility data -- brand mentions, citation counts, model coverage, and more -- that you can pipe directly into Tableau or Power BI.
- A prompt coverage heatmap lets you see at a glance which prompts you're winning, which you're losing, and which AI models are responsible for the gaps.
- The build involves three steps: pull data from the API, reshape it into a prompt × model matrix, then configure the heatmap visualization.
- You don't need a data engineering background. A basic Python or JavaScript script handles the heavy lifting, and both Tableau and Power BI can connect to the output with a few clicks.
Why build a prompt coverage heatmap?
If you're tracking AI visibility across multiple models -- ChatGPT, Perplexity, Claude, Gemini, and the rest -- you're probably staring at tables of numbers. A prompt coverage heatmap turns that data into something your whole team can read in ten seconds.
The idea is simple: rows are prompts (e.g. "best project management software for agencies"), columns are AI models, and the cell color shows your brand's visibility score for that combination. Dark cell? You're getting cited. Light cell? You're invisible. The pattern that emerges tells you exactly where to focus.
Promptwatch tracks visibility across 10 AI models and stores prompt-level data including citation counts, mention rates, and competitor comparisons. Its API makes all of that available programmatically, which means you can pull it into any BI tool you already use.

Step 1: Get your API credentials
Log into your Promptwatch account and navigate to Settings > API. Generate an API key and note your workspace ID -- you'll need both for every request.
The base URL for the Promptwatch API is:
https://api.promptwatch.com/v1
Authentication uses a Bearer token in the request header:
Authorization: Bearer YOUR_API_KEY
Keep this key out of version control. If you're building a scheduled refresh in Power BI or Tableau, store it as an environment variable or use your BI tool's credential manager.
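For a local script, one lightweight option is reading the key from an environment variable. Here's a minimal sketch -- the variable name `PROMPTWATCH_API_KEY` is our own convention, not something Promptwatch prescribes:

```python
import os

def load_api_key(var: str = "PROMPTWATCH_API_KEY") -> str:
    """Read the API key from the environment so it never lands in version control."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running the export script.")
    return key
```

Failing loudly when the variable is missing beats silently sending an empty Bearer token and debugging a 401 later.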
Step 2: Pull prompt visibility data
The endpoint you want is `/prompts/visibility`. It returns a list of prompts in your tracked set, along with per-model visibility scores.
Here's a minimal Python script to fetch the data and write it to a CSV:
```python
import requests
import pandas as pd

API_KEY = "your_api_key_here"
WORKSPACE_ID = "your_workspace_id"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
params = {
    "workspace_id": WORKSPACE_ID,
    "date_from": "2026-01-01",
    "date_to": "2026-04-18",
}

response = requests.get(
    "https://api.promptwatch.com/v1/prompts/visibility",
    headers=headers,
    params=params,
)
response.raise_for_status()  # fail fast on auth or quota errors
data = response.json()

# Flatten the nested response into one row per prompt-model pair
rows = []
for item in data["prompts"]:
    prompt_text = item["prompt"]
    for model, score in item["model_scores"].items():
        rows.append({
            "prompt": prompt_text,
            "model": model,
            "visibility_score": score,
            "citation_count": item["citations"].get(model, 0),
        })

df = pd.DataFrame(rows)
df.to_csv("prompt_coverage.csv", index=False)
print(f"Exported {len(df)} rows")
```
The resulting CSV has one row per prompt-model combination. That long format is exactly what both Tableau and Power BI expect for a heatmap.
A few things worth noting:
- The `visibility_score` field is a 0-100 index. 0 means your brand never appears for that prompt on that model; 100 means you appear in every response.
- The `citation_count` field shows raw citation volume over the date range -- useful for weighting your analysis.
- If you have a lot of prompts (say, 150+ on the Professional plan), add pagination logic using the `page` and `per_page` params.
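The pagination loop itself is short. Here's a sketch that assumes the API signals the last page by returning fewer than `per_page` items; the `fetch_page` callable stands in for the `requests.get` call above:

```python
def fetch_all_prompts(fetch_page, per_page=100):
    """Collect every page of results.

    fetch_page(page, per_page) should return the list of prompt records
    for that page -- wire it to the requests.get call with the
    page/per_page query params added.
    """
    prompts, page = [], 1
    while True:
        batch = fetch_page(page, per_page)
        prompts.extend(batch)
        if len(batch) < per_page:  # a short (or empty) page means we're done
            break
        page += 1
    return prompts
```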
Step 3: Shape the data for a heatmap
Both Tableau and Power BI can work directly with the long-format CSV from step 2. You don't need to pivot it into a wide matrix -- the tools handle that internally. But there are a few transformations worth doing before you load the data.
Normalize prompt labels
Long prompts don't fit well in a visualization. Either truncate them in your script:
```python
df["prompt_short"] = df["prompt"].str[:60] + "..."
```
Or create a numeric prompt ID and maintain a separate lookup table. The lookup approach is cleaner if you're building a dashboard that others will filter.
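A sketch of the lookup approach with pandas, using the column names from the step 2 CSV:

```python
import pandas as pd

def add_prompt_ids(df: pd.DataFrame):
    """Swap long prompt text for a numeric ID; return the data plus a lookup table."""
    lookup = (
        df[["prompt"]]
        .drop_duplicates()
        .reset_index(drop=True)
        .rename_axis("prompt_id")
        .reset_index()
    )
    # Every row keeps its prompt_id; the lookup table maps IDs back to full text
    return df.merge(lookup, on="prompt"), lookup
```

Export both frames as separate CSVs and join them inside the BI tool when you need the full prompt text in a tooltip.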
Add a coverage category
A categorical field makes the color scale more readable than a continuous gradient:
```python
def categorize(score):
    if score >= 70:
        return "Strong"
    elif score >= 40:
        return "Moderate"
    elif score > 0:
        return "Weak"
    else:
        return "Not visible"

df["coverage_tier"] = df["visibility_score"].apply(categorize)
```
Add model groupings (optional)
If you're tracking 10 models, grouping them by type can help:
```python
model_groups = {
    "chatgpt": "OpenAI",
    "perplexity": "Perplexity",
    "claude": "Anthropic",
    "gemini": "Google",
    "google_ai_overviews": "Google",
    "deepseek": "Other",
    "grok": "Other",
    "meta_ai": "Meta",
    "copilot": "Microsoft",
    "mistral": "Other",
}
df["model_group"] = df["model"].map(model_groups)
```
Save the final file as `prompt_coverage_clean.csv`.
Step 4: Build the heatmap in Tableau
Tableau is the faster option for heatmaps. The built-in square mark type is designed for exactly this use case.
Connect to your data
- Open Tableau Desktop and click "Connect to Data."
- Select "Text File" and load `prompt_coverage_clean.csv`.
- Tableau will auto-detect the column types. Make sure `visibility_score` is recognized as a number (Measure), not a string.
Build the view
- Drag `model` to Columns.
- Drag `prompt_short` to Rows.
- In the Marks card, change the mark type from Automatic to Square.
- Drag `visibility_score` to Color in the Marks card.
- Click the Color legend and select "Edit Colors." Choose a diverging palette -- something like orange-blue or red-green -- with white or light gray at the midpoint.
At this point you have a working heatmap. The darker squares show where your brand is visible; the lighter ones show gaps.
Refine it
- Drag `citation_count` to Size to make high-volume prompts visually larger.
- Drag `coverage_tier` to Detail so you can filter by tier in the final dashboard.
- Right-click the `visibility_score` color legend and select "Edit Colors > Advanced" to set the midpoint at 50.
- Sort rows by average visibility score (descending) so your strongest prompts appear at the top.

Publish or export
If you're sharing with stakeholders, publish to Tableau Server or Tableau Cloud. The CSV refresh can be automated via Tableau Prep or by scheduling your Python script to overwrite the source file.
Step 5: Build the heatmap in Power BI
Power BI doesn't have a native heatmap visual, but the Matrix visual with conditional formatting gets you 90% of the way there. For a true color-grid heatmap, use the free "Heatmap" custom visual from the AppSource marketplace.
Load the data
- Open Power BI Desktop and click "Get Data > Text/CSV."
- Load `prompt_coverage_clean.csv`.
- In Power Query, verify that `visibility_score` is a Decimal Number type.
Option A: Matrix with conditional formatting
- Add a Matrix visual to the canvas.
- Set `prompt_short` as Rows, `model` as Columns, and `visibility_score` as Values (with aggregation set to Average).
- In the Format pane, go to Cell elements > Background color.
- Turn it on and click "Advanced controls."
- Set the format style to "Gradient" with minimum color (red or light gray) at 0 and maximum color (dark blue or green) at 100.
This gives you a color-coded matrix that reads exactly like a heatmap. It's interactive by default -- clicking a cell cross-filters other visuals on the page.
Option B: Custom heatmap visual
- In the Visualizations pane, click the three dots and select "Get more visuals."
- Search for "Heatmap" and install the visual from the AppSource marketplace (it's free).
- Configure it with `prompt_short` on the Y axis, `model` on the X axis, and `visibility_score` as the Value.
The custom visual gives you more color control and looks cleaner, but the Matrix approach is more flexible for filtering and drill-through.

Add slicers
Add slicers for `model_group`, `coverage_tier`, and date range. This lets stakeholders filter down to "show me only Google models" or "show me only prompts where we're weak."
Step 6: Automate the data refresh
A static heatmap is useful once. An auto-refreshing one is useful every week.
Tableau
Schedule your Python script as a cron job (Linux/Mac) or Task Scheduler job (Windows) to overwrite the CSV daily. If you're on Tableau Cloud, use Tableau Bridge to keep the live connection to a local file, or publish the data source directly via the Tableau REST API.
Power BI
The cleanest approach for Power BI is to replace the CSV with a Python script data source:
- In Power BI Desktop, go to "Get Data > Python Script."
- Paste your API fetch script directly. Power BI will run it on each refresh and load the resulting DataFrame.
- Publish to Power BI Service and set up a scheduled refresh (daily or weekly).
You'll need to configure a gateway if the Python environment is on your local machine. Alternatively, push the data to a cloud database (Google BigQuery works well) and connect Power BI to that instead.

Comparison: Tableau vs Power BI for this use case
| Dimension | Tableau | Power BI |
|---|---|---|
| Native heatmap support | Yes (Square marks) | Requires custom visual or Matrix workaround |
| Setup time | ~15 minutes | ~20 minutes |
| Data refresh automation | Tableau Prep or REST API | Python script source or gateway |
| Interactivity | Strong (actions, filters) | Strong (slicers, drill-through) |
| Sharing | Tableau Server/Cloud | Power BI Service |
| Cost | Tableau Creator from ~$75/mo | Power BI Pro from ~$10/mo |
| Best for | Teams already on Tableau | Microsoft-stack organizations |
Both tools produce excellent heatmaps. If your team is already in the Microsoft ecosystem, Power BI is the obvious choice. If you need more visual customization or you're presenting to clients, Tableau's output tends to look more polished out of the box.
What to look for in the heatmap
Once your visualization is live, here's how to read it:
- Dark row, all columns: You're visible across all AI models for that prompt. Protect this position -- monitor for competitor gains.
- Dark row, some columns: You're winning on certain models but missing others. This is your most actionable gap. Check which models are light and look at what content those models are citing instead of you.
- Light row, all columns: You have no visibility for this prompt anywhere. Run an Answer Gap Analysis in Promptwatch to see what content competitors have that you don't, then create something better.
- Single dark cell: One model is citing you for a prompt but others aren't. The content exists -- it may just need structural improvements (clearer headings, more direct answers, better schema markup) to get picked up more broadly.
The heatmap doesn't tell you why you're invisible -- that's where Promptwatch's Answer Gap Analysis and citation data come in. Think of the heatmap as the triage layer: it shows you where to look, and the platform shows you what to do about it.
Extending the build
A few directions worth exploring once the basic heatmap is working:
- Competitor overlay: Pull competitor visibility scores from the same API endpoint and build a side-by-side comparison. Two heatmaps on the same dashboard -- yours and your main competitor's -- make gaps obvious immediately.
- Trend view: Add a date dimension and build a small multiples view showing how your coverage has changed week over week. This is particularly useful after publishing new content.
- Prompt volume weighting: Promptwatch provides prompt volume estimates. Multiply visibility score by volume to get a "weighted coverage" metric that prioritizes high-traffic prompts over obscure ones.
- Alert triggers: In Power BI, you can set data alerts that email you when a measure drops below a threshold. Set one for average visibility score so you get notified if a model update suddenly drops your coverage.
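The volume-weighting idea takes only a few lines. A sketch, assuming the volume estimate arrives in a `prompt_volume` column (the column name is our assumption -- use whatever field your export actually carries):

```python
import pandas as pd

def weighted_coverage(df: pd.DataFrame) -> pd.Series:
    """Volume-weighted mean visibility per prompt, so high-traffic prompts dominate."""
    weighted = df["visibility_score"] * df["prompt_volume"]
    # Weighted average: sum of score*volume divided by total volume, per prompt
    return weighted.groupby(df["prompt"]).sum() / df.groupby("prompt")["prompt_volume"].sum()
```

Feed the result back into the heatmap as an alternate color measure to see which gaps are actually worth closing first.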
Wrapping up
The build itself is straightforward: fetch data from the Promptwatch API, reshape it into a long-format prompt × model table, and configure a color-coded matrix in Tableau or Power BI. The value isn't in the technical complexity -- it's in having a visualization that makes AI visibility gaps impossible to ignore in a stakeholder meeting.
Once the heatmap is live and refreshing automatically, you'll spend less time explaining why AI visibility matters and more time acting on the gaps it reveals.
