The prompt results page shows you exactly how AI models respond to a specific prompt. Here you'll see your performance metrics, which sources are cited, and every individual response. At the top of the page, you'll find key information and actions.
Prompt Results Header

Last updated

See when data was last collected for this prompt. Click Refresh to manually trigger a new run and get fresh results.

Quick actions

Organize and navigate without leaving the page:
  • Add tags - Group this prompt with tags like “Competitor Comparison”, “Feature Query”, or “Pricing”
  • Attach personas - Assign personas (e.g., “Enterprise Buyer”, “SMB Owner”) to segment your analysis
  • View prompt - Jump to the prompt configuration page

Core metrics

The header displays your four key performance indicators. Each metric shows:
  • Current value
  • Trend direction (up/down arrow)
  • Change from previous period
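
To make those three attributes concrete, here is a minimal sketch of how one header metric could be modeled. The `HeaderMetric` class and its field names are illustrative assumptions, not Attensira's actual data model:

```python
from dataclasses import dataclass

@dataclass
class HeaderMetric:
    """Hypothetical shape for one header metric (illustrative only)."""
    name: str              # e.g. "Visibility"
    current_value: float   # the value shown in the header
    previous_value: float  # the value from the previous period

    @property
    def change(self) -> float:
        # Change from the previous period, shown next to the arrow
        return self.current_value - self.previous_value

    @property
    def trend(self) -> str:
        # Trend direction rendered as an up/down arrow
        return "up" if self.change >= 0 else "down"

metric = HeaderMetric("Visibility", current_value=42.0, previous_value=38.5)
print(metric.trend, f"{metric.change:+.1f}")  # up +3.5
```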

Improvements

If you have enabled improvements for this prompt, you will see recommended actions to boost your visibility.
The improvements section shows:
| Recommendation | Description |
| --- | --- |
| Pages to optimize | Specific pages on your site that could help you rank for this prompt |
| Content gaps | Topics you should cover that competitors are ranking for |
| Similarity score | How well each page matches the prompt (content relevance, not difficulty) |
The similarity score tells you how well a page's content matches the prompt, not how difficult it is to rank for it.
  • High similarity (70%+): Content already relevant. Focus optimization here first.
  • Medium similarity (40-70%): Partially relevant. May need expansion.
  • Low similarity (below 40%): Different topic. Consider creating new content instead.
Example for “best CRM for sales teams” (applied in the sketch below):
  • /solutions/sales-crm (89%): directly addresses the query
  • /features/reporting (45%): related but not specific enough
  • /blog/marketing-tips (12%): wrong topic entirely
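As a rough illustration, here is how those thresholds could be applied in code. The `similarity_action` helper is hypothetical; only the cutoffs (70% and 40%) and the example pages come from this page:

```python
def similarity_action(score: float) -> str:
    """Map a similarity score (0-100) to the guidance above."""
    if score >= 70:
        return "optimize this page first"    # already relevant
    if score >= 40:
        return "expand; partially relevant"  # may need expansion
    return "consider creating new content"   # different topic

for page, score in [
    ("/solutions/sales-crm", 89),
    ("/features/reporting", 45),
    ("/blog/marketing-tips", 12),
]:
    print(f"{page} ({score}%): {similarity_action(score)}")
```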
Enable improvements in your prompt settings to get AI-powered recommendations for this prompt.

Sources

The sources section shows which websites AI models cited when answering this prompt.

Source table

Sources are grouped by domain with these columns:
| Column | Description |
| --- | --- |
| Domain | The source website (e.g., g2.com, wikipedia.org) |
| Category | Type of source (competitor, review site, publication, etc.) |
| Share of Voice | What percentage of the answer's meaning came from this source |
| Attribution | How the source was used (direct quote, paraphrase, background) |
| Position | Where this source ranks among all sources |
| Citations | How many times this source was cited |
| URLs | Number of specific pages cited from this domain |

Understanding source attribution

Share of Voice for sources shows where the answer's meaning comes from, not just which links appeared.
| Source | Share of Voice |
| --- | --- |
| wikipedia.org | 68% |
| g2.com | 18% |
| your-company.com | 8% |
| techcrunch.com | 6% |
In this example, even though wikipedia.org contributes just one link, 68% of the answer's meaning came from it. Authoritative sources often shape the answer even when they are not prominently displayed.
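
Attensira does not document the exact attribution model, but conceptually Share of Voice is a normalization: each source's contribution weight divided by the total. A sketch, assuming made-up contribution weights chosen only to reproduce the percentages in the table above:

```python
# Hypothetical attribution weights; only the resulting percentages
# match the example table above.
raw_contribution = {
    "wikipedia.org": 3.4,
    "g2.com": 0.9,
    "your-company.com": 0.4,
    "techcrunch.com": 0.3,
}

total = sum(raw_contribution.values())
for domain, weight in raw_contribution.items():
    print(f"{domain}: {weight / total:.0%}")
# wikipedia.org: 68%, g2.com: 18%, your-company.com: 8%, techcrunch.com: 6%
```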

What to look for

| Pattern | What it means | Action |
| --- | --- | --- |
| Competitor dominates | Their content shapes answers | Create competitive content |
| Wikipedia high | Generic info dominates | Add unique, specific details |
| Review sites prominent | Third-party validation matters | Improve review presence |
| Your site low | Your content is not used | Optimize pages for AI |

Responses

Below the sources section, you will find every individual AI response for this prompt.

Understanding the table

Each row is one response from one AI model:
| Column | Description |
| --- | --- |
| Platform | Which AI model (ChatGPT, Claude, Perplexity) |
| Country | Geographic setting for the query |
| Persona | Which persona was used, if any |
| Position | Where you ranked (1st, 2nd, not mentioned) |
| Mentioned | Whether your brand appeared |
| Sentiment | How you were described |
| Date | When captured |
One prompt in Attensira creates multiple responses. If you track 2 countries and 3 AI models, that is 6 individual responses per run.
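The arithmetic is a simple cross product, as this quick sketch shows (the country and model lists are illustrative, not a fixed set):

```python
from itertools import product

countries = ["US", "UK"]                      # 2 tracked countries
models = ["ChatGPT", "Claude", "Perplexity"]  # 3 tracked AI models

# One response per (country, model) pair
responses_per_run = list(product(countries, models))
print(len(responses_per_run))  # 6
```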

Response detail

Click any row to see the full response.
The detail view shows:
  • Full AI response with your brand highlighted
  • Sources cited in this specific response
  • Competitor mentions and how they were described
  • Sentiment breakdown

Filtering responses

Use filters to find patterns:
| Filter | Use case |
| --- | --- |
| By Platform | Compare ChatGPT vs Claude vs Perplexity |
| By Country | See geographic differences |
| By Position | Find where you win (Position = 1) or lose |
| By Mentioned | Focus on responses where you are missing |
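
If you export responses for offline analysis, the same filters are one-liners. The row structure below is an assumption that mirrors the table columns above, not Attensira's actual export schema:

```python
# Illustrative rows; field names mirror the columns described above.
responses = [
    {"platform": "ChatGPT",    "country": "US", "position": 1,    "mentioned": True},
    {"platform": "Claude",     "country": "UK", "position": None, "mentioned": False},
    {"platform": "Perplexity", "country": "US", "position": 3,    "mentioned": True},
]

# By Position: find where you win
wins = [r for r in responses if r["position"] == 1]

# By Mentioned: focus on responses where you are missing
missing = [r for r in responses if not r["mentioned"]]

print(len(wins), len(missing))  # 1 1
```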

Finding patterns

Look across responses to answer:
  • Why do I rank better on Claude? Filter by platform, compare attributes
  • Why am I missing in the UK? Filter by country, check local competitors
  • What makes me win? Filter Position = 1, analyze common factors
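
Aggregating over the filtered rows answers these questions at a glance. Continuing with the same hypothetical row shape as the earlier sketch:

```python
from collections import Counter

responses = [  # illustrative rows only
    {"platform": "ChatGPT", "country": "US", "position": 2},
    {"platform": "Claude",  "country": "US", "position": 1},
    {"platform": "Claude",  "country": "UK", "position": 1},
]

# "Why do I rank better on Claude?" -- count wins per platform
wins = Counter(r["platform"] for r in responses if r["position"] == 1)
print(wins.most_common())  # [('Claude', 2)]
```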

Next steps