Sentiment measures how AI talks about you when it mentions your brand. Getting mentioned isn’t enough — what matters is whether AI recommends you enthusiastically or with hesitation.
Understanding Sentiment
When AI mentions your brand, it doesn’t just list your name — it describes you. That description carries an opinion, and that opinion shapes how users perceive you before they ever visit your site.
Here’s the same brand mentioned with different sentiment:
Positive Sentiment:
"For small agencies, Acme CRM is an excellent choice. Users consistently
praise its intuitive interface and responsive support team. It's
particularly well-suited for teams that prioritize ease of use."
Negative Sentiment:
"Acme CRM is an option to consider, though some users report frustration
with customer support response times. The learning curve can be steep
for non-technical teams, and pricing has increased significantly."
Neutral Sentiment:
"Acme CRM offers contact management, pipeline tracking, and email
integration. It serves small to medium businesses in various industries."
Same brand, three very different impressions. The user reading the positive version is primed to convert. The user reading the negative version is primed to look elsewhere.
Why Sentiment matters
In traditional search, users read your content and form their own opinion. In AI search, AI forms the opinion for them. The way AI describes you directly shapes perception before users click through.
The impact on conversion:
| Sentiment | User behavior | Business impact |
|---|---|---|
| Positive | Users arrive pre-sold, ready to convert | Higher conversion rates, shorter sales cycles |
| Neutral | Users arrive curious, need convincing | Normal conversion rates, typical sales process |
| Negative | Users arrive skeptical or don’t click at all | Lower conversions, objection-heavy sales calls |
You can have high Share of Voice with low Sentiment — meaning you’re mentioned often but described poorly. This is worse than not being mentioned at all, because you’re actively being positioned negatively in users’ minds.
How Sentiment is calculated
Sentiment is analyzed directly from AI responses. When you track a prompt, Attensira sends it to multiple AI platforms and analyzes the actual response text to score how positively or negatively each model describes you.
Sentiment scoring
Each response is scored on a 0-100 scale based on the language used:
| Score | Label | What AI typically says |
|---|---|---|
| 80-100 | Very Positive | “Excellent,” “highly recommend,” “users love,” “best-in-class” |
| 70-79 | Positive | “Great option,” “strong choice,” “well-regarded,” “popular” |
| 50-69 | Neutral | “An option,” “offers features,” factual descriptions without opinion |
| 30-49 | Negative | “Some concerns,” “users report issues,” “can be challenging” |
| 0-29 | Very Negative | “Avoid,” “significant problems,” “better alternatives exist” |
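As a rough illustration, the bands in the table above can be expressed as a simple lookup. This is a hypothetical sketch of the thresholds, not Attensira's actual scoring code:

```python
def sentiment_label(score: float) -> str:
    """Map a 0-100 sentiment score to the label bands in the table above."""
    if score >= 80:
        return "Very Positive"
    if score >= 70:
        return "Positive"
    if score >= 50:
        return "Neutral"
    if score >= 30:
        return "Negative"
    return "Very Negative"


print(sentiment_label(78))  # Positive
print(sentiment_label(42))  # Negative
```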
How it works in practice
For each prompt you track, we get responses from multiple AI platforms and score each one:
Prompt: "What's the best CRM for marketing agencies?"
Responses:
├── ChatGPT response: "Acme CRM is excellent for agencies..." → Score: 85
├── Perplexity response: "Acme is a solid choice with good..." → Score: 78
├── Claude response: "For marketing agencies, Acme offers..." → Score: 72
├── Gemini response: "Acme CRM is well-regarded among..." → Score: 80
└── Grok response: "Acme has strong agency features..." → Score: 76
Average for this prompt: (85 + 78 + 72 + 80 + 76) / 5 = 78.2
Your overall sentiment is the average across all prompts and all platforms. A score below 50 indicates more negative mentions than positive.
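Here is a minimal sketch of that averaging, assuming one scored response per platform per tracked prompt. The data shapes and names are illustrative, not Attensira's API:

```python
from statistics import mean

# Hypothetical scores: one response per platform for a tracked prompt.
scores_by_prompt = {
    "What's the best CRM for marketing agencies?": {
        "ChatGPT": 85, "Perplexity": 78, "Claude": 72, "Gemini": 80, "Grok": 76,
    },
    # ...more tracked prompts would go here...
}

# Per-prompt average across platforms.
prompt_averages = {
    prompt: mean(scores.values()) for prompt, scores in scores_by_prompt.items()
}

# Overall sentiment: average over every response, across all prompts and platforms.
overall = mean(s for scores in scores_by_prompt.values() for s in scores.values())

print(prompt_averages)    # per-prompt averages, e.g. 78.2 for the prompt above
print(round(overall, 1))  # 78.2
```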
Breaking down your sentiment
You can view sentiment in multiple ways (see the aggregation sketch after this list):
- Overall sentiment — Average across all prompts and platforms
- Sentiment by platform — How each AI model perceives you (ChatGPT might score you 82, while Perplexity scores you 68)
- Sentiment by prompt — Which topics generate positive vs. negative responses
- Sentiment over time — Track how perception changes week to week
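All of these cuts can be computed from a flat list of scored responses. A rough sketch follows; the record fields are assumptions for illustration rather than Attensira's data model:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: one per (prompt, platform, week) response.
responses = [
    {"prompt": "best CRM for agencies", "platform": "ChatGPT",    "week": "2024-W10", "score": 85},
    {"prompt": "best CRM for agencies", "platform": "Perplexity", "week": "2024-W10", "score": 68},
    {"prompt": "CRM with email sync",   "platform": "ChatGPT",    "week": "2024-W11", "score": 74},
]

def average_by(records, field):
    """Group responses by one field and average the sentiment scores."""
    groups = defaultdict(list)
    for r in records:
        groups[r[field]].append(r["score"])
    return {key: round(mean(scores), 1) for key, scores in groups.items()}

print(average_by(responses, "platform"))  # sentiment by platform
print(average_by(responses, "prompt"))    # sentiment by prompt
print(average_by(responses, "week"))      # sentiment over time
```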
What drives Sentiment
AI forms opinions about your brand based on what it reads across the web. Key sources:
1. Review sites and ratings
Your ratings on G2, Capterra, TrustRadius, and similar sites directly influence how AI describes you. A 4.8-star rating leads to “highly rated” descriptions. A 3.2-star rating leads to “mixed reviews.”
High review scores → "Users consistently rate Acme highly..."
Low review scores → "Acme receives mixed reviews, with some users noting..."
2. Social media and community discussion
Community sentiment on Twitter/X, Reddit, LinkedIn, and industry forums shapes AI’s perception. Frequent complaints = negative sentiment. Enthusiastic users = positive sentiment.
3. Press and media coverage
How publications write about you matters. Positive feature articles boost sentiment. Critical coverage or negative news stories drag it down.
4. Customer testimonials and case studies
Published success stories and testimonials give AI positive language to draw from. Without them, AI has less positive material to reference.
5. Comparison content
When third parties compare you to competitors, who wins? If comparison articles consistently favor competitors, AI absorbs that positioning.
Sentiment vs. other metrics
Understanding how Sentiment interacts with your other metrics:
| Scenario | What it means | Priority action |
|---|---|---|
| High Share of Voice + High Sentiment | Ideal state — you’re mentioned often and positively | Maintain and expand |
| High Share of Voice + Low Sentiment | Dangerous — you’re visible but described negatively | Fix perception urgently |
| Low Share of Voice + High Sentiment | Hidden gem — those who know you love you | Increase awareness |
| Low Share of Voice + Low Sentiment | Significant work needed on both fronts | Address sentiment first, then visibility |
| High Position + Low Sentiment | You rank well but with caveats | Improve how you’re described |
| Low Position + High Sentiment | Described well when mentioned, but mentioned late | Work on position factors |
If you have high visibility but low sentiment, fixing sentiment should be your top priority. Being mentioned negatively is actively hurting you.
Common Sentiment patterns
The “but” problem
AI often uses “but” to introduce concerns:
"Acme CRM is feature-rich, but users report a steep learning curve."
"Acme offers good value, but customer support can be slow."
Track what comes after the “but” — these are your sentiment killers.
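If you want to pull these clauses out of tracked responses programmatically, a crude pass over the text is enough to start. This is an illustrative sketch, not an Attensira feature, and the contrast-word list is an assumption:

```python
import re

# Contrast words that typically introduce the concern.
CONTRAST = r"\b(but|however|though|although)\b"

def sentiment_killers(response_text: str) -> list[str]:
    """Return the clause that follows each contrast word in a response."""
    parts = re.split(CONTRAST, response_text, flags=re.IGNORECASE)
    # With a capturing group, re.split alternates text and matched words,
    # so the clause after each contrast word sits at every second index from 2.
    return [clause.strip(" .") for clause in parts[2::2]]

print(sentiment_killers(
    "Acme CRM is feature-rich, but users report a steep learning curve."
))
# ['users report a steep learning curve']
```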
The comparison trap
AI often establishes sentiment through comparison:
Positive: "Unlike some competitors, Acme offers excellent support."
Negative: "While competitors offer free tiers, Acme's pricing starts at $99."
How you compare to competitors directly affects your sentiment.
The qualifier issue
Watch for qualifiers that soften recommendations:
Strong: "I recommend Acme for agencies."
Weak: "Acme could be worth considering for some agencies."
Weaker: "If budget isn't a concern, Acme might work."
More qualifiers = lower sentiment, even if the overall tone seems positive.
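One rough way to quantify this is to count hedge words per mention. The word list below is an illustrative assumption, not how Attensira scores qualifiers:

```python
import re

# Hypothetical hedge words; more of them generally signals a weaker recommendation.
QUALIFIERS = {"could", "might", "may", "some", "somewhat", "possibly", "if"}

def qualifier_count(text: str) -> int:
    """Count hedge words in a mention."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(1 for word in words if word in QUALIFIERS)

print(qualifier_count("I recommend Acme for agencies."))                      # 0
print(qualifier_count("Acme could be worth considering for some agencies."))  # 2
print(qualifier_count("If budget isn't a concern, Acme might work."))         # 2
```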
Improving your Sentiment
Step 1: Diagnose the problem
Identify which mentions are dragging down your score (see the tagging sketch after these lists):
- Filter by sentiment — Find your lowest-scoring mentions
- Identify patterns — What concerns appear repeatedly?
- Trace to sources — Where is AI getting this negative information?
Common negative patterns:
- Support complaints
- Pricing concerns
- Feature gaps vs. competitors
- Reliability issues
- Learning curve complaints
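Putting the two lists above together: filter your tracked responses to the lowest scores, then tag each one with the concerns it raises so repeat offenders stand out. A sketch, using an assumed keyword map rather than real Attensira categories:

```python
# Hypothetical low-scoring mentions pulled from tracked responses.
mentions = [
    {"text": "Acme's support can be slow to respond.", "score": 38},
    {"text": "Pricing has increased significantly this year.", "score": 41},
    {"text": "The learning curve is steep for non-technical teams.", "score": 35},
]

# Assumed keyword map for the common negative patterns.
PATTERNS = {
    "support": ["support", "response time"],
    "pricing": ["pricing", "price", "expensive"],
    "learning curve": ["learning curve", "onboarding", "complex"],
    "reliability": ["downtime", "bug", "outage"],
}

def tag_concerns(text: str) -> list[str]:
    """Label a mention with every negative pattern its wording hints at."""
    lowered = text.lower()
    return [name for name, words in PATTERNS.items()
            if any(w in lowered for w in words)]

for m in sorted(mentions, key=lambda m: m["score"]):  # worst first
    print(m["score"], tag_concerns(m["text"]))
# 35 ['learning curve']
# 38 ['support']
# 41 ['pricing']
```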
Step 2: Address root causes
Sometimes sentiment reflects real problems. If users consistently complain about support, improving support will eventually improve sentiment.
Real improvements that affect sentiment:
- Fix product issues that generate complaints
- Improve customer support response times
- Address common feature requests
- Make onboarding easier
- Adjust pricing if it’s a consistent pain point
Step 3: Generate positive signals
Create content and experiences that give AI positive language to draw from:
Review management:
- Ask satisfied customers to leave reviews
- Respond professionally to negative reviews
- Keep review profiles current and complete
- Aim for recent reviews (AI may weight recency)
Content creation:
- Publish customer success stories
- Create case studies with specific results
- Share testimonials on your site
- Get featured in positive “best of” lists
PR and media:
- Pursue positive media coverage
- Respond to negative coverage with corrections
- Share company wins and milestones
- Earn industry awards and recognition
Step 4: Monitor changes
Sentiment changes slowly — it takes time for new content to be indexed and for AI models to update their understanding. Track sentiment over weeks and months, not days.
Sentiment improvement timeline:
├── Week 1-2: New reviews/content published
├── Week 2-4: Content gets indexed
├── Week 4-8: AI models begin incorporating new data
└── Week 8+: Sentiment scores start reflecting changes
Sentiment by platform
Different AI platforms may perceive you differently based on their data sources:
| Platform | Primary sentiment sources |
|---|---|
| ChatGPT | Broad web content, tends toward consensus view |
| Claude | May weigh nuanced analysis, looks at full context |
| Perplexity | Heavily citation-based, recent sources matter most |
| Gemini | Google’s index, review sites weighted heavily |
| Grok | Real-time X/Twitter sentiment, social signals |
If your sentiment differs significantly across platforms, investigate which data sources each platform emphasizes. A platform showing lower sentiment may be drawing from sources with more negative content about you.
Sentiment for different mention types
Not all mentions carry equal weight for sentiment analysis:
| Mention type | Sentiment impact |
|---|---|
| Primary recommendation | High impact — this is where sentiment matters most |
| Alternative mention | Medium impact — “Also consider” mentions |
| Comparison mention | Variable — depends on how comparison is framed |
| Feature mention | Lower impact — often neutral, factual |
| Warning mention | High negative impact — actively steering users away |
Focus on improving sentiment in your primary recommendations first, as these have the highest conversion impact.
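If you want a single number that reflects these differences, one option is a weighted average in which primary recommendations and warnings count for more than neutral feature mentions. The weights below are purely illustrative assumptions, not how Attensira aggregates:

```python
# Assumed weights per mention type, loosely following the table above.
WEIGHTS = {
    "primary_recommendation": 3.0,
    "warning": 3.0,
    "alternative": 2.0,
    "comparison": 1.5,
    "feature": 1.0,
}

# Hypothetical scored mentions with their types.
mentions = [
    {"type": "primary_recommendation", "score": 82},
    {"type": "feature", "score": 55},
    {"type": "warning", "score": 22},
]

weighted = sum(WEIGHTS[m["type"]] * m["score"] for m in mentions)
total_weight = sum(WEIGHTS[m["type"]] for m in mentions)
print(round(weighted / total_weight, 1))  # 52.4
```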
Next steps