Scorecards

Scorecards are AI-powered evaluation tools that help managers and revenue operations teams assess the quality of sales calls. By defining custom evaluation criteria, you can have AI automatically analyze meeting transcripts and score how well reps performed, providing consistent coaching feedback at scale.

What are Scorecards?

Scorecards are structured templates that define what to evaluate in customer conversations. Unlike Talking Points (which guide reps during calls), Scorecards analyze calls after they're complete to help managers understand coaching opportunities.

Key benefits:

  • Consistent evaluation: Every call is scored using the same criteria
  • AI-powered assessment: Automatic analysis of call transcripts
  • Scalable coaching: Evaluate 100% of calls, not just those you have time to review
  • Data-driven insights: Track performance trends over time
  • Customizable criteria: Define what matters for your sales process

How Scorecards Work

Setup Phase

  1. RevOps or managers create scorecards defining evaluation criteria
  2. Questions are configured with different types (yes/no, scale, free-form)
  3. AI prompts are written to guide how AI should assess each criterion
  4. Preconditions are set (optional) to apply scorecards to specific call types

After a Meeting

  1. Meeting ends: Transcript and recording are available
  2. AI analyzes: Bigmind AI reviews the transcript against scorecard criteria
  3. Scores are generated: Each question is answered/scored automatically
  4. Managers review: View the scorecard results and provide additional feedback
  5. Coaching happens: Use scorecard data to coach reps on improvements

Scorecard Structure

Template

A scorecard template is the top-level container that defines:

  • Name: E.g., "Discovery Call Quality", "Demo Effectiveness"
  • Status: Active or inactive
  • Final Score:
    • Manual: Manager provides overall assessment
    • Calculated: Weighted average of question scores
  • Preconditions: When this scorecard should be used
  • Object type: What to evaluate (meeting/session)
  • Visibility: Everyone or specific users

Questions

Each scorecard contains evaluation questions that assess different aspects of the call:

Question configuration:

  • Name: What to evaluate (e.g., "Did the rep establish rapport?", "How well did they handle objections?")
  • Type: How to score it
    • Yes/No: Simple pass/fail criteria
    • Open-ended: Free-form assessment and feedback
    • Single select: Choose one rating (e.g., Poor, Fair, Good, Excellent)
    • Multi-select: Multiple applicable ratings
    • Range: Numeric scale (e.g., 1-5 or 1-10)
  • Method: How the answer is determined
    • Manual: Manager provides the assessment
    • Agentic (AI): AI analyzes and scores automatically
  • Agent prompt: Instructions for AI on how to evaluate (for agentic questions)
  • Weight: Importance in final calculated score (if using calculated scoring)
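
To make this structure concrete, here is a minimal sketch of how a template and its questions could be modeled. The class and field names are illustrative assumptions, not Bigmind's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class QuestionType(Enum):
    YES_NO = "yes/no"            # simple pass/fail
    OPEN_ENDED = "open-ended"    # free-form assessment and feedback
    SINGLE_SELECT = "single select"
    MULTI_SELECT = "multi-select"
    RANGE = "range"              # numeric scale, e.g. 1-5

class Method(Enum):
    MANUAL = "manual"    # manager provides the assessment
    AGENTIC = "agentic"  # AI analyzes and scores automatically

@dataclass
class Question:
    name: str
    type: QuestionType
    method: Method
    agent_prompt: Optional[str] = None  # only meaningful for agentic questions
    weight: float = 0.0                 # share of a calculated final score

@dataclass
class ScorecardTemplate:
    name: str
    active: bool = True
    final_score: str = "calculated"     # or "manual"
    object_type: str = "meeting"
    questions: list[Question] = field(default_factory=list)

# Example instance mirroring the configuration described above.
discovery = ScorecardTemplate(
    name="Discovery Call Quality",
    questions=[
        Question(
            name="Did the rep uncover the economic buyer?",
            type=QuestionType.YES_NO,
            method=Method.AGENTIC,
            agent_prompt="Review the transcript and determine whether ...",
            weight=0.4,
        ),
    ],
)
```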

Setting Up Scorecards

1. Create a Scorecard

  1. Navigate to Settings → Coaching → Scorecards
  2. Click "Create Scorecard"
  3. Provide a name (e.g., "Enterprise Discovery Quality")
  4. Choose final score method:
    • Manual: For qualitative overall assessment
    • Calculated: For data-driven scoring
  5. Set status to Active when ready to use
  6. Save the scorecard

2. Add Evaluation Questions

For each aspect you want to evaluate:

  1. Click "Add Question"
  2. Write the evaluation criterion (e.g., "Did the rep uncover the economic buyer?")
  3. Choose question type:
    • Yes/No for binary checks
    • Range (1-5) for quality scales
    • Open-ended for detailed feedback
  4. Select method:
    • Agentic: For objective criteria AI can assess from transcript
    • Manual: For subjective criteria needing human judgment
  5. Write AI prompt (for agentic questions): Guide AI on how to evaluate
    • Example: "Review the transcript and determine if the sales rep explicitly identified who has budget authority. Look for direct questions about budget owners or discussions about who approves deals of this size."
  6. Set weight (for calculated scoring): Higher weights for more important criteria

3. Configure Final Scoring

If using Manual final score:

  • Write the final score question (e.g., "Overall call quality rating")
  • Managers will provide this after reviewing individual questions

If using Calculated final score:

  • Set weights for each question
  • Higher weights = more impact on final score
  • Example: Rapport (10%), Discovery (40%), Objection Handling (30%), Next Steps (20%)
  • Final score = weighted average of all question scores
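
As a rough illustration of the arithmetic (not the product's exact formula), the sketch below computes the weighted average from the example above, assuming each answer is first normalized to a 0-1 scale:

```python
def normalize(score: float, lo: float = 1, hi: float = 5) -> float:
    """Map a raw 1-5 answer onto a 0-1 scale; Yes/No maps directly to 1.0/0.0."""
    return (score - lo) / (hi - lo)

# Weights from the example: Rapport 10%, Discovery 40%,
# Objection Handling 30%, Next Steps 20%.
answers = {
    "Rapport":            (0.10, 1.0),           # Yes -> 1.0
    "Discovery":          (0.40, normalize(4)),  # 4 out of 5 -> 0.75
    "Objection Handling": (0.30, normalize(3)),  # 3 out of 5 -> 0.50
    "Next Steps":         (0.20, 0.0),           # No -> 0.0
}

final = sum(weight * score for weight, score in answers.values())
print(f"{final:.0%}")  # 0.10*1.0 + 0.40*0.75 + 0.30*0.50 + 0.20*0.0 = 55%
```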

4. Set Preconditions (Optional)

Control when scorecards apply:

  • All meetings: Evaluate every call
  • Deal-based: Only for calls associated with certain deals
    • Filter by deal stage, amount, type, etc.
    • Example: "Only score discovery calls for Enterprise deals"
  • Account-based: Only for calls with certain accounts
  • User-based: Only for calls by specific reps or teams
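
Conceptually, a precondition is just a predicate over the meeting's context. Preconditions are configured in settings, not in code, but this hypothetical predicate (with made-up field names) captures the "Only score discovery calls for Enterprise deals" example:

```python
def scorecard_applies(meeting) -> bool:
    """Hypothetical deal-based precondition: Enterprise discovery calls only."""
    deal = getattr(meeting, "deal", None)
    return (
        deal is not None
        and deal.segment == "Enterprise"
        and meeting.call_type == "discovery"
    )
```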

Using Scorecards

AI Evaluation Process

After a meeting ends:

  1. Scorecard is triggered: Based on preconditions, relevant scorecards are identified
  2. AI reviews transcript: Each agentic question is evaluated
    • AI reads the full transcript
    • Follows the evaluation prompt for each question
    • Provides scores and reasoning
  3. Results are stored: Scores are saved with reasoning/evidence
  4. Managers are notified: New scorecards are ready for review
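
Building on the sketches above, the post-meeting flow reduces to the loop below. `evaluate_with_llm` is a stand-in for the model call; all names are illustrative, not Bigmind's internals:

```python
def run_scorecards(meeting, templates):
    """Conceptual post-meeting evaluation loop (illustrative only)."""
    results = []
    for template in templates:
        # 1. Trigger: only active templates whose preconditions match.
        if not template.active or not scorecard_applies(meeting):
            continue
        for question in template.questions:
            if question.method is not Method.AGENTIC:
                continue  # manual questions wait for the manager
            # 2. The model reads the full transcript and follows the prompt.
            score, reasoning = evaluate_with_llm(
                transcript=meeting.transcript,
                prompt=question.agent_prompt,
            )
            # 3. Store the score alongside its reasoning/evidence.
            results.append((template.name, question.name, score, reasoning))
    return results  # 4. surfaced to managers for review
```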

Manager Review

Managers can:

  1. View scorecard results: See all evaluated questions and scores
  2. Read AI reasoning: Understand why AI scored each item as it did
  3. Override if needed: Adjust scores based on manager judgment
  4. Add manual answers: Complete any manual-only questions
  5. Provide overall assessment: Add final comments or rating
  6. Mark as verified: Indicate the scorecard has been reviewed

Coaching with Scorecards

Use scorecard data to drive coaching:

  • One-on-one reviews: Discuss specific calls and scores
  • Trend analysis: Track rep improvement over time
  • Team benchmarks: Compare performance across the team
  • Identify patterns: Find common strengths and weaknesses
  • Targeted training: Focus coaching on lowest-scoring areas

Scorecard Types and Examples

Discovery Call Scorecard

Evaluate how well reps conduct discovery:

  • Rapport building: Did they establish connection? (Yes/No)
  • Pain identification: Quality of pain discovery (1-5 scale)
  • Budget discussion: Did they discuss budget? (Yes/No)
  • Authority identification: Did they find the decision maker? (Yes/No)
  • Timeline established: Did they establish a timeline? (Yes/No)
  • Next steps secured: Was a next meeting scheduled? (Yes/No)
  • Overall discovery quality: Manager assessment (Open-ended)

Demo Call Scorecard

Assess demo effectiveness:

  • Agenda setting: Did they set expectations? (Yes/No)
  • Feature relevance: Showed relevant features? (1-5 scale)
  • Customer engagement: Customer asked questions? (Yes/No)
  • Objection handling: How well did they handle concerns? (1-5 scale)
  • Value articulation: Connected to business value? (Yes/No)
  • Call to action: Clear next steps defined? (Yes/No)

Negotiation Call Scorecard

Evaluate negotiation skills:

  • Preparation: Demonstrated understanding of needs? (Yes/No)
  • Value reinforcement: Articulated value vs price? (1-5 scale)
  • Concession strategy: Traded concessions appropriately? (Yes/No)
  • Deal structure: Proposed mutually beneficial terms? (1-5 scale)
  • Closing attempt: Asked for the business? (Yes/No)
  • Path to close: Defined clear path forward? (Yes/No)

Customer Success Check-in Scorecard

Assess CSM call quality:

  • Relationship check: Built rapport and connection? (Yes/No)
  • Value delivery: Discussed value being received? (Yes/No)
  • Challenges identified: Uncovered any issues? (Yes/No)
  • Expansion discussed: Explored growth opportunities? (Yes/No)
  • Renewal health: Assessed renewal likelihood? (1-5 scale)
  • Action items: Clear follow-ups defined? (Yes/No)

Best Practices

Designing Effective Scorecards

  • Focus on behaviors: Evaluate what reps do, not just outcomes
  • Make criteria specific: Vague questions yield inconsistent scoring
  • Balance quantity: 8-12 questions is usually sufficient
  • Mix question types: Yes/No for checklists, scales for quality
  • Align with methodology: Reflect your sales process and training

Writing AI Prompts

  • Be explicit: Tell AI exactly what to look for in the transcript
  • Provide context: Explain why this matters
  • Give examples: Show what good vs bad looks like
  • Define edge cases: Handle ambiguous situations
  • Test and refine: Review AI scores and improve prompts

Example AI prompt:

Evaluate whether the sales rep successfully identified the economic buyer (the person with budget authority to approve this purchase). Look for:

  • Direct questions about who controls the budget
  • Discussion of approval processes
  • Identification of specific individuals with financial authority

Score Yes if the rep explicitly identified a named individual with budget authority. Score No if they only identified influencers or technical buyers without confirming budget authority.

Managing the Review Process

  • Review regularly: Don't let scorecards pile up
  • Spot check AI: Periodically verify AI scoring accuracy
  • Use for coaching: Don't just score; coach from the data
  • Track trends: Look at patterns over time, not just individual calls
  • Adjust criteria: Refine questions as your process evolves

Coaching with Data

  • Focus on growth: Celebrate improvements, not just problems
  • Be specific: Reference actual call examples and scores
  • Find patterns: "You're consistently strong on rapport but missing next steps"
  • Set goals: Use scores to create measurable improvement targets
  • Share best practices: Highlight high-scoring calls as examples

Calculated vs Manual Scoring

When to Use Calculated Scoring

  • Objective criteria: When evaluation is mostly factual
  • High volume: When you need to score many calls
  • Consistency: When you want standardized scoring
  • Trending: When you want to track metrics over time
  • Example: Discovery call checklists (Did they ask about budget? Yes=1, No=0)

When to Use Manual Scoring

  • Subjective assessment: When nuance and judgment matter
  • Complex evaluation: When multiple factors interact
  • Coaching focus: When the goal is learning, not metrics
  • Quality over quantity: When reviewing select important calls
  • Example: Overall sales skill assessment by experienced managers

Hybrid Approach

Many teams use both:

  • Calculated scores for tactical execution (Did they cover the checklist?)
  • Manual scores for strategic assessment (How effective was their approach?)
  • Combine for comprehensive evaluation

Troubleshooting

AI scores seem inaccurate:

  • Refine the AI prompt to be more specific
  • Add examples of good vs bad to the prompt
  • Check that the transcript quality is good
  • Consider making the question manual if too subjective
  • Review several AI-scored calls to identify patterns

Scorecards not being generated:

  • Check that scorecard status is "Active"
  • Verify preconditions match the meeting/deal
  • Ensure the meeting has a transcript
  • Check that at least some questions are set to "Agentic"

Final score calculation is wrong:

  • Verify weights are set correctly for all questions
  • Check that weights sum to 100% (or intended total)
  • Ensure question types support numeric scoring (Yes/No, Range)
  • Review the calculation formula in settings

Too many scorecards per call:

  • Refine preconditions to be more specific
  • Consider consolidating similar scorecards
  • Use "User-based" preconditions to assign specific scorecards to specific teams
  • Deactivate unused scorecards

Scorecards vs Talking Points

  • When used: Talking Points during the meeting (real-time); Scorecards after the meeting (post-call)
  • Purpose: Talking Points guide reps through the conversation; Scorecards evaluate call quality
  • User: Talking Points serve sales reps; Scorecards serve managers and RevOps
  • Output: Talking Points produce discovery answers and CRM data; Scorecards produce performance scores and coaching insights
  • Focus: Talking Points define what information to gather; Scorecards assess how well the rep performed

Use Together: Talking Points help reps execute well; Scorecards help managers ensure they did.
