product_manager_toolkit_rice_prioritization_interview_analysis_prd_templates.py
A comprehensive product management toolkit containing two main scripts: (1) rice_prioritizer.py calculates RICE scores (Reach × Impact × Confidence / Effort) for feature prioritization, generates portfolio analysis, and creates quarterly roadmaps based on team capacity; (2) customer_interview_analyzer.py performs NLP-based analysis on interview transcripts to extract pain points, feature requests, jobs-to-be-done patterns, sentiment scores, key themes, and competitor mentions.
# SKILL.md

---
name: product-manager-toolkit
description: Comprehensive toolkit for product managers including RICE prioritization, customer interview analysis, PRD templates, discovery frameworks, and go-to-market strategies. Use for feature prioritization, user research synthesis, requirement documentation, and product strategy development.
---

# Product Manager Toolkit

Essential tools and frameworks for modern product management, from discovery to delivery.

## Quick Start

### For Feature Prioritization
```bash
python scripts/rice_prioritizer.py sample    # Create sample CSV
python scripts/rice_prioritizer.py sample_features.csv --capacity 15
```

### For Interview Analysis
```bash
python scripts/customer_interview_analyzer.py interview_transcript.txt
```

### For PRD Creation
1. Choose a template from `references/prd_templates.md`
2. Fill in sections based on discovery work
3. Review with stakeholders
4. Version control in your PM tool

## Core Workflows

### Feature Prioritization Process

1. **Gather Feature Requests**
   - Customer feedback
   - Sales requests
   - Technical debt
   - Strategic initiatives

2. **Score with RICE**
   ```bash
   # Create CSV with: name,reach,impact,confidence,effort
   python scripts/rice_prioritizer.py features.csv
   ```
   - **Reach**: Users affected per quarter
   - **Impact**: massive/high/medium/low/minimal
   - **Confidence**: high/medium/low
   - **Effort**: xl/l/m/s/xs (person-months)

3. **Analyze Portfolio**
   - Review quick wins vs big bets
   - Check effort distribution
   - Validate against strategy

4. **Generate Roadmap**
   - Quarterly capacity planning
   - Dependency mapping
   - Stakeholder alignment

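The scoring step expects a CSV with `name,reach,impact,confidence,effort` columns. A minimal sketch of producing one programmatically before running the prioritizer (the two feature rows here are invented examples):

```python
import csv

# Hypothetical feature rows matching the columns rice_prioritizer.py reads
rows = [
    {"name": "Dark Mode", "reach": 8000, "impact": "medium",
     "confidence": "high", "effort": "s"},
    {"name": "Social Login", "reach": 12000, "impact": "high",
     "confidence": "medium", "effort": "m"},
]

with open("features.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["name", "reach", "impact", "confidence", "effort"])
    writer.writeheader()   # header row that the script's DictReader relies on
    writer.writerows(rows)

# Read it back the same way the prioritizer does
with open("features.csv", newline="") as f:
    loaded = list(csv.DictReader(f))
print(len(loaded))  # 2
```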
### Customer Discovery Process

1. **Conduct Interviews**
   - Use a semi-structured format
   - Focus on problems, not solutions
   - Record with permission

2. **Analyze Insights**
   ```bash
   python scripts/customer_interview_analyzer.py transcript.txt
   ```
   Extracts:
   - Pain points with severity
   - Feature requests with priority
   - Jobs to be done
   - Sentiment analysis
   - Key themes and quotes

3. **Synthesize Findings**
   - Group similar pain points
   - Identify patterns across interviews
   - Map to opportunity areas

4. **Validate Solutions**
   - Create solution hypotheses
   - Test with prototypes
   - Measure actual vs expected behavior

### PRD Development Process

1. **Choose Template**
   - **Standard PRD**: Complex features (6-8 weeks)
   - **One-Page PRD**: Simple features (2-4 weeks)
   - **Feature Brief**: Exploration phase (1 week)
   - **Agile Epic**: Sprint-based delivery

2. **Structure Content**
   - Problem → Solution → Success Metrics
   - Always state what's out of scope
   - Clear acceptance criteria

3. **Collaborate**
   - Engineering for feasibility
   - Design for experience
   - Sales for market validation
   - Support for operational impact

## Key Scripts

### rice_prioritizer.py
Advanced RICE framework implementation with portfolio analysis.

**Features**:
- RICE score calculation
- Portfolio balance analysis (quick wins vs big bets)
- Quarterly roadmap generation
- Team capacity planning
- Multiple output formats (text/json/csv)

**Usage Examples**:
```bash
# Basic prioritization
python scripts/rice_prioritizer.py features.csv

# With custom team capacity (person-months per quarter)
python scripts/rice_prioritizer.py features.csv --capacity 20

# Output as JSON for integration
python scripts/rice_prioritizer.py features.csv --output json
```

### customer_interview_analyzer.py
NLP-based interview analysis for extracting actionable insights.

**Capabilities**:
- Pain point extraction with severity assessment
- Feature request identification and classification
- Jobs-to-be-done pattern recognition
- Sentiment analysis
- Theme extraction
- Competitor mention detection
- Key quote identification

**Usage Examples**:
```bash
# Analyze a single interview
python scripts/customer_interview_analyzer.py interview.txt

# Output as JSON for aggregation
python scripts/customer_interview_analyzer.py interview.txt json
```

## Reference Documents

### prd_templates.md
Multiple PRD formats for different contexts:

1. **Standard PRD Template**
   - Comprehensive 11-section format
   - Best for major features
   - Includes technical specs

2. **One-Page PRD**
   - Concise format for quick alignment
   - Focus on problem/solution/metrics
   - Good for smaller features

3. **Agile Epic Template**
   - Sprint-based delivery
   - User story mapping
   - Acceptance criteria focus

4. **Feature Brief**
   - Lightweight exploration
   - Hypothesis-driven
   - Pre-PRD phase

## Prioritization Frameworks

### RICE Framework
```
Score = (Reach × Impact × Confidence) / Effort

Reach: # of users/quarter
Impact:
  - Massive = 3x
  - High = 2x
  - Medium = 1x
  - Low = 0.5x
  - Minimal = 0.25x
Confidence:
  - High = 100%
  - Medium = 80%
  - Low = 50%
Effort: Person-months
```
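The mappings above translate directly into code. A small standalone sketch that mirrors (rather than imports) the tables in `rice_prioritizer.py`:

```python
# Label-to-number tables from the RICE framework above
IMPACT = {"massive": 3.0, "high": 2.0, "medium": 1.0, "low": 0.5, "minimal": 0.25}
CONFIDENCE = {"high": 1.0, "medium": 0.8, "low": 0.5}
EFFORT = {"xl": 13, "l": 8, "m": 5, "s": 3, "xs": 1}  # person-months

def rice(reach: int, impact: str, confidence: str, effort: str) -> float:
    """Score = (Reach × Impact × Confidence) / Effort."""
    return round(reach * IMPACT[impact] * CONFIDENCE[confidence] / EFFORT[effort], 2)

# 10,000 users/quarter, massive impact, medium confidence, medium effort:
print(rice(10000, "massive", "medium", "m"))  # 10000 * 3.0 * 0.8 / 5 = 4800.0
```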

### Value vs Effort Matrix
```
              Low Effort      High Effort

High Value    QUICK WINS      BIG BETS
              [Prioritize]    [Strategic]

Low Value     FILL-INS        TIME SINKS
              [Maybe]         [Avoid]
```
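One way to read the matrix in code — a sketch that buckets a feature by the same impact/effort labels the scripts use (treating massive/high impact as high value and xs/s effort as low effort, which is the convention `rice_prioritizer.py` applies for quick wins and big bets):

```python
def quadrant(impact: str, effort: str) -> str:
    """Map RICE-style labels onto the value/effort matrix."""
    high_value = impact in ("massive", "high")
    low_effort = effort in ("xs", "s")
    if high_value and low_effort:
        return "QUICK WINS"
    if high_value:
        return "BIG BETS"
    if low_effort:
        return "FILL-INS"
    return "TIME SINKS"

print(quadrant("high", "s"))   # QUICK WINS
print(quadrant("low", "xl"))   # TIME SINKS
```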

### MoSCoW Method
- **Must Have**: Critical for launch
- **Should Have**: Important but not critical
- **Could Have**: Nice to have
- **Won't Have**: Out of scope

## Discovery Frameworks

### Customer Interview Guide
```
1. Context Questions (5 min)
   - Role and responsibilities
   - Current workflow
   - Tools used

2. Problem Exploration (15 min)
   - Pain points
   - Frequency and impact
   - Current workarounds

3. Solution Validation (10 min)
   - Reaction to concepts
   - Value perception
   - Willingness to pay

4. Wrap-up (5 min)
   - Other thoughts
   - Referrals
   - Follow-up permission
```

### Hypothesis Template
```
We believe that [building this feature]
For [these users]
Will [achieve this outcome]
We'll know we're right when [metric]
```

### Opportunity Solution Tree
```
Outcome
├── Opportunity 1
│   ├── Solution A
│   └── Solution B
└── Opportunity 2
    ├── Solution C
    └── Solution D
```

## Metrics & Analytics

### North Star Metric Framework
1. **Identify Core Value**: What's the #1 value to users?
2. **Make It Measurable**: Quantifiable and trackable
3. **Ensure It's Actionable**: Teams can influence it
4. **Check It's a Leading Indicator**: Predicts business success

### Funnel Analysis Template
```
Acquisition → Activation → Retention → Revenue → Referral

Key Metrics:
- Conversion rate at each step
- Drop-off points
- Time between steps
- Cohort variations
```
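Per-step conversion rates fall out mechanically from stage counts. A sketch with invented numbers:

```python
stages = ["Acquisition", "Activation", "Retention", "Revenue", "Referral"]
counts = [1000, 400, 200, 50, 10]  # hypothetical users reaching each stage

# Conversion from each stage to the next
conversion = {
    f"{a} -> {b}": round(n2 / n1, 2)
    for (a, n1), (b, n2) in zip(zip(stages, counts), zip(stages[1:], counts[1:]))
}
print(conversion)

# The worst drop-off point is the step with the lowest conversion rate
worst = min(conversion, key=conversion.get)
print(worst)  # Revenue -> Referral
```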

### Feature Success Metrics
- **Adoption**: % of users using the feature
- **Frequency**: Usage per user per time period
- **Depth**: % of feature capability used
- **Retention**: Continued usage over time
- **Satisfaction**: NPS/CSAT for the feature
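Adoption and frequency can be computed directly from raw usage events. A minimal sketch over an invented `(user_id, day)` event log:

```python
from collections import defaultdict

total_users = 10
# (user_id, day) events for one feature — invented data
events = [(1, 1), (1, 2), (1, 5), (2, 1), (2, 3), (3, 4)]

per_user = defaultdict(set)
for user, day in events:
    per_user[user].add(day)   # distinct active days per adopter

adoption = len(per_user) / total_users                              # share of users who used the feature
frequency = sum(len(d) for d in per_user.values()) / len(per_user)  # active days per adopter
print(adoption, frequency)  # 0.3 2.0
```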

## Best Practices

### Writing Great PRDs
1. Start with the problem, not the solution
2. Include clear success metrics upfront
3. Explicitly state what's out of scope
4. Use visuals (wireframes, flows)
5. Keep technical details in an appendix
6. Version control changes

### Effective Prioritization
1. Mix quick wins with strategic bets
2. Consider opportunity cost
3. Account for dependencies
4. Buffer for unexpected work (20%)
5. Revisit quarterly
6. Communicate decisions clearly

### Customer Discovery Tips
1. Ask "why" five times
2. Focus on past behavior, not future intentions
3. Avoid leading questions
4. Interview in their environment
5. Look for emotional reactions
6. Validate with data

### Stakeholder Management
1. Identify RACI for decisions
2. Send regular async updates
3. Demo over documentation
4. Address concerns early
5. Celebrate wins publicly
6. Learn from failures openly

## Common Pitfalls to Avoid

1. **Solution-First Thinking**: Jumping to features before understanding problems
2. **Analysis Paralysis**: Over-researching without shipping
3. **Feature Factory**: Shipping features without measuring impact
4. **Ignoring Technical Debt**: Not allocating time for platform health
5. **Stakeholder Surprise**: Not communicating early and often
6. **Metric Theater**: Optimizing vanity metrics over real value

## Integration Points

This toolkit integrates with:
- **Analytics**: Amplitude, Mixpanel, Google Analytics
- **Roadmapping**: ProductBoard, Aha!, Roadmunk
- **Design**: Figma, Sketch, Miro
- **Development**: Jira, Linear, GitHub
- **Research**: Dovetail, UserVoice, Pendo
- **Communication**: Slack, Notion, Confluence

## Quick Commands Cheat Sheet

```bash
# Prioritization
python scripts/rice_prioritizer.py features.csv --capacity 15

# Interview Analysis
python scripts/customer_interview_analyzer.py interview.txt

# Create sample data
python scripts/rice_prioritizer.py sample

# JSON outputs for integration
python scripts/rice_prioritizer.py features.csv --output json
python scripts/customer_interview_analyzer.py interview.txt json
```

# customer_interview_analyzer.py

```python
#!/usr/bin/env python3
"""
Customer Interview Analyzer
Extracts insights, patterns, and opportunities from user interviews
"""

import re
import sys
import json
from typing import Dict, List
from collections import Counter, defaultdict


class InterviewAnalyzer:
    """Analyze customer interviews for insights and patterns"""

    def __init__(self):
        # Pain point indicators (word stems, matched as substrings)
        self.pain_indicators = [
            'frustrat', 'annoy', 'difficult', 'hard', 'confus', 'slow',
            'problem', 'issue', 'struggle', 'challeng', 'pain', 'waste',
            'manual', 'repetitive', 'tedious', 'boring', 'time-consuming',
            'complicated', 'complex', 'unclear', 'wish', 'need', 'want'
        ]

        # Positive indicators
        self.delight_indicators = [
            'love', 'great', 'awesome', 'amazing', 'perfect', 'easy',
            'simple', 'quick', 'fast', 'helpful', 'useful', 'valuable',
            'save', 'efficient', 'convenient', 'intuitive', 'clear'
        ]

        # Feature request indicators
        self.request_indicators = [
            'would be nice', 'wish', 'hope', 'want', 'need', 'should',
            'could', 'would love', 'if only', 'it would help', 'suggest',
            'recommend', 'idea', 'what if', 'have you considered'
        ]

        # Jobs-to-be-done patterns
        self.jtbd_patterns = [
            r'when i\s+(.+?),\s+i want to\s+(.+?)\s+so that\s+(.+)',
            r'i need to\s+(.+?)\s+because\s+(.+)',
            r'my goal is to\s+(.+)',
            r'i\'m trying to\s+(.+)',
            r'i use \w+ to\s+(.+)',
            r'helps me\s+(.+)',
        ]

    def analyze_interview(self, text: str) -> Dict:
        """Analyze a single interview transcript"""
        text_lower = text.lower()
        sentences = self._split_sentences(text)

        analysis = {
            'pain_points': self._extract_pain_points(sentences),
            'delights': self._extract_delights(sentences),
            'feature_requests': self._extract_requests(sentences),
            'jobs_to_be_done': self._extract_jtbd(text_lower),
            'sentiment_score': self._calculate_sentiment(text_lower),
            'key_themes': self._extract_themes(text_lower),
            'quotes': self._extract_key_quotes(sentences),
            'metrics_mentioned': self._extract_metrics(text),
            'competitors_mentioned': self._extract_competitors(text)
        }

        return analysis

    def _split_sentences(self, text: str) -> List[str]:
        """Split text into sentences (simple punctuation-based split)"""
        sentences = re.split(r'[.!?]+', text)
        return [s.strip() for s in sentences if s.strip()]

    def _extract_pain_points(self, sentences: List[str]) -> List[Dict]:
        """Extract pain points from sentences"""
        pain_points = []

        for sentence in sentences:
            sentence_lower = sentence.lower()
            for indicator in self.pain_indicators:
                if indicator in sentence_lower:
                    pain_points.append({
                        'quote': sentence,
                        'indicator': indicator,
                        'severity': self._assess_severity(sentence_lower)
                    })
                    break  # one hit per sentence is enough

        return pain_points[:10]  # Return top 10

    def _extract_delights(self, sentences: List[str]) -> List[Dict]:
        """Extract positive feedback"""
        delights = []

        for sentence in sentences:
            sentence_lower = sentence.lower()
            for indicator in self.delight_indicators:
                if indicator in sentence_lower:
                    delights.append({
                        'quote': sentence,
                        'indicator': indicator,
                        'strength': self._assess_strength(sentence_lower)
                    })
                    break

        return delights[:10]

    def _extract_requests(self, sentences: List[str]) -> List[Dict]:
        """Extract feature requests and suggestions"""
        requests = []

        for sentence in sentences:
            sentence_lower = sentence.lower()
            for indicator in self.request_indicators:
                if indicator in sentence_lower:
                    requests.append({
                        'quote': sentence,
                        'type': self._classify_request(sentence_lower),
                        'priority': self._assess_request_priority(sentence_lower)
                    })
                    break

        return requests[:10]

    def _extract_jtbd(self, text: str) -> List[Dict]:
        """Extract Jobs-to-Be-Done patterns"""
        jobs = []

        for pattern in self.jtbd_patterns:
            matches = re.findall(pattern, text, re.IGNORECASE)
            for match in matches:
                # Multi-group patterns return tuples; join them into one string
                job = ' → '.join(match) if isinstance(match, tuple) else match
                jobs.append({
                    'job': job,
                    'pattern': pattern  # patterns are plain strings
                })

        return jobs[:5]

    def _calculate_sentiment(self, text: str) -> Dict:
        """Calculate overall sentiment of the interview"""
        positive_count = sum(1 for ind in self.delight_indicators if ind in text)
        negative_count = sum(1 for ind in self.pain_indicators if ind in text)

        total = positive_count + negative_count
        if total == 0:
            sentiment_score = 0
        else:
            sentiment_score = (positive_count - negative_count) / total

        if sentiment_score > 0.3:
            sentiment_label = 'positive'
        elif sentiment_score < -0.3:
            sentiment_label = 'negative'
        else:
            sentiment_label = 'neutral'

        return {
            'score': round(sentiment_score, 2),
            'label': sentiment_label,
            'positive_signals': positive_count,
            'negative_signals': negative_count
        }

    def _extract_themes(self, text: str) -> List[str]:
        """Extract key themes using word frequency"""
        # Remove common words
        stop_words = {'the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at',
                      'to', 'for', 'of', 'with', 'by', 'from', 'as', 'is',
                      'was', 'are', 'were', 'been', 'be', 'have', 'has',
                      'had', 'do', 'does', 'did', 'will', 'would', 'could',
                      'should', 'may', 'might', 'must', 'can', 'shall',
                      'it', 'i', 'you', 'we', 'they', 'them', 'their'}

        # Extract meaningful words (4+ letters, not stop words)
        words = re.findall(r'\b[a-z]{4,}\b', text)
        meaningful_words = [w for w in words if w not in stop_words]

        # Count frequency
        word_freq = Counter(meaningful_words)

        # Themes = top frequent meaningful words mentioned at least 3 times
        themes = [word for word, count in word_freq.most_common(10) if count >= 3]

        return themes

    def _extract_key_quotes(self, sentences: List[str]) -> List[str]:
        """Extract the most insightful quotes"""
        scored_sentences = []

        for sentence in sentences:
            if len(sentence) < 20 or len(sentence) > 200:
                continue

            score = 0
            sentence_lower = sentence.lower()

            # Score based on insight indicators
            if any(ind in sentence_lower for ind in self.pain_indicators):
                score += 2
            if any(ind in sentence_lower for ind in self.request_indicators):
                score += 2
            if 'because' in sentence_lower:
                score += 1
            if 'but' in sentence_lower:
                score += 1
            if '?' in sentence:
                score += 1

            if score > 0:
                scored_sentences.append((score, sentence))

        # Sort by score (descending) and return the top quotes
        scored_sentences.sort(key=lambda item: item[0], reverse=True)
        return [s[1] for s in scored_sentences[:5]]

    def _extract_metrics(self, text: str) -> List[str]:
        """Extract any metrics or numbers mentioned"""
        metrics = []

        # Find percentages
        percentages = re.findall(r'\d+%', text)
        metrics.extend(percentages)

        # Find time metrics
        time_metrics = re.findall(r'\d+\s*(?:hours?|minutes?|days?|weeks?|months?)', text, re.IGNORECASE)
        metrics.extend(time_metrics)

        # Find money metrics
        money_metrics = re.findall(r'\$[\d,]+', text)
        metrics.extend(money_metrics)

        # Find general numbers with context
        number_contexts = re.findall(r'(\d+)\s+(\w+)', text)
        for num, context in number_contexts:
            if context.lower() not in ['the', 'a', 'an', 'and', 'or', 'of']:
                metrics.append(f"{num} {context}")

        return list(set(metrics))[:10]

    def _extract_competitors(self, text: str) -> List[str]:
        """Extract competitor mentions"""
        # Common competitor indicators
        competitor_patterns = [
            r'(?:use|used|using|tried|trying|switch from|switched from|instead of)\s+(\w+)',
            r'(\w+)\s+(?:is better|works better|is easier)',
            r'compared to\s+(\w+)',
            r'like\s+(\w+)',
            r'similar to\s+(\w+)',
        ]

        competitors = set()
        for pattern in competitor_patterns:
            matches = re.findall(pattern, text, re.IGNORECASE)
            competitors.update(matches)

        # Filter out common words
        common_words = {'this', 'that', 'it', 'them', 'other', 'another', 'something'}
        competitors = [c for c in competitors if c.lower() not in common_words and len(c) > 2]

        return list(competitors)[:5]

    def _assess_severity(self, text: str) -> str:
        """Assess severity of a pain point"""
        if any(word in text for word in ['very', 'extremely', 'really', 'totally', 'completely']):
            return 'high'
        elif any(word in text for word in ['somewhat', 'bit', 'little', 'slightly']):
            return 'low'
        return 'medium'

    def _assess_strength(self, text: str) -> str:
        """Assess strength of positive feedback"""
        if any(word in text for word in ['absolutely', 'definitely', 'really', 'very']):
            return 'strong'
        return 'moderate'

    def _classify_request(self, text: str) -> str:
        """Classify the type of request"""
        if any(word in text for word in ['ui', 'design', 'look', 'color', 'layout']):
            return 'ui_improvement'
        elif any(word in text for word in ['feature', 'add', 'new', 'build']):
            return 'new_feature'
        elif any(word in text for word in ['fix', 'bug', 'broken', 'work']):
            return 'bug_fix'
        elif any(word in text for word in ['faster', 'slow', 'performance', 'speed']):
            return 'performance'
        return 'general'

    def _assess_request_priority(self, text: str) -> str:
        """Assess priority of a request"""
        if any(word in text for word in ['critical', 'urgent', 'asap', 'immediately', 'blocking']):
            return 'critical'
        elif any(word in text for word in ['need', 'important', 'should', 'must']):
            return 'high'
        elif any(word in text for word in ['nice', 'would', 'could', 'maybe']):
            return 'low'
        return 'medium'


def aggregate_interviews(interviews: List[Dict]) -> Dict:
    """Aggregate insights from multiple interviews"""
    aggregated = {
        'total_interviews': len(interviews),
        'common_pain_points': defaultdict(list),
        'common_requests': defaultdict(list),
        'jobs_to_be_done': [],
        'overall_sentiment': {
            'positive': 0,
            'negative': 0,
            'neutral': 0
        },
        'top_themes': Counter(),
        'metrics_summary': set(),
        'competitors_mentioned': Counter()
    }

    for interview in interviews:
        # Aggregate pain points by indicator
        for pain in interview.get('pain_points', []):
            indicator = pain.get('indicator', 'unknown')
            aggregated['common_pain_points'][indicator].append(pain['quote'])

        # Aggregate requests by type
        for request in interview.get('feature_requests', []):
            req_type = request.get('type', 'general')
            aggregated['common_requests'][req_type].append(request['quote'])

        # Aggregate JTBD
        aggregated['jobs_to_be_done'].extend(interview.get('jobs_to_be_done', []))

        # Aggregate sentiment labels
        sentiment = interview.get('sentiment_score', {}).get('label', 'neutral')
        aggregated['overall_sentiment'][sentiment] += 1

        # Aggregate themes
        for theme in interview.get('key_themes', []):
            aggregated['top_themes'][theme] += 1

        # Aggregate metrics
        aggregated['metrics_summary'].update(interview.get('metrics_mentioned', []))

        # Aggregate competitors
        for competitor in interview.get('competitors_mentioned', []):
            aggregated['competitors_mentioned'][competitor] += 1

    # Convert to JSON-serializable types
    aggregated['common_pain_points'] = dict(aggregated['common_pain_points'])
    aggregated['common_requests'] = dict(aggregated['common_requests'])
    aggregated['top_themes'] = dict(aggregated['top_themes'].most_common(10))
    aggregated['metrics_summary'] = list(aggregated['metrics_summary'])
    aggregated['competitors_mentioned'] = dict(aggregated['competitors_mentioned'])

    return aggregated


def format_single_interview(analysis: Dict) -> str:
    """Format a single interview analysis for display"""
    output = ["=" * 60]
    output.append("CUSTOMER INTERVIEW ANALYSIS")
    output.append("=" * 60)

    # Sentiment
    sentiment = analysis['sentiment_score']
    output.append(f"\n📊 Overall Sentiment: {sentiment['label'].upper()}")
    output.append(f"   Score: {sentiment['score']}")
    output.append(f"   Positive signals: {sentiment['positive_signals']}")
    output.append(f"   Negative signals: {sentiment['negative_signals']}")

    # Pain Points
    if analysis['pain_points']:
        output.append("\n🔥 Pain Points Identified:")
        for i, pain in enumerate(analysis['pain_points'][:5], 1):
            output.append(f"\n{i}. [{pain['severity'].upper()}] {pain['quote'][:100]}...")

    # Feature Requests
    if analysis['feature_requests']:
        output.append("\n💡 Feature Requests:")
        for i, req in enumerate(analysis['feature_requests'][:5], 1):
            output.append(f"\n{i}. [{req['type']}] Priority: {req['priority']}")
            output.append(f"   \"{req['quote'][:100]}...\"")

    # Jobs to Be Done
    if analysis['jobs_to_be_done']:
        output.append("\n🎯 Jobs to Be Done:")
        for i, job in enumerate(analysis['jobs_to_be_done'], 1):
            output.append(f"{i}. {job['job']}")

    # Key Themes
    if analysis['key_themes']:
        output.append("\n🏷️ Key Themes:")
        output.append(", ".join(analysis['key_themes']))

    # Key Quotes
    if analysis['quotes']:
        output.append("\n💬 Key Quotes:")
        for i, quote in enumerate(analysis['quotes'][:3], 1):
            output.append(f'{i}. "{quote}"')

    # Metrics
    if analysis['metrics_mentioned']:
        output.append("\n📈 Metrics Mentioned:")
        output.append(", ".join(analysis['metrics_mentioned']))

    # Competitors
    if analysis['competitors_mentioned']:
        output.append("\n🏢 Competitors Mentioned:")
        output.append(", ".join(analysis['competitors_mentioned']))

    return "\n".join(output)


def main():
    if len(sys.argv) < 2:
        print("Usage: python customer_interview_analyzer.py <interview_file.txt> [json]")
        print("\nThis tool analyzes customer interview transcripts to extract:")
        print("  - Pain points and frustrations")
        print("  - Feature requests and suggestions")
        print("  - Jobs to be done")
        print("  - Sentiment analysis")
        print("  - Key themes and quotes")
        sys.exit(1)

    # Read interview transcript
    with open(sys.argv[1], 'r', encoding='utf-8') as f:
        interview_text = f.read()

    # Analyze
    analyzer = InterviewAnalyzer()
    analysis = analyzer.analyze_interview(interview_text)

    # Output
    if len(sys.argv) > 2 and sys.argv[2] == 'json':
        print(json.dumps(analysis, indent=2))
    else:
        print(format_single_interview(analysis))


if __name__ == "__main__":
    main()
```

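`aggregate_interviews` above tallies per-interview results; its sentiment roll-up, for instance, reduces to a `Counter` over labels. A self-contained sketch of that step, run on hand-written analysis dicts (the same shape `analyze_interview` returns) rather than real script output:

```python
from collections import Counter

# Hand-written stand-ins for analyze_interview() results
analyses = [
    {"sentiment_score": {"label": "positive"}},
    {"sentiment_score": {"label": "neutral"}},
    {"sentiment_score": {"label": "positive"}},
]

# Tally sentiment labels across interviews, as aggregate_interviews does
overall = Counter(a["sentiment_score"]["label"] for a in analyses)
print(dict(overall))  # {'positive': 2, 'neutral': 1}
```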
# rice_prioritizer.py

```python
#!/usr/bin/env python3
"""
RICE Prioritization Framework
Calculates RICE scores for feature prioritization
RICE = (Reach x Impact x Confidence) / Effort
"""

import sys
import json
import csv
import argparse
from typing import List, Dict


class RICECalculator:
    """Calculate RICE scores for feature prioritization"""

    def __init__(self):
        self.impact_map = {
            'massive': 3.0,
            'high': 2.0,
            'medium': 1.0,
            'low': 0.5,
            'minimal': 0.25
        }

        self.confidence_map = {
            'high': 100,
            'medium': 80,
            'low': 50
        }

        self.effort_map = {
            'xl': 13,
            'l': 8,
            'm': 5,
            's': 3,
            'xs': 1
        }

    def calculate_rice(self, reach: int, impact: str, confidence: str, effort: str) -> float:
        """
        Calculate a RICE score

        Args:
            reach: Number of users/customers affected per quarter
            impact: massive/high/medium/low/minimal
            confidence: high/medium/low (mapped to a percentage)
            effort: xl/l/m/s/xs (mapped to person-months)
        """
        impact_score = self.impact_map.get(impact.lower(), 1.0)
        confidence_score = self.confidence_map.get(confidence.lower(), 50) / 100
        effort_score = self.effort_map.get(effort.lower(), 5)

        if effort_score == 0:
            return 0

        rice_score = (reach * impact_score * confidence_score) / effort_score
        return round(rice_score, 2)

    def prioritize_features(self, features: List[Dict]) -> List[Dict]:
        """
        Calculate RICE scores and rank features

        Args:
            features: List of feature dictionaries with RICE components
        """
        for feature in features:
            feature['rice_score'] = self.calculate_rice(
                feature.get('reach', 0),
                feature.get('impact', 'medium'),
                feature.get('confidence', 'medium'),
                feature.get('effort', 'm')
            )

        # Sort by RICE score, descending
        return sorted(features, key=lambda x: x['rice_score'], reverse=True)

    def analyze_portfolio(self, features: List[Dict]) -> Dict:
        """Analyze the feature portfolio for balance and insights"""
        if not features:
            return {}

        total_effort = sum(
            self.effort_map.get(f.get('effort', 'm').lower(), 5)
            for f in features
        )

        total_reach = sum(f.get('reach', 0) for f in features)

        effort_distribution = {}
        impact_distribution = {}

        for feature in features:
            effort = feature.get('effort', 'm').lower()
            impact = feature.get('impact', 'medium').lower()

            effort_distribution[effort] = effort_distribution.get(effort, 0) + 1
            impact_distribution[impact] = impact_distribution.get(impact, 0) + 1

        # Quick wins: high impact, low effort
        quick_wins = [
            f for f in features
            if f.get('impact', '').lower() in ['massive', 'high']
            and f.get('effort', '').lower() in ['xs', 's']
        ]

        # Big bets: high impact, high effort
        big_bets = [
            f for f in features
            if f.get('impact', '').lower() in ['massive', 'high']
            and f.get('effort', '').lower() in ['l', 'xl']
        ]

        return {
            'total_features': len(features),
            'total_effort_months': total_effort,
            'total_reach': total_reach,
            'average_rice': round(sum(f['rice_score'] for f in features) / len(features), 2),
            'effort_distribution': effort_distribution,
            'impact_distribution': impact_distribution,
            'quick_wins': len(quick_wins),
            'big_bets': len(big_bets),
            'quick_wins_list': quick_wins[:3],  # Top 3 quick wins
            'big_bets_list': big_bets[:3]       # Top 3 big bets
        }

    def generate_roadmap(self, features: List[Dict], team_capacity: int = 10) -> List[Dict]:
        """
        Generate a quarterly roadmap based on team capacity

        Args:
            features: Prioritized feature list
            team_capacity: Person-months available per quarter
        """
        quarters = []
        current_quarter = {
            'quarter': 1,
            'features': [],
            'capacity_used': 0,
            'capacity_available': team_capacity
        }

        for feature in features:
            effort = self.effort_map.get(feature.get('effort', 'm').lower(), 5)

            if current_quarter['capacity_used'] + effort <= team_capacity:
                current_quarter['features'].append(feature)
                current_quarter['capacity_used'] += effort
            else:
                # Close out this quarter and start the next one with this feature.
                # Note: a single feature larger than team_capacity still gets its
                # own (over-committed) quarter rather than being dropped.
                current_quarter['capacity_available'] = team_capacity - current_quarter['capacity_used']
                quarters.append(current_quarter)

                current_quarter = {
                    'quarter': len(quarters) + 1,
                    'features': [feature],
                    'capacity_used': effort,
                    'capacity_available': team_capacity - effort
                }

        if current_quarter['features']:
            current_quarter['capacity_available'] = team_capacity - current_quarter['capacity_used']
            quarters.append(current_quarter)

        return quarters


def format_output(features: List[Dict], analysis: Dict, roadmap: List[Dict]) -> str:
    """Format the results for display"""
    output = ["=" * 60]
    output.append("RICE PRIORITIZATION RESULTS")
    output.append("=" * 60)

    # Top prioritized features
    output.append("\n📊 TOP PRIORITIZED FEATURES\n")
    for i, feature in enumerate(features[:10], 1):
        output.append(f"{i}. {feature.get('name', 'Unnamed')}")
        output.append(f"   RICE Score: {feature['rice_score']}")
        output.append(f"   Reach: {feature.get('reach', 0)} | Impact: {feature.get('impact', 'medium')} | "
                      f"Confidence: {feature.get('confidence', 'medium')} | Effort: {feature.get('effort', 'm')}")
        output.append("")

    # Portfolio analysis
    output.append("\n📈 PORTFOLIO ANALYSIS\n")
    output.append(f"Total Features: {analysis.get('total_features', 0)}")
    output.append(f"Total Effort: {analysis.get('total_effort_months', 0)} person-months")
    output.append(f"Total Reach: {analysis.get('total_reach', 0):,} users")
    output.append(f"Average RICE Score: {analysis.get('average_rice', 0)}")

    output.append(f"\n🎯 Quick Wins: {analysis.get('quick_wins', 0)} features")
    for qw in analysis.get('quick_wins_list', []):
        output.append(f"   • {qw.get('name', 'Unnamed')} (RICE: {qw['rice_score']})")

    output.append(f"\n🚀 Big Bets: {analysis.get('big_bets', 0)} features")
    for bb in analysis.get('big_bets_list', []):
        output.append(f"   • {bb.get('name', 'Unnamed')} (RICE: {bb['rice_score']})")

    # Roadmap
    output.append("\n\n📅 SUGGESTED ROADMAP\n")
    for quarter in roadmap:
        total = quarter['capacity_used'] + quarter['capacity_available']
        output.append(f"\nQ{quarter['quarter']} - Capacity: {quarter['capacity_used']}/{total} person-months")
        for feature in quarter['features']:
            output.append(f"   • {feature.get('name', 'Unnamed')} (RICE: {feature['rice_score']})")

    return "\n".join(output)


def load_features_from_csv(filepath: str) -> List[Dict]:
    """Load features from a CSV file"""
    features = []
    with open(filepath, 'r', newline='') as f:
        reader = csv.DictReader(f)
        for row in reader:
            features.append({
                'name': row.get('name', ''),
                'reach': int(row.get('reach') or 0),  # tolerate blank cells
                'impact': row.get('impact', 'medium'),
                'confidence': row.get('confidence', 'medium'),
                'effort': row.get('effort', 'm'),
                'description': row.get('description', '')
            })
    return features


def create_sample_csv(filepath: str):
    """Create a sample CSV file for testing"""
    sample_features = [
        ['name', 'reach', 'impact', 'confidence', 'effort', 'description'],
        ['User Dashboard Redesign', '5000', 'high', 'high', 'l', 'Complete redesign of user dashboard'],
        ['Mobile Push Notifications', '10000', 'massive', 'medium', 'm', 'Add push notification support'],
        ['Dark Mode', '8000', 'medium', 'high', 's', 'Implement dark mode theme'],
        ['API Rate Limiting', '2000', 'low', 'high', 'xs', 'Add rate limiting to API'],
        ['Social Login', '12000', 'high', 'medium', 'm', 'Add Google/Facebook login'],
        ['Export to PDF', '3000', 'medium', 'low', 's', 'Export reports as PDF'],
        ['Team Collaboration', '4000', 'massive', 'low', 'xl', 'Real-time collaboration features'],
        ['Search Improvements', '15000', 'high', 'high', 'm', 'Enhance search functionality'],
        ['Onboarding Flow', '20000', 'massive', 'high', 's', 'Improve new user onboarding'],
        ['Analytics Dashboard', '6000', 'high', 'medium', 'l', 'Advanced analytics for users'],
    ]

    with open(filepath, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerows(sample_features)

    print(f"Sample CSV created at: {filepath}")


def main():
    parser = argparse.ArgumentParser(description='RICE Framework for Feature Prioritization')
    parser.add_argument('input', nargs='?', help='CSV file with features, or "sample" to create a sample')
    parser.add_argument('--capacity', type=int, default=10, help='Team capacity per quarter (person-months)')
    parser.add_argument('--output', choices=['text', 'json', 'csv'], default='text', help='Output format')

    args = parser.parse_args()

    # Create sample if requested
    if args.input == 'sample':
        create_sample_csv('sample_features.csv')
        return

    # Use built-in demo data if no input provided
    if not args.input:
        features = [
            {'name': 'User Dashboard', 'reach': 5000, 'impact': 'high', 'confidence': 'high', 'effort': 'l'},
            {'name': 'Push Notifications', 'reach': 10000, 'impact': 'massive', 'confidence': 'medium', 'effort': 'm'},
            {'name': 'Dark Mode', 'reach': 8000, 'impact': 'medium', 'confidence': 'high', 'effort': 's'},
            {'name': 'API Rate Limiting', 'reach': 2000, 'impact': 'low', 'confidence': 'high', 'effort': 'xs'},
            {'name': 'Social Login', 'reach': 12000, 'impact': 'high', 'confidence': 'medium', 'effort': 'm'},
        ]
    else:
        features = load_features_from_csv(args.input)

    # Calculate RICE scores
    calculator = RICECalculator()
    prioritized = calculator.prioritize_features(features)
    analysis = calculator.analyze_portfolio(prioritized)
    roadmap = calculator.generate_roadmap(prioritized, args.capacity)

    # Output results
    if args.output == 'json':
        result = {
            'features': prioritized,
            'analysis': analysis,
            'roadmap': roadmap
        }
        print(json.dumps(result, indent=2))
    elif args.output == 'csv':
        # Output prioritized features as CSV (csv.writer handles quoting of
        # fields that contain commas, e.g. descriptions)
        if prioritized:
            writer = csv.writer(sys.stdout)
            keys = list(prioritized[0].keys())
            writer.writerow(keys)
            for feature in prioritized:
                writer.writerow([feature.get(k, '') for k in keys])
    else:
        print(format_output(prioritized, analysis, roadmap))


if __name__ == "__main__":
    main()
```