Teaching and Learning Goal: What skill or ability do you want students to acquire? What behavior do you want to change? What knowledge do you want to test? What assumptions (either students’ or the instructor’s) do you want to test? Focus on only one such goal.
I want to examine how collaborative problem-solving through marketing case studies develops students’ marketing knowledge and collaborative skills over a 6-week intensive course. This project tests the effectiveness of structured group work in building both content mastery and teamwork capabilities, measuring actual learning outcomes rather than just performance scores.
Focus: Evaluating whether collaborative learning approaches lead to measurable improvements in marketing concept understanding, analytical confidence, and collaborative problem-solving skills in an intensive summer course format.
Teaching Question: Adapt the teaching and learning goal to a specific course. Make this question narrow and focused so that it can be measured.
Narrow, focused, and measurable question for MKT 327: “How does participation in collaborative marketing case studies affect students’ marketing concept familiarity, analytical confidence, and collaborative skills development over a 6-week intensive course?”
Specific Research Questions:
- RQ1: Do students show significant improvement in marketing concept familiarity from Week 1 to Week 4?
- RQ2: How do students’ initial contribution expectations compare to their actual contribution patterns?
- RQ3: What are the perceived benefits and challenges of collaborative case study learning?
- RQ4: How does collaborative learning impact students’ confidence in analyzing marketing cases?
Assessment Technique: What instrument are you going to use to collect information? Is it simple enough that you know how to analyze the results? Will the information it provides answer the teaching question?
Data Collection Instruments:
- Pre-Course Survey (Week 1) - Baseline Assessment
- Marketing familiarity: 5-point Likert scale (1=not familiar to 5=extremely familiar)
- Expected contribution patterns: 5-point scale (1=much less than others to 5=much more than others)
- Group work confidence: Self-assessment scales
- Demographic information
- Mid-Course Survey (Week 4) - Progress Assessment
- Current marketing familiarity: Same 5-point scale for paired comparison
- Actual contribution patterns: Same 5-point scale for expectation vs reality analysis
- Collaboration quality ratings: 5-point scales
- Group member satisfaction: 5-point scales
- Confidence growth: Compared to Week 1
- Learning impact assessment: How group work affected understanding
- Most helpful learning methods: Multiple choice
- Benefits and challenges: Multiple choice selections
- Group vs individual preferences
- Performance Metrics
- Group case study scores: 4 assignments × 100 points each
- Individual exam scores: 3 exams × 200 points each (best 2 counted; see the scoring sketch after the exam schedule below)
- Final course performance metrics
Analysis Plan: Pearson correlations between 5-point Likert attitude scales and percentage performance outcomes, descriptive statistics, effect size calculations using Cohen’s conventions, variance analysis to detect ceiling effects.
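To make the analysis plan concrete, here is a minimal sketch of these computations in Python. The DataFrame and column names (`familiarity_pre`, `familiarity_mid`, `exam_pct`) are hypothetical stand-ins for the matched survey and gradebook fields, not the actual export schema.

```python
# Minimal analysis sketch (pandas / SciPy); column names are illustrative.
import pandas as pd
from scipy import stats

def analyze(df: pd.DataFrame) -> None:
    # Descriptive statistics for the key measures
    print(df[["familiarity_pre", "familiarity_mid", "exam_pct"]].describe())

    # Paired t-test: pre- vs mid-course familiarity for matched students
    t, p = stats.ttest_rel(df["familiarity_mid"], df["familiarity_pre"])

    # Cohen's d for paired data: mean of the differences / SD of the differences
    diff = df["familiarity_mid"] - df["familiarity_pre"]
    d = diff.mean() / diff.std(ddof=1)
    print(f"t({len(df) - 1}) = {t:.3f}, p = {p:.4g}, d = {d:.2f}")

    # Pearson correlation between a Likert attitude scale and performance
    r, p_r = stats.pearsonr(df["familiarity_mid"], df["exam_pct"])
    print(f"r = {r:.3f}, p = {p_r:.4g}")

    # Coefficient of variation as a ceiling-effect check (very low CV = ceiling)
    cv = df["exam_pct"].std(ddof=1) / df["exam_pct"].mean() * 100
    print(f"Exam CV = {cv:.2f}%")
```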
Classroom Practice: What assignment or activity are you going to use in the class to try to test the question? When are you going to do it? Who will conduct it? Will it be graded? Will it be anonymous or will students sign their names? How long will it take? How will students know what to do with it? Who will explain it? How will the relationship between this assignment and activity and the course be explained?
Implementation Details:
When: Summer 2025 (July 1 - August 15, 6-week intensive course)
Who:
- 70 students initially enrolled
- 67 students stayed through course
- 62 completed pre-survey (Week 1)
- 55 completed mid-survey (Week 4)
- 50 students completed BOTH surveys (matched dataset for paired analysis)
Structure:
- 14 groups of 5 students each
- Random group assignment (no pre-course screening used)
- Groups remained constant throughout course
Graded Assignments:
- Week 1: Group Case Study 1 (Nike, Mayo, Louis Vuitton)
- Week 2: Group Case Study 2 (Chase, Apple, Uber)
- Week 4: Group Case Study 3 (Red Bull, Best Buy, Airbnb)
- Week 6: Group Case Study 4 (IKEA, Starbucks, Honest Tea)
Individual Assessments:
- Week 3: Exam 1 (200 points)
- Week 5: Exam 2 (200 points)
- Week 7: Exam 3 (200 points)
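As a small illustration of the best-2-of-3 rule noted above, the sketch below shows the scoring logic (the function name and example scores are hypothetical):

```python
# Hypothetical sketch of the "best 2 of 3" exam rule: three 200-point exams,
# only the two highest scores count (400 points possible).
def exam_points(scores):
    """Sum of the two highest of three exam scores (each out of 200)."""
    assert len(scores) == 3
    return sum(sorted(scores, reverse=True)[:2])

# Example: scores of 162, 188, and 175 keep 188 + 175 = 363 of 400 points
print(exam_points([162, 188, 175]))  # 363
```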
Survey Administration:
- Surveys conducted via Qualtrics
- Pre-survey: First week of course (before collaborative work begins)
- Mid-survey: Week 4 (after 2 group case studies completed)
- Matched by email addresses for paired analysis
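A minimal sketch of how this matching could be done in pandas follows; the file and column names are illustrative, not the actual Qualtrics export schema.

```python
# Sketch of the matched-sample construction from two survey exports.
import pandas as pd

pre = pd.read_csv("pre_survey.csv")   # Week 1 export
mid = pd.read_csv("mid_survey.csv")   # Week 4 export

# Normalize emails so case or stray whitespace doesn't break the match
for df in (pre, mid):
    df["email"] = df["email"].str.strip().str.lower()

# Inner join keeps only students who completed BOTH surveys (n = 50 here)
matched = pre.merge(mid, on="email", suffixes=("_pre", "_mid"))
print(f"Matched students: {len(matched)}")
```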
Course Integration: Group case studies comprised 40% of the final grade, representing a significant pedagogical shift from previous individual-based assessments to a collaborative learning format.
Summary of Results: What does the information you collected through the assessment instrument tell you about your teaching question?
Sample Characteristics (n = 50 matched students)
Pre-Course Baseline (Week 1):
- Marketing Familiarity: M = 2.48, SD = 0.88 (5-point scale, range: 1-4)
- Not familiar at all: 6 students (12%)
- Slightly familiar: 19 students (38%)
- Moderately familiar: 20 students (40%)
- Very familiar: 5 students (10%)
- Extremely familiar: 0 students (0%)
- Expected Contribution Patterns: M = 3.58, SD = 0.61
- About equally to others: 24 students (48%)
- More than others: 23 students (46%)
- Much more than others: 3 students (6%)
Mid-Course Progress (Week 4):
- Marketing Familiarity: M = 3.48, SD = 0.80 (5-point scale, range: 2-5)
- Slightly familiar: 3 students (6%)
- Moderately familiar: 23 students (46%)
- Very familiar: 21 students (42%)
- Extremely familiar: 3 students (6%)
- Actual Contribution Patterns: M = 3.36, SD = 0.66
- Less than others: 1 student (2%)
- About equally to others: 33 students (66%)
- More than others: 13 students (26%)
- Much more than others: 3 students (6%)
PRIMARY RESEARCH FINDING - Significant Knowledge Growth
Marketing Concept Familiarity: PRE vs MID Comparison (n = 50 matched students)
- Pre-Survey (Week 1): M = 2.48, SD = 0.88
- Mid-Survey (Week 4): M = 3.48, SD = 0.80
- Change: +1.00 points (+40.3% increase)
- Paired t-test: t(49) = 7.827, p < 0.001
- Effect Size: Cohen's d = 1.11 (large effect)
This represents the strongest and most important finding: Students showed substantial, statistically significant improvement in their understanding of marketing concepts after 4 weeks of collaborative case study learning.
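As a consistency check on these numbers: for a paired t-test, Cohen's d computed on the difference scores equals t divided by the square root of n, which reproduces the reported effect size exactly.

```python
# Consistency check: paired-design Cohen's d = t / sqrt(n)
from math import sqrt

t, n = 7.827, 50
print(f"d = {t / sqrt(n):.2f}")  # d = 1.11, the reported large effect
```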
Contribution Pattern Evolution (n = 50 matched students)
- Pre-Survey (Expected): M = 3.58, SD = 0.61
- Mid-Survey (Actual): M = 3.36, SD = 0.66
- Change: -0.22 points (not statistically significant)
- Pattern: Students initially expected to contribute “more than others” but adjusted to a more realistic assessment of contributing “about equally”
- Interpretation: Healthy recalibration of expectations based on actual collaborative experience
Group vs Individual Case Study Preference:
- Strongly prefer group case studies: 11 students (22%)
- Somewhat prefer group case studies: 11 students (22%)
- Total preferring group work: 22 students (44%)
- No preference: 12 students (24%)
- Somewhat prefer individual: 8 students (16%)
- Strongly prefer individual: 8 students (16%)
Performance Outcomes Context
Group Work Performance:
- Mean = 98.55%, SD = 1.15 (range: 96.25%-100%)
- Coefficient of Variation = 1.17% (extremely low variability)
- Clear ceiling effect evident; the assessment was too lenient
Individual Exam Performance:
- Mean = 91.86%, SD = 3.05 (range: 81%-98%)
- Coefficient of Variation = 3.32% (normal academic variability)
- Roughly 7× higher variance than group case study scores
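These variability figures follow directly from the reported means and SDs, as the short sketch below shows:

```python
# Reproducing the variability comparison from the reported summary statistics
group_mean, group_sd = 98.55, 1.15   # group case studies
exam_mean, exam_sd = 91.86, 3.05     # individual exams

cv_group = group_sd / group_mean * 100    # -> 1.17%
cv_exam = exam_sd / exam_mean * 100       # -> 3.32%
var_ratio = (exam_sd / group_sd) ** 2     # -> ~7.0x higher exam variance

print(f"Group CV {cv_group:.2f}%, Exam CV {cv_exam:.2f}%, ratio {var_ratio:.1f}x")
```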
Conclusion: What have you learned? What surprised you? What would you do differently? What implications does this have for your future classroom practice?
What I Learned
Primary Discovery: Collaborative learning in marketing case studies produces significant, measurable improvements in student learning outcomes. The matched-student analysis (n = 50) revealed:
- Strong Knowledge Development: Students showed a 40.3% improvement in marketing concept familiarity (p < 0.001, d = 1.11), demonstrating that collaborative case study analysis effectively builds content mastery.
- Confidence Building: 94% of students reported increased confidence in analyzing marketing cases, suggesting the pedagogical approach successfully develops analytical capabilities.
- High-Quality Collaboration: Mean collaboration quality of 4.18/5.0 and member satisfaction of 4.26/5.0 indicate that most groups functioned effectively, creating positive learning environments.
- Perceived Learning Value: 82% of students reported that group work helped their learning, with specific benefits including diverse perspectives (62%), knowledge sharing (56%), and error detection (46%).
- Assessment Design Matters: The ceiling effect in group performance scores (M = 98.55%, SD = 1.15%) masked individual differences, but the comprehensive mid-course survey captured learning outcomes that performance scores alone could not measure.
What Surprised Me
- Magnitude of Knowledge Growth: The 40.3% improvement in marketing familiarity (p < 0.001) exceeded expectations, showing that collaborative learning can produce large, statistically significant learning gains in a short timeframe.
- Realistic Self-Assessment Development: Students adjusted their contribution expectations (M = 3.58) to match reality (M = 3.36), showing healthy recalibration rather than inflated self-perception.
- Positive Attitudes Despite Challenges: Even while identifying real challenges (40% cited scheduling/communication issues), 82% still reported learning benefits and 58% (of those with preferences) preferred group over individual case studies.
- Multiple Learning Dimensions: The data revealed learning occurred across multiple dimensions - not just content knowledge, but also confidence, collaboration skills, and perspective-taking abilities.
- Assessment Limitations Revealed Through Multiple Measures: Performance scores alone (with ceiling effects) would have suggested little learning occurred, but the comprehensive survey data revealed substantial development across multiple dimensions.
What I Would Do Differently
Immediate Changes for Future Course Offerings:
- Revise Group Assessment Rubric:
- Increase rigor and discrimination to create meaningful performance variance
- Develop scoring criteria that capture individual learning within group context
- Align assessment difficulty with individual exam standards
- Target a mean around 85-90% instead of 98.55%
- Enhance Individual Accountability:
- Implement peer evaluation systems to capture individual contributions
- Include individual components within group projects
- Use Google Docs tracking to monitor actual contribution patterns
- Consider individual oral defenses or written reflections on group projects
- Expand Assessment Methods:
- Continue using comprehensive surveys to capture learning beyond performance scores
- Add qualitative reflections on collaboration experiences
- Implement pre-mid-post design to track learning trajectory
- Assess multiple learning outcomes: knowledge, skills, and attitudes
Implications for Future Classroom Practice
Evidence-Based Teaching Strategy: This research provides concrete evidence that collaborative marketing case study learning produces measurable improvements in:
- Marketing concept understanding (+40.3%, p < 0.001)
- Analytical confidence (94% of students reported gains)
- Collaborative capabilities (high quality ratings)
- Multiple skill dimensions (communication, perspective-taking, error detection)
Key Pedagogical Shift:
- FROM: Questioning whether collaborative learning “works”
- TO: Understanding how to assess collaborative learning validly
The data clearly demonstrates that collaborative case study learning is pedagogically effective, but assessment design must capture learning across multiple dimensions rather than relying solely on group performance scores.
Broader Impact: This project demonstrates that systematic teaching inquiry can reveal:
- What works: Collaborative case study learning produces significant learning gains
- What needs improvement: Assessment design must match learning complexity
- How to measure success: Multiple outcome measures capture learning that performance scores miss
The finding of a 40.3% improvement in marketing knowledge (p < 0.001, d = 1.11) provides compelling evidence for the value of collaborative learning in marketing education, while the identified assessment limitations guide future improvement efforts.
Future Research Questions:
- How can group work rubrics maintain rigor while supporting collaborative learning?
- What individual accountability measures optimize both learning and assessment validity?
- How do collaborative learning outcomes develop over longer timeframes (full semester)?
- What specific instructional practices within groups maximize learning gains?
