How to Use AI to Build Authentic Speaking Assessments (for Business English)

When my business English students walked into their final exam last semester, they faced a crisis meeting at their own company, the fictional startup they'd been building all semester long. A plagiarism scandal had just hit their AI writing tool, Script Mind, and they had ten minutes to prepare before facing their board of directors.

The assessment measured whether students could think on their feet, navigate unexpected challenges in professional English, and perform in situations they'll face in their careers.

Here's how I use AI to create these personalized, scenario-based speaking exams.

Why I Moved from Generic to Personalized Scenarios

For years, I gave every student team the same final speaking assessment. I'd create a fictional company facing a crisis or a major decision, and all teams would role-play their way through the same situation. The approach worked. Students demonstrated the target language skills. They passed their exams. Their performance was fine.

But something was missing. Students treated the assessment like an academic hurdle rather than a meaningful challenge. They prepared responses, rehearsed phrases, and delivered what they thought I wanted to hear. The language was correct, but the engagement felt hollow.

Being the teacher that I am, I wanted to offer them more. I wanted students invested in the outcome, thinking critically about solutions, and using English as a genuine communication tool rather than performing for a grade.

So I started experimenting with personalization. What if each team faced a crisis at their own company? What if the scenario connected to work they'd already done in class? What if the challenge felt specific to their context rather than generic?

The shift changed everything (including my syllabus!). Students leaned forward during their assessments. They debated solutions with what looked like conviction. They forgot they were being evaluated and simply communicated. The language skills I was measuring remained the same, but the authenticity and engagement increased dramatically.

AI tools made this personalization possible at scale. After all, I am just one person.

Building Context Throughout the Semester

In my college-level Business English and Meetings in English courses, students spend the entire semester creating a fictional company. Each team develops:

  • A complete company profile with mission, vision, and products

  • Team roles (CEO, Marketing Manager, Finance Officer, etc.)

  • A brand identity with visual materials

  • Business challenges and growth plans

  • Partnership negotiations with other teams' “brands”

Students practice English while building a rich, personalized context that I can later use to create their speaking assessment. By the time we reach the final exam, students embody roles they've inhabited for months, discussing a business they understand and care about. They can even refer back to decisions and debates from earlier class meetings.

Using AI to Create Tailored Crisis Scenarios

Once students have built their companies, I use AI tools to design personalized assessments. Here's my process:

1. Analyze Student Work for Authentic Details

I provide the AI with information about each team's company, including:

  • Their product or service descriptions

  • Team member roles and responsibilities

  • Previous challenges they've discussed

  • Industry context and competitors

The AI helps me generate realistic crisis scenarios that would affect their specific business. For Script Mind (the AI writing assistant my students created), a plagiarism scandal struck at their core product, threatened their reputation, and demanded an immediate strategic response.
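
If you prefer to script this step rather than paste everything into a chat window, here's a minimal sketch of the idea. I'm assuming the OpenAI Python SDK purely as an example; any chat model works, and every field in the team profile below is an illustrative placeholder, not a required format.

```python
# Sketch: turn a team's company profile into a scenario-generation prompt.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative team profile, loosely based on the Script Mind example.
team_profile = {
    "company": "Script Mind",
    "product": "an AI writing assistant",
    "roles": ["CEO", "Marketing Manager", "Finance Officer"],
    "history": "debated pricing tiers; negotiated a brand partnership",
    "industry": "edtech / AI writing tools",
}

prompt = (
    "You are helping a Business English teacher design a speaking exam.\n"
    f"Company: {team_profile['company']}\n"
    f"Product: {team_profile['product']}\n"
    f"Team roles: {', '.join(team_profile['roles'])}\n"
    f"Prior class discussions: {team_profile['history']}\n"
    f"Industry: {team_profile['industry']}\n\n"
    "Propose three realistic crisis scenarios that would hit this "
    "specific business and could be resolved in a 20-minute board "
    "meeting by B2-C1 English speakers."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point isn't the tooling; it's that the prompt carries the team's own details, so the scenarios come back grounded in their company rather than a generic one.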

2. Generate Realistic Crisis Contexts

Using AI tools, I create detailed scenario briefs that include:

  • The triggering event (a social media post, a client complaint, a news article)

  • Specific metrics showing impact (user drop-offs, revenue changes, media mentions)

  • Stakeholder concerns (who's affected and what they're saying)

  • Decision points (what the team must address during their meeting)

These scenarios are calibrated to each team's company, industry, and previous discussions.
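
To keep myself honest about those four elements, I find it helps to treat the brief as a fixed structure. Here's a small sketch; the field names and example values are mine, drawn loosely from the Script Mind case, not a prescribed schema:

```python
# Sketch: the brief as an explicit structure, so it's easy to check
# that no generated scenario is missing one of the four elements.
from dataclasses import dataclass

@dataclass
class ScenarioBrief:
    triggering_event: str                 # what set the crisis off
    impact_metrics: list[str]             # 3-4 measurable data points
    stakeholder_concerns: dict[str, str]  # who is affected -> what they say
    decision_points: list[str]            # what the meeting must resolve

# Illustrative values only:
brief = ScenarioBrief(
    triggering_event="A viral thread accuses Script Mind of enabling plagiarism",
    impact_metrics=[
        "14% week-over-week drop in active users",
        "two enterprise clients pausing their contracts",
        "40+ media mentions in 48 hours",
    ],
    stakeholder_concerns={
        "universities": "demand a public academic-integrity statement",
        "investors": "fear reputational damage before the next funding round",
    },
    decision_points=[
        "public response strategy",
        "whether to fast-track a plagiarism-detection feature",
    ],
)
```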

3. Design the Surprise Element

Real professional communication includes unexpected challenges. To prevent over-preparation, I add a surprise element that students receive only 10 minutes before their speaking assessment begins. Think of it as a “while you were sleeping, this happened” update.

Examples include:

  • A key investor threatening to pull funding

  • A competitor launching a similar product

  • New information about the crisis that changes everything

  • A regulatory investigation announcement

The surprise forces students to think critically in English, not just recite memorized phrases.
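
In practice, I ask for the surprise in a separate prompt once the brief is finished, so the twist stays consistent with what students already received. A sketch of what that prompt might look like (the brief text is a placeholder):

```python
# Sketch: generate the surprise element from the finished brief.
# `brief_text` stands in for the scenario brief students received
# 24 hours earlier (placeholder, not a real value).
brief_text = "...the full crisis brief for this team..."

surprise_prompt = (
    "Here is the crisis brief a student team already received:\n"
    f"{brief_text}\n\n"
    "Write ONE short overnight development (3-4 sentences) that changes "
    "something significant about this crisis without invalidating the "
    "team's preparation, and that forces an immediate strategic choice."
)
```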

The Assessment Structure

Here's how the assessment unfolds:

24 Hours Before: The Crisis Context

Students receive the detailed crisis scenario via email, which gives them time to:

  • Read and analyze the situation thoroughly

  • Research relevant information if needed

  • Ask their favorite AI

  • Discuss initial reactions with their team

  • Prepare potential solutions and talking points

This preparation mirrors real professional contexts where you receive meeting agendas and briefing materials in advance.

On Arrival: The Surprise Element

When students arrive for their assessment, they receive the surprise element. This new information changes something significant about the crisis. Students must immediately decide: Does everyone on the team need to know about this before the meeting starts? Or will someone reveal it strategically during the discussion? Teams have 10 minutes to process the new information, adjust their strategy, and make any last-minute preparations.

The Crisis Meeting

Students participate in a board meeting where they must:

  • Present their understanding of the crisis

  • Incorporate the surprise element strategically

  • Debate solutions as a team

  • Respond to unexpected points raised by teammates

  • Reach consensus on action items

  • Assign responsibilities for next steps

Personalized Assessment Without Semester-Long Projects

You might be thinking: "This sounds great, but my course doesn't have time for semester-long company building." The good news is you can create personalized AI-generated scenarios without months of preparation. Here are several approaches:

Option 1: Build Context Through Mini-Projects

Instead of a full semester project, assign 2-3 smaller tasks that build context:

  1. Students create a company one-pager

  2. Students present a product pitch

  3. Students write a quarterly report

By the final exam, you have enough material for AI to generate personalized crisis scenarios. Students have some familiarity with their "company," even without deep investment.

Option 2: Personalize Generic Scenarios

Start with a base scenario (like a customer complaint or team conflict), then use AI to adapt it:

  • Change the industry to match student interests

  • Adjust the stakeholders to reflect team composition

  • Modify the complexity based on proficiency level

  • Add surprise elements specific to each group

This approach requires less upfront work while maintaining personalization benefits.
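
If you go this route, the adaptation prompt can be parameterized once and reused for every group. A sketch of what I mean; the function and all of its arguments are hypothetical placeholders you'd fill per team:

```python
# Sketch: re-skin one base scenario per team. Everything below is
# an illustrative placeholder, not a fixed template.
def adapt_scenario(base_scenario: str, industry: str,
                   roles: list[str], level: str) -> str:
    """Build an adaptation prompt for a chat model."""
    return (
        "Adapt the following speaking-exam scenario.\n"
        f"Base scenario: {base_scenario}\n"
        f"Target industry: {industry}\n"
        f"Team roles to include: {', '.join(roles)}\n"
        f"Language level: {level} (adjust vocabulary and complexity)\n"
        "Keep the core communicative challenge identical; change only "
        "the surface details, and add one team-specific surprise element."
    )

prompt = adapt_scenario(
    base_scenario="A major client complains publicly about a missed deadline",
    industry="sustainable fashion",
    roles=["CEO", "Operations Lead", "PR Manager"],
    level="B2",
)
print(prompt)
```

Swap in each team's industry, roles, and level, and one base scenario yields as many variants as you have groups.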

Option 3: Use Current Events or Case Studies

Assign students a real company or recent news event to research early in the course. For the speaking assessment, use AI to generate:

  • A follow-up crisis for that company

  • A decision meeting about next steps

  • A stakeholder negotiation based on the case

Students engage with authentic materials throughout the course, and AI creates the assessment scenario.

Beyond Business English

While I focus on Business English, this AI-assisted assessment methodology works for various contexts:

Academic English: Research team meetings where students discuss unexpected peer review feedback or defend methodology choices

Medical English: Patient case consultations where new symptoms or test results emerge mid-discussion

Legal English: Client meetings with last-minute evidence or regulatory changes

General English (Advanced): Community problem-solving scenarios based on student interests (environmental projects, social initiatives)

The key is building some context (whether over weeks or months) that AI can transform into personalized speaking assessments.

Ensuring Fairness Across Personalized Scenarios

One of the biggest concerns teachers raise about personalized assessments is fairness: "If every team gets a different scenario, how do you ensure they're equally challenging?" This is a legitimate question. Personalization only works if it maintains assessment validity and equity across all students.

Here's how I approach it:

Standardize the Core Challenge Structure

While the surface details differ (company names, products, industries), I ensure every scenario contains the same structural elements:

  • One primary crisis requiring immediate attention

  • 3-4 data points showing measurable impact

  • At least two competing stakeholder interests

  • A decision that requires weighing trade-offs

  • A surprise element delivered 10 minutes before the meeting

This structural consistency means all teams face equivalent cognitive and linguistic demands, even though the content varies.

Use AI to Calibrate Difficulty

When I generate scenarios with AI, I explicitly prompt for difficulty calibration.

I generate multiple scenarios, then review them side-by-side to check:

  • Does each require similar vocabulary range?

  • Do they demand comparable grammatical structures?

  • Is the information density similar?

  • Would solving one crisis take significantly more time than another?

If one scenario seems notably harder or easier, I ask the AI to adjust it.
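
That side-by-side review can itself be handed to the model. Here's a sketch of the comparison prompt I have in mind (the two scenario strings are placeholders for finished briefs):

```python
# Sketch: hand the side-by-side calibration check to the model.
scenario_a = "...team 1's full scenario brief..."
scenario_b = "...team 2's full scenario brief..."

calibration_prompt = (
    "Compare these two speaking-exam scenarios for a B2-C1 Business "
    "English class.\n\n"
    f"SCENARIO A:\n{scenario_a}\n\n"
    f"SCENARIO B:\n{scenario_b}\n\n"
    "For each, assess: (1) required vocabulary range, (2) grammatical "
    "structures demanded, (3) information density, (4) estimated time "
    "needed to reach a decision. Flag any dimension where they differ "
    "noticeably and suggest how to bring them level."
)
```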

Create a Difficulty Rubric

I've developed a simple checklist to evaluate scenario complexity:

Information Complexity (How much information must students process?)

  • Low: 1-2 data points, clear cause and effect

  • Medium: 3-4 data points, some ambiguity

  • High: 5+ data points, multiple interpretations

Stakeholder Complexity (How many competing interests?)

  • Low: 2 stakeholders with different but compatible goals

  • Medium: 3 stakeholders with some conflicting interests

  • High: 4+ stakeholders with directly opposing priorities

Time Pressure (How urgent is the decision?)

  • Low: General timeline, room for deliberation

  • Medium: Decision needed this week, some urgency

  • High: Immediate action required, high stakes

Solution Ambiguity (How contested is the best path forward?)

  • Low: One clear solution, debate is about implementation

  • Medium: 2-3 viable solutions, legitimate disagreement possible

  • High: No obviously correct answer, trade-offs in every direction

I aim for all scenarios to hit similar levels across these dimensions. Most of my assessments target medium complexity in each category.
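
If it helps to see the checklist operationalized, here's a tiny sketch that flags any dimension off the target level. The dimension names mirror the rubric above; the scoring shorthand is my own:

```python
# Sketch: flag rubric dimensions that miss the target level.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def check_scenario(scores: dict[str, str],
                   target: str = "medium") -> list[str]:
    """Return a warning for every dimension off the target level."""
    warnings = []
    for dimension, level in scores.items():
        if LEVELS[level] != LEVELS[target]:
            warnings.append(f"{dimension} is {level}, target is {target}: adjust.")
    return warnings

# Example: one team's scenario, rated by hand (or by the AI).
print(check_scenario({
    "information_complexity": "medium",
    "stakeholder_complexity": "high",
    "time_pressure": "medium",
    "solution_ambiguity": "medium",
}))
# -> ['stakeholder_complexity is high, target is medium: adjust.']
```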

Key Takeaways for Language Teachers

We're at an interesting moment in language education. AI enables teachers to create personalized, authentic assessments at scale. The speaking assessments I design with AI measure more than language proficiency. They prepare students for the messy, unpredictable reality of professional communication.

If you're exploring AI for language teaching and assessment:

✓ Start with context (even minimal context enables personalization)

✓ Prioritize authenticity (use AI to create scenarios that mirror real-world complexity)

✓ Embrace unpredictability (surprise elements ensure students demonstrate genuine proficiency)

✓ Be transparent (discuss your use of AI with students and model ethical AI integration)

I often wonder whether the future of language assessment lies in combining human judgment with AI capabilities. In this case, I think it does. Using AI as a tool creates more meaningful, personalized opportunities for students to demonstrate what they can do with language.

When students walk out of these speaking exams feeling challenged, but genuinely proud of how they navigated an unexpected crisis in English, you know you've created something valuable.

You've given them proof that they're ready for whatever comes next.

Want to learn more about implementing AI in your language teaching practice? Email me at hola@marianaslearning.space to discuss personalized assessment strategies, or share your own experiences with AI-powered language evaluation in the comments below.
