Executive Summary: This case study profiles a judiciary Mediation/ADR center that implemented 24/7 Learning Assistants to provide just-in-time coaching, microlearning, and job aids across roles. By instrumenting the assistant and courses and linking activity data to the case system, the team correlated training exposure with operational outcomes—specifically settlement rates and no-show rates—while improving consistency and speed to proficiency.
Focus Industry: Judiciary
Business Type: Mediation/ADR Centers
Solution Implemented: 24/7 Learning Assistants
Outcome: Correlated training exposure with settlement and no-show rates.
Cost and Effort: A detailed breakdown of costs and efforts is provided in the corresponding section below.
Our Project Role: eLearning development company

The Judiciary Mediation and ADR Center Operates Under High Stakes and Tight Constraints
People come to this Mediation and ADR center to solve real problems fast. A few hours can decide whether someone keeps a home, gets paid for work, or settles a family dispute without a court fight. Every appointment matters. When a participant misses a session, a scarce slot is lost and the docket backs up. When a mediator guides a hard conversation well, cases settle and stress falls for everyone involved.
Here is the business snapshot. The center is part of the judiciary and runs across multiple locations with in-person and virtual sessions. The team includes mediators, case managers, and intake staff, along with part-time and volunteer support. Cases range from small claims and landlord–tenant to family matters. Sessions are short and tightly scheduled, often back to back, and participants speak many languages.
- Caseloads swing day to day, and last-minute filings are common
- Court calendars set strict timelines that leave little room to adjust
- Staff experience varies across sites and shifts
- Coaching can differ by location, which leads to uneven practice
- There is very little time for classroom training during the workday
- Confidentiality and data security rules are strict
- Many participants need clear, plain-language scripts and language access
- Technology and connectivity vary for remote sessions
In this setting, what staff say and do in the moment shapes results. A clear confirmation script can lower no-shows. A well-timed question can move a case toward agreement. A checklist can keep a session on track and within the rules. Because time is tight, learning must fit into small gaps between cases and support people while they work.
Leaders also want proof that learning makes a difference. They aim to see how training connects to the outcomes that matter most: higher settlement rates and fewer no-shows. To do that, they need simple, reliable ways to capture what people practice and use at work, and to compare that activity with case results across sites and case types.
Inconsistent Coaching and Limited Training Time Create a Clear Performance Challenge
Our staff had talent and heart, but the support around them was uneven. Coaching looked different from one location to the next. A new mediator might hear one set of tips on Monday and a different set on Friday. People wanted clear, steady guidance, yet the day-to-day rush made that hard to deliver.
The variety of cases made the gaps show up fast. One center used a strong confirmation script before sessions. Another skipped it when phones were busy. Some mediators opened with a joint session. Others moved straight to private caucuses. Intake staff handled language access and reminders in different ways. None of this came from bad intent. It came from busy calendars and good people doing their best without shared tools.
- Coaching varied by site, shift lead, and mentor
- New hires and volunteers started fast with little time to shadow
- Updates to rules, forms, and scripts did not reach everyone at the same time
- Job aids lived in email threads or personal notes and were easy to miss
- Back-to-back sessions left almost no time for classes or long refreshers
- Remote sessions added tech steps that some teams learned and others did not
Time was the biggest constraint. Staff moved from case to case with short breaks. A one-hour class meant five or more cases delayed. People needed help in the moment, not weeks later. They asked for short practice, clear checklists, and simple scripts they could trust under pressure.
Leaders also lacked a clean way to see what worked. Course completions did not show which scripts people used on the job. One-on-one coaching left no record. Data sat in different systems, so it was hard to link training activity to results like settlement or no-show rates. The team needed a way to give consistent support and also prove impact without adding extra steps for staff.
We Adopt a 24/7 Learning Assistant Strategy Embedded in Daily Work
We chose a simple idea that fits the way people actually work. Give everyone a 24/7 Learning Assistant they can open on any device in seconds. Make it part of the daily flow, not a separate class. The goal was clear. Provide steady coaching, short practice, and trusted scripts at the exact moment of need.
Access had to be fast. Staff can open the assistant from the case page, the scheduling view, or a link in calendar invites. A QR code at each desk and intake window points to the same place. No extra logins, no hunting through folders. If a session is remote, the link sits beside the video call controls. If a session is in person, it is on the tablet or phone.
- One-tap scripts and checklists: confirmations, openings, caucus transitions, closings, and follow-up reminders
- Scenario practice in two to five minutes: small claims, landlord and tenant, family matters, and language access situations
- Job aids that travel with you: interpreter requests, remote setup steps, form walkthroughs, and de-escalation tips
- Search that understands intent: find the right rule, template, or script using plain language
- Personalized suggestions: content adjusts by role, case type, and recent activity
- Multi-language support: side-by-side English and translated phrases for key moments
Learning happens in short bursts. Before a case, the assistant offers a quick prep checklist. During a session, it shows a small prompt or a sample question. After a case, it suggests a two-minute practice based on what just happened. This keeps momentum without pulling people out of work.
We set clear guardrails. The assistant does not store case notes. Access is role-based, so mediators, case managers, and intake staff see the tools they need. Content owners can update a script once and push it to every site. This keeps coaching consistent and current.
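As a concrete illustration of that role-based setup, here is a minimal sketch of how content visibility could be mapped to roles. The role names mirror the teams described above; the content tags and helper function are assumptions for the sketch, not the production implementation.

```python
# Minimal sketch: role-based content filtering (illustrative, not the
# production implementation). Each tool is tagged with the roles that
# should see it; a simple filter builds each user's view.

CONTENT_CATALOG = [
    {"id": "confirmation-script",   "roles": {"case_manager", "intake"}},
    {"id": "opening-checklist",     "roles": {"mediator"}},
    {"id": "caucus-transitions",    "roles": {"mediator"}},
    {"id": "interpreter-workflow",  "roles": {"mediator", "case_manager", "intake"}},
    {"id": "agreement-routing",     "roles": {"case_manager"}},
    {"id": "eligibility-questions", "roles": {"intake"}},
]

def tools_for(role: str) -> list[str]:
    """Return the content IDs a given role should see in the assistant."""
    return [item["id"] for item in CONTENT_CATALOG if role in item["roles"]]

print(tools_for("mediator"))
# ['opening-checklist', 'caucus-transitions', 'interpreter-workflow']
```

Because the catalog lives in one place, a content owner can update a script once and every site and role sees the change, which is what keeps coaching consistent.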
We rolled out in phases. A pilot group shaped the tone, the length of practice, and the placement of links. Local champions modeled use in huddles and helped peers build the habit. We built in quick feedback buttons so staff could flag gaps or suggest new prompts, and we updated content weekly at first.
From the start, we planned to measure what people used and when. The strategy called for linking learning activity to outcomes like settlement and no-show rates. That set us up to focus the assistant on the moments that drive results and to refine it with real evidence, not guesses.
The Team Deploys Targeted Microlearning and Job Aids Powered by 24/7 Learning Assistants
To fix uneven coaching without pulling people out of work, we built short, targeted learning that lives inside the 24/7 Learning Assistants. Every piece is practical, fast, and tied to a real task. Staff can tap it before, during, or after a case and get what they need in seconds.
- Before a case: 60- to 90-second prep checks for small claims, landlord-tenant, and family matters; quick guides for language access; confirmation call and text scripts to reduce no-shows
- During a session: openers that set ground rules, sample questions to unlock interests, caucus transitions, de-escalation phrases, and short prompts for documenting agreements
- After a case: wrap-up checklists, follow-up reminder templates, and simple next-step maps for forms and referrals
Microlearning comes in tiny chunks so people can practice a skill in two to five minutes and get right back to the docket. Each drill focuses on one move: reflecting a tense statement, framing an issue, or guiding option generation. The assistant gives a quick scenario, asks you to pick or type a response, then shows a strong example you can reuse. One possible data shape for such a drill is sketched after the list below.
- Quick drills: fast scenarios with coaching on tone and phrasing
- Skill refreshers: one-page tips on listening, neutrality, and fairness
- Phrase finder: plain-language lines for common moments, side by side with translated versions where needed
- Checklist mode: step-by-step flows for remote setup, interpreter use, and agreement review
- Form walkthroughs: simple visuals that show what to fill out and where to send it
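As promised above, here is one way a quick drill could be stored as structured data. This is a minimal sketch; the field names and sample content are assumptions for illustration, not the center's actual schema.

```python
# Illustrative sketch of a quick-drill record (field names are assumed,
# not the actual schema). One drill = one scenario, a few response
# options, and a strong example the learner can reuse.

drill = {
    "id": "reflect-tense-statement-01",
    "skill": "reflecting a tense statement",
    "case_types": ["landlord_tenant", "small_claims"],
    "time_to_complete_minutes": 3,
    "scenario": "A tenant says: 'The landlord never fixes anything "
                "and now wants me out. This is a joke.'",
    "options": [
        {"text": "Let's stay focused on the lease terms.",
         "strong": False},
        {"text": "You're frustrated that repairs went unanswered, and "
                 "now the stakes feel even higher.",
         "strong": True},
    ],
    "model_response": "You're frustrated that repairs went unanswered, "
                      "and now the stakes feel even higher. Did I get "
                      "that right?",
    "coaching_note": "Reflect the feeling and the fact, then end with a "
                     "check for accuracy before moving on.",
}
```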
Content adjusts by role and case type. Mediators see openings, reframes, and caucus choices. Case managers see scheduling, confirmation, and agreement routing steps. Intake staff see eligibility questions, language access steps, and plain-language explanations of the process.
We made the job aids easy to trust. Every script and checklist uses plain language and a consistent style across sites. When policy or forms change, content owners update one source and the assistant shows the latest version everywhere. Staff can also print a one-page version for a desk or intake window when needed.
We kept the flow simple. The assistant surfaces the most likely tool based on where you are in the day. Opening a landlord-tenant case shows the right prep list. Starting a remote session offers the tech check. After a session, a short practice pops up that fits what just happened.
Feedback is built in. A quick thumbs-up or note from staff tells us if a script landed or needs work. We review suggestions weekly and add or refine content so the assistant stays sharp and relevant to the toughest moments on the calendar.
We Connect the Assistants to Case Data With the Cluelabs xAPI Learning Record Store
To see what helped in real cases, we connected the 24/7 Learning Assistants to the Cluelabs xAPI Learning Record Store (LRS). We set the assistant and the short courses to send a small xAPI record each time someone used a checklist, ran a script, or finished a drill. We then linked those records to the case system with anonymous codes for the mediator and case. This gave us a clean, privacy-first way to line up learning activity with outcomes. A sketch of what one of these records can look like follows the list below.
- What we track: which job aid or script was used, when it was used, how long it was open, which scenario practice was completed, the role of the user, the site, and the case type selected
- What we do not track: case notes, party names, private messages, or audio and video from sessions
- How we protect privacy: we convert staff and case IDs into anonymous codes, limit access by role, and keep only the data needed to study patterns
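To make the data shape concrete, here is a minimal sketch of building and sending one pseudonymized xAPI statement. The statement structure follows the xAPI specification; the endpoint URL, credentials, verb choice, and salt are placeholders, and the exact Cluelabs LRS configuration will differ.

```python
import hashlib
import hmac
from datetime import datetime, timezone

import requests

# Placeholder values: the real endpoint, credentials, and salt come from
# your Cluelabs LRS account and a secret kept outside source control.
LRS_ENDPOINT = "https://example-lrs.cluelabs.com/xapi/statements"  # placeholder
LRS_AUTH = ("lrs_key", "lrs_secret")                               # placeholder
PSEUDONYM_SALT = b"rotate-me"                                      # placeholder

def pseudonymize(raw_id: str) -> str:
    """Convert a staff or case ID into a stable anonymous code (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_SALT, raw_id.encode(), hashlib.sha256).hexdigest()[:16]

def send_job_aid_event(staff_id: str, case_id: str, tool_id: str,
                       site: str, case_type: str) -> None:
    """Send one xAPI statement when a job aid or script is opened."""
    statement = {
        "actor": {  # anonymous code, never the staff member's name
            "account": {"homePage": "https://assistant.example.org",
                        "name": pseudonymize(staff_id)},
        },
        "verb": {"id": "http://adlnet.gov/expapi/verbs/interacted",
                 "display": {"en-US": "interacted"}},
        "object": {"id": f"https://assistant.example.org/tools/{tool_id}",
                   "definition": {"name": {"en-US": tool_id}}},
        "context": {"extensions": {
            "https://assistant.example.org/ext/site": site,
            "https://assistant.example.org/ext/case-type": case_type,
            "https://assistant.example.org/ext/case-code": pseudonymize(case_id),
        }},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    resp = requests.post(
        LRS_ENDPOINT, json=statement, auth=LRS_AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    resp.raise_for_status()
```

Note that both the staff ID and the case ID pass through the same one-way hash before leaving the device, so the LRS never holds a raw identifier.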
With the LRS in place, we built near real-time dashboards and simple exports to our analytics tools. Leaders can filter by site, case type, and time window to see how learning shows up in the work. For example, they can check whether the confirmation script is used in the days before a hearing, or how often mediators open the agreement review checklist during small-claims sessions.
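Behind a dashboard like that, lining up learning activity with outcomes can be as simple as joining two exports on the shared anonymous codes. The sketch below assumes a flat activity export from the LRS and a de-identified outcomes extract from the case system; the file names and columns are illustrative.

```python
import pandas as pd

# Illustrative file names and columns: a flat export of xAPI activity
# from the LRS and a de-identified outcomes extract from the case
# system, joined on the shared pseudonymous case code.
activity = pd.read_csv("lrs_activity.csv")   # case_code, tool_id, site, case_type, timestamp
outcomes = pd.read_csv("case_outcomes.csv")  # case_code, settled, no_show

# Flag cases where the confirmation script was used before the session.
confirmed = activity.loc[activity["tool_id"] == "confirmation-script",
                         "case_code"].unique()
outcomes["used_confirmation_script"] = outcomes["case_code"].isin(confirmed)

# Compare no-show rates with and without the script.
summary = (outcomes
           .groupby("used_confirmation_script")["no_show"]
           .mean()
           .rename("no_show_rate"))
print(summary)
```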
- Faster feedback loops: weekly reviews surface the most used tools and the moments where staff still need help
- Clear patterns: consistent use of reminders lines up with fewer no-shows, and steady use of opening checklists lines up with better session flow and more agreements
- Targeted updates: we promote high-impact scripts in the assistant, retire confusing content, and add quick drills for weak spots
We treated governance as a core feature, not an afterthought. Staff learned how the data works and what it does not do. Views for managers are aggregated and focus on coaching, not surveillance. We keep audit logs, set clear retention timelines, and review access regularly.
The setup was straightforward to run in phases:
- Instrument the assistant and modules to send xAPI statements to the Cluelabs LRS
- Map each statement to an anonymous staff code, a case type, and a time stamp
- Link the LRS to the case system with pseudonymized mediator and case codes
- Build dashboards that show learning activity next to outcomes by site and timeframe
- Use the insights to refine scripts, checklists, and microlearning every week
This connection let us move from guesses to evidence. We could finally see which moments of support mattered for settlement and no-show rates, and we could improve the assistant with confidence.
Training Exposure Correlates With Higher Settlement Rates and Lower No-Show Rates
Once the assistants were in daily use and the data began to flow, we could answer the big question: does training show up in results that matter? The short answer is yes. When staff used the job aids and short practice more often, we saw a steady link to higher settlement rates and fewer no-shows across sites and case types.
- Reminders reduce misses: use of the confirmation call and text scripts in the week before sessions lined up with lower no-show rates
- Strong starts matter: opening checklists used early in sessions paired with smoother flow and more agreements
- Quick practice pays off: two- to five-minute scenario drills done before a case correlated with higher settlement in small claims and landlord-tenant matters
- Review prevents rework: the agreement review checklist tied to fewer errors and fewer return visits to fix forms
- Language access runs cleaner: the interpreter workflow checklist matched shorter delays and better completion rates for multilingual cases
We grouped use into simple bands—light, moderate, and high—and compared outcomes by site and case type. The same pattern showed up again and again: more timely use of a few key tools linked to better results. This is correlation, not proof of cause, but the signals were strong and consistent enough to guide action.
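For teams that want to reproduce the banding, a minimal sketch of the comparison follows. The column names and band cutoffs are assumptions; as noted above, the result is an association, not proof of cause.

```python
import pandas as pd

# Illustrative sketch of the usage-band comparison. Assumes one row per
# case with a pseudonymous mediator code, a count of timely tool uses,
# and a settled flag; column names and cutoffs are assumptions.
df = pd.read_csv("cases_with_usage.csv")  # mediator_code, site, case_type, tool_uses, settled

# Group timely tool use into light / moderate / high bands.
df["usage_band"] = pd.cut(df["tool_uses"],
                          bins=[-1, 1, 4, float("inf")],
                          labels=["light", "moderate", "high"])

# Compare settlement rates across bands, broken out by case type.
rates = (df.groupby(["case_type", "usage_band"], observed=True)["settled"]
           .agg(["mean", "count"])
           .rename(columns={"mean": "settlement_rate", "count": "cases"}))
print(rates)  # association, not proof of cause
```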
- We lifted what works: high-impact scripts moved to the top of the assistant and into calendar invites
- We nudged at the right time: day-of prompts encouraged a 60-second prep and the correct opening checklist
- We focused practice: weekly huddles featured one quick drill tied to recent gaps in cases
- We trimmed noise: low-use or unclear content was retired or rewritten in plain language
- We closed loops fast: new insights from the dashboards drove small updates every week
Staff felt the difference. Fewer empty chairs. Smoother starts. Clearer agreements. New hires got up to speed faster, and volunteers had scripts they could trust. Leaders, in turn, could point to a visible line from training activity to outcomes, and choose where to invest next with more confidence.
We Share Lessons Learned for Learning and Development Teams and Operations Leaders
Here are the takeaways we would repeat. They come from helping busy teams in a court setting and will fit many other operations. Keep it simple, keep it close to the work, and show proof early.
- Put help where work happens: place one-tap links to the assistant in the case system, calendar invites, and huddle notes
- Start with three moments that matter: confirmation reminders, a clear opening, and an agreement review checklist
- Keep it tiny and plain: 60- to 90-second prep, two- to five-minute drills, one-page checklists, and plain language
- Use local champions: ask respected staff to demo the assistant in huddles and share one tip per week
- Treat privacy as a promise: do not collect case notes, use anonymous codes, and limit who sees the data
- Measure with the Cluelabs xAPI Learning Record Store (LRS): track a short list of events, link to case data, and review patterns weekly
- Nudge at the right time: send day-before reminders for confirmations and day-of prompts for the opening checklist
- Update often: keep one source of truth for scripts and push small fixes weekly so every site stays in sync
- Tailor by role and case type: mediators, case managers, and intake staff should see only what they need
- Plan for low-tech days: offer printable one-pagers and a simple offline view for key checklists
- Support language access: pair plain English with translated phrases and a clear interpreter workflow
- Coach with data, not surveillance: use team views and trends for support, not for individual scorecards
- Test small changes: try one tweak for a week, compare outcomes, then keep it or roll it back
- Close the loop with staff: share quick wins like fewer no-shows and smoother starts to build trust and momentum
- Build an onboarding path: add the assistant link to welcome emails, first shift checklists, and the first week plan
- Make success easy to share: create templates for scripts and drills so new sites can copy what works
If you are starting now, pick the three moments that matter most, place the links where people work, instrument them with the Cluelabs LRS, and review results every week. Small, steady moves will raise settlement rates, cut no-shows, and make the workday lighter.
Decide If 24/7 Learning Assistants With an xAPI LRS Fit Your Mediation and ADR Operation
The Mediation and ADR center faced tight calendars, varied case types, and uneven coaching across sites. Staff had little time for classes, yet every conversation carried high stakes. The 24/7 Learning Assistant solved for this by putting help right where work happens: one-tap scripts, short drills, and clear checklists on any device. Content was role-based and consistent across locations, so new hires and volunteers had the same trusted guidance as seasoned staff.
Leaders also needed proof that training mattered. With the Cluelabs xAPI Learning Record Store (LRS), every use of a script or checklist sent a small record that was linked to the case system using anonymous codes. Dashboards showed how training activity lined up with outcomes such as settlement and no-show rates. This let the team promote high-impact tools, refine weak spots, and keep coaching focused on what worked, all while protecting privacy.
The result was practical support in the flow of work and a clear line to results. The center raised consistency, reduced misses, and improved session flow without pulling people off the docket. Below are five questions to help you decide if a similar approach will fit your organization.
- Can you name the two or three outcomes this solution should move, and can you measure them now? Identifying outcomes focuses the design on what matters most. If you can track results today (for example, settlement rates or no-show rates), you can link xAPI activity from the assistant to those outcomes and see patterns fast. If you cannot, start by defining proxies or building the basic reporting you need.
- Where can staff reach help in one or two taps during real work? Fast access drives use. Map the moments where people need support and place links in those tools: the case page, calendar invites, video call controls, and intake screens. If you cannot embed access, adoption will lag. Plan for low-tech days with printable one-pagers.
- Which three moments in your workflow most affect those outcomes? A narrow start keeps scope small and impact high. Common starter moments are confirmation reminders, opening a session, and agreement review. If you cannot agree on a short list, run a brief pilot to test which moments move the needle.
- What privacy, security, and governance rules must this solution meet? Trust is essential in court-related work. Decide what you will track and what you will not. Use pseudonymized IDs, role-based access, and aggregated team views for coaching, not surveillance. If these guardrails are hard to meet, limit tracking to a minimal set and document the controls clearly.
- Who will own content and adoption after launch? Without clear owners, content goes stale and use fades. Assign a content lead for each role, set a weekly or biweekly update rhythm, and recruit champions to demo new tips in huddles. Plan a simple measurement cadence using the Cluelabs LRS so teams see progress and keep momentum.
If you can answer these questions with confidence, you are ready to pilot. Start small, embed links where people already work, instrument the assistant with the LRS, and tune it weekly based on what the data and your staff tell you.
Estimate the Cost and Effort to Launch 24/7 Learning Assistants With an xAPI LRS
This estimate outlines the key cost components and a sample Year 1 budget for a typical Mediation and ADR operation implementing 24/7 Learning Assistants connected to the Cluelabs xAPI Learning Record Store (LRS). To make the math concrete, the example assumes 100 frontline users across three sites, a focused content set (about 32 scripts and checklists, 40 microlearning drills), and a pilot-to-scale rollout. Your numbers will vary, but the structure below will help you scope with confidence.
Key cost components explained
- Discovery and planning: Map workflows, define outcomes (for example, settlement and no-show rates), confirm guardrails, and align on scope, roles, and timeline.
- Workflow and learning design: Convert “moments that matter” into one-tap job aids, short drills, and assistant prompts; design role-based views and content standards.
- Content production: Draft and validate scripts, checklists, and microlearning; create print-friendly versions for low-tech days.
- Accessibility and plain-language review: Ensure materials meet accessibility requirements and read in clear, plain language suitable for diverse audiences.
- Translation and language access: Translate key scripts and phrases; align with interpreter workflows.
- Assistant build and configuration: Set up the 24/7 assistant, organize content, tune prompts, configure search, and enable role-based access.
- Technology and integration: Embed links in the case system and calendars, set up SSO where applicable, and stand up hosting and monitoring.
- Data and analytics: Instrument xAPI events, connect to the Cluelabs LRS, pseudonymize IDs, and build near real-time dashboards.
- Security, privacy, and governance: Run privacy impact reviews, document controls, set retention, and establish manager-facing, aggregated views.
- Quality assurance and usability testing: Validate content accuracy, run scenario tests, verify accessibility, and tune microlearning.
- Pilot and iteration: Run an 8-week pilot with champions, gather feedback, and improve content and prompts quickly.
- Deployment and enablement: Deliver short live trainings, create quick-start guides, and place links where staff work.
- Change management and champions: Fund site champions, nudges, and comms to build daily habits.
- Technology subscriptions: Budget for the assistant platform, LLM usage, Cluelabs xAPI LRS, BI seats, and hosting.
- Ongoing content and assistant maintenance (Year 1): Keep scripts current, add drills for new gaps, and handle routine updates.
- Contingency: Reserve 10% for surprises (policy changes, new forms, extra translations).
| Cost Component | Unit Cost/Rate (USD) | Volume/Amount | Calculated Cost (USD) |
|---|---|---|---|
| Discovery and Planning | $150/hour | 120 hours | $18,000 |
| Workflow and Learning Design | $150/hour | 160 hours | $24,000 |
| Content Production (Scripts & Checklists) | $550/item | 32 items | $17,600 |
| Microlearning Production | $800/module | 40 modules | $32,000 |
| Accessibility & Plain-Language Review | $50/page | 100 pages | $5,000 |
| Translation & Language Access | $0.12/word | 15,000 words | $1,800 |
| Assistant Build & Configuration | $140/hour | 120 hours | $16,800 |
| Integration: SSO, Embeds, Links | $140/hour | 80 hours | $11,200 |
| Data & Analytics: xAPI + Dashboards | $160/hour | 140 hours | $22,400 |
| Security, Privacy, Governance Reviews | $160/hour | 80 hours | $12,800 |
| Quality Assurance & Usability Testing | $110/hour | 100 hours | $11,000 |
| Pilot and Iteration | $130/hour | 120 hours | $15,600 |
| Champion Stipends | $500/person | 6 champions | $3,000 |
| Deployment & Enablement Training | $120/hour | 30 hours | $3,600 |
| Rollout Materials and Print | $4/item | 500 items | $2,000 |
| Assistant Platform Licenses (Year 1) | $12/user/month | 100 users × 12 months | $14,400 |
| LLM Usage Budget (Year 1) | $750/month | 12 months | $9,000 |
| Cluelabs xAPI LRS Subscription (Year 1) | $200/month (budgetary) | 12 months | $2,400 |
| BI/Analytics Tool Licenses | $20/user/month | 10 users × 12 months | $2,400 |
| Hosting/Storage/Monitoring | $150/month | 12 months | $1,800 |
| Ongoing Content & Assistant Maintenance (Year 1) | $75/hour | 8 hours/week × 50 weeks | $30,000 |
| Contingency (10% of Subtotal) | N/A | 10% of $256,800 | $25,680 |
| Grand Total (Year 1) | N/A | N/A | $282,480 |
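If you want to adapt these numbers, the whole table reduces to one short calculation: sum your line items, then add the 10% contingency. A minimal sketch using a few of the rows above:

```python
# Minimal sketch for adapting the estimate: plug in your own line items,
# then add the 10% contingency used in the table above.
line_items = {
    "Discovery and Planning":      150 * 120,     # $150/hour x 120 hours
    "Microlearning Production":    800 * 40,      # $800/module x 40 modules
    "Assistant Platform (Year 1)": 12 * 100 * 12, # $12/user/month x 100 users x 12 months
    # ...add the remaining rows from the table...
}

subtotal = sum(line_items.values())
contingency = round(subtotal * 0.10)
grand_total = subtotal + contingency
print(f"Subtotal: ${subtotal:,}  Contingency: ${contingency:,}  Total: ${grand_total:,}")
```

With all 21 line items from the table entered, the subtotal comes to $256,800 and the grand total to $282,480, matching the figures above.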
Effort and timeline at a glance
- Weeks 1–2: Discovery, guardrails, and success metrics.
- Weeks 3–6: Workflow and learning design; draft core scripts and checklists.
- Weeks 5–10: Assistant build, content production, accessibility checks (in parallel).
- Weeks 7–10: xAPI instrumentation, LRS setup, dashboards.
- Weeks 11–12: QA, usability, privacy reviews, go/no-go for pilot.
- Weeks 13–20: Pilot (8 weeks), weekly improvements, champion-led nudges.
- Weeks 21–24: Scale-up rollout, enablement sessions, steady-state support.
What drives cost up or down
- Scope of content: Fewer, higher-impact moments reduce production time and translation volume.
- Integration complexity: Embedding links is fast; deep case-system integration or SSO adds hours.
- Data depth: A small xAPI event set is cheaper to instrument; advanced dashboards add effort.
- Change management: Funding champions and nudges pays off in adoption but requires budget.
- Licensing and usage: Rightsize assistant licenses and LLM usage; pilots often fit into free or lower tiers.
Use this structure to build your own estimate. Start with the three moments that matter, keep content small and useful, instrument with the Cluelabs LRS, and iterate weekly with what the data and your staff show you.