Executive Summary: An IT Service Desk & Managed Services provider in the outsourcing and offshoring industry implemented 24/7 Learning Assistants, paired with the Cluelabs xAPI Learning Record Store, to deliver guidance in the flow of work and unify learning and ticket data. The program enabled precise tracking of first-contact resolution and reopen trends and produced measurable improvements: higher FCR, fewer reopens, and more consistent performance across shifts. This case study details the challenges, solution approach, rollout, dashboards, results, and lessons for leaders considering a similar always-on learning strategy.
Focus Industry: Outsourcing and Offshoring
Business Type: IT Service Desk & Managed Services
Solution Implemented: 24/7 Learning Assistants
Outcome: Track first-contact resolution and reopen trends.
Cost and Effort: A detailed breakdown of costs and efforts is provided in the corresponding section below.
Our Project Capacity: Elearning solutions developer

This Case Study Profiles an IT Service Desk and Managed Services Provider in Outsourcing and Offshoring
This case study looks at a global provider of IT Service Desk and Managed Services in the outsourcing and offshoring space. The teams handle high ticket volume by phone, chat, and email, around the clock. The work is fast, complex, and always changing. Clients expect quick answers and steady quality no matter the time zone or shift.
Two metrics shape daily success: first-contact resolution and ticket reopens. When agents solve issues on the first try, users get back to work and costs stay down. When tickets reopen, time and trust take a hit. Leaders need consistency across sites and shifts, and learning teams need a way to build skill quickly without pulling people off the floor. Several realities made that hard:
- Tools, apps, and security rules change often
- Teams span multiple locations and experience levels
- Night and weekend shifts get less live coaching
- Classroom training fades by the time real work starts
- Data on learning and quality sits in different systems
The heart of the challenge was simple to state and hard to solve: help agents find the right answer in the moment, keep guidance consistent, and learn from every interaction so the next one is better. Traditional courses could not keep up with the pace of change, and supervisors could not be in every call or chat.
To close the gap, the organization introduced 24/7 Learning Assistants for in-the-flow support and used the Cluelabs xAPI Learning Record Store to bring learning and ticket data into one view. This made it possible to see how people used the assistants, track first-contact resolution and reopen trends, and direct coaching to where it would matter most.
In the sections that follow, you will see the challenge in more detail, the strategy that guided the rollout, how the solution worked, the impact on key metrics, and the lessons other teams can use to raise performance in a 24/7 service environment.
We Explain Why First-Contact Resolution and Reopen Trends Matter for 24/7 Support
First-contact resolution means the agent solves the issue during the first touch. No call backs. No extra tickets. Reopen means a closed ticket comes back because the fix did not stick or the user still needs help. In 24/7 support, these two signals show if the customer experience is smooth and if your fixes are solid.
When FCR is high, wait times drop, queues stay healthy, and people get back to work faster. Costs fall because you avoid repeat effort. Agents feel confident because their work lands well. When FCR slips, everything slows. Backlogs grow, handoffs multiply, and the same users call again.
Reopen trends show where things break. They are not about blame. Patterns point to knowledge gaps, confusing steps, tool limits, or new policies that did not reach every shift. They help you see the gaps you cannot spot in a single call.
- Instructions in the knowledge base are outdated or hard to follow
- Key context is missing in triage questions
- Tools changed, so the old fix no longer works
- Handoffs between levels or teams lose details
- Night and weekend teams did not get the latest update
These metrics also guide learning. They tell you which topics to cover first and which teams need more support. They help you test if new guides, job aids, or coaching make a difference.
- Spot spikes early and act before they spread
- Compare sites, shifts, and queues to find bright spots
- Link learning moments to outcomes, not just course completions
- Decide which articles to fix or retire
- Prove which changes improved results
Here is a simple example. VPN tickets reopen more often on weekends. A quick check shows a new multi-factor step that is easy to miss. By adding a short step-by-step guide and a one-minute practice scenario in the assistant, weekend reopen rates drop and FCR rises within days.
To make this work at scale, you need clean, trusted data. Bringing assistant usage and ticket outcomes into one view lets leaders see cause and effect and coach with confidence.
That is why FCR and reopen trends sit at the center of this program. In the next section, we outline the challenge that made this focus essential.
The Team Faced Rapid Change, Uneven Coaching, and Limited Visibility Into Quality
The work environment changed fast. New app versions, new security steps, and client requests landed every week. What worked yesterday could be wrong today. The team could not pause service to teach every update, yet agents still had to give clear, correct answers on every contact.
Coaching varied by shift and site. Day teams had managers and subject experts nearby. Night and weekend teams often had fewer coaches. New hires leaned on peers. Busy moments turned into trial and error. That slowed calls and hurt confidence.
Most training sat outside the flow of work. People took a course once, then handled real tickets months later. When pressure hit, agents searched the knowledge base and pinged a teammate. Some articles were long or outdated. The “right” steps were hard to find at speed.
Visibility into quality was thin. Data lived in separate systems. Reports arrived late. Quality checks covered a small slice of tickets. Reopen flags were used in different ways. First-contact resolution appeared at a high level, not by queue, site, or shift. Leaders could not link what people looked up to how the ticket ended.
- They could not see which topics agents searched for during tough calls
- They could not tell which job aids helped and which ones confused
- They could not spot reopen clusters early by category or shift
- They could not compare first-contact resolution across queues with confidence
- They could not target coaching to the moments that mattered most
The impact showed up in the work. First-contact resolution slipped in certain queues. Reopens crept up after tool changes. Handle time rose for new starters. Customer trust took a hit when users had to call back.
The team needed a simple way to help agents find the right answer during the call and a clear view of outcomes so they could learn from every ticket. They needed consistent coaching for every hour of the day and one place to see the full picture.
We Adopted a Continuous Learning Strategy Anchored in Always-On Assistance and Data
We shifted from one-time classes to continuous learning in the flow of work. Agents received 24/7 Learning Assistants right inside the tools they already use. Every answer came from approved guides, kept short and clear. The goal stayed simple. Help people solve the ticket now and lift first-contact resolution while cutting reopens over time.
- Help in the moment: Two clicks to a step-by-step guide, a short script, or a quick checklist
- Same support on every shift: Nights and weekends had the same playbook as days
- Trusted content: Answers came from vetted sources with a named owner and update date
- Tiny practice: One-minute refreshers and quick scenarios after tough calls to build confidence
- Clear outcomes: We watched first-contact resolution and reopen patterns, not just course completions
- Fast feedback: When something changed, we updated the guide and nudged the right teams
Data tied it all together. We used the Cluelabs xAPI Learning Record Store to bring assistant activity and ticket results into one view. Each time someone opened a topic, used a job aid, or asked the assistant a question, a simple record was sent. We also mirrored key ticket events like FCR status, reopen flag, category, and shift. With one source of truth, leaders built lightweight dashboards to compare before and after, slice by queue or cohort, and trigger small follow-ups through the assistant when reopen patterns appeared.
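For teams sketching a similar pipeline, the record described above maps naturally onto an xAPI statement. The sketch below is illustrative only: the verb and activity IDs, `homePage`, and the context extension URL are placeholder assumptions, not the actual Cluelabs schema.

```python
from datetime import datetime, timezone

def build_statement(agent_hash: str, topic: str, ticket_id: str) -> dict:
    """Build a minimal xAPI-style statement for one assistant interaction.

    All identifiers below (example.org URLs, the verb choice) are
    placeholders; substitute your own registered IDs in practice.
    """
    return {
        "actor": {"account": {"homePage": "https://example.org",
                              "name": agent_hash}},  # hashed agent ID, never a raw name
        "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced",
                 "display": {"en-US": "experienced"}},
        "object": {"id": f"https://example.org/guides/{topic}",
                   "definition": {"name": {"en-US": topic}}},
        # Carry the ticket ID so learning events can be joined to outcomes.
        "context": {"extensions": {
            "https://example.org/ext/ticket-id": ticket_id}},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = build_statement("a1b2c3", "vpn-mfa-reset", "INC-10042")
```

The key design point is the ticket ID in the context: it is the join key that lets a dashboard line up "guide opened" with "ticket reopened."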
We kept the rollout practical. We started with two high-volume queues and built a baseline for FCR and reopens. We created short plays for the top issues, trained teams in a 30-minute session, and set a 24-hour target for guide updates after change requests. Shift champions collected feedback, and supervisors used the same plays for quick huddles.
Here is how a typical call looked with the new setup. An agent opens a VPN ticket. The assistant suggests the right triage questions and a short fix path. The agent follows the steps and uses a short line to explain the change to the user. After close, the assistant asks one quick check question. That single prompt, plus the ticket outcome, goes into the LRS. If the fix fails more often on weekends, the team sees it fast, improves the guide, and sends a nudge to weekend teams. Learning moves with the work, and results improve where it counts.
We Deployed 24/7 Learning Assistants and the Cluelabs xAPI Learning Record Store as the Core Solution
We put two pieces at the center of the solution. Agents got 24/7 Learning Assistants inside the tools they already use. Leaders got the Cluelabs xAPI Learning Record Store as the single place to see learning activity next to ticket outcomes. Together, they helped people fix issues in the moment and let the team track first-contact resolution and reopen trends with confidence.
- What the assistants did: Lived in the ticket and chat screens, offered short guides and checklists, suggested triage questions, and provided simple scripts
- Where answers came from: Only approved knowledge with clear owners and update dates
- How learning stuck: One-minute refreshers after tough calls and quick “what changed” nudges for new steps
- How every shift got support: The same help was available for nights, weekends, and holidays
The data side was just as important. We connected the assistants and the service desk platform to the LRS so we could see what help people used and how the ticket ended.
- Sent a record each time an agent opened a topic, used a job aid, or asked the assistant a question
- Mirrored key ticket fields such as FCR status, reopen flag, category, queue, and shift
- Hashed agent IDs to protect privacy while keeping trends by team and cohort
- Built simple dashboards to compare before and after, spot patterns by queue and shift, and trigger targeted nudges through the assistants
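The hashing step above can be as simple as a keyed hash: stable enough to track trends by team and cohort, but not reversible without the key. This is a minimal sketch under that assumption; the secret value shown is a placeholder for a real secrets store.

```python
import hashlib
import hmac

# Placeholder secret: in practice, load this from a secrets manager.
# Keying the hash keeps it stable for trend analysis while preventing
# anyone without the key from reversing an ID by brute force.
SECRET = b"replace-with-a-real-secret"

def hash_agent_id(agent_id: str) -> str:
    """Return a stable, keyed hash of an agent ID (first 16 hex chars)."""
    digest = hmac.new(SECRET, agent_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]
```

Because the same input always yields the same output, weekly cohort views still line up, yet no personal identifier ever reaches the LRS.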
We kept setup lean and moved fast.
- Week 1: Picked two high-volume queues, set a clean baseline for FCR and reopens, and chose the top 20 issues
- Week 2: Wrote short, step-by-step guides, named content owners, and set a 24-hour target for updates
- Week 3: Turned on xAPI for the assistants, connected to the LRS, and synced ticket outcomes nightly
- Week 4: Ran a two-week pilot, fixed confusing steps, trained all shifts in 30 minutes, and launched with shift champions
We also put in simple guardrails so quality stayed high.
- Showed only vetted content and flagged anything older than 90 days for review
- Added a feedback button on each guide so agents could report gaps in one click
- Tagged guides by category and product to match the routing used in the ticket system
- Posted a “top 10 fixes” quick links bar for the most common issues
Here is how it looked in practice. An agent opens a password reset ticket. The assistant suggests triage questions and a short fix path. The agent follows the steps, closes the ticket, and logs first-contact resolution. The assistant records which guide was used and the time of use. The ticket outcome flows to the LRS. If weekend reopens spike for that issue, the dashboard shows it within a day. The content owner updates the guide and the assistant sends a short nudge to weekend teams. The loop is tight and visible, and coaching goes where it helps most.
We Centralized Learning and Ticketing Signals in the Cluelabs xAPI Learning Record Store to Enable Reliable FCR Tracking
To trust our first-contact resolution and reopen numbers, we brought learning and ticket signals into one place. The Cluelabs xAPI Learning Record Store acted as our single source of truth. It let us see what help agents used and how each ticket ended, side by side.
We kept the data simple and useful.
- From the assistants: Topic opened, job aid used, search term, quick refresher completed, time of use
- From the ticket system: Ticket ID, category, queue, shift, priority, FCR status, reopen flag, close time
- Linking keys: Ticket ID and a hashed agent ID so we could see patterns by team without exposing personal data
- Time stamps: Stored in UTC to align events from different sites and shifts
We set clear rules so FCR and reopens meant the same thing for everyone.
- FCR: The issue was solved in the first contact without a transfer or follow-up from the user
- Reopen: A closed ticket reopened on the same issue within seven days
- Clean data: We excluded duplicates, merged tickets, and test cases from reporting
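The shared rules above can be expressed as a small classifier over ticket records. The field names here (`closed_at`, `reopened_at`, `transferred`, `follow_up`) are assumptions for illustration; map them to your ticketing platform's schema.

```python
from datetime import datetime, timedelta, timezone

REOPEN_WINDOW = timedelta(days=7)  # "same issue within seven days"

def classify(ticket: dict) -> dict:
    """Apply the shared FCR and reopen definitions to one ticket record."""
    # FCR: solved in the first contact, no transfer, no user follow-up.
    fcr = not ticket.get("transferred") and not ticket.get("follow_up")
    # Reopen: a closed ticket came back within the seven-day window.
    reopened = (
        ticket.get("reopened_at") is not None
        and ticket["reopened_at"] - ticket["closed_at"] <= REOPEN_WINDOW
    )
    return {"fcr": fcr, "reopened": reopened}

example = classify({
    "closed_at": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "reopened_at": datetime(2024, 1, 5, tzinfo=timezone.utc),
    "transferred": False,
    "follow_up": False,
})
```

Keeping this logic in one shared function (rather than in each report) is what makes "FCR means the same thing for everyone" enforceable.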
We added light but steady quality checks.
- Daily counts to match ticket volume and assistant usage
- Field completeness checks for category, queue, and shift
- Spot checks on reopened tickets to confirm the reason
- Weekly review of the top search terms and top guides used
Privacy and trust mattered as much as speed.
- We hashed agent IDs and removed personal details from search logs
- We kept only short excerpts from feedback, not full call or chat text
- We set clear retention windows for assistant and ticket data
With the data in the LRS, we built lightweight dashboards that anyone could read.
- FCR and reopen trends by queue, category, site, and shift
- “Content lift” views that showed which guides correlated with higher FCR
- Early alerts when searches spiked for a topic or when reopens rose in a shift
- Pre and post comparisons for new guides and policy changes
The data also triggered action. If reopens crossed a set threshold for a category, the content owner reviewed the guide, pushed a quick update, and the assistant nudged the affected teams. If searches surged for a new tool, we added a one-minute refresher and checked the effect on FCR the next day.
Here is a simple example. Searches for “MFA reset” jumped on weekends, and reopen rates for VPN tickets rose at the same time. The dashboard flagged it. We found a new step that was easy to miss, updated the guide, and sent a short nudge to weekend teams. Within a week, reopens dropped and FCR improved.
By centralizing signals in the LRS, we removed debate about the numbers and focused on fixes. Leaders could see cause and effect, coach with confidence, and prove which changes improved results.
We Rolled Out in the Flow of Work With Shift Coverage and Targeted Coaching
We launched the program in the flow of work so agents did not have to leave the queue. The 24/7 Learning Assistants lived in the same screens agents already used, and help was two clicks away. Short show-and-tell sessions replaced long classes. People learned how to use the assistant on a real ticket, then tried it on the next one.
Shift coverage was a must. Every team, at every hour, got the same guidance and the same quick support. We mirrored day shift routines at night and on weekends so no one was a step behind.
- Five-minute huddles at the start of each shift with a “top three issues today” snapshot
- Shift champions on days, nights, and weekends to answer questions and collect feedback
- Quick cards in the assistant with the top 10 fixes pinned for fast access
- Update windows during low-traffic times so changes did not interrupt busy periods
- One-click feedback on each guide so agents could flag gaps in seconds
- Same playbook across sites so agents moving between queues saw familiar steps
Coaching became targeted and timely. We used the Cluelabs xAPI Learning Record Store to see where first-contact resolution dipped or reopens rose by queue and shift. Supervisors focused on topics, not people, and kept the tone supportive and practical.
- Spot a pattern: The dashboard shows a rise in reopens for VPN tickets on weekends
- Fix the content: The content owner updates a missed step and adds a one-minute refresher
- Nudge the right teams: The assistant sends a short prompt to weekend shifts
- Coach in the moment: Supervisors run a two-question huddle and practice the new triage line
- Check results: FCR and reopens are reviewed the next few days to confirm the change worked
New hires and cross-trained agents got a simple path. Their first tickets came with suggested triage questions and checklists. After tough calls, the assistant offered a short refresher. Leads used the same guides for quick shadow sessions, so coaching stayed consistent.
Here is how it felt on the floor. An agent on the night shift opens a printer ticket. The assistant suggests the right triage, a short fix path, and a clear line to set expectations with the user. The ticket closes on the first contact. The assistant logs which guide helped, and the outcome appears in the LRS. If similar tickets start to reopen later in the week, the team sees it quickly and adjusts.
This rollout kept service running while skills grew. Agents did not have to hunt for answers. Leaders could coach based on live signals. Nights and weekends had the same quality bar as days, and improvements reached every shift at the same time.
Leaders Used Lightweight Dashboards to Compare Baseline and Post-Rollout Performance and Trigger Targeted Nudges
Leaders did not need a complex analytics stack to act. They used lightweight dashboards fed by the Cluelabs xAPI Learning Record Store to compare a clean baseline with post-rollout results. A simple toggle showed before and after. Clear filters let them focus on a queue, a category, a site, or a shift. The goal was to see what changed and respond fast.
Each dashboard answered a few plain questions at a glance.
- Are we improving: FCR trend with a baseline line for quick comparison
- Where are problems: Reopens by category and shift to spot clusters
- What help worked: Tickets with a guide used versus tickets without, with FCR for each
- What people need: Top searches and most opened guides in the assistant
- What to fix next: A short watch list of rising issues and aging content
When a pattern looked off, the dashboards also triggered a small, targeted nudge. Leaders set simple rules, not complex models. If reopens in a category rose above a threshold in a shift, the system alerted the content owner and sent a short message to the affected teams.
- Trigger: Reopens for VPN rise above the set level on weekends
- Action: Content owner reviews the guide and adds a missing step
- Nudge: The assistant pushes a one-minute refresher to weekend shifts
- Follow-up: Supervisors run a two-question huddle the next day
- Check: Leaders compare FCR and reopens for the next three days
Leaders used the same views for daily huddles and weekly content reviews. Morning check-ins showed the top three issues to watch. A weekly sweep closed the loop on every alert. Wins and fixes were shared in one page, with links straight to the updated guides.
Fairness stayed front and center. The dashboards focused on topics and teams, not on ranking individuals. Agent IDs were hashed. The tone stayed supportive. The message was simple. This is about better answers and fewer reopens.
Here is a day in the life example. On Monday, baseline and current lines show a dip in FCR for password resets. Searches for “MFA reset” are up on night shift. A nudge goes out with a new checklist and a short script. By Wednesday, FCR returns to baseline and reopens drop. The change is clear on the chart, so the team moves on to the next item on the watch list.
These lightweight dashboards kept the work focused. Leaders could see cause and effect, act within hours, and prove that a tweak to a guide or a quick nudge made a real difference.
The Approach Lifted First-Contact Resolution and Reduced Reopens Across Queues and Shifts
Results showed up fast. Within weeks of launch, more tickets closed on the first contact and fewer came back. The gains held as we expanded to more queues and sites. Nights and weekends matched day shift performance. Leaders could confirm the change in the Cluelabs xAPI Learning Record Store dashboards by comparing the baseline to the new trend lines.
- First-contact resolution rose: Pilot queues improved first, then other queues followed as guides and nudges scaled
- Reopens fell: Categories with updated guides saw steady drops, with the biggest gains on weekend shifts
- Guides made a clear difference: Tickets where agents used an assistant guide closed on the first contact more often than those without guide use
- Queues ran smoother: Fewer repeat calls eased backlogs and shortened wait times during peaks
- Escalations dropped: Common issues stayed at Level 1 when triage questions and fix steps were clear
- Ramp time improved: New hires reached steady performance faster with checklists and short refreshers
- Knowledge stayed fresh: Aging or confusing articles were flagged, fixed, or retired based on reopen data
The improvement loop was simple. The assistant suggested the right steps, agents applied them, and the outcome flowed to the LRS. If a pattern looked off, the content owner updated the guide and the assistant nudged the right shifts. Leaders checked the dashboards the next day to see the effect. This kept changes small, fast, and visible.
The benefits went beyond the charts. Agents felt more confident because the steps were clear and consistent. Supervisors spent time coaching on real moments instead of chasing reports. Clients saw steadier service and clearer updates. Everyone spoke the same language about results, since FCR and reopen definitions were shared and enforced.
One example stood out. Weekend teams struggled with VPN tickets after a new multi-factor step rolled out. We added a one-minute refresher and a clearer triage line. Reopens dropped and first-contact resolution rose within days. The same fix then helped day shift, which kept the gains across the week.
In short, always-on guidance plus a single source of truth turned learning into daily performance. The program lifted first-contact resolution and cut reopens across queues and shifts, and it did so in a way the team could explain and repeat.
We Share Lessons for Executives and Learning and Development Teams Considering 24/7 Learning Assistants
If you are exploring 24/7 Learning Assistants, here are the takeaways we wish we had on day one. The goal is simple. Help agents fix issues during the first contact and avoid reopens, at any hour. Keep the experience light, keep the data clean, and act on what you learn fast.
- Define success in plain terms: Write a one-line rule for FCR and a one-line rule for reopens. Share them with every site and shift
- Set a clean baseline: Capture four weeks of FCR and reopens by queue and shift before launch
- Start where volume is high: Pick two or three categories with many tickets and clear pain
- Put help in the same screen: Keep it two clicks to a guide, checklist, or short script
- Keep answers short: One screen, clear steps, and the exact words to use with the user
- Give content an owner: List a name and a review date on every guide with a 24 to 48 hour update target
- Name shift champions: Have day, night, and weekend champions to collect feedback and model use
Good data makes good coaching. Use the Cluelabs xAPI Learning Record Store to bring assistant activity and ticket outcomes together.
- Send simple events: Topic opened, job aid used, search term, time of use
- Mirror key ticket fields: Ticket ID, category, queue, shift, FCR status, reopen flag
- Protect privacy: Hash agent IDs and avoid storing personal details
- Build lightweight dashboards: Show pre and post trends with filters for queue, site, and shift
- Focus on topics, not people: Coach to patterns and content, not to leaderboards
Design the assistant for speed in the real world. Think like an agent in a live call.
- Use a simple play format: Triage questions, fix steps, a short line to explain, and when to escalate
- Match the ticket flow: Tag guides by category and product so the right help appears at the right time
- Pin the top fixes: Keep a quick links bar for your top issues
- Trim the noise: Retire long or stale articles and replace them with short, clear guides
- Invite feedback: Add a one-click “This was unclear” button on every guide
Make coaching timely and targeted. Small actions beat big campaigns.
- Use five-minute huddles: Start each shift with a top three issues snapshot
- Trigger nudges, not blasts: When reopens spike in a category, push a one-minute refresher to the affected shift only
- Practice in the moment: After a tough call, offer a quick scenario or checklist check
- Close the loop fast: Update the guide within a day and check the effect on FCR the next two days
Win trust early and keep it.
- Frame it as help: Show how the assistant saves time and reduces repeat calls
- Be clear about data: Explain what you track and why, and what you do not track
- Share quick wins: Post before and after charts for one issue and thank the team that drove the change
- Keep friction low: No extra logins and no long forms to send feedback
Link the work to outcomes that matter to leaders.
- Connect to operations: Show the effect on backlog, wait times, and escalations
- Connect to customer experience: Pair FCR gains with a sample of survey comments
- Connect to cost: Estimate hours saved from fewer reopens and fewer handoffs
- Set a simple scorecard: FCR, reopens, top searched topics, top guides used, aging content
Watch for common pitfalls and avoid them.
- Do not scale too fast: Prove the loop on a few queues before a wide launch
- Do not drown in data: Keep the dashboard to a handful of views that drive action
- Do not let content age: Auto-flag guides older than 90 days for review
- Do not overload people with nudges: Cap how many prompts a person gets in a day
- Do not leave definitions loose: Keep FCR and reopen rules tight and shared
Here is a simple starter plan you can copy.
- Pick two high-volume queues and capture a four-week baseline
- Write ten short guides for the top issues and name an owner for each
- Embed the 24/7 Learning Assistant in the ticket screen and enforce the two-click rule
- Turn on xAPI events and connect the assistant and ticket system to the LRS
- Launch with five-minute shift huddles and a quick links bar for the top fixes
- Set three alerts for reopen spikes and send short nudges to the right shifts
- Review FCR and reopens after one week, update the guides, and expand to the next queue
The playbook is straightforward. Put help where agents work. Keep guidance short. Bring learning and ticket data into one view. Coach to patterns and show the effect the next day. With 24/7 Learning Assistants and the Cluelabs xAPI Learning Record Store, you can scale consistent support, reduce reopens, and raise first-contact resolution in a way that teams can feel and leaders can prove.
Is This Approach a Good Fit? A Guided Conversation for Decision-Makers
In a global IT Service Desk and Managed Services setting within outsourcing and offshoring, work never stops. Changes arrive often, coaching varies by shift, and quality data can lag. The solution in this case put 24/7 Learning Assistants inside the ticket screen so agents had short, approved steps during live work. It paired that with the Cluelabs xAPI Learning Record Store to bring assistant use and ticket outcomes into one view. Leaders tracked first-contact resolution and reopen trends in near real time. When a pattern looked off, they fixed the guide and nudged the right shift. This gave every agent the same help at any hour and turned learning into daily performance.
Use the questions below to guide a fit conversation for your organization.
- Do your teams run high-volume, multi-shift support where quality varies by time of day?
Why it matters: The biggest gains come when you need consistent guidance across shifts and the work repeats. Always-on help smooths the dips at night and on weekends.
What it uncovers: Where to pilot and the likely lift. If your operation is small or single shift, a focused knowledge base refresh may deliver enough value.
- Can you define and access reliable FCR and reopen data today?
Why it matters: Clear, shared definitions and accessible data let you prove value and guide fixes with confidence.
What it uncovers: Readiness to connect the ticket system to the LRS and set alerts. If definitions differ by team or data is messy, plan a short standards sprint before rollout.
- Is your knowledge base ready for two-click answers with clear owners and fast updates?
Why it matters: The assistant is only as good as the content. Short, current guides drive first-contact resolution and cut reopens.
What it uncovers: Content gaps and ownership. If articles are long or stale, set owners, trim to one screen, add review dates, and aim for a 24 to 48 hour update window.
- Will your culture support in-the-flow coaching and small, targeted nudges?
Why it matters: Adoption makes or breaks results. Agents and leaders need to view the assistant as help, not surveillance.
What it uncovers: The need for shift champions, five-minute huddles, and clear privacy rules such as hashed IDs and no individual scorecards.
- Do you have a simple technical path to embed the assistant and send xAPI events securely?
Why it matters: Frictionless access and safe data flow keep agents engaged and protect trust.
What it uncovers: Integration steps like SSO, browser compatibility, network rules, data retention, and client-by-client segmentation if you serve multiple customers.
If you answered yes to most questions, start small. Pick two queues, capture a four-week baseline, embed the assistant in the ticket screen, connect outcomes to the LRS, and review results within days. If you answered no on several items, begin with the groundwork. Tighten FCR and reopen definitions, clean up the top guides, name owners, and set a simple plan for shift huddles and champions. Either path moves you toward faster fixes, fewer reopens, and a shared view of quality.
Estimating Cost and Effort for a 24/7 Learning Assistant and LRS Rollout
The estimates below reflect a typical 90-day pilot for two high-volume queues with about 150 agents in an IT Service Desk and Managed Services environment. Your numbers will vary based on seat count, the number of guides you build, integration complexity with your ticketing platform, and compliance needs. Rates assume a mix of internal and vendor resources. Treat this as a planning baseline and adjust to your context.
- Discovery and planning: Align stakeholders, define success (FCR and reopens), confirm scope, and set a pilot plan with timelines and roles.
- Solution design: Map the in-flow experience, enforce a two-click rule to answers, define the play format (triage, steps, script, escalation), and choose the pilot queues.
- Content production and curation: Create short, approved guides and one-minute refreshers for the top issues. Assign owners and review dates so updates are fast.
- Technology and integration: Embed the 24/7 Learning Assistants in the ticket screen, enable SSO, and wire xAPI events. Connect ticket outcomes to the LRS.
- Data and analytics setup: Stand up the Cluelabs xAPI Learning Record Store, define data fields, and build lightweight dashboards to compare baseline and post-rollout.
- Licensing and subscriptions: Budget for assistant seat licenses and the LRS plan that fits your event volume.
- Quality assurance and compliance: Test guides and dashboards, run security and privacy reviews, and complete UAT with supervisors.
- Pilot run operations support: Provide light admin, weekly content tune-ups, and quick reporting during the 90-day pilot.
- Deployment and enablement: Deliver short show-and-tell sessions, cover all shifts, and account for minimal time off queue.
- Change management and champions: Prepare communications, fund shift champions, and keep feedback loops open.
- Contingency: Hold a buffer for surprises like an extra integration step or an urgent content sprint.
Sample cost model for a 90-day pilot with 150 agents:
| Cost Component | Unit Cost/Rate (USD) | Volume/Amount | Calculated Cost |
|---|---|---|---|
| Discovery and Planning (PM + L&D, blended) | $115/hour | 30 hours | $3,450 |
| Solution Design (assistant experience and play format) | $90/hour | 30 hours | $2,700 |
| Content Production and Curation (20 guides + 20 refreshers + reviews) | $85/hour | 100 hours | $8,500 |
| Technology and Integration (embed assistant, SSO, xAPI wiring, ticket feed) | $140/hour | 90 hours | $12,600 |
| Data and Analytics Setup (LRS config, data model, dashboards) | $110/hour | 40 hours | $4,400 |
| Licensing — 24/7 Learning Assistant Seats | $10/user/month | 150 users × 3 months | $4,500 |
| Licensing — Cluelabs xAPI Learning Record Store | $200/month | 3 months | $600 |
| Quality Assurance Testing | $75/hour | 40 hours | $3,000 |
| Privacy and Security Review | $130/hour | 16 hours | $2,080 |
| User Acceptance Testing (Supervisors) | $40/hour | 20 hours | $800 |
| Pilot Ops Support — Platform Admin | $80/hour | 24 hours | $1,920 |
| Pilot Ops Support — Weekly Content Updates | $80/hour | 72 hours | $5,760 |
| Pilot Ops Support — Weekly Reporting | $110/hour | 12 hours | $1,320 |
| Deployment and Enablement — Agent Backfill | $30/hour | 75 hours | $2,250 |
| Deployment and Enablement — Facilitator/Trainer Time | $90/hour | 10 hours | $900 |
| Deployment and Enablement — Supervisor Orientation | $40/hour | 10 hours | $400 |
| Change Management — Communications Plan | $120/hour | 10 hours | $1,200 |
| Change Management — Shift Champion Stipends | $200/champion | 6 champions | $1,200 |
| Change Management — Champion Check-ins | $40/hour | 24 hours | $960 |
| Contingency (10% of subtotal) | — | — | $5,854 |
| Estimated 90-Day Pilot Total | — | — | $64,394 |
How this scales: one-time items like design and integration change little as you add agents or queues. Recurring items (licenses, agent enablement time, weekly content updates) grow with seat count and scope. A typical expansion step after a successful pilot is to add 200–300 seats and 10–15 more guides. Multiply the relevant rows to forecast your phase-two budget.
Effort at a glance: expect two to four weeks for planning and design, two to three weeks for integration and data setup, one week for QA and UAT, and a 90-day pilot with light weekly upkeep. Most teams run this with a part-time program manager, one instructional designer, shared SMEs, one engineer for setup, and a data analyst for the initial dashboards.