Executive Summary: This case study profiles a utility-scale wind and solar operator that implemented short, scenario-based Problem-Solving Activities to better prepare crews for real decisions in the field. By linking learning data to operations through the Cluelabs xAPI Learning Record Store, the organization correlated training proficiency with turbine availability and schedule clarity, enabling targeted refreshers and coaching. The result is a practical, scalable approach that improved uptime, reduced reschedules, and proved the business value of the solution.
Focus Industry: Oil and Energy
Business Type: Renewables (Wind/Solar/O&M)
Solution Implemented: Problem‑Solving Activities
Outcome: Correlated training proficiency with turbine availability and schedule clarity.
Cost and Effort: A detailed breakdown of costs and efforts is provided in the corresponding section below.
Our Project Role: Custom elearning solutions company

A Utility-Scale Wind and Solar Operator Reimagined Learning to Meet Rising Stakes
Across a fast‑growing fleet of wind and solar sites, a utility‑scale operator saw that the way people learn shapes what happens on the towers, in the control room, and in day‑to‑day O&M work. Weather changes fast, sites are spread out, and safety is non‑negotiable. One missed step or a delayed decision can ripple through the schedule and cut into energy delivered. The stakes were clear. They needed people who could solve problems on the spot and a way to show that training made a real difference.
Traditional training could not keep up. Long slide decks and one‑off classes did not match the pace of the field. New platforms arrived, parts were tight, and teams shifted between roles. Both new hires and seasoned techs needed more practice with real situations, not more pages to read.
The team reimagined learning around short, realistic Problem‑Solving Activities. These scenarios mirrored daily work like fault triage, permit checks, work planning, and shift handovers. People practiced decisions, weighed tradeoffs, and saw how choices affected safety, time, and production. The goal was simple: build confident problem solvers who could keep assets available and keep plans on track.
They also set out to measure impact. The team used the Cluelabs xAPI Learning Record Store to capture how learners moved through scenarios and how long they took. They linked that training data with work orders and control system signals. This let them connect proficiency in practice to outcomes in the field, including turbine availability and the clarity of daily schedules.
This case study shares why the change was needed, how the solution came together, and what it delivered. It offers a practical look at retooling learning for renewables so teams can respond faster, plan better, and keep power flowing.
We Faced Operational and Scheduling Challenges That Blocked Performance
Running a large wind and solar fleet is a daily balancing act. Weather windows open and close, alarms pop up at odd hours, and crews work across wide distances. We saw smart, experienced people doing their best while performance slipped because plans changed too often and decisions took too long. The result was lost availability and a schedule that did not hold together.
- Plans changed midstream. A permit delay, a late part, or a new fault would push jobs to tomorrow. Techs bounced between tasks, which created more handoffs and lost time.
- Weather narrowed the workday. High winds, lightning, and heat limits forced last‑minute cancellations. Crews had to replan on the fly, sometimes more than once a day.
- Alarms were noisy and hard to triage. People did not always agree on the first steps to take. Some faults got a truck roll when a remote action would do, and others waited too long.
- Shift handovers lacked clarity. Notes were inconsistent, so the next crew repeated checks or missed a key step. Small gaps added up to hours of delay over a week.
- Parts and access slowed repairs. The right spares were not always on site, and long drives cut into work time. A missing tool could stall a whole plan.
- New platforms arrived fast. Procedures changed and not everyone learned them at the same pace. Teams mixed old habits with new steps and results varied by site.
- Contractors and cross‑site teams worked differently. Methods and quality were not consistent, which made planning and estimates less reliable.
- Training and operations did not connect. We could not tell which skills from training actually improved availability or kept the daily plan on track.
These issues fed each other. A single missed decision early in the day could ripple into three reschedules by evening. We needed people to solve problems faster and in the same way across sites, and we needed a clear line of sight from learning to outcomes like uptime and plan adherence.
Our Strategy Centered on Problem-Solving Activities and Measurable Impact
Our plan had two parts. First, give crews realistic practice with the same choices they face on site. Second, prove that practice moves the numbers that matter. We built everything around short, focused Problem‑Solving Activities and a clear way to track results.
- We picked the right moments. Fault triage, permit checks, shift handover, weather calls, remote reset versus truck roll, and parts picks.
- We kept it short. Most scenarios took five to ten minutes and worked on phones or tablets during pre‑shift or standby.
- We showed consequences. Every decision changed safety risk, time, and production so people could see tradeoffs right away.
- We built team versions. Crews practiced handovers and day plans so everyone learned a common approach.
- We wrote simple playbooks. Each scenario linked to a clear checklist that matched site standards.
- We coached in the moment. Hints and feedback nudged better decisions without giving away the answer.
We also made measurement part of day one. We tracked choices, time to resolution, and common mistakes in each activity with xAPI and sent the data to the Cluelabs xAPI Learning Record Store. We joined that training data with work orders and control system signals. This let us link practice to uptime and to a schedule that holds.
- We captured what people did. Decision paths, time spent, and error patterns for each scenario.
- We matched training to operations. Work‑order cycle time, deferrals, downtime, and availability sat next to learning data.
- We built simple views. Dashboards showed trends by site, platform, and role so managers could coach where it counted.
- We updated fast. We used the data to fix confusing steps, add new cases, and target refreshers.
Rollout was practical and paced. We started with two sites and a small set of high‑impact scenarios. We trained local leads, scheduled short practice into huddles, and offered offline access. We kept safety front and center. As the pilot worked, we scaled to more platforms and crews.
The strategy made learning active and measurable. Crews gained confidence. Managers saw where to help. The business saw a clearer line from training to turbine availability and to plans that stayed on track.
We Built Scenario-Based Problem-Solving Activities for Safety-Critical Work
We built the activities to mirror real moments that carry safety and production risk. Each one is short, clear, and safe to practice. People make choices, see what happens, and learn how to act fast without cutting corners. The tone is practical. It feels like a shift on site, not a classroom lecture.
Field leaders and technicians helped write every scenario. We picked the highest‑value moments and kept the language simple. The structure is always the same so it is easy to use during a huddle or on standby.
- Start with the scene. The activity sets the goal, site conditions, and any limits like weather or access time.
- Make a decision. Learners choose from a few clear actions and move forward based on the path they pick.
- Follow the safety gates. Checklists for permits, energy isolation, and rescue plans must be completed before work continues.
- See the impact. Each choice updates safety risk, time to finish, and expected energy delivered.
- Close the loop. A short debrief highlights what went well and what to try next time.
We covered common wind and solar situations and made sure each one reinforced safe habits.
- High winds and storm cells. Decide whether to pause, switch tasks, or stand down when a weather window narrows.
- Fault triage. Choose the first steps on alarms and decide when a remote reset is enough and when to send a crew.
- Energy isolation. Practice the lockout and verification steps before entering equipment, with prompts to catch easy‑to‑miss checks.
- Work planning. Build a day plan that balances permits, parts on hand, travel time, and safety rules.
- Shift handover. Write clear notes, flag deferrals, and set up the next crew to start fast and safe.
- Parts and tools. Pick spares and tools based on the fault and site conditions to avoid preventable delays.
- Contractor coordination. Align methods and quality checks when mixed crews work together.
We offered solo and team modes. In solo mode, hints nudged better choices without giving the answer. In team mode, crews talked through options and aligned on a common playbook. New hires saw a guided path. Experienced techs saw tougher versions with fewer prompts and tighter time limits.
Every scenario included the exact checklists used on site. If a learner skipped a safety step, the simulation stopped and explained why, then let them try again the right way. The debrief asked a few short reflection questions and linked to a one‑page refresher for later use.
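To make that gate behavior concrete, here is a minimal sketch of how a hard safety gate can be modeled inside an activity engine. It is an illustration under assumed names, not the authoring tool we used: the step names, checklist items, and the `advance` helper are all hypothetical.

```python
# A minimal sketch of a hard safety gate: if a required check is skipped,
# the scenario stops, explains why, and lets the learner try again.
# Step names, checklist items, and this helper are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ScenarioStep:
    name: str
    required_checks: list[str] = field(default_factory=list)  # permits, isolation, rescue plan

def advance(step: ScenarioStep, confirmed_checks: set[str]) -> tuple[bool, str]:
    """Return (may_continue, feedback) for one decision point."""
    missing = [c for c in step.required_checks if c not in confirmed_checks]
    if missing:
        # Hard stop: explain the gap, then offer a retry the right way.
        return False, f"Stopped at '{step.name}': confirm {', '.join(missing)} before work continues."
    return True, f"'{step.name}' complete. All safety gates confirmed."

# Example: a learner tries to enter the nacelle without verifying isolation.
step = ScenarioStep("Enter nacelle", required_checks=["permit", "lockout_verified", "rescue_plan"])
ok, feedback = advance(step, confirmed_checks={"permit", "rescue_plan"})
print(ok, feedback)  # False  Stopped at 'Enter nacelle': confirm lockout_verified ...
```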
Accessibility mattered. All activities worked on phones and tablets, online or offline. Most took five to ten minutes so they fit into pre‑shift meetings and travel time. This kept practice frequent, realistic, and connected to the work that keeps people safe and assets available.
We Used the Cluelabs xAPI Learning Record Store to Connect Training to Operations
To prove that practice changed what happened on site, we needed a clear link from learning to operations. We used the Cluelabs xAPI Learning Record Store as our hub for that link. Think of it as a single place where every training action lands and can be compared with real field results.
We instrumented each Problem‑Solving Activity so it sent small data messages when a learner made a choice or finished a step. These messages said what they did and how long it took. That gave us a clean, play‑by‑play view of how people handled a scenario.
- Decision paths. Which options people chose and in what order
- Time to resolution. How long it took to reach a safe, effective outcome
- Safety gates. Which permit and isolation checks they confirmed
- Errors and retries. Where mistakes happened and where hints helped
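For readers who want to see what those data messages look like, here is a minimal sketch of a single xAPI statement for a triage decision, posted to an LRS statements endpoint with Python's requests library. The endpoint URL, credentials, activity IDs, and extension IRIs are placeholders, not the actual Cluelabs configuration.

```python
# A minimal sketch: one xAPI statement for a triage decision, sent to an LRS.
# Endpoint, credentials, activity IDs, and extension IRIs are placeholders.
import requests

LRS_ENDPOINT = "https://YOUR-LRS-HOST/xapi"   # placeholder endpoint
LRS_AUTH = ("lrs_key", "lrs_secret")          # placeholder basic-auth credentials

statement = {
    "actor": {"name": "Field Technician", "mbox": "mailto:tech@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
             "display": {"en-US": "answered"}},
    "object": {"id": "https://example.com/activities/fault-triage/step-2",
               "definition": {"name": {"en-US": "Fault triage: first response to alarm 4012"}}},
    "result": {
        "success": True,            # the choice reached a safe, effective outcome
        "duration": "PT47S",        # ISO 8601 duration: time spent on the decision
        "extensions": {             # hypothetical extension IRIs for hints and path
            "https://example.com/xapi/ext/hints-used": 1,
            "https://example.com/xapi/ext/decision-path": "remote-reset>verify-clear",
        },
    },
}

response = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
    timeout=10,
)
response.raise_for_status()
```

Each completed choice or step sends one statement like this, which is what makes the play-by-play view possible.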
Training data alone was not enough. We joined it with the data that runs the business. We connected the LRS to our work management and to control system signals so we could see the full picture.
- Work orders. Cycle time, deferrals, and close rates
- Scheduling. Reschedule rates and plan adherence
- Operations. Downtime reasons and asset availability by turbine and site
With all of this in one place, we built simple dashboards for site leaders, supervisors, and coaches. Views grouped results by site, turbine platform, role, and shift so patterns stood out fast and helped answer questions like these:
- Do faster triage decisions in practice show up as higher availability on that platform
- Which safety steps cause repeat delays and need a refresher
- Where are handover notes weak and driving next‑day rework
- Which scenarios predict schedule changes later in the week
We turned insights into action.
- Targeted refreshers. Short tune‑ups for teams that struggled with a specific step
- Scenario updates. New branches and clearer prompts where people got stuck
- Coaching focus. Pre‑shift huddles that practiced the exact choices hurting plan stability
- Planning inputs. Schedulers saw which crews were ready for complex jobs
We kept the process simple and fair. We set clear definitions, checked data quality weekly, and shared results with teams so they could improve, not feel judged. The goal was safer, faster decisions and a plan that held.
The payoff was clarity. By pairing the LRS with work and operations data, we could correlate training proficiency with turbine availability and with a schedule that changed less. That gave leaders confidence to scale what worked and fix what did not.
We Unified Training, CMMS, and SCADA Data to Track Decisions and Outcomes
We pulled three streams into one view so we could see choices and results together. Training clicks and decisions came from the Cluelabs xAPI Learning Record Store. Work orders and schedules came from the CMMS, which is the maintenance and planning system. Alarms and runtime came from SCADA, which is the control system that reports how assets run in real time.
We matched records with simple, clear rules. We used asset IDs, work‑order numbers, and consistent names for sites and platforms. We aligned timestamps to one time zone and used short windows to connect practice to the next set of real jobs. Then we looked at groups by site, platform, role, and shift so patterns stood out.
- Training data. Scenario ID, decisions taken, time to resolution, safety steps confirmed, and retries
- CMMS data. Work‑order type, cycle time, deferrals, reschedules, close rates, and crew
- SCADA data. Alarm codes, start and clear time, downtime minutes, availability, and energy produced
We kept the data clean and fair. We set one naming standard, checked data quality each week, and shared what the numbers meant before using them. The goal was coaching and better plans, not blame.
- We used one asset list across training, CMMS, and SCADA
- We stored timestamps in UTC to avoid shift and daylight mistakes
- We linked training on a fault family to the next 30 days of jobs on that family
- We set a baseline for each site and platform to compare before and after practice
- We reviewed outliers with site leads to confirm what really changed
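As an illustration of those rules, here is a minimal sketch of the join in pandas. The file names, column names, and the crew-level linking key are assumptions made for the example, not our production schema.

```python
# A minimal sketch of the matching rules above, using pandas.
# File names and column names are illustrative, not the production schema.
import pandas as pd

# Training extract from the LRS: one row per completed scenario attempt.
training = pd.read_csv("lrs_extract.csv", parse_dates=["completed_at_utc"])
# CMMS extract: one row per work order, with the asset it was raised against.
work_orders = pd.read_csv("cmms_work_orders.csv", parse_dates=["created_at_utc"])
# One shared asset list across systems: asset_id, site, platform.
assets = pd.read_csv("asset_master.csv")

# Link a crew's practice on a fault family to that crew's jobs on the same
# family over the next 30 days (all timestamps already stored in UTC).
linked = training.merge(work_orders, on=["crew_id", "fault_family"])
in_window = (linked["created_at_utc"] > linked["completed_at_utc"]) & (
    linked["created_at_utc"] <= linked["completed_at_utc"] + pd.Timedelta(days=30)
)
linked = linked.loc[in_window].merge(assets, on="asset_id")

# Group by site and platform so patterns stand out against the baseline.
summary = linked.groupby(["site", "platform"]).agg(
    median_practice_time_s=("time_to_resolution_s", "median"),
    mean_wo_cycle_time_h=("cycle_time_h", "mean"),
    reschedule_rate=("was_rescheduled", "mean"),
)
print(summary)
```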
With the unified view, leaders could ask simple, high‑value questions and get clear answers.
- Do faster triage choices in practice lead to fewer truck rolls and fewer deferrals
- Which safety steps, when missed in practice, show up as repeat delays on jobs
- Where do weak handover notes drive next‑day rework and reschedules
- Which crews are ready for complex jobs based on recent practice and field results
Early signals turned into real wins.
- Sites that practiced remote reset triage saw fewer unnecessary truck rolls and faster closeout on nuisance alarms
- Teams that drilled shift handovers cut next‑day rework and reduced reschedule rates
- Better parts picks in scenarios led to fewer mid‑job pauses and a rise in plan adherence
The data did not sit on a shelf. It drove action.
- Targeted refreshers. Short practice for the exact steps that hurt uptime or plans
- Scenario updates. New branches for frequent faults and clearer prompts where people got stuck
- Scheduling inputs. Planners saw which crews handled certain faults well and assigned work with confidence
By unifying training, CMMS, and SCADA, we turned scattered signals into one picture. We could track decisions, see outcomes, and make changes that improved availability and kept the schedule steady.
Training Proficiency Correlated to Turbine Availability and Schedule Clarity
When we lined up training and field data, the pattern was clear. As crews got better at the scenarios, turbines stayed available longer and the daily plan held together. We saw fewer last‑minute changes and more jobs finished as planned. The link between practice and performance was not a guess. It showed up across sites and platforms.
We defined training proficiency in simple terms. People who finished scenarios with safe choices, used fewer hints, and reached a sound outcome in steady time counted as proficient. We looked at teams by cohort and tracked how their results moved after steady practice.
- Higher proficiency lined up with higher turbine availability on the same platform
- Teams with strong handover and planning practice had lower reschedule rates
- Faster triage in scenarios mirrored faster work‑order cycle time in the field
- Better remote reset choices led to fewer unnecessary truck rolls
- Smarter parts picks in practice raised first‑time fix rates and reduced deferrals
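The sketch below shows one way to express that definition and check the relationship, assuming a monthly crew-level extract. The column names and thresholds (for example, one hint or fewer, ten minutes to resolution) are illustrative, not the exact cutoffs we used.

```python
# A minimal sketch: score proficiency per crew-month and check whether it
# moves with availability. Thresholds and column names are illustrative.
import pandas as pd

practice = pd.read_csv("scenario_results.csv")        # crew_id, month, safe_choices, total_choices, hints_used, time_to_resolution_s
availability = pd.read_csv("scada_availability.csv")  # crew_id, month, availability_pct

# "Proficient" attempt: safe choices, few hints, steady time to a sound outcome.
practice["proficient"] = (
    (practice["safe_choices"] == practice["total_choices"])
    & (practice["hints_used"] <= 1)
    & (practice["time_to_resolution_s"] <= 600)
)

proficiency = (
    practice.groupby(["crew_id", "month"])["proficient"].mean().rename("proficiency_rate")
)
joined = availability.merge(proficiency.reset_index(), on=["crew_id", "month"])

# Simple check: does higher proficiency line up with higher availability?
print(joined["proficiency_rate"].corr(joined["availability_pct"]))
```

A single correlation is only a signal; we also compared like-for-like weather periods and pre-rollout baselines before acting on it.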
Site stories matched the data. At a coastal wind site, crews drilled weather calls and remote triage before storm season. They sequenced work earlier in the day and handled nuisance alarms without rolling a truck. Availability held through a rough month and the plan moved less. At a large solar site, brief handover practice tightened notes and flagged deferrals with clear next steps. The next day’s crews started faster and hit more of the plan.
We know many factors affect uptime and schedules. To keep it fair, we compared periods with similar weather, adjusted for planned outages, and set a baseline before the rollout. We looked for the same pattern across different sites, platforms, and roles. The trend held.
The outcome was confidence to act. Leaders used the insights to assign complex jobs to ready crews. Coaches focused refreshers on the exact steps that slowed work. Designers updated scenarios to match frequent faults. Most important, we could now show that better problem solving in training correlated with higher availability and with schedules that changed less.
We Learned How to Scale and Sustain Problem-Solving in Renewables
Scaling this approach across a wind and solar fleet took practical choices and steady habits. The big lesson is simple. Keep problem solving close to real work, make it easy to practice often, and use clear data to keep improving. Here is what made the change stick.
- Start small and prove value. Pick a few high‑impact scenarios and two sites. Run short cycles, measure results, and fix what slows people down. When crews and leaders see availability improve and plans hold, expansion is easy to approve.
- Co‑design with the field. Write scenarios with technicians, planners, and site leads. Use the same words they use. Link every choice to the real checklist. A five‑minute review with two field leaders stops confusion before it spreads.
- Fit practice into the day. Keep activities to five to ten minutes. Make them work on phones with offline access. Use pre‑shift huddles, travel time, and short standby windows. Frequent, small practice beats long classes every time.
- Make safety a hard gate. Stop the scenario if a permit, lockout, or rescue check is missed. Explain why, then let the learner try again. Repetition builds safe habits without risk.
- Instrument from day one. Send decisions, time to resolution, and retries to the Cluelabs xAPI Learning Record Store. Use one set of asset IDs and one time zone. Join training data with CMMS and SCADA so you can see practice and outcomes together.
- Turn data into action fast. Review dashboards weekly. If teams struggle with triage steps, schedule a short refresher. If handover notes look weak, add a new scenario and coach during the next huddle. Small fixes add up.
- Keep it fair and transparent. Share the metrics and what they mean. Compare like with like and set baselines. Focus on coaching, not blame. Celebrate wins and visible improvements.
- Use templates to scale. Build a standard scenario shell with the same sections, prompts, and debrief. Keep a checklist library by platform. Version everything so sites always have the latest safe method.
- Prepare leaders to coach. Show supervisors how to read the dashboards and ask simple questions. Give them a 15‑minute weekly script with one focus area, one refresher, and one shout‑out.
- Update content like a product. Schedule monthly reviews. Add new branches for common faults, retire old steps, and adjust for seasonal weather. Treat scenarios as living tools, not one‑time courses.
- Support contractors and new hires. Offer a short, role‑based path before dispatch. Include site rules, handover quality, and common fault families. Use the same playbook so mixed crews work the same way.
- Plan for real‑world constraints. Expect spotty connectivity. Design for low bandwidth and glove use. Enable night mode for dark environments. Keep print‑ready one‑pagers in trucks and containers.
- Keep motivation simple. Use light nudges, progress streaks, and peer shout‑outs. Tie completion of key scenarios to readiness for complex jobs. Make progress visible and meaningful.
- Resource what matters. Budget time for two to three subject‑matter experts per platform, a learning designer, and a data analyst for the first three months. Protect a small weekly window for field input and data review.
- Mind privacy and compliance. Limit personal data, control access, and log changes. State what the data will be used for, and keep to it. Good governance keeps trust high.
- Know what to stop. Retire scenarios that do not drive field results. Drop metrics that people cannot influence. Focus on the few measures that guide better decisions.
These habits made the program durable. New sites came online with a ready pack of scenarios and checklists. Crews kept skills fresh with short, frequent practice. Leaders used clear, shared data to guide coaching and planning. Most important, the link from better problem solving to higher availability and steadier schedules stayed visible, which kept support strong as we grew.
For teams across renewables, this is a repeatable path. Start with the decisions that matter most, build short practice that fits the work, connect training to operations with clean data, and keep improving in small steps. Do that, and problem solving becomes part of how the fleet runs every day.
Is This Problem-Solving and Data-Linked Learning Approach Right for You
The solution worked because it tackled the real pain points of a utility-scale wind and solar operator. Field work moved fast, sites were spread out, and safety was always first. Plans slipped when alarms were noisy, handovers lacked clarity, and crews made different calls on the same faults. We built short, scenario-based Problem-Solving Activities that mirrored daily choices like triage, permits, parts picks, and weather calls. We captured every choice with xAPI and sent it to the Cluelabs xAPI Learning Record Store, then joined it with CMMS and SCADA data. Leaders finally saw a clear line from practice to uptime and to a schedule that changed less. That insight let teams target refreshers, update scenarios, and coach in the moments that mattered.
- Do your biggest losses come from everyday decisions that people can practice in short scenarios
Why it matters: This approach boosts judgment and speed at common forks in the road, not rare events. If your pain lives in triage, handovers, permits, and parts choices, scenario practice will pay off.
What it reveals: A clear list of five to ten recurring decisions becomes your starting backlog. If issues are mostly one-off or equipment-only, focus first on process or tooling changes.
- Can you connect training data to operations data with basic shared IDs and time rules
Why it matters: The win comes from proving impact. Without links between the LRS, CMMS, and SCADA or their equivalents, you cannot show real change in availability or schedule clarity.
What it reveals: The level of effort to align asset IDs, work-order numbers, site names, and timestamps. If this gap is large, plan a small integration sprint before scaling content.
- Will field leaders and technicians co-design and make time for five to ten minutes of practice in the flow of work
Why it matters: Adoption lives or dies with the people who do the work. Co-design keeps language real and checklists accurate. Short sessions during huddles or travel keep practice frequent.
What it reveals: Whether you have SME time, supervisor support, mobile access, and a simple place in the day for practice. If not, start with a pilot site to prove fit and build momentum.
- Which outcomes will you hold yourselves to, and how will you baseline them
Why it matters: Clear goals focus everyone. Pick a few metrics tied to value, such as turbine availability, reschedule rate, work-order cycle time, and first-time fix rate.
What it reveals: Your readiness to compare like with like, adjust for weather and planned outages, and review trends by site, platform, and role. If baselines are missing, set them before rollout.
- How will you use the data for coaching while protecting trust, safety, and fairness
Why it matters: Data improves performance only if people trust it. Transparency and clear rules prevent the numbers from turning into blame.
What it reveals: The need for simple policies on access, retention, and feedback. It also shows whether supervisors know how to turn insights into short refreshers and supportive coaching.
If you can answer yes to most of these questions, start small. Build a handful of high-value scenarios, instrument them with xAPI, link the LRS to CMMS and SCADA, and review results weekly. Keep it practical, keep it transparent, and let the early gains guide what you scale next.
Estimating The Cost And Effort To Implement A Data-Linked Problem-Solving Program
Here is a practical way to budget a first wave of a similar program. The estimate reflects a utility-scale renewables context with short, scenario-based Problem-Solving Activities, an xAPI data backbone, and links to CMMS and SCADA. Numbers are budgetary placeholders so you can size the work and adjust to your rates and tools.
- Assumptions used for this estimate. Two pilot sites, about 120 learners, 20 scenarios covering high-value decisions, six months of LRS usage to cover build, pilot, and early scale, existing LMS and BI tools in place, and a small internal team available for co-design and coaching.
Key cost components explained
- Discovery and planning. Define business goals, target metrics, pilot scope, governance, and success criteria. Map the decisions that drive losses and pick the first 20 scenarios.
- Scenario design and co-design workshops. Pair an instructional designer with field SMEs to script realistic choices, safety gates, and debriefs that match site checklists.
- Content production and xAPI instrumentation. Build mobile-friendly scenarios, wire in xAPI statements for decisions and time-to-resolution, and package for your LMS.
- Technology and integration. Subscribe to the Cluelabs xAPI Learning Record Store, configure SSO and LMS connections, and build connectors to CMMS and SCADA so learning data can sit next to work orders and alarms.
- Data and analytics. Model the unified dataset, build simple dashboards by site, platform, and role, and set baselines so changes are clear and credible.
- Quality assurance and EHS compliance. Test across devices and conditions, validate wording and safety steps, and confirm accessibility and data privacy expectations.
- Pilot operations and iteration. Run a six-week pilot at two sites, monitor engagement and outcomes, and tune scenarios where people get stuck.
- Deployment and enablement. Train supervisors to coach from dashboards, deliver quick job aids, and add five to ten minutes of practice to huddles.
- Change management and communications. Explain the why, set fair use of metrics, and keep crews informed about what is changing and how success will be measured.
- Ongoing support and data ops (first three months). Update scenarios, refresh dashboards, handle questions, and keep data quality high.
| Cost Component | Unit Cost/Rate (USD) | Volume/Amount | Calculated Cost |
|---|---|---|---|
| Discovery & Planning | $115 per hour | 80 hours | $9,200 |
| Scenario Design & Co-Design Workshops | $95 per hour | 120 hours | $11,400 |
| Content Production & xAPI Instrumentation (20 Scenarios) | $90 per hour | 300 hours | $27,000 |
| Cluelabs xAPI Learning Record Store Subscription | $300 per month | 6 months | $1,800 |
| Systems Integration (LRS to CMMS & SCADA) | $140 per hour | 60 hours | $8,400 |
| SSO/LMS Integration & Security Review | $120 per hour | 16 hours | $1,920 |
| Data Modeling & Dashboards | $110 per hour | 80 hours | $8,800 |
| QA & Device Testing | $60 per hour | 40 hours | $2,400 |
| EHS/Compliance Review | $130 per hour | 20 hours | $2,600 |
| Pilot Program Management | $120 per hour | 30 hours | $3,600 |
| Scenario Tuning During Pilot | $90 per hour | 20 hours | $1,800 |
| Site Coordinator Time (2 Sites) | $85 per hour | 32 hours | $2,720 |
| Train-the-Trainer & Supervisor Coaching | $110 per hour | 21 hours | $2,310 |
| Job Aids & Quick Guides | $150 each | 10 | $1,500 |
| Change Management & Communications | $90 per hour | 20 hours | $1,800 |
| Content Updates (First 3 Months) | $100 per hour | 30 hours | $3,000 |
| Data QA & Dashboard Refresh (First 3 Months) | $110 per hour | 24 hours | $2,640 |
| Helpdesk & Field Support (First 3 Months) | $60 per hour | 18 hours | $1,080 |
| Contingency | N/A | 10% of subtotal | $9,397 |
| Total Estimated Cost | — | — | $103,367 |
Effort and timeline at a glance
- Weeks 1–2: Discovery, metrics, and backlog of scenarios
- Weeks 3–6: Design and build 20 scenarios, xAPI instrumentation, QA, and EHS review
- Weeks 4–7: Integrate LRS, CMMS, and SCADA, build dashboards, set baselines
- Weeks 7–12: Pilot at two sites, coach supervisors, tune scenarios, and review early impact
Levers to reduce or scale cost
- Start with 10 scenarios. Halves production, QA, and EHS review time while you prove value.
- Use existing tools. Keep LMS and BI as-is and add only the LRS to minimize new licenses.
- Template everything. A standard scenario shell cuts design and QA time by 20 to 30 percent.
- Phase integrations. Start with LRS plus CMMS, then add SCADA once the first wins are visible.
- Grow with usage. Choose an LRS plan that fits expected xAPI volume and upgrade as adoption rises.
Use this breakdown as a starting point. Swap in your labor rates, tool pricing, and scope, and you will have a clear budget and plan to bring a data-linked, problem-solving program to life.
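If you want to rework the numbers quickly, a small script like the sketch below makes the arithmetic easy to redo. The figures mirror the table above and are the same budgetary placeholders, not quoted rates.

```python
# A small sketch for re-running the estimate with your own rates and hours.
# Each entry is (hourly rate or unit cost) * (hours or units) from the table.
line_items = {
    "Discovery & Planning": 115 * 80,
    "Scenario Design & Co-Design Workshops": 95 * 120,
    "Content Production & xAPI Instrumentation": 90 * 300,
    "Cluelabs xAPI LRS Subscription (6 months)": 300 * 6,
    "Systems Integration (LRS to CMMS & SCADA)": 140 * 60,
    "SSO/LMS Integration & Security Review": 120 * 16,
    "Data Modeling & Dashboards": 110 * 80,
    "QA & Device Testing": 60 * 40,
    "EHS/Compliance Review": 130 * 20,
    "Pilot Program Management": 120 * 30,
    "Scenario Tuning During Pilot": 90 * 20,
    "Site Coordinator Time (2 Sites)": 85 * 32,
    "Train-the-Trainer & Supervisor Coaching": 110 * 21,
    "Job Aids & Quick Guides": 150 * 10,
    "Change Management & Communications": 90 * 20,
    "Content Updates (First 3 Months)": 100 * 30,
    "Data QA & Dashboard Refresh (First 3 Months)": 110 * 24,
    "Helpdesk & Field Support (First 3 Months)": 60 * 18,
}
subtotal = sum(line_items.values())    # 93,970 with the placeholder figures
contingency = round(subtotal * 0.10)   # 10% of subtotal: 9,397
total = subtotal + contingency         # 103,367, matching the table above
print(f"Subtotal ${subtotal:,}  Contingency ${contingency:,}  Total ${total:,}")
```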