Executive Summary: This case study follows an insurance MGA/MGU that implemented Collaborative Experiences to co-create simple underwriting checklists and embed new habits in the flow of work. Paired with an “Underwriting Checklist Coach” powered by the Cluelabs AI Chatbot eLearning Widget, the program reduced referral loops, accelerated quote turnaround, and improved consistency and auditability of decisions. The article outlines the challenge, the step-by-step approach, the measurable outcomes, and practical lessons for executives and L&D teams exploring Collaborative Experiences in complex, high-stakes underwriting environments.
Focus Industry: Insurance
Business Type: MGAs/MGUs
Solution Implemented: Collaborative Experiences
Outcome: Reduce referral loops with underwriting checklists.
Cost and Effort: A detailed breakdown of costs and efforts is provided in the corresponding section below.
Our Project Role: eLearning solutions developer

An Insurance MGA/MGU Operates in a High-Stakes Underwriting Environment
An MGA/MGU sits between carriers and brokers and has the authority to underwrite and bind on the carrier’s behalf. Teams handle complex risks, fast‑moving appetites, and strict guidelines. Every choice affects revenue, loss ratio, and compliance. Brokers expect speed. Carriers expect clean files. Underwriters need to move quickly and still make the right call.
The stakes are high because one slow or inconsistent step can ripple through the whole book. A typical day brings mixed‑quality submissions, shifting referral thresholds, and competing priorities across lines and regions. Work happens across email, spreadsheets, PDFs, and portals. Decisions must be clear, defensible, and fast.
- Quote turnaround drives broker satisfaction and hit rate
- Consistent decisions protect loss ratio and capacity
- Clean audit trails reduce carrier and regulatory risk
- Efficient workflows keep senior underwriters focused on true exceptions
- Faster ramp time for new hires increases available underwriting hours
Referral loops are a common pain. An assistant or junior underwriter hesitates, sends a question to a senior, waits for a reply, then follows up again when new information arrives. The loop repeats. These delays often come from scattered guidance, unclear thresholds, or checklists that live in someone’s head instead of the workflow.
- Guidelines sit in long documents that are hard to search in the moment
- Thresholds vary by product, carrier, and region and change often
- Checklist steps are not applied the same way across teams
- Coaching happens after the fact instead of at the point of decision
For leaders and L&D teams, the challenge is clear. Build shared habits that make good decisions repeatable. Make the right step the easy step for busy underwriters and assistants. Create a system that spreads know‑how beyond a few experts and holds up under audit.
This case study follows one MGA/MGU that took a practical path. They brought people together to turn real cases into simple, usable checklists and paired that with in‑the‑moment support inside daily work. The result cut referral loops and raised confidence without adding red tape.
Referral Loops Slow Quotes and Frustrate Underwriters
Referral loops happen when a junior underwriter or assistant is unsure and sends a question to a senior, then waits. New details arrive, more questions follow, and the back and forth continues. What starts as one quick check turns into a chain of emails and chat messages. The file stalls. The broker asks for an update. Everyone gets frustrated.
In an MGA/MGU, these loops show up often because products, carriers, and regions have different rules. A submission may be missing a document. Loss history may be unclear. A threshold may have changed last week. People want to do the right thing, but finding the answer fast is hard.
- Guidance lives in long PDFs and shared drives that are hard to search
- Referral thresholds vary by carrier and product and shift over time
- Document checklists are not applied the same way across teams
- Approvals are captured in email instead of a clear system of record
Here is how a loop can play out. An assistant reviews a small property risk at 9 a.m. They are not sure if prior losses push it over the referral line. They email a senior. The senior is in meetings until the afternoon. The broker calls at 2 p.m. asking for a quote. The senior replies at 3 p.m. asking for five years of loss runs. The broker sends three years. Another email goes out. Another wait begins. By the next morning, the broker has a quote from a competitor.
- Quotes slow down and hit rates drop
- Senior underwriters spend time on avoidable questions
- Files grow messy and are harder to audit
- New hires lose confidence and take longer to ramp
The hidden cost is context switching. Every handoff breaks focus. Underwriters juggle multiple open loops and must reread each file to pick up the thread. That adds minutes to each touch, which adds hours to the week.
Most loops are avoidable. Teams need clear, current checklists and a way to find answers in the moment. They also need a simple path to document decisions so work does not bounce back later. When those pieces are missing, even strong teams get stuck in cycles that slow quotes and strain relationships with brokers and carriers.
Collaborative Experiences Set the Strategy for Behavior Change
The team chose Collaborative Experiences because people change faster when they learn together on real work. Instead of long courses, they used short, hands-on sessions where underwriters, assistants, product, and compliance solved live cases, compared decisions, and agreed on what “good” looks like. This made the learning practical and helped new habits stick.
They started by listening. Small groups mapped the steps where quotes slowed down and named the exact moments that triggered a referral. From there, the group wrote a simple set of target behaviors that anyone could follow in the rush of a busy day.
- Ask one clear question first: Is this in appetite and eligible?
- Use a checklist to gather all documents in one request
- Decide at the lowest level possible and log the reason
- Refer only when a rule requires it or the risk is unclear
The strategy had four parts that kept people close to the work and to each other.
- Co-create the checklists. The people who touch files wrote the steps, tested them on real cases, and trimmed anything that slowed them down
- Practice with real cases. Short scenario sprints in small groups built speed and confidence while exposing gaps in rules and data
- Teach back to the team. Each group explained its decisions and tradeoffs so patterns became shared habits, not personal tricks
- Support decisions in the flow of work. Job aids and a simple digital coach sat where the work happens so help was one click away
Each session followed a simple rhythm that fit the day.
- See it: a five-minute walkthrough of a messy submission and the known traps
- Try it: ten minutes to work the file using the draft checklist and note blockers
- Teach it: five minutes to explain the choice, the data used, and what to ask the broker
- Fix it: update the checklist on the spot and capture one rule to automate or clarify
Leaders played an active role. They joined the first sessions, modeled the checklist-first habit, and cleared small barriers, like where to store final rules or how to log a referral reason. They praised one-touch wins in standups so the new behavior felt visible and valued.
The team also kept score in simple, human terms. They tracked how many times a file bounced before a decision, how long quotes took, and how many submissions were complete on the first pass. They reviewed a few cases each week to see if the checklist and coaching matched the work on the desk.
By setting the strategy around shared practice, clear behaviors, and help at the point of need, Collaborative Experiences turned scattered knowledge into a repeatable way of working. It created the base for the tools and checklists that follow to make faster, cleaner decisions the new normal.
Teams Co-Create Underwriting Checklists Through Real Cases and Practice
To fix the back and forth, the team built checklists together, using real files from the desk. They pulled in underwriters, assistants, product, and compliance. Each group brought two messy submissions and worked through them step by step. When someone got stuck, they wrote the missing step in plain words. If a step felt slow or unclear, they trimmed it until anyone could use it in a busy morning.
They set a simple rule for format. Keep it to one page. Use yes or no gates. Make referral triggers obvious. Add the exact spot to log the decision. No filler. No long paragraphs. If a step did not change a decision, it did not make the cut.
- Eligibility gate: state, appetite, minimum premium, and excluded classes
- Risk facts to confirm: occupancy, construction, protection class, limits, and prior losses
- Documents to request in one touch: loss runs, ACORD forms, photos, inspections, financials
- Referral triggers: TIV over a set amount, coastal zone, frame over three stories, recent large loss, new venture
- Pricing guardrails: target rate range, allowed credits and debits, target loss ratio
- System steps: where to record notes, how to mark a referral, and what to attach for audit
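The referral triggers above are simple yes/no gates, which means they can be expressed as plain rules. The sketch below is illustrative only; the field names and thresholds are hypothetical examples, not the organization's actual underwriting rules.

```python
# Illustrative sketch: referral triggers as yes/no gates. All names and
# thresholds below are hypothetical, not the organization's actual rules.

REFERRAL_TRIGGERS = [
    ("TIV over $5M", lambda r: r.get("tiv", 0) > 5_000_000),
    ("Coastal zone", lambda r: r.get("coastal", False)),
    ("Frame over three stories",
     lambda r: r.get("construction") == "frame" and r.get("stories", 0) > 3),
    ("Large loss in last 36 months", lambda r: r.get("max_loss_36m", 0) > 100_000),
    ("New venture", lambda r: r.get("years_in_business", 99) < 3),
]

def check_referral(risk: dict) -> list[str]:
    """Return the referral triggers a risk hits (empty list = decide locally)."""
    return [name for name, rule in REFERRAL_TRIGGERS if rule(risk)]

# Example: a $6M TIV frame building, four stories, inland
risk = {"tiv": 6_000_000, "construction": "frame", "stories": 4, "coastal": False}
print(check_referral(risk))  # → ['TIV over $5M', 'Frame over three stories']
```

Encoding triggers this way also makes the "Refer only when a rule requires it" habit auditable: an empty result is explicit permission to decide at the desk.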
Practice made the checklists real. Teams ran five to ten minute sprints. Pairs worked a case using the draft checklist, made a decision, and wrote one broker request that covered all missing items. Then they compared answers with another pair. Where choices differed, they talked it out and updated the checklist on the spot.
- Try it with a live case
- Compare decisions and notes
- Fix the step that caused confusion
- Lock the change and move on
Small design choices mattered. They used bold for “Must Have” and kept “Nice to Have” in regular text. They put referral triggers in a shaded box so they stood out. They linked a one-touch email template that filled in the document request based on the checklist. They also added a short “Why this step” note where new hires often asked for context.
To keep control, each product line named an owner. Owners posted the latest version in a single folder, archived old versions, and wrote a short change log. The team reviewed the log in weekly huddles so everyone heard about new thresholds or carrier asks before they hit the inbox.
Within two weeks, people reached for the checklist first. New hires felt sure about the next step. Seniors saw fewer “quick checks” land in their queue. The checklists became a shared way to work, not a document on a drive. Later, the same steps fed a simple digital coach so help was one click away during real quote work.
The Cluelabs AI Chatbot eLearning Widget Delivers Just in Time Checklist Guidance
To put the new checklists into daily work, the team used the Cluelabs AI Chatbot eLearning Widget as a simple coach in the sidebar. They called it the Underwriting Checklist Coach. It lived in two places people already used: inside short Articulate Storyline practice modules and on the underwriting intranet next to live files. Underwriters asked a quick question and got a clear, actionable answer tied to the exact checklist step.
The team loaded the bot with the sources people rely on. They added eligibility grids, appetite statements, referral thresholds, and the co-created checklists. A short custom prompt set the rules for tone and told the bot to cite the checklist section every time. The goal was not a long explanation. The goal was the next right step and the reason.
In practice it looked like this. A junior types, “Does this require referral?” and the coach replies, “Yes. See Property Checklist v3, Referral Triggers, Item 2. TIV over $5M.” A senior asks, “What documents are missing for a one-touch request?” and the coach lists the exact items with a short email template. An assistant checks, “Can we quote with three years of loss runs?” and the coach answers, “Yes for Small Property, Section 3. Note exception if loss over $100K in last 36 months.”
- Triages a risk with a fast in or out of appetite check
- Builds a one-touch broker request for missing documents
- Checks referral thresholds by carrier, product, and state
- Reminds the user where to log the decision for audit
- Flags pricing guardrails and allowed credits or debits
The same coach showed up in practice and on the job, which made the habit stick. In Storyline, learners worked short scenarios and asked the bot for help when they hit a snag. On the intranet, the bot sat next to the submission screen so people did not switch windows or search long PDFs. Answers were short, cited the rule, and ended with one clear action.
Chat logs became a feedback loop. Each week, a small group reviewed common questions, tagged gaps, and updated the checklists and practice cases. If a rule changed, the owner uploaded the new file and the coach used it within minutes. The team also tuned the prompt to cut fluff and to push users to log decisions in the right place.
Governance kept trust high. Each product line had a named owner. Every answer cited the checklist name and section so users could verify the source. The coach reminded people to escalate when the rule called for it or when data was incomplete. Version notes appeared at the top of the chat so users knew they were on the latest rules.
The result felt like having a helpful senior at your desk. People moved faster, asked better questions, and made cleaner decisions. Most important, they avoided many of the back and forth loops that used to slow quotes and drain time from the team.
The Integrated Approach Reduces Referral Loops and Speeds Quote Turnaround
The gains came from the whole system working together. People learned on real cases, they wrote simple checklists they wanted to use, and the Underwriting Checklist Coach (powered by the Cluelabs AI Chatbot eLearning Widget) sat beside the work to guide the next step. The same rules showed up in practice and in production, answers cited the exact checklist section, and weekly reviews kept everything current. That blend cut the back and forth and sped up decisions without adding noise.
Here is a typical before-and-after. Before, an assistant faced a mid-size property risk and was unsure about a referral. They emailed a senior, waited, asked the broker for one more document, waited again, and lost a day. After, they asked the coach, saw the referral trigger, sent a single complete request to the broker, logged the reason in the system, and either quoted or escalated with the right notes. The file moved in hours, not days.
- Referral loops per file dropped as more decisions happened on the first touch
- Time to first decision shortened, which improved quote turnaround
- One-touch broker requests replaced piecemeal follow-ups
- Senior underwriters spent more time on true exceptions and complex risks
- Pricing and referral calls became more consistent across the team
- Notes and approvals landed in the right place for clean audits
- New hires ramped faster and asked better, more focused questions
The broker experience improved as well. Fewer “Where is my quote” emails, clearer document lists, and faster answers built trust. Internally, queues felt lighter because people were not juggling half-finished files and waiting on replies.
The team kept score in a simple way. They tracked loops per file, time to first decision, total quote turnaround, share of complete submissions on first pass, referral rate by product, and rework. Chat logs showed hot topics and confusing steps. Timestamps from the policy system and shared mailbox metrics gave a clear before-and-after view.
Two design choices made a big difference. First, guidance lived where people worked, so no one had to dig through long PDFs. Second, every answer cited the checklist and pushed users to log the decision. That created a reliable trail for audits and helped keep habits tight under pressure.
As results held, the approach spread to more products and regions with only light tweaks to thresholds and language. The core stayed the same: co-create the checklist, practice on real cases, and support the next step in the flow of work. The payoff was steady—fewer loops, quicker quotes, and a team that moved with confidence.
Analytics and Chat Logs Reveal Gaps and Inform Continuous Improvement
Data showed the team what to fix next. They looked at two simple sources. First, workflow numbers from the policy system and shared inbox. Second, chat logs from the Underwriting Checklist Coach. Together, these told a clear story about where people hesitated, what rules confused them, and which steps caused rework.
- Workflow data: time to first decision, referral loops per file, quote turnaround, and one-touch broker requests
- Audit signals: missing notes, approvals in email, and files that bounced back after review
- Chat insights: top questions, repeated phrases like “require referral” or “missing docs,” and topics with long back and forth
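The workflow metrics above can be computed directly from system timestamps. A minimal sketch follows; the event records and field names are made up for illustration and would come from your policy system or shared-inbox export in practice.

```python
from datetime import datetime
from statistics import mean

# Hypothetical per-file records; field names are illustrative only.
files = [
    {"received": "2024-03-01 09:00", "first_decision": "2024-03-01 11:30", "referral_loops": 0},
    {"received": "2024-03-01 10:00", "first_decision": "2024-03-02 15:00", "referral_loops": 3},
    {"received": "2024-03-02 08:30", "first_decision": "2024-03-02 09:15", "referral_loops": 1},
]

def hours_to_decision(f: dict) -> float:
    """Elapsed hours from submission receipt to first underwriting decision."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(f["first_decision"], fmt) - datetime.strptime(f["received"], fmt)
    return delta.total_seconds() / 3600

avg_hours = mean(hours_to_decision(f) for f in files)
avg_loops = mean(f["referral_loops"] for f in files)
first_touch_rate = sum(f["referral_loops"] == 0 for f in files) / len(files)

print(f"avg time to first decision: {avg_hours:.1f} h")   # 10.8 h for this sample
print(f"avg referral loops per file: {avg_loops:.2f}")
print(f"first-touch decision rate: {first_touch_rate:.0%}")
```

Even a spreadsheet version of the same arithmetic gives a usable baseline for the before-and-after comparison.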
A short weekly review kept the loop tight. A product owner, a senior underwriter, an ops partner, and an L&D lead met for 30 minutes. They scanned a dashboard, opened a few real transcripts, and chose one of three fixes.
- Clarify the rule: add a plain example or tighten the wording
- Improve the step: adjust the checklist order or add a one-touch template
- Escalate a change: confirm new thresholds with the carrier and update the source
Chat logs sped up this work because they showed the exact moment of doubt. Here are a few examples that led to quick wins.
- Frequent “coastal distance” questions led to a simple map link and a clearer referral trigger
- Mixed definitions of “new venture” became a firm rule and a short note on required financials
- Several “What documents are missing” queries drove a tighter one-touch email template
- Confusion on credits by state prompted a guardrail table with state-specific limits
They watched the numbers to see if a fix worked. If a topic dropped out of the top five questions within two weeks, the change stuck. If it did not, they ran a 10-minute scenario sprint in a team huddle and updated the wording again.
- Weekly metrics: loops per file, time to first decision, quote turnaround, one-touch rate, referral rate by product, and audit exceptions
- Coach signals: top 10 queries, percent of answers with a checklist citation, and repeat users by topic
- People indicators: new hire ramp time and the share of decisions made without escalation
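The "top queries" signal above is essentially a frequency count over normalized chat-log text. A rough sketch, with made-up log entries and illustrative topic patterns:

```python
from collections import Counter
import re

# Hypothetical chat-log excerpts; real text would come from the coach's logs.
queries = [
    "Does this require referral?",
    "require referral for coastal?",
    "what documents are missing",
    "missing docs for loss runs",
    "does this require referral",
]

# Coarse topic tags so week-over-week trends are comparable; patterns are examples.
TOPIC_PATTERNS = {
    "referral": r"\brefer",
    "missing docs": r"\bmissing\b|\bdocs?\b|\bdocuments?\b",
    "coastal": r"\bcoastal\b",
}

topic_counts = Counter()
for q in queries:
    for topic, pattern in TOPIC_PATTERNS.items():
        if re.search(pattern, q.lower()):
            topic_counts[topic] += 1

print(topic_counts.most_common(3))  # topics ranked by frequency
```

Comparing this ranking week over week is how the team could tell whether a fix "stuck": a topic that falls out of the top five after a checklist change is evidence the change worked.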
Governance kept changes safe and visible. Each product line had an owner who posted version notes at the top of the chat and in the checklist file. The coach always cited the checklist name and section so users could verify the source. When a rule changed, the new file went live the same day and the team heard about it in the next huddle.
The effect was steady and compounding. Small fixes each week removed common snags, so fewer questions needed a senior’s time. People spent more minutes deciding and fewer minutes searching. The work felt smoother because the system got a bit smarter with every case.
The Team Shares Lessons Learned for Scaling Collaborative Experiences and Chatbots
The team walked away with simple lessons that any MGA or MGU can use. The theme is clear. Build the checklist together, keep it short, and place help where people do the work. Use the chatbot as a coach, not a search engine. Review small bits each week and keep moving.
- Start where the pain is worst. Pick one product and one region. Set a baseline for loops per file and time to first decision
- Co-create the checklist. Write it in plain words using real cases. Keep it to one page with clear yes or no gates and bold referral triggers
- Name an owner. Each product line needs one person who updates rules, posts version notes, and archives old files
- Put help in the flow. Embed the coach next to live work and inside short practice so people use the same steps in both places
- Keep answers tight. The coach should answer in one or two lines, cite the checklist and section, and point to the next action
- Use chat logs as a lens. Review top questions weekly, fix unclear steps, and tune the prompt to cut fluff
- Make one-touch the norm. Add a broker email template that fills in from the checklist so teams ask once and move on
- Model the habit. Leaders and seniors use the checklist first, log reasons, and praise clean one-touch decisions in huddles
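The one-touch broker request mentioned above is easy to mechanize once the checklist names the documents. A minimal sketch with a string template; the wording and field names are illustrative, not the organization's actual template.

```python
from string import Template

# Hypothetical email template; real wording would come from the co-created checklist.
ONE_TOUCH = Template(
    "Hi $broker,\n\n"
    "To quote $insured we need the following in one pass:\n"
    "$items\n\n"
    "Once we have everything, we can turn the quote around quickly.\n"
)

def one_touch_request(broker: str, insured: str, missing_docs: list[str]) -> str:
    """Build a single broker request covering every missing checklist item."""
    items = "\n".join(f"  - {d}" for d in missing_docs)
    return ONE_TOUCH.substitute(broker=broker, insured=insured, items=items)

msg = one_touch_request(
    "Alex", "Maple Street Apartments",
    ["5 years of loss runs", "Current ACORD 140", "Roof photos"],
)
print(msg)
```

The point of the design is the input list: because the missing-document list comes straight from the checklist gates, the team asks once and moves on instead of drip-feeding follow-ups.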
Design choices for the chatbot mattered. The Cluelabs AI Chatbot eLearning Widget worked well as a focused coach when it followed a few simple rules.
- Limit the sources. Load only the current checklists, eligibility grids, appetite notes, and referral thresholds
- Write a clear prompt. Tell it to answer briefly, cite the exact checklist section, and ask for one missing item if data is thin
- Show the version. Display checklist name and version at the top of the chat so users trust the guidance
- Nudge compliance. End answers with where to log the decision and when to escalate
- Guard the scope. If a question falls outside the sources, the coach should say so and send the user to the owner
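The prompt rules above might be expressed roughly as follows. This is only a sketch of a system prompt that follows those rules; the actual prompt used with the Cluelabs widget is not published, so every line here is an assumption.

```python
# Hypothetical system prompt embodying the design rules above. The real
# prompt used with the Cluelabs AI Chatbot eLearning Widget is not public;
# this sketch only illustrates the shape such a prompt could take.
COACH_PROMPT = """
You are the Underwriting Checklist Coach.
- Answer in one or two lines.
- Always cite the checklist name, version, and section
  (for example: "Property Checklist v3, Referral Triggers, Item 2").
- If key data is missing, ask for exactly one missing item.
- End each answer with where to log the decision and whether to escalate.
- If a question falls outside the loaded checklists, eligibility grids,
  appetite notes, or referral thresholds, say so and refer the user
  to the checklist owner.
""".strip()

print(COACH_PROMPT)
```

Keeping the rules in one short, versioned prompt makes the weekly tuning step concrete: cutting fluff or tightening scope is a one-line edit rather than a retraining exercise.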
Watch for common pitfalls and keep them small.
- Do not dump long PDFs into the coach and hope for the best
- Do not chase the perfect checklist before launch
- Do not let versions drift across shared drives
- Do not skip weekly reviews when things get busy
- Do not make referral the default when the rule does not require it
Use a light plan to scale once the first line is stable.
- Clone the format for the next product and tweak thresholds and language
- Build a small champion group to run scenario sprints and collect feedback
- Fold the checklist and coach into onboarding in week one
- Track the same few metrics across lines so trends are easy to spot
- Run a quarterly check of citations, versions, and audit trails
For leaders, the ask is simple. Protect one hour a week for practice, one hour for review, and give product owners the authority to update rules the same day. Fund the small things that speed the work, like the coach license, a dashboard, and time for champions. When you make the right step the easy step, referral loops shrink, quotes move faster, and teams gain confidence.
Deciding If a Collaborative Checklist Coach Fits Your MGA/MGU
This approach worked because it attacked the real friction in an MGA/MGU underwriting desk. Referral loops slowed quotes and drained senior time. Guidance lived in long documents, thresholds changed often, and checklists were not applied the same way across teams. The organization used Collaborative Experiences to co-create one-page underwriting checklists from real cases, then placed help at the point of work with the Cluelabs AI Chatbot eLearning Widget as an Underwriting Checklist Coach. The coach gave short, cited answers tied to the exact checklist step, while weekly chat-log reviews kept rules fresh. The result was fewer loops, faster quote turnaround, more consistent calls, and cleaner audits—all without adding extra meetings or heavy process.
If you are considering a similar path, use the questions below to guide your discussion and surface what must be true for success.
- Do we have referral loops worth fixing, and can we measure them today?
Why it matters: A clear baseline tells you if the problem is big enough to prioritize and gives you a way to prove impact.
What it uncovers: If loops per file and time to first decision are high, the ROI is likely strong. If you cannot measure these yet, start by capturing simple counts from your policy system or shared inbox before you launch.
- Are our rules and appetite stable enough to turn into one-page checklists, and will frontline staff help write them?
Why it matters: The solution depends on simple, usable checklists built with the people who touch the work. Co-creation drives adoption and consistency.
What it uncovers: If rules are scattered or change weekly, begin with one product and lock a minimum viable checklist. If teams are willing to help write and test steps on real cases, momentum will build quickly.
- Can we place guidance where work happens within our tech and security limits?
Why it matters: Help must sit next to live submissions and practice scenarios to cut search time and context switching.
What it uncovers: If your intranet or submission screens can host a small widget, the coach can go live fast. If access is tight, start with an intranet sidebar. Confirm security rules for uploading sources and keep the coach limited to approved checklists, eligibility grids, appetite notes, and referral thresholds.
- Who owns updates, and will leaders protect time for weekly practice and reviews?
Why it matters: Trust depends on current rules and visible reinforcement. Without owners and cadence, versions drift and users stop relying on the coach.
What it uncovers: Name a product owner for each line, publish version notes, and hold a short weekly review using chat logs. Leaders should model the checklist-first habit and praise one-touch decisions to make the behavior stick.
- How will we prove value and fund the next wave?
Why it matters: Simple, shared metrics keep the team focused and help you scale to more products and regions.
What it uncovers: Track loops per file, time to first decision, quote turnaround, one-touch broker requests, referral rate by product, and audit exceptions. Early wins justify extending the coach and checklist format to new lines with light tweaks.
If your answers show a meaningful bottleneck, a willing frontline, a place to embed the coach, clear ownership, and a path to measure results, you are set up for a fast, low-friction pilot. Start small, learn weekly, and grow from real gains on the desk.
Estimating Cost And Effort For A Collaborative Checklist Coach Pilot
The estimate below covers a focused pilot for one product in one region over eight weeks. It reflects the work this organization did: co-create one-page underwriting checklists through Collaborative Experiences, build short practice scenarios, and deploy the Cluelabs AI Chatbot eLearning Widget as an Underwriting Checklist Coach on the intranet and inside Articulate Storyline practice. It assumes you already have an intranet and an authoring tool, and that the Cluelabs free tier is sufficient for the pilot. The figures use typical internal rates; adjust to your local costs and staffing model.
- Discovery and planning. Align leaders, confirm goals, define the pilot scope, and turn current pain points into a simple success dashboard and timeline
- Checklist co-creation sprints. Facilitate short, hands-on sessions using real submissions; capture clear yes/no gates, referral triggers, and one-touch document requests
- Checklist production and governance. Format to one page, set up a version log and a single source of truth, and name an owner per product line
- Scenario-based practice development. Build a handful of short, realistic Storyline scenarios that mirror live work and reinforce the checklist-first habit
- Chatbot configuration and integration. Curate sources (eligibility grids, appetite notes, thresholds, final checklists), write and test the prompt, embed the widget on the intranet and in Storyline
- Security, compliance, and QA. Run a light InfoSec and legal review, test for accurate citations and safe behavior, and tighten wording before go-live
- Data and analytics setup. Define a small set of metrics (loops per file, time to first decision, one-touch rate) and build a simple dashboard using system timestamps and chat logs
- Pilot enablement and office hours. Host short training, provide a quick-start guide, and offer weekly office hours to clear snags fast
- Change management and communications. Announce the why, the new habits, and where to find help; share early wins and tips
- Weekly review and continuous improvement. Spend 30 minutes each week with product, ops, and L&D to fix unclear steps and update sources based on chat insights
- Ongoing support during pilot. Give the checklist owner time to apply changes and keep the bot current; provide light tech support
Assumptions: 15 pilot users; three 90-minute co-creation sessions; six micro-scenarios; intranet widget placement available; existing Articulate license; Cluelabs free tier covers usage during pilot. Participant time is an opportunity cost and not monetized below.
| Cost Component | Unit Cost/Rate (USD) | Volume/Amount | Calculated Cost |
|---|---|---|---|
| Discovery and Planning (blended) | $110 per hour | 20 hours | $2,200 |
| Checklist Co-Creation Sprints – SME Participation | $95 per hour | 45 hours (10 participants × 4.5 hours) | $4,275 |
| Checklist Co-Creation – Facilitation and Synthesis | $110 per hour | 20 hours | $2,200 |
| Checklist Production and Formatting (Owner) | $95 per hour | 12 hours | $1,140 |
| Governance Setup (Versioning and Folder Structure) | $90 per hour | 4 hours | $360 |
| Storyline Micro-Scenario Development | $100 per hour | 36 hours (6 scenarios × 6 hours) | $3,600 |
| Scenario QA and Stakeholder Review | $110 per hour | 6 hours | $660 |
| Chatbot Source Preparation (SME) | $95 per hour | 12 hours | $1,140 |
| Prompt Design and Testing (ID) | $110 per hour | 10 hours | $1,100 |
| Widget Embedding on Intranet and in Storyline | $120 per hour | 8 hours | $960 |
| Security and Legal Review | $130 per hour | 6 hours | $780 |
| Cluelabs AI Chatbot eLearning Widget – Pilot Usage | $0 (free tier) | 1 pilot | $0 |
| One-Touch Broker Email Template Setup | $90 per hour | 4 hours | $360 |
| Data and Analytics Dashboard Setup | $90 per hour | 12 hours | $1,080 |
| Pilot Training Sessions and Office Hours (Facilitator) | $110 per hour | 10 hours | $1,100 |
| Change Management and Communications Pack | $110 per hour | 8 hours | $880 |
| Weekly Review – SMEs (8 weeks) | $95 per hour | 8 hours (2 SMEs × 0.5 hr × 8) | $760 |
| Weekly Review – Ops (8 weeks) | $90 per hour | 4 hours | $360 |
| Weekly Review – L&D/ID (8 weeks) | $110 per hour | 4 hours | $440 |
| Ongoing Support – Checklist Owner Updates (8 weeks) | $95 per hour | 16 hours (2 hrs/week) | $1,520 |
| Ongoing Support – Tech/Widget Troubleshooting | $120 per hour | 4 hours | $480 |
| Total Estimated Pilot Cost | | | $25,395 |
Notes:
- The Cluelabs free tier typically covers a pilot; budget for a paid plan when scaling to multiple products or higher traffic (pricing varies by usage; confirm with vendor)
- If you need new authoring tool licenses or an LRS, add those costs; many teams can begin with existing tools and a simple Excel/BI dashboard
- After the pilot, a light monthly run rate often includes the weekly review (about half the 8-week cost above), checklist owner time (about 8 hours/month), and minor tech support
- Scaling to additional products mainly adds co-creation sessions, a few new scenarios, and updated sources for the coach; reuse the format to keep costs predictable