Executive Summary: An organization in the security industry, operating a corporate security function with GSOC and physical teams, implemented a Tests and Assessments training solution supported by the Cluelabs AI Chatbot eLearning Widget to standardize clear, dignity‑first incident notices under pressure. The case details the challenges of busy operations, the assessment‑led strategy with scenario drills, and the integrated “Dignity‑First Notice Coach,” resulting in faster time to first notice, fewer edits, and stronger privacy compliance. It provides a practical blueprint for executives and L&D teams considering Tests and Assessments to drive measurable performance in real‑world operations.
Focus Industry: Security
Business Type: Corporate Security (GSOC + Physical)
Solution Implemented: Tests and Assessments
Outcome: Publish clear, dignity-first notices in busy moments.
Cost and Effort: A detailed breakdown of costs and efforts is provided in the corresponding section below.

A Corporate Security Operation With GSOC and Physical Teams Faces High Stakes
Around the clock, a corporate security operation brings together a Global Security Operations Center and on‑site physical teams. Analysts watch alerts and cameras. Dispatchers route calls. Officers respond on the ground. When something happens, the job is not only to act but to communicate fast and well. A clear notice can keep people safe, reduce confusion, and protect privacy. A vague or rushed message can spark rumors, slow response, and erode trust.
Here is the snapshot that sets the stage:
- Industry and business: Corporate security with GSOC and physical security teams across multiple sites
- Audience: Employees, contractors, visitors, and partners who need simple, direct guidance
- Channels: Email, text alerts, intranet, radios, digital signage, and live announcements
- Working reality: Shift work, high alert volume, multiple handoffs, and strict privacy rules
Incidents come in many forms. A smoke alarm trips in one building. A weather cell moves toward a campus. A medical event draws a crowd. A visitor will not badge in. A system outage stalls operations. In these busy moments, teams must publish the next best step in plain language and do it with dignity. That means sharing only what people need to know, avoiding blame, and showing respect for everyone involved.
The stakes are high:
- Safety: People need to know what to do right now
- Continuity: Clear steps keep work moving and reduce downtime
- Compliance: Notices must protect privacy and meet policy
- Trust: Tone and accuracy shape how people feel about the workplace
This is the day‑to‑day context for the program in this case study. The team wanted a reliable way to help operators write clear, dignity‑first notices while the clock was ticking. They needed consistency across shifts and sites without slowing response. The next sections explain the challenge in more detail and how the organization built the skills and systems to meet it.
Busy Moments Expose Communication Gaps and Inconsistent Notices
Busy moments reveal where communication breaks down. When alerts stack up and radio traffic is heavy, even skilled operators can miss key details or send a message that confuses people. The intent is good. The pressure is real. The result is often uneven notices that slow action and raise stress.
What we saw during incidents:
- Different operators used different words for the same event, which confused readers across sites
- Some notices were too long, with extra details that did not help anyone act
- Other notices were too short and missed the clear next step
- Tone shifted from calm and human to stiff or alarmist from one shift to the next
- Privacy slipped when messages named people or shared health details that were not needed
- Channel choices did not match the situation, so messages missed key audiences
- Templates existed but were hard to find in the moment, which led to copy and paste errors
- Approval loops added time and edits, and the final notice sometimes lost clarity
Why this happens in a GSOC and physical security setting:
- High cognitive load during alarms, cameras, phone calls, and dispatch creates split attention
- Policies are complex, and people are unsure what to include and what to avoid
- Shift changes and site differences make it hard to keep a single style and standard
- Most training focuses on tools and procedures, not on clear writing under time pressure
- Feedback comes after the fact, so people do not learn in the moment
- Multiple systems hold templates and guidance, and search takes time operators do not have
The impact on people and operations:
- Employees wait for direction or act on rumors
- Leaders spend time clarifying messages instead of solving the problem
- Trust takes a hit when tone feels cold or when notices overshare
- Compliance risk grows when privacy rules are not followed
- Operators feel frustrated because their best effort does not get the result they want
The team needed a simple, repeatable way to help people write clear, dignity-first notices while the clock was ticking. They also needed a way to practice these skills in realistic scenarios, get fast feedback, and measure progress across shifts and sites. The next section explains the strategy that set this up.
Tests and Assessments Anchor the Strategy for Real-World Operations
The team chose tests and assessments as the backbone because people learn best when practice looks and feels like the job. Instead of a one‑time course, operators would try real prompts, get quick feedback, and build the habit of writing clear, dignity‑first notices while the clock was ticking.
What we set out to build and measure:
- Write short, plain subject lines and first sentences that tell people what to do
- Pick the right channel for the situation and audience
- Use approved templates without losing human tone
- Protect privacy and follow policy in every message
- Keep updates consistent across shifts and sites
How the assessments worked:
- Scenario prompts based on past incidents with names and details removed
- Timed writing tasks that mirror the pressure of live events
- A simple rubric that scores clarity, action step, tone, privacy, and accuracy
- Side‑by‑side examples that show a model notice and common fixes
- Quick retakes so operators can apply feedback right away
When and where practice fit the day:
- Three‑minute micro‑drills at the start of a shift or after handoff
- Weekly scenario sets that rotate by incident type
- Short after‑action checks within 24 hours of a real event
- Monthly calibration sessions so supervisors score the same way
What data we watched to guide improvement:
- Time from incident to first notice and time to the next update
- Readability and word count for the first two lines
- Privacy flags, corrections, and retractions
- Template use and consistency across sites
- Operator confidence ratings after each practice
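Some of the data points above, such as first-line word count and action-first openings, lend themselves to mechanical checks. Here is a minimal Python sketch of how such metrics could be computed; the verb list, the 20-word target, and the function name are illustrative assumptions, not the team's actual tooling:

```python
# Minimal sketch: score a draft notice's first line against simple
# clarity targets. The verb set and word limit are illustrative.

ACTION_VERBS = {"avoid", "use", "shelter", "evacuate", "report", "stay"}
MAX_FIRST_LINE_WORDS = 20

def first_line_metrics(notice: str) -> dict:
    """Return basic clarity metrics for the first line of a draft notice."""
    first_line = notice.strip().splitlines()[0]
    words = first_line.split()
    return {
        "word_count": len(words),
        "within_limit": len(words) <= MAX_FIRST_LINE_WORDS,
        "action_first": bool(words) and words[0].strip(".,!").lower() in ACTION_VERBS,
    }

draft = "Avoid Building C east stairwell for the next 30 minutes.\nUse the west elevators."
print(first_line_metrics(draft))
```

A check like this could run over anonymized practice drafts each week to track the trend, while human scoring against the rubric stays the source of truth.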
How we made the learning stick:
- Spaced practice so skills stayed fresh over time
- Varied prompts to avoid copy and paste habits
- Peer review moments that kept tone human and respectful
- Positive recognition for clear, dignity‑first notices during live events
This approach kept training close to real work and kept feedback close to the moment of need. It also created a steady stream of insight about what helped and what got in the way. In the next section, we describe the tools and supports that made this easy to use at scale without slowing operations.
The Cluelabs AI Chatbot eLearning Widget Powers a Dignity-First Notice Coach
To help operators write clear notices in the moment, the team added a coach they could reach in two clicks. We used the Cluelabs AI Chatbot eLearning Widget and set it up as a “Dignity‑First Notice Coach.” We uploaded the incident communication policy, the style guide, and pre‑approved templates. We wrote a simple prompt that told the bot to favor plain language, protect privacy, and keep the first line action focused.
How the coach helps during live work:
- Turns rough bullet points into a short subject line and a clear first sentence
- Checks tone and suggests human, calm phrasing
- Flags privacy risks and removes names or health details
- Reformats the message for email, text alert, or signage with the right length
- Offers two or three rewrite options so operators can choose and send
The coach lives in two places. Inside scenario‑based assessments in Storyline, it gives instant tips while people practice. On the GSOC portal, it sits next to the dispatch and alert tools so help is right where work happens. Operators paste a draft or start with a few facts. The coach replies with a tighter version, a checklist, and quick fixes like “Start with the action,” “Avoid time guesses,” or “Say ‘avoid the area’ instead of ‘evacuate’ unless confirmed.”
How we set it up for consistency and speed:
- Uploaded policy and templates so the coach stays on brand and on policy
- Preloaded common scenarios like weather, access issues, medical events, and outages
- Built guardrails that block risky content and remind users to avoid personal details
- Kept access simple with a link on the console and a shortcut key
How it supports tests and skill growth:
- During assessments, the coach gives targeted feedback tied to the scoring rubric
- Operators can try a fix, submit again, and see their score improve in minutes
- Side‑by‑side examples show a model notice and a “before” draft for quick learning
How we used the data to improve the program:
- Anonymized Q&A logs showed where people got stuck, like subject lines or channel choice
- We updated templates and test items based on the most common questions
- We added short tips to the rubric where scores lagged, such as first‑line clarity
The result is a coach that fits the flow of work and practice. It speeds up drafting without cutting corners, and it keeps every notice clear, kind, and compliant. The next section shows what changed on the ground once the coach and assessments were in place.
Scenario-Based Assessments and the Chatbot Enable Consistent Dignity-First Drafting
When we paired realistic scenarios with the chatbot coach, operators built a repeatable way to write dignity‑first notices fast. Practice felt like the job, and help was right there when someone got stuck. Over time, the same structure showed up in every shift and site, which made messages easier to read and act on.
What a typical drill looks like:
- Read a short scenario based on a real event, with names removed
- Draft a subject line and a first sentence that tells people what to do
- Ask the coach to tighten language, check tone, and flag privacy risks
- Pick the right channel and get a version sized for email, text, or signage
- Submit for a quick score against the rubric and try a fast rewrite
How consistency shows up across teams:
- Clear openings that lead with action, like “Avoid,” “Shelter,” or “Use alternate entrance”
- First lines that include action, location, and time window in plain words
- Neutral tone that avoids blame and protects privacy in every notice
- Channel‑ready versions that fit the space and keep the same core message
Before and after examples from practice:
- Before: “We had a situation in Building C. Security is investigating. Please be advised.”
- After: “Avoid Building C east stairwell for the next 30 minutes. Use the west elevators and follow posted signs.”
- Before: “Someone fainted in the lobby. EMS is here. Do not gather.”
- After: “Give space in the Main Lobby for medical assistance. Use the South entrance until further notice.”
These quick transformations came from simple rules that the drills and the coach reinforced again and again. Start with the action. Name the place people care about. Offer the next step. Keep personal details out. Use calm words. Operators learned to apply the same rules to weather alerts, access issues, outages, and crowded areas.
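As a hypothetical illustration, those drilling rules translate into simple mechanical checks. The sketch below is not the actual coach logic (scoring was done by people against the rubric); the opener list, the time-window pattern, and the privacy terms are invented examples:

```python
import re

# Illustrative sketch of the drill rules as automated checks.
# Openers, privacy terms, and the regex are hypothetical examples.

ACTION_OPENERS = ("avoid", "use", "shelter", "give space", "stay")
PRIVACY_TERMS = ("patient", "diagnosis", "health condition")

def check_rules(notice: str) -> dict:
    """Check a draft notice against the three most mechanical rules."""
    lower = notice.lower()
    return {
        "starts_with_action": lower.startswith(ACTION_OPENERS),
        "names_time_window": bool(re.search(r"\b\d+\s*(minute|hour)s?\b", lower)),
        "keeps_privacy": not any(term in lower for term in PRIVACY_TERMS),
    }

print(check_rules("Avoid Building C east stairwell for the next 30 minutes."))
```

The "before" examples above would fail the first check, which is exactly the kind of instant, explainable feedback the drills reinforced.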
What operators say helps most:
- The coach turns rough notes into a clean first draft in seconds
- Rewrite options make it easy to choose and send without losing time
- The rubric explains why a change improves clarity or protects privacy
- Short retakes build confidence because progress is visible right away
How this fits live operations:
- Micro‑drills at shift start keep skills warm for the next call
- During an incident, the coach sits next to dispatch tools for quick drafting
- After action, a short replay uses the same scenario to lock in the lesson
Together, scenario‑based assessments and the chatbot coach turned good intent into a dependable habit. The voice of the notices became steady and human, and the structure stayed the same even when the details changed. That reliability made it easier for people to understand what to do, and it made operators faster without cutting corners.
Teams Publish Clear Dignity-First Notices in Busy Moments
With the assessments and the coach in place, teams sent short, clear, dignity‑first notices even when the room was loud and the clock was ticking. Messages led with the action, protected privacy, and matched the channel. People knew what to do, and fewer follow‑ups were needed to explain the same message.
What changed in the field:
- First lines told people exactly what to do, where, and for how long
- Tone stayed calm and human across shifts and sites
- Privacy held firm as names, health details, and blame stayed out of messages
- Channel choices fit the moment so the right audience saw the notice fast
Results we tracked:
- Time to first notice dropped by about 45 percent during live incidents
- Supervisor edits before send fell from about one in three notices to one in ten
- Privacy corrections and retractions decreased by roughly 60 percent
- Readability improved, with most first lines under 20 words and focused on action
- Employee questions like “What should I do?” fell by about 40 percent during events
- Operator confidence scores rose steadily after short drills and retakes
Quick snapshots that show the shift:
- Severe weather: A clear, two‑line text alert directed people to stay in place and named safe areas by floor. No follow‑up clarifications were needed
- Access issue: A notice told people to use the South entrance and gave a time window. Badge data showed fewer attempts at the blocked door
- Medical event: A calm update asked for space and gave a simple detour. No personal details were shared, and traffic moved smoothly
These gains did not come from more words. They came from a steady habit of drafting with clear rules and quick feedback. The coach sped up the first draft. The assessments built skill and consistency. Together, they let teams publish clear, dignity‑first notices in the busiest moments without slowing response.
GSOC and Physical Security Teams Improve Speed, Accuracy, and Confidence
Both the control room and on‑site teams got faster and more sure of their words. Drafting a notice felt less like a blank page and more like a simple play: gather facts, write the first line, check privacy, send. Supervisors spent less time rewriting and more time coaching. Officers in the field heard fewer radio requests to “say that again.”
Speed shows up in the flow of work:
- Operators start with a clear first line in under a minute
- Fewer edit cycles mean notices move from draft to send with less back and forth
- Micro‑drills at shift start keep skills warm so the first live message is quick
- The coach turns rough notes into a clean draft that fits the channel
Accuracy improves with simple checks:
- Privacy risks get flagged and removed before send
- Action, location, and time window appear in the first sentence
- Templates keep names of buildings, entrances, and floors consistent
- Channel fit reduces errors like long texts or short emails with missing detail
Confidence grows in the control room and on site:
- Quick retakes show clear progress after feedback
- Operators report fewer “second pair of eyes” requests before send
- New hires ramp faster because drills mirror the job
- Supervisors align on the rubric, so coaching feels fair and predictable
GSOC and physical teams sync faster:
- Officers share simple facts, and GSOC turns them into clear public guidance
- Handoffs stay smooth across shifts because everyone uses the same structure
- Fewer follow‑up messages are needed to clarify what people should do
- After action, both teams review the same scenario and lock in the lesson
“I can get a solid first draft out in seconds and still protect privacy,” one operator said. “It keeps me calm when the room gets loud.”
“Our officers saw fewer crowds and wrong‑door tries once the notices got clear,” a site lead shared. “That saves time on the ground.”
The net effect is simple. Speed went up. Errors went down. People felt more capable. With the assessments and the coach working together, teams had a reliable way to communicate clearly in the moments that matter most.
Data From Assessments and Chatbot Logs Inform Iteration and Compliance
We did not guess our way forward. We used what the tests showed and what the chatbot logs revealed to guide every change. Each week, a small group reviewed scores and common questions, then made small, fast updates to templates, tips, and scenarios. This steady loop kept the program useful and kept messages within policy.
What we tracked and reviewed:
- Scores by rubric area: clarity, action step, tone, privacy, and channel fit
- Time from incident to first notice and to the next update
- Word count and reading level of the first two lines
- Common chatbot questions that showed where people got stuck
- Privacy flags the coach raised before send
- Use of approved templates across shifts and sites
How data shaped improvements:
- We added starter verbs to subject lines like “Avoid,” “Use,” and “Shelter” to lead with action
- We tightened templates, swapped vague terms, and banned phrases that created confusion
- We built a quick channel guide so operators could match message length to email, text, or signage
- We expanded the scenario bank to cover frequent events such as alarm resets, elevator outages, and short weather holds
- We tuned the coach prompt to push for plain words and to highlight missing location or time window
- We added in‑line rubric tips like “Start with the verb” and “State where, then when”
How we stayed within policy and protected people:
- Chatbot logs were anonymized and scrubbed of names and exact locations before review
- The coach blocked risky content and reminded users not to include personal or health details
- We set a short data‑retention window and purged practice drafts on a schedule
- Privacy and legal partners joined monthly spot checks of sample notices and scoring
- We kept an audit‑ready snapshot of rubric scores, template use, and the reason for key changes
What the feedback loop delivered over time:
- Fewer privacy fixes as operators formed the habit of keeping names and health details out
- Faster first drafts because subject line and first‑line patterns became second nature
- Clearer channel choices, which reduced resend and rewrite work
- More consistent tone across sites, backed by a shared rubric and examples
The simple rhythm of measure, review, and tweak kept the program alive to real needs. Operators saw quick wins, leaders saw risk go down, and the organization had clear evidence that notices were both useful and within policy.
Lessons for Security Executives and Learning and Development Teams Guide Adoption and Scale
Executives and L&D teams can make this approach work without heavy lift. The key is to keep practice close to real work, keep feedback fast, and protect privacy at every step. Start small, prove value in one shift, then scale with a simple playbook that anyone can follow.
Practical steps to adopt and scale:
- Define what good looks like with a short rubric and two model notices
- Pick four high‑value scenarios first, such as weather, access issues, outages, and medical events
- Embed practice in the flow with three‑minute micro‑drills at shift start and quick retakes
- Place the AI coach next to dispatch tools and inside scenarios so help is two clicks away
- Set privacy guardrails on day one and scrub chatbot logs for review
- Calibrate supervisors with a 15‑minute scoring huddle each month
- Name a single owner for templates, the rubric, and update cadence
- Share before and after examples to show the standard in plain view
- Celebrate wins during live events to reinforce the habit
Measure what matters each week:
- Time to first notice and time to the next update
- Percent of notices that meet the rubric on the first pass
- Average first‑line length and use of action verbs
- Supervisor edit rates and resend rates
- Privacy flags and any corrections or retractions
- Operator confidence after drills and after live events
- AI coach usage and common questions that signal gaps
A simple 30‑day rollout plan:
- Week 1: Draft the rubric and two model notices. Load policy, style, and templates into the coach. Build four scenarios
- Week 2: Pilot with one GSOC crew. Run daily micro‑drills. Hold a quick calibration to align scoring
- Week 3: Add after‑action practice on real events. Review chatbot questions. Tune templates and tips
- Week 4: Expand to two more shifts and one field site. Set a weekly review and a monthly audit check
Pitfalls to avoid:
- Teaching writing in isolation without live context
- Relying on templates alone and losing human tone
- Letting the AI send messages without human review
- Hiding the rubric so people guess at the standard
- Keeping logs with personal data or long retention windows
- Overbuilding dashboards before the basics work
How to win support and funding:
- Show a five‑slide report with time to first notice, edit rate, privacy fixes, and two before and after examples
- Estimate hours saved from fewer edits and resends, then compare to the cost of the coach and content upkeep
- Link gains to risk reduction and employee trust, not just training completion
This playbook travels well. Any team that must communicate fast and clearly under pressure can use tests, short drills, and an AI coach to build a steady voice. Start with real scenarios, keep people in the loop, measure the basics, and improve a little each week. The result is simple and powerful. Clear, dignity‑first notices when it matters most.
Guiding the Fit Conversation for an Assessment-Led, AI-Coached Security Program
In a corporate security setting with a GSOC and on-site teams, the hardest moments exposed gaps in speed, clarity, tone, and privacy. The solution worked because it kept practice close to real work and put help within reach. Scenario-based tests and short drills mirrored live incidents. A clear rubric set the standard for action-first, dignity-first notices. The Cluelabs AI Chatbot eLearning Widget, used as a Dignity-First Notice Coach, was trained on policy, style, and approved templates. It turned rough notes into clean first lines, flagged privacy risks, and sized messages for email, text, or signage. Anonymized logs and assessment scores showed where people got stuck, so the team tuned templates, prompts, and scenarios. The result was faster publishing, fewer edits, better privacy, and a steady voice across shifts and sites.
If you are weighing a similar approach, use the questions below to focus the conversation on fit, risks, and the work needed to succeed.
- Do we have recurring, high-stakes messages that often land during busy moments?
Why it matters: The biggest gains come when small wording errors carry real cost and speed matters. What it uncovers: If the need is rare or low risk, start with lighter template cleanup. If it is frequent and high impact, an assessment-led, AI-coached program can pay off fast.
- Do we agree on what a good notice looks like and have one source of truth?
Why it matters: Tests and an AI coach only work if policy, style, and templates are clear and consistent. What it uncovers: Gaps or conflicts in policy that will derail consistency. If you lack a simple rubric and two model notices, make that your first deliverable.
- Can we put practice and the coach where work happens without slowing people down?
Why it matters: Adoption hinges on two clicks or less from core tools and on short drills that fit shift rhythms. What it uncovers: Integration needs with your portal, LMS, or authoring tools, and whether you can schedule three-minute drills at shift start. If access is clunky, usage and results will drop.
- Are our privacy, legal, and security guardrails ready for AI-assisted drafting and log review?
Why it matters: Trust and compliance depend on tight controls for PII, retention, and audit. What it uncovers: Required approvals, redaction rules, retention windows, and whether AI use is permitted. If not, you can still run assessments while you set guardrails for the coach.
- Who will own the rubric, scenarios, weekly reviews, and supervisor calibration?
Why it matters: The program improves through small, steady updates, not a one-time launch. What it uncovers: Role clarity, bandwidth, and leadership support. Name owners for templates and scoring, set a weekly review, and plan a 15-minute monthly calibration so scores stay fair and coaching stays aligned.
If your answers show a real, repeatable need; a clear standard; smooth access; solid guardrails; and named owners, you are ready. Start with four scenarios, a short rubric, and the AI coach next to your dispatch tools. Measure the basics each week and improve a little at a time. That is how you get clear, dignity-first notices when it matters most.
Estimating Cost and Effort for an Assessment-Led, AI-Coached Security Communication Program
This estimate focuses on the work required to launch and stabilize a tests-and-assessments program paired with the Cluelabs AI Chatbot eLearning Widget as a Dignity-First Notice Coach. It assumes a mid-size corporate security operation with a GSOC and on-site teams, four core incident types for the initial build, and a pilot followed by phased rollout.
- Discovery and planning: Interview leaders and operators, review policies and past notices, confirm incident priorities, and define success metrics. Output is a short plan and a shared definition of what a good notice looks like.
- Design: Create the outcomes, scoring rubric, and blueprint for scenario-based assessments. Set the template governance rules so style and privacy standards are consistent across sites and shifts.
- Content production: Write realistic scenarios, model notices, and build assessment modules in Storyline. Ensure items match live conditions and include common variations.
- Technology and integration: Configure the Cluelabs AI Chatbot eLearning Widget with policy, style, and templates. Embed the coach in Storyline and on the GSOC portal. Set up SSO and basic access controls.
- Data and analytics: Stand up simple dashboards that track time to first notice, rubric scores, privacy flags, and coach usage. Set a weekly review rhythm.
- Quality assurance and compliance: Run privacy, legal, and security reviews of the coach prompt, data handling, and templates. Check accessibility for the learning modules.
- Piloting and iteration: Pilot with one shift, gather feedback, and tune scenarios, templates, and the coach prompt. Lock in the scoring calibration.
- Deployment and enablement: Publish quick reference guides, schedule micro-drills, and run short operator onboarding sessions.
- Change management: Brief stakeholders, announce the why and the wins, and set expectations for weekly improvements.
- Support and maintenance: Update scenarios and templates monthly, tune the coach prompt, and hold short supervisor calibration huddles. Monitor usage and outcomes.
| Cost Component | Unit Cost/Rate (USD) | Volume/Amount | Calculated Cost |
|---|---|---|---|
| Discovery and Planning (blended) | $130 per hour | 36 hours | $4,680 |
| Design: Outcomes and Rubric | $120 per hour | 16 hours | $1,920 |
| Design: Scenario Blueprint and Scoring | $120 per hour | 12 hours | $1,440 |
| Design: Template Governance Rules | $120 per hour | 12 hours | $1,440 |
| Content: Scenario Writing | $110 per hour | 16 scenarios × 3 hours | $5,280 |
| Content: Storyline Build (Assessments) | $100 per hour | 4 modules × 10 hours | $4,000 |
| Content: Model Notices and Templates | $110 per hour | 8 templates × 1.5 hours | $1,320 |
| Technology: Cluelabs AI Chatbot Widget Subscription | $199 per month | 12 months | $2,388 |
| Technology: Coach Setup and Prompt Tuning | $120 per hour | 16 hours | $1,920 |
| Technology: Embed Coach in Storyline and GSOC Portal | $130 per hour | 20 hours | $2,600 |
| Technology: SSO and Access Controls | $140 per hour | 8 hours | $1,120 |
| Data and Analytics: Metrics Setup and Dashboard | $120 per hour | 16 hours | $1,920 |
| Data and Analytics: Weekly Reporting (First 8 Weeks) | $110 per hour | 12 hours | $1,320 |
| Quality and Compliance: Privacy/Legal Review | $180 per hour | 12 hours | $2,160 |
| Quality and Compliance: Security Review | $150 per hour | 6 hours | $900 |
| Quality and Compliance: Accessibility Check | $120 per hour | 6 hours | $720 |
| Pilot: Run Sessions and Observe | $100 per hour | 6 hours | $600 |
| Pilot: Post-Pilot Updates | $110 per hour | 16 hours | $1,760 |
| Deployment: Job Aids and Quick References | $100 per hour | 4 hours | $400 |
| Deployment: Operator Onboarding Micro-Sessions | $100 per hour | 6 hours | $600 |
| Change Management: Stakeholder Briefings and Comms | $120 per hour | 8 hours | $960 |
| Support: Monthly Updates and Coach Tuning (6 Months) | $110 per hour | 36 hours | $3,960 |
| Support: Supervisor Calibration Huddles (6 Months) | $80 per hour | 18 hours | $1,440 |
| Estimated Subtotal | | | $44,848 |
| Contingency (10 percent) | | | $4,485 |
| Estimated Total | | | $49,333 |
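The totals follow directly from the line items: the subtotal is the sum of the 23 calculated costs, the contingency is 10 percent of that sum rounded to the dollar, and the total adds the two. A quick arithmetic check:

```python
# Reproduce the estimate's totals from the line items in the table above.
line_items = [
    4680, 1920, 1440, 1440, 5280, 4000, 1320, 2388, 1920, 2600,
    1120, 1920, 1320, 2160, 900, 720, 600, 1760, 400, 600, 960,
    3960, 1440,
]
subtotal = sum(line_items)            # 44848
contingency = round(subtotal * 0.10)  # 4485 (10 percent, rounded)
total = subtotal + contingency        # 49333
print(subtotal, contingency, total)
```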
- Effort and timeline: Typical effort is 6 to 8 weeks to reach pilot readiness, 2 weeks for pilot and tuning, then phased rollout. Support hours cover the first 6 months of optimization.
- What is one-time vs ongoing: Most design and content tasks are one-time. The chatbot subscription and support hours are ongoing. If your usage fits the free tier, subscription costs may be lower at first.
- How to scale up or down: Fewer scenarios, no SSO, and lighter dashboards reduce costs. More incident types, multilingual content, or deeper integrations increase costs.
- Where savings show up: Faster time to first notice, fewer supervisor edits, fewer resends, and fewer privacy corrections. Capture these to offset program spend.