Winning With AI Isn’t a Policy Toggle — It’s an Operating Rhythm

Flip on a tool, draft some words in a policy without context, and call it “AI strategy,” and you’ll get speed without sense - often eloquent errors, confusion and bugs at scale. The real win is a tight triple-loop behavioural change model: use-case → framework → model → refine behaviour. Do that, and outputs arrive faster, sound like you, cite sources, pass audit and change behaviour. Miss it, and you scale and diversify chaos.



The reality check (receipts included)

Adoption is no longer the bottleneck - in McKinsey’s 2024 survey, 65% of organisations said they’re regularly using gen-AI, and in its 2025 cut 53% of C-suite execs say they personally use it at work. Yet wide adoption hasn’t meant wide impact.


Meanwhile, a wave of fresh project research is sobering. Echoing failure patterns that date back to the halcyon days of the Enterprise Systems era and carried through the Cloud “transformation” era - same causes, same characteristics, same consequences - Gartner projects that ≥30% of gen-AI projects will be abandoned after proof-of-concept by end-2025 (poor data, weak guardrails, unclear value from weak use-case engagement). And an MIT-linked analysis making the rounds says ~95% of enterprise pilots aren’t showing measurable P&L lift; interpret that as a warning to run playbooks, not demos.


Risk is two-sided. In a randomised trial, developers with GitHub Copilot finished a task 55.8% faster - a real ceiling for lift. However, Stanford/ACM found that users with AI code assistants wrote less secure code than controls, and newer testing across 100+ models shows ~45% of AI-generated code contains known vulnerabilities. Net effect: treat AI like power tools or contributors - expectation management, guards on, tests on, scanners on.


Trust also matters outside engineering. The Reuters Digital News Report 2024 shows majorities in the US and UK are uncomfortable with news “mostly produced by AI” - and this time the discomfort isn’t just with legacy mainstream media - pushing brands to prove provenance for public-facing content. That’s why adoption of Content Credentials/C2PA has exploded: Adobe’s Content Authenticity Initiative reports 5,000+ participating members as of August 2025.


And the regulator’s clock is ticking: the EU AI Act entered into force Aug 1, 2024; bans on unacceptable-risk systems started Feb 2, 2025; and GPAI obligations took effect Aug 2, 2025—with broader compliance waves still to come.


The engine - Use-case → Framework → Model

Start with a handful of high-value, bounded jobs—executive briefs, customer replies, incident triage, options papers—each with clear inputs, outputs, KPIs, and a risk class. Wrap those jobs in a living framework: voice presets, policy-to-prompt rules, risk routes (low→auto, medium→human, high→expert + citation checks), evaluator checks (style, citations, bias/PII), and provenance-first retention. Then (and only then) choose/shape the model: retrieve from your curated corpus at runtime; fine-tune only where repetition and format discipline justify it. This is how you get speed with receipts.
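
To make the risk-route idea concrete, here is a minimal sketch in Python. Everything in it (the route names, the UseCase shape, the risk classes) is illustrative, not part of any specific product:

```python
# Minimal sketch of risk routing: map a use-case's risk class to the
# review steps an output must pass before it reaches a reader.
# All names here are illustrative assumptions.
from dataclasses import dataclass

ROUTES = {
    "low": ["auto-publish"],
    "medium": ["human-review"],
    "high": ["expert-review", "citation-check"],
}

@dataclass
class UseCase:
    name: str          # e.g. "executive brief"
    risk_class: str    # "low" | "medium" | "high"

def route_for(use_case: UseCase) -> list[str]:
    """Return the review steps required for this use-case."""
    try:
        return ROUTES[use_case.risk_class]
    except KeyError:
        # Unknown risk class: fail safe to the strictest route.
        return ROUTES["high"]

print(route_for(UseCase("customer reply", "medium")))  # ['human-review']
```

The point of the fail-safe default is the same as the framework’s: when a job hasn’t been classified, it gets the expert route, not the fast one.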


Media & software: two example proof-points

The full paper sets out the complete Rare Innovation framework for quality AI adoption, with use-cases and deep examples; the following are just two proof points:


  • Media production. Ground scripts, storyboards, captions and multi-lingual dubs in your brand bible, SKUs and legal clauses (RAG). Pre-publish evaluators police tone, claims and PII (a minimal evaluator sketch follows this list); outputs ship with Content Credentials for “who/what/when/how.” That’s how you earn audience trust when over half say they’re uneasy with AI-authored news.


  • Software delivery. Yes, Copilot-class tools can accelerate throughput (~56% faster in the RCT). But the security data is unambiguous: users tend to write less secure code with assistants, and about 45% of AI-generated code carries known vulnerabilities. Make assistants repo-aware and policy-aware; demand spec-first prompts that emit code + tests + docs; and gate merges with SAST/DAST, license checks and threat-model notes (see the merge-gate sketch after this list).
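
For the media bullet, a pre-publish evaluator can start as a simple pipeline of checks. A minimal sketch, assuming hypothetical PII patterns and a banned-claims list (both illustrative; real deployments would use proper PII detection and your legal team’s claim rules):

```python
# Minimal pre-publish evaluator sketch: run a draft through simple
# PII and claims checks before it can ship. Patterns are illustrative.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]
BANNED_CLAIMS = ["guaranteed results", "risk-free"]

def evaluate(draft: str) -> list[str]:
    """Return a list of blocking issues; empty list means publishable."""
    issues = []
    for pattern in PII_PATTERNS:
        if pattern.search(draft):
            issues.append(f"possible PII matches {pattern.pattern}")
    for claim in BANNED_CLAIMS:
        if claim in draft.lower():
            issues.append(f"banned claim: '{claim}'")
    return issues

issues = evaluate("Contact jane@example.com for guaranteed results.")
print(issues or "OK to publish")
```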
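
And for the software bullet, the merge gate can be expressed as a policy check over a pull request’s contents and scanner output. A sketch under assumed inputs (the Finding shape, file heuristics and severity threshold are hypothetical; in practice these would come from your SAST/DAST and license tooling):

```python
# Merge-gate sketch: block a merge unless the change ships tests and
# docs, and the scanners report no high-severity findings.
from dataclasses import dataclass

@dataclass
class Finding:
    tool: str       # e.g. "sast", "license-check"
    severity: str   # "low" | "medium" | "high"

def gate_merge(changed_files: list[str], findings: list["Finding"]) -> list[str]:
    """Return reasons to block the merge; empty list means allow."""
    blockers = []
    if not any("test" in f for f in changed_files):
        blockers.append("no tests in change set (spec-first prompts should emit them)")
    if not any(f.endswith(".md") for f in changed_files):
        blockers.append("no docs updated")
    blockers += [f"{f.tool}: high-severity finding" for f in findings
                 if f.severity == "high"]
    return blockers

print(gate_merge(["src/api.py"], [Finding("sast", "high")]))
```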


Policy on paper ≠ policy in the workflow

Bans and breaches happen when “prompting in the wild” meets reality - Cisco’s 2024 benchmark found 27% of organisations had banned gen-AI (at least temporarily) over privacy and data-handling concerns. The fix is not an email about the rules; it’s policy embedded in prompts, routes and evaluators, with audit-ready logs.
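
“Policy in the workflow” can be made literal: compile the policy text into the prompt itself, and log what policy shaped which request. A minimal sketch (the rules, log filename and fields are illustrative assumptions):

```python
# Policy-to-prompt sketch: embed the rules in the system prompt and
# keep an append-only, audit-ready log of every routed request.
import json
import time

POLICY_RULES = [
    "Never include customer PII in outputs.",
    "Cite a source from the approved corpus for every factual claim.",
]

def build_prompt(task: str) -> str:
    """Prefix the task with the current policy, so the rules travel with it."""
    rules = "\n".join(f"- {r}" for r in POLICY_RULES)
    return f"Follow these policies:\n{rules}\n\nTask: {task}"

def log_request(task: str, route: str) -> None:
    # JSON lines give auditors a replayable trail of who ran what, under
    # which policy version, through which risk route.
    entry = {"ts": time.time(), "task": task, "route": route,
             "policy_version": "2025-08"}
    with open("ai_audit.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")

prompt = build_prompt("Draft a customer reply about the billing change.")
log_request("customer reply", route="human-review")
```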


Build → Run → Improve (how you scale without drama)

Build small: pick 3–5 use-cases, curate a lean approved corpus (freshness tags, audience, risk), stand up RAG, author “golden prompts,” and fine-tune only where repetition pays. Run pragmatically: pilot in the flow of work, enforce risk routes, track edit-distance and time-to-acceptable draft. Improve continuously: feed approved outputs back into the corpus, prune stale content, tighten presets, and scale only when KPIs hold and incident rates are near zero.
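
“Edit-distance” here just means how much a human had to change a draft before accepting it. One minimal way to track it with the Python standard library (the 0.8 acceptance threshold is an illustrative assumption, not a recommendation):

```python
# KPI sketch: measure how far the accepted text drifted from the AI
# draft. A ratio near 1.0 means the draft was nearly review-ready.
from difflib import SequenceMatcher

def edit_similarity(draft: str, accepted: str) -> float:
    """1.0 = identical, 0.0 = completely rewritten."""
    return SequenceMatcher(None, draft, accepted).ratio()

draft = "The Q3 brief highlights three risks and two mitigations."
accepted = "The Q3 brief highlights three risks, two mitigations and one open issue."
score = edit_similarity(draft, accepted)
print(f"similarity {score:.2f}")  # track this per use-case over time
print("review-ready" if score > 0.8 else "needs preset tuning")
```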


The payoff

Drafts arrive review-ready, not “AI-ish.” Editors keep more of pass one. Review cycles shrink. Risk drops because compliance lives inside the workflow. Institutional memory compounds as today’s approved outputs become tomorrow’s exemplars. And when regulators or auditors knock, you don’t hand them vibes; you hand them provenance—precisely what the market demands and the AI Act will enforce over the next waves.


Rare Bottom line 

With adoption high and sometimes hype-driven, and the costs of abandonment all too real, the advantage now goes to organisations and teams that run a disciplined use-case → framework → model rhythm, grounded in a Triple Loop approach to digital behaviour change: an AI tuned to your truth, measured by edit-distance and cycle time, fenced by policy-to-prompt rules, and stamped with provenance, so value, outcomes, transformation speed and enablement finally come with trust and high project success.


Rare Says, for AI considerations

Start with people (and getting to know them), not platforms. Frame the use-case together and run the triple loop - use-case → framework → model → refine behaviour - because “tool-first” may look like it buys speed, yet without context and sense you buy, and scale, chaos.


Read or download the full, free Rare Innovation “Winning with AI” paper.

 
