Industry Analysis

NZ's AI Strategy Gap: 87% Adopt, 12% Scale

By Nic Fouhy · 13 min read

The gap between AI adoption and AI scaling in New Zealand business is not a small one. It is the dominant feature of the 2026 commercial landscape. 87% of NZ organisations report using some form of AI; only 12% have successfully scaled it across their operations. That 75-point delta cannot be explained by enthusiasm, awareness, or even spend. The technology is widely available, broadly cheap, and increasingly familiar to staff at every level. What is missing is the structural work that turns a pilot into infrastructure.

What the data tells us about the gap is consistent and sobering. 74% of NZ leaders worry their organisation lacks a cohesive AI plan. 52% explicitly identify unsanctioned AI use as an internal threat. Only 24% of New Zealanders have undertaken any formal AI education. And in the same period, an estimated 81% of NZ AI users at work are bringing their own consumer-grade tools to the desk because their employer has not provided a sanctioned alternative. The picture is of an economy that has discovered the technology faster than it has learnt how to govern, scale, or use it well.

What does the AI strategy gap actually look like in NZ business?

The NZ AI strategy gap is the operational distance between widespread, often unsanctioned AI usage at the staff level and coherent, scaled deployment at the organisational level. 87% of NZ organisations use AI in some form. Only 12% have it scaled across operations. The middle 75% are running pilots, point tools, and informal usage that never compounds into competitive advantage.

The gap shows up differently in different places. In professional services, it appears as shadow AI activity outpacing formal Microsoft 365 Copilot rollouts. In retail and e-commerce, it appears as a chatbot deployed on the website while the back-office pricing and inventory work remains manual. In manufacturing, it appears as a single agentic-supply-chain pilot stalling at one warehouse for 18 months before anyone scales it. The common thread is that AI has entered the organisation in fragments rather than through a cohesive plan, and the fragments do not add up to a system.

What the 12% who have scaled are doing differently is not particularly mysterious. They have invested in the data, process, and training work that the other 75% have deferred. They have a sanctioned AI environment that staff actually use, instead of a policy that pretends shadow AI is not happening. They treat AI as an infrastructural shift rather than a feature to ship, and the procurement, governance, and change-management discipline they apply reflects that. Most of that work is unglamorous. None of it is optional.

Why is shadow AI ("BYOAI") rising despite formal policies?

Shadow AI, also known as BYOAI (Bring Your Own AI), is rising in NZ because the productivity gain from AI is large enough and individual enough that staff treat formal policy as advisory. Roughly 81% of NZ AI users at work are bringing their own consumer-grade tools, and bans without a sanctioned alternative drive the activity further out of organisational sight rather than reducing it.

The dynamic is structural rather than disciplinary. An employee who can complete a routine task in a fraction of the time using ChatGPT, Claude, or Gemini will do so, regardless of what the IT policy says, particularly when the policy is unaccompanied by a sanctioned alternative that does the same job. The behaviour is rational at the individual level. It is also exactly the behaviour that strips an organisation of its ability to see, govern, or audit what is happening with its data.

The leaders we work with on this issue increasingly recognise that bans are an operating-model failure dressed up as a security control. The fix is to provide a sanctioned, closed-loop AI environment that does the work staff are already doing, train people on safe use, and audit usage rather than try to prohibit it. This is the same conclusion that came out of the professional services sector view, and it generalises well across most NZ industries.

What separates the 12% who scale from the 75% stuck in pilots?

The 12% of NZ organisations that have scaled AI treat it as an infrastructural redesign rather than a tool to bolt onto existing processes. AI deployment is approximately 10% technology investment and 90% cultural and procedural redesign. The redesign work, including data hygiene, citizen-developer enablement, training, and governance, is what lets a pilot turn into a system.

[Figure: The 90/10 split. AI technology is the small bright core; the much larger surrounding scaffolding of data, process, training, and governance is the work that determines whether the deployment scales.]

The pattern is most visible in three NZ business types that are bridging the gap consistently: mid-market logistics companies, B2B SaaS providers, and specialised engineering firms. Each of them shares a common operating posture. They run on data they already understand. They have leadership teams who treat AI procurement as an operational decision rather than a marketing one. They invest in staff training before procurement rather than after. And they have governance scaffolding designed in at the deployment stage, not retrofitted after the first incident.

The 75% stuck in pilots typically share a different pattern. AI was introduced as a vendor demo or a feature request, not as an operational redesign. Data was assumed to be ready when it was not. Training was assumed to be unnecessary because the tool "just worked." Governance was assumed to be IT's problem rather than the board's. The pilots that result are technically functional and organisationally invisible. They never compound into measurable outcomes, which means they never make the case for the next pilot, which means the organisation is in year three of "exploring AI" with nothing structural to show for it.

How can NZ SMEs use the MBIE AI Advisory Pilot to bridge the gap?

The MBIE AI Advisory Pilot, launched in early 2026, offers NZ SMEs up to $15,000 in co-funding to document a rigorous AI strategy. To qualify, a business must be NZ-registered, have fewer than 500 employees, have been operating for at least 12 months, and have no outstanding compliance issues with IRD or WorkSafe. Prior AI capability is not required.

The application process is built for SME pace. A business submits a brief proposal outlining the company, the consulting engagement they want to fund, the expected before-and-after outcome, and a quote from an MBIE-registered advisory firm. MBIE reviews applications on a rolling basis and pays approved co-funding directly to the advisory firm rather than to the business itself. The structure removes one of the typical SME blockers, which is the upfront cash exposure on professional services that the business cannot easily afford.

For SMEs in the 75% middle, the Advisory Pilot is a real lever. It pays for the work that most pilots skip: a documented strategy, a defensible procurement brief, a governance baseline, and a phased deployment plan. It does not pay for the technology itself, which is appropriate; the technology is rarely the limiting factor. The work the pilot funds is the 90% of the deployment that determines whether it scales. Our AI readiness audit service is designed to sit inside this Advisory Pilot frame, and the engagement we run for SMEs accessing it produces deliverables aligned to MBIE's expectations.

What governance frameworks should NZ boards adopt right now?

NZ boards should anchor AI governance in three layers: the Privacy Act 2020 as the legal baseline, ISO/IEC 42001 as the international management framework for the AI lifecycle, and a documented internal AI policy that operationalises both. Each layer answers a different question, and a board that treats any one of them as the whole answer ends up exposed.

The Privacy Act 2020 is the bedrock. Its 13 Information Privacy Principles apply directly to AI training data, automated profiling, and AI outputs, and they apply regardless of where a model is hosted or who built it. NZ organisations using AI are subject to the Act whether they like it or not, and treating Privacy Act compliance as a post-deployment checklist rather than a deployment input is one of the most common ways that pilots become liabilities.

ISO/IEC 42001 sits a layer above the Act. It is a comprehensive international framework detailing the structures, policies, and technical controls needed to manage an AI lifecycle safely. Pursuing certification is not a legal requirement in NZ, but the work it forces is the work most boards need their organisations to do anyway, and certification is increasingly attractive for firms with international ambition or institutional customers asking for assurance. The internal policy layer then translates the Act and the framework into the day-to-day decisions, controls, and escalation paths that staff actually use.

This piece is part of a wider series on the state of AI in NZ business across 2025 and 2026. For NZ SMEs ready to do the 90% of AI deployment that determines whether the 10% of technology actually pays off, our AI readiness audit service is the place that work usually starts.

Frequently asked questions

What is BYOAI, and how widespread is shadow AI in NZ?

BYOAI (Bring Your Own AI) describes employees using personal consumer-grade AI tools at work without organisational sanction or visibility. In New Zealand, an estimated 81% of AI users at work are doing exactly this, with shadow AI increasingly normalised inside organisations that have either not provided sanctioned tools or have policies that prohibit the tools their staff are already using. The phenomenon is widespread enough that 52% of NZ leaders explicitly identify it as a pressing internal threat.

How do I qualify for the MBIE AI Advisory Pilot?

The MBIE AI Advisory Pilot, launched in early 2026, offers up to $15,000 in co-funding to NZ SMEs to document a rigorous AI strategy. To qualify, a business must be New Zealand-registered, have fewer than 500 employees, have been operating for at least 12 months, and have no outstanding compliance issues with IRD or WorkSafe. Prior AI capability is not required. Applications go via a brief proposal with a quote from an MBIE-registered advisory firm, reviewed on a rolling basis.

Does the Privacy Act 2020 cover AI training data?

Yes. The Privacy Act 2020 and its 13 Information Privacy Principles apply directly to AI training data, automated profiling, and AI-generated outputs in New Zealand. NZ organisations using AI must comply with the Act regardless of where the model is hosted or who built it. The Act is the legal bedrock for AI governance in this country, and it sits in front of any voluntary standard or international framework an organisation might also choose to adopt.

Should NZ boards pursue ISO/IEC 42001 certification?

ISO/IEC 42001 certification is increasingly attractive for NZ boards at firms with international ambitions, regulated workflows, or institutional customers asking for assurance. The standard provides a comprehensive framework for managing the AI lifecycle, including the structures, policies, and technical controls required to do it safely. It is not a legal requirement in NZ. It is a voluntary discipline that gives boards a defensible answer to the governance question, and the work it forces is the work most firms need to do anyway.

Got a question about this?

I read every message. If you're thinking about AI for your business, start here.
