
AI in NZ Professional Services: Shadow AI to Sanctioned Stacks

By Nic Fouhy · 12 min read

A senior associate at a Wellington law firm pastes a draft settlement agreement into a free ChatGPT account to tighten the clauses before partner review. The firm has a written policy prohibiting that exact action. The associate knows about the policy. The work still ships through ChatGPT because the alternative, a manual rewrite under deadline pressure, costs an hour the file does not have.

Multiply that by every law, accounting, marketing, and engineering practice in the country and the picture sharpens quickly. AI is already inside NZ professional services. The question is no longer whether staff use it, but whether firms have provided the infrastructure to make that use safe, auditable, and aligned with client trust. We see the gap between policy and practice every week, and the firms closing it are pulling away from those that have not.

What does shadow AI actually look like inside NZ professional services firms?

Shadow AI in NZ professional services means staff using personal ChatGPT, Claude, or Gemini accounts to draft, summarise, and analyse work product, often with confidential client data, outside the firm's IT controls. Survey data shows 84% of NZ knowledge workers now use generative AI, and 81% of those users bring their own consumer-grade tools to the desk.

The pattern is consistent across the four core professional services verticals. In law, associates use ChatGPT to summarise discovery bundles, redraft contract clauses, and prepare client memos. In accounting, seniors use it to interpret tax position papers, summarise audit findings, and turn raw working papers into client-ready commentary. In marketing agencies, account managers use it to ideate campaign concepts and rewrite client briefs. In engineering consultancies, project staff use it to draft tender responses and condense technical reports.

The shared thread is that the work itself is genuinely faster. Power users in NZ knowledge-work surveys report saving more than 30 minutes a day on administrative tasks alone, and that figure compounds across a firm of 200 people. The catch is that most of that time saving is happening on infrastructure the firm does not control, with prompts that may include client names, financial details, internal commentary, and draft positions. Once that data leaves the laptop, the firm has no record of what was sent, where it now lives, or who can see it.

Why are bans on AI tools failing in legal, accounting, and consulting?

Bans fail in NZ professional services because the productivity gain from AI is large enough, and individual enough, that staff treat policy as advisory rather than binding. When a partner asks for a 40-page bundle summarised by morning, the associate who can deliver it through ChatGPT in twenty minutes will, regardless of what the IT policy says.

We have spoken with managing partners who introduced strict no-AI policies in 2024, expecting compliance to follow. Twelve months on, the firms that audited usage found the opposite. Shadow AI activity rose, not fell, because staff stopped asking IT for tools they had been told not to use and went around the firewall instead. Personal phones, personal accounts, personal email forwarding, all of it routine.

The deeper issue is structural. Professional services firms run on billable hours, and AI compresses the time required for routine work without compressing what the client expects. An associate who drafts a 90-minute memo in 30 minutes can either bill the saved hour to something else or absorb the gap. The economic incentive to use AI is not coming from the firm. It is coming from the workload itself. A ban does not remove the incentive; it just removes the firm's visibility over how it is being satisfied. That is the worst of both worlds.

Firms that have moved past this stage stopped treating shadow AI as a discipline problem and started treating it as an infrastructure problem. The fix is rarely a sterner policy. It is a sanctioned tool that does the job staff are trying to do anyway, with the data protections the firm needs.

What does sanctioned, closed-loop AI deliver that consumer ChatGPT cannot?

Closed-loop AI environments such as Microsoft 365 Copilot, the enterprise tier of ChatGPT, and Google Workspace AI keep prompts inside the firm's tenant, exclude content from training the underlying model, and respect existing identity and access controls. The functional output looks similar to consumer ChatGPT; the legal posture is materially different.

For an NZ firm holding client information that touches Privacy Act 2020 obligations, professional standards rules, or statutory confidentiality, the difference is not marginal. Closed-loop tools come with contractual commitments around data residency, retention, and access. They integrate with the firm's identity provider so that an associate's access to a matter is reflected in what the AI can read on their behalf. They produce audit logs the firm can review.

[Diagram: Shadow AI versus sanctioned AI, same productivity, different exposure]

The early NZ accounting and compliance results bear this out. Firms running structured Copilot pilots have reported 74% reductions in audit effort and 63% cuts to compliance downtime, gains that show up because staff can use the tool against the firm's actual files rather than synthetic test data. Marketing teams running enterprise AI on briefing, content, and campaign analysis report the same direction of travel; global research now puts AI-driven marketing revenue at $47 billion for 2025, with 95% of marketing professionals confirming measurable time savings. The technology is the same as the consumer version. The difference is that the work product can stay in the firm.

How are firms like Beca scaling beyond text generation into operational AI?

The next stage of AI maturity in NZ professional services moves past document drafting into the operational core of the business. Engineering consultancy Beca has executed more than 100 AI-driven deployments spanning asset management, spatial planning, and hazard prediction. The pattern there is not about saving an hour on a memo; it is about reshaping how the firm's expert services are delivered.

That step is the one most firms underestimate. Drafting and summarisation are the surface layer of what generative AI can do, and they are the easiest pilots to run. The harder, more valuable work is embedding AI into the firm's domain models. For an engineering consultancy, that looks like a hazard-prediction model trained on the firm's project history, augmented by external data sources, and exposed to project teams as a recommendation layer inside their existing tooling. For a law firm, it looks like a precedent retrieval system that searches the firm's own work alongside public case law. For an accounting firm, it looks like an automated review layer that reads working papers and flags anomalies before a manager opens the file.

Operational AI is harder because it requires the firm to understand its own data well enough to expose it to a model. That work (cleaning matter management systems, normalising client records, tagging content) is unglamorous and often deferred. Firms that have done it can deploy AI features in weeks. Firms that have not are still six months from their first useful pilot. We work with firms on both sides of that line, and the gap between them is widening rather than closing. For larger firms exploring this work, our enterprise services cover the data foundations that make operational AI viable.

What does this mean for headcount and hiring inside NZ professional services?

The honest answer is that AI is not driving redundancies in NZ professional services. It is slowing the rate at which firms add new headcount. Roughly 45% of AI adopters report a reduction in net new hires, with the saved capacity absorbed into existing teams rather than converted into layoffs.

The framing matters. A graduate intake of eight may become an intake of six. A second senior associate hire may be deferred for a year. A marketing agency may take on additional client work without growing its delivery team. None of those decisions show up as a headcount cut on the firm's payroll. They show up as a flatter growth curve in staff numbers against a steeper growth curve in revenue. From the inside, that looks like productivity. From the outside, it looks like nothing has changed.

For staff already in the firm, the practical effect is upward pressure on the work people do. Routine drafting, document review, basic research, status reporting: the work that used to fill a junior calendar is increasingly handled by AI under supervision. The roles staff move into involve more judgment, more client interaction, and more work that an AI cannot do without a person in the loop. That transition is uneven, and firms that handle it well are investing heavily in training, structured AI literacy programmes, and clear professional development paths.

This piece sits inside a wider series on the state of AI in NZ business across 2025 and 2026. For firms ready to move from policy to infrastructure, our AI strategy and integration services walk through the steps from sanctioned tooling to operational AI without skipping the governance work in the middle.

Frequently asked questions

Is using ChatGPT with client data a Privacy Act 2020 breach?

Pasting identifiable client information into a public ChatGPT account can constitute a breach of the Privacy Act 2020, particularly under the principles covering disclosure, storage, and offshore transfer of personal information. Public consumer LLMs may use submitted content to train future models and store it in jurisdictions without equivalent privacy protections. NZ professional services firms should treat consumer accounts as unsuitable for any client data and move staff onto closed-loop enterprise tools that contractually exclude training on customer prompts.

What is closed-loop AI, and how does it differ from public ChatGPT?

Closed-loop AI describes enterprise environments where prompts and outputs stay inside a tenant boundary, never train the underlying model, and respect the organisation's existing access controls. Microsoft 365 Copilot, the enterprise tier of ChatGPT, and Google Workspace AI all operate on this basis. Public ChatGPT, by contrast, runs on a consumer agreement that permits data retention, model improvement, and broader operational access. The functional output is similar; the legal and security posture is not.

How do NZ firms measure ROI on Microsoft 365 Copilot?

The clearest signal is hours saved on routine drafting, summarisation, and analysis, multiplied by the loaded cost of the staff doing the work. NZ accounting firms running Copilot pilots have reported 74% reductions in audit effort and 63% cuts to compliance downtime. A useful baseline is to measure the time spent on a representative document or matter before deployment, then compare after three months of structured use. Track adoption rate alongside hours saved; unused licences distort any ROI calculation.
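The arithmetic above is simple enough to sketch. The figures below are illustrative assumptions only (a hypothetical 200-person firm, not benchmark data); the structure is the point: value comes only from active users, while licence cost applies to every seat, which is why adoption rate belongs in the calculation.

```python
# Back-of-envelope Copilot ROI sketch. All inputs are illustrative
# assumptions; substitute your firm's own measured baseline.

def copilot_roi(staff, adoption_rate, mins_saved_per_day,
                loaded_hourly_cost, licence_cost_per_user_year,
                workdays=230):
    """Annual net value (NZ$) of a Copilot deployment for one firm."""
    active_users = staff * adoption_rate              # unused licences earn nothing
    hours_saved = active_users * mins_saved_per_day / 60 * workdays
    gross_value = hours_saved * loaded_hourly_cost
    licence_cost = staff * licence_cost_per_user_year  # paid for every seat
    return gross_value - licence_cost

# Hypothetical inputs: 200 staff, 60% adoption, 30 min/day saved,
# NZ$150 loaded hourly cost, NZ$720 per user per year licensing.
net = copilot_roi(200, 0.60, 30, 150, 720)
```

Rerunning the same function with the pre-deployment baseline and the three-month figures gives the before/after comparison the answer above describes, and dropping the adoption rate shows how quickly unused licences erode the result.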

Can banning AI in a firm actually work?

Bans almost never work in professional services. Staff who already use generative AI on personal devices simply move it further out of sight, which is the opposite of the governance outcome the ban was meant to achieve. The evidence from NZ knowledge-work surveys shows shadow AI usage rises after restrictive policies, not falls. The practical alternative is to provide a sanctioned, closed-loop tool, train staff on safe use, and audit usage rather than try to suppress it.

Thinking about AI for your business?

Most conversations start with a specific pain point. What's yours?
