Right now, 77% of employees are pasting company information into AI tools — and most are using personal accounts to do it. For New Zealand organisations bound by the Privacy Act 2020 and the new Privacy Amendment Act 2025, that statistic should stop you in your tracks. Your staff are almost certainly using ChatGPT, Copilot, and Gemini at work. The question is not whether they are using AI — it is whether you can see it, govern it, and stop sensitive data from walking out the door.
AI adoption across Kiwi businesses has accelerated rapidly through 2025 and into 2026. But security has not kept pace. Nearly half of New Zealand businesses now say employees accidentally exposing data through AI-driven processes is their biggest cyber risk, and AI-related attacks have more than doubled year-on-year. The gap between AI enthusiasm and AI governance is where breaches happen.
This guide explains the three AI security threats every NZ business needs to understand — shadow AI, prompt injection, and data leakage — and the practical steps you can take to address each one without slowing down innovation.

Shadow AI: The Threat You Cannot See
Shadow AI is the 2026 evolution of shadow IT. It refers to employees using AI tools — chatbots, code assistants, image generators, writing aids — without IT or security team approval. Unlike traditional shadow IT, shadow AI is nearly invisible. There is no software to install. An employee simply opens a browser tab, pastes your quarterly financial data into a prompt, and hits enter.
The scale is staggering. Research shows that 56% of organisations now have some level of agentic AI exposure, with 23% of it coming from shadow deployments that IT knows nothing about. A full 32% have zero visibility into what AI agents are doing within their networks.
Why Shadow AI Is Particularly Risky in New Zealand
New Zealand’s Privacy Act 2020 requires organisations to take reasonable steps to protect personal information. The Privacy Amendment Act 2025, which takes full effect in May 2026 with the new Information Privacy Principle (IPP) 3A, strengthens these obligations further. When an employee pastes customer data into an offshore AI tool, your organisation may be breaching these requirements without knowing it.
The NZ Government’s Responsible AI Guidance for Businesses and the Public Service AI Framework both emphasise the need for visibility and governance over AI tool usage. Organisations that cannot demonstrate which AI tools their staff use — and what data flows into them — face regulatory and reputational risk.
How to Address Shadow AI
Effective shadow AI governance does not mean banning AI tools. Blanket bans simply drive usage underground. Instead, you need three capabilities:
- Discovery: Identify every AI tool employees are accessing, including unapproved ones, with confidence scoring to prioritise risk
- Policy enforcement: Set zero-trust, identity-based access controls that determine who can use which AI tools and under what conditions
- Monitoring: Maintain continuous visibility into AI usage patterns across both office and remote workers through secure web gateway routing
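The discovery step above can be sketched in a few lines. The log format, domain list, sanctioned set, and confidence weights below are illustrative assumptions, not a reference to any specific product or feed:

```python
from collections import defaultdict

# Known AI tool domains mapped to an illustrative confidence weight (0-1).
AI_DOMAINS = {
    "chat.openai.com": 0.9,
    "claude.ai": 0.9,
    "gemini.google.com": 0.8,
}
SANCTIONED = {"gemini.google.com"}  # tools IT has approved

def discover_shadow_ai(proxy_log_lines):
    """Rank unsanctioned AI tool usage per user for triage."""
    hits = defaultdict(int)
    for line in proxy_log_lines:
        # Assumed log format: "<user> <domain> <bytes_uploaded>"
        user, domain, _ = line.split()
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            hits[(user, domain)] += 1
    # Higher-confidence domains with more hits rank first.
    return sorted(
        ((user, domain, count * AI_DOMAINS[domain])
         for (user, domain), count in hits.items()),
        key=lambda item: item[2],
        reverse=True,
    )
```

In practice the same pattern runs over secure web gateway or DNS telemetry rather than a flat log file, but the output is the same: a prioritised list of who is using what, so the riskiest unsanctioned tools get reviewed first.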
Prompt Injection: When Your AI Application Turns Against You
If your organisation has deployed — or is building — AI-powered applications (customer service chatbots, internal knowledge assistants, document processors), you face a different class of threat: prompt injection.
Prompt injection is a technique where a malicious user crafts input designed to override your AI model’s instructions. A well-crafted prompt injection can make your chatbot ignore its safety rules, reveal its system prompt (and any proprietary instructions), or extract training data. The OWASP (Open Worldwide Application Security Project) Foundation now includes prompt injection in its Top 10 for LLM Applications, ranking it as the number one risk.
What Makes Prompt Injection Different from Traditional Attacks
Traditional web application firewalls (WAFs) are designed to catch SQL injection, cross-site scripting, and similar attacks. They look for known patterns in structured inputs. Prompt injection exploits the fundamentally different nature of natural language — the input is unstructured, context-dependent, and deliberately ambiguous. A conventional WAF will miss it entirely.
Purpose-built AI security uses a score-based detection model rather than simple pattern matching. Each incoming prompt receives an injection score — for instance, a score of 1 means the prompt is very likely an injection attempt, while a score of 99 means it is almost certainly safe. This probabilistic approach handles the nuance of natural language far better than binary rules.
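Acting on such a score is a policy decision, not a detection one. A minimal sketch of that decision layer, with threshold values and action names that are our own illustrative assumptions:

```python
# Injection scores follow the convention described above:
# 1 = almost certainly an injection attempt, 99 = almost certainly safe.
# The cut-off points below are illustrative, not vendor defaults.

def injection_policy(score: int) -> str:
    """Map a probabilistic injection score onto an action."""
    if score < 20:      # high confidence this is an attack: stop it
        return "block"
    if score < 50:      # ambiguous: allow nothing, queue for review
        return "flag"
    return "allow"      # likely a benign prompt
```

The useful property of the score-based model is exactly this tunability: a bank might block everything under 50, while an internal prototype might only flag, and the detection layer does not change either way.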
What Prompt Injection Protection Looks Like
Modern AI application security sits at the edge — between your users and your AI models — and provides:
- Real-time prompt scanning that detects injection attempts before they reach your model
- Response scanning to catch any sensitive data the model might inadvertently include in its output
- Content safety guardrails that block harmful, non-compliant, or off-topic responses automatically
- Model-agnostic deployment, meaning the protection works regardless of whether you use OpenAI, Anthropic, Google, or a self-hosted model
This is not about modifying your application code. The security layer sits in front of your models, inspecting traffic at the edge with minimal latency impact.
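The edge pattern itself is simple to picture: scan the prompt before the model sees it, scan the response before the user does, and stay agnostic about which model sits in the middle. The toy detectors below stand in for real scanners and are assumptions for illustration only:

```python
def scan_prompt(prompt: str) -> bool:
    """Toy detector: flag prompts that try to override instructions."""
    return "ignore previous instructions" in prompt.lower()

def scan_response(response: str) -> bool:
    """Toy detector: flag responses leaking an API-key-like token."""
    return "sk-" in response

def guarded_call(prompt: str, call_model) -> str:
    """Wrap any model callable with pre- and post-scanning."""
    if scan_prompt(prompt):
        return "[blocked: possible prompt injection]"
    response = call_model(prompt)
    if scan_response(response):
        return "[blocked: sensitive content in response]"
    return response
```

Because the model is passed in as a plain callable, the same guard works in front of OpenAI, Anthropic, Google, or a self-hosted model, which is what model-agnostic deployment means in practice.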
AI Data Leakage: Preventing Your Secrets from Becoming Training Data
Data leakage is the most tangible AI security risk. A Cyberhaven study found that 11% of data employees paste into ChatGPT is confidential — trade secrets, personally identifiable information (PII), intellectual property, and internal communications. And once that data enters an AI provider’s system, you may have limited control over how it is stored, processed, or used to improve models.
For NZ organisations, this creates a direct compliance problem. Under the Privacy Act 2020, personal information sent to an offshore AI provider may constitute a cross-border disclosure. You need to demonstrate that the receiving party offers comparable protections — a difficult argument when the data is flowing into a general-purpose chatbot with billions of users.
The Three Layers of Data Leakage Prevention
Comprehensive AI data leakage prevention operates at three levels:
1. Prompt-level DLP (Data Loss Prevention)
Before any prompt reaches an AI model, it is scanned for sensitive content — credit card numbers, email addresses, phone numbers, tax numbers (IRD numbers for NZ), health information, and proprietary code. Detected PII can be automatically redacted or the prompt can be blocked entirely.
2. Response-level scanning
The model’s output is also inspected before it reaches the user. This catches scenarios where a model inadvertently generates or reveals sensitive information — a growing risk as models become more capable and are given access to broader data sets.
3. Application posture management
For sanctioned AI tools (the ones you have approved for employee use), AI Security Posture Management (AI-SPM) continuously scans for misconfigurations, overly permissive access, and data exposure risks. This is the equivalent of a CASB (Cloud Access Security Broker) purpose-built for AI applications.
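The prompt-level layer (layer 1 above) can be sketched as a redaction pass over outbound text. The regexes below are deliberately crude illustrations; production DLP uses validated detectors (for example, Luhn checks on card numbers) rather than bare patterns:

```python
import re

# Illustrative detectors only. Real DLP engines combine patterns with
# validation, context, and machine-learned classifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IRD": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),  # common IRD number format
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders before the prompt leaves."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, found
```

The same scan, run in the other direction on model output, gives you layer 2; the list of labels found feeds the audit trail either way.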
AI Infrastructure Security: Protecting the Plumbing
Organisations running their own AI models or connecting to multiple AI providers face infrastructure-level security challenges. AI infrastructure security provides:
- Centralised gateway management for routing all model traffic through a single control plane, with edge-stored API keys so secrets never sit on the client side
- Identity-verified connections using zero-trust principles for internal API access between AI agents and backend systems
- Request caching and spending limits per model to prevent runaway costs and abuse
- Complete audit logging for every AI interaction, giving your security team and compliance officers a full trail
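The value of a single control plane is that caching, spend caps, and audit logging all happen at one choke point. A minimal sketch, in which the model names, costs, and `call_model` stub are illustrative assumptions:

```python
class AIGateway:
    """Toy gateway: per-model spend caps, request caching, audit log."""

    def __init__(self, spend_limits: dict[str, float]):
        self.spend_limits = spend_limits              # cap per model (NZD)
        self.spend = {m: 0.0 for m in spend_limits}
        self.cache: dict[tuple[str, str], str] = {}
        self.audit_log: list[dict] = []

    def route(self, model: str, prompt: str, cost: float, call_model) -> str:
        key = (model, prompt)
        if key in self.cache:                         # cache hit: free, but logged
            self._log(model, prompt, cached=True)
            return self.cache[key]
        if self.spend[model] + cost > self.spend_limits[model]:
            raise RuntimeError(f"spend limit reached for {model}")
        response = call_model(prompt)
        self.spend[model] += cost
        self.cache[key] = response
        self._log(model, prompt, cached=False)
        return response

    def _log(self, model: str, prompt: str, cached: bool) -> None:
        # Log metadata, not prompt contents, to keep the trail itself safe.
        self.audit_log.append(
            {"model": model, "prompt_chars": len(prompt), "cached": cached}
        )
```

Note that the audit log records metadata rather than raw prompts; an audit trail that stores sensitive prompts verbatim would recreate the data leakage problem it exists to prove you have solved.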
For NZ organisations subject to NZISM (New Zealand Information Security Manual) requirements — particularly government agencies, councils, and education providers — this audit trail is essential for demonstrating control over AI systems.
The NZ Regulatory Landscape for AI Security in 2026
New Zealand does not yet have AI-specific legislation, but several existing and emerging frameworks create clear obligations:
| Framework | Relevance to AI Security |
|---|---|
| Privacy Act 2020 | Governs collection, use, and disclosure of personal information — applies directly to data flowing into AI tools |
| Privacy Amendment Act 2025 (IPP 3A) | Introduces Information Privacy Principle 3A, strengthening obligations around indirectly collected personal information; full effect from May 2026 |
| NZISM | Security controls for government agencies, increasingly adopted as a benchmark by private sector |
| Responsible AI Guidance for Businesses | NZ Government guidance on responsible AI deployment in the private sector |
| Public Service AI Framework | Sets expectations for government agency AI use |
Organisations that implement AI security now are not just protecting against threats — they are building the governance foundation that regulators and customers will increasingly demand.
How to Get Started with AI Security
You do not need to boil the ocean. A practical path to securing your organisation’s AI usage looks like this:
Step 1: Discover what you have. Run a shadow AI audit. Identify which AI tools your employees are using, what data is flowing into them, and which tools lack enterprise security controls.
Step 2: Set governance policies. Define which AI tools are sanctioned, who can use them, and what data types are permitted. Align these policies with your Privacy Act obligations.
Step 3: Deploy edge-based AI security. Implement prompt scanning, response scanning, and DLP capabilities that sit between your users and AI models — without requiring changes to your applications.
Step 4: Establish monitoring and audit trails. Ensure every AI interaction is logged and that your security team has dashboards showing usage patterns, blocked threats, and policy violations.
Step 5: Review and iterate. AI threats evolve quickly. Review your AI security posture quarterly, update policies as new tools emerge, and run regular training for staff.
ASI Solutions provides a comprehensive AI security service built in partnership with Cloudflare, covering all three pillars: AI application security, workforce AI governance, and AI infrastructure protection. The service is locally supported by Kiwi engineers and designed for New Zealand’s regulatory environment. If you are looking for a practical starting point, book a meeting with the ASI Solutions team to discuss your organisation’s AI security posture.