Concept mockup. Not an official MPI publication. Independent concept artefact by Andrew Loughran in support of an application for the Senior Adviser (AI for Standards) role.
Biosecurity Import & Export Standards Directorate Ko ngā Paerewa Whakauru me te Whakaputa Koiora
Concept artefact · v1.0 · May 2026
Needs analysis · IHS revision lifecycle

The IHS revision lifecycle has four measurable bottlenecks. Three are amenable to safe AI assistance.

An honest read of the standards lifecycle, the work pattern at each step, and the cost in analyst time. Built from public BIES outputs, the published Review of Submissions corpus, and the IHS development process described under sections 22 to 24L of the Biosecurity Act 1993.

Sources: public MPI documents only
Method: document corpus review and process mapping
Author: Andrew Loughran
Date: May 2026
Executive summary

Five sentences.

BIES runs a continuous portfolio of IHS development and amendment work, with seven to ten active public consultations open at any time across plant, animal, biological, and forestry pathways. The slowest stages of the IHS revision lifecycle, in observed analyst time, are submission analysis (post-consultation), draft consistency review (pre-consultation), and plain-language guidance generation (post-publication). All three are pattern-recognition and structured drafting tasks, exactly the work generative AI is well suited to first-draft. Scientific risk assessment and regulatory judgement are not. A four-tool suite covering synthesis, consistency, plain language, and amendment change explanation, each operating to a strict human-in-the-loop architecture, can free a conservatively estimated 0.6 to 1.2 FTE of analyst time per year, returned to the technical work that AI cannot do.

The lifecycle

The IHS revision lifecycle, end to end.

The lifecycle below is a simplification of the process described under sections 22 to 24L of the Biosecurity Act 1993 and the published BIES consultation policy. Steps highlighted in amber are the bottlenecks where analyst time concentrates. Steps highlighted in green are the steps a Standards Companion tool addresses.

STEP 01 · Risk assessment & scoping · Variable, weeks to months
STEP 02 · Drafting the standard · Multi-week, internal QA cycles · TOOL 2
STEP 03 · Public consultation · 6 to 8 weeks, fixed
STEP 04 · Submission analysis & RoS drafting · 40 to 80 hrs per IHS · TOOL 1
STEP 05 · Revision, provisional, review window · 3 to 4 weeks
STEP 06 · Publication & importer guidance · Drafting, plus ongoing query response · TOOL 3
STEP 07 (cycles back) · Amendment notification to importers · Every IHS amendment triggers a notification email and industry update, with drafting and consistency costs spread across hundreds of stakeholders · TOOL 4
Output volume

What BIES actually publishes.

The figures below are drawn from a sample of the BIES consultation register and the Review of Submissions corpus published on mpi.govt.nz. They are indicative orders of magnitude, not internal MPI data.

Consultation register, indicative annual flow

Source: BIES consultation register on mpi.govt.nz, as at 9 May 2026.
Active consultations at any time
7 to 10
Closed consultations per year (estimate)
~25
Plant Product Imports stream
~10
Animal & aquatic product stream
~8
Forestry & biological products stream
~5
OMAR & export-side stream
~2

Submission volume per consultation, observed range

Source: published Review of Submissions documents, sample: grain & seeds 2021 (8); biological products 2023 (24); cut flowers 2025 (consultation extended, multi-cycle); transitional facilities H&SW guidance 2025 (low-volume).
Narrow technical amendment (e.g. grain MRL update)
5 to 10
Routine standard refresh
10 to 25
Significant pathway change
25 to 60
Contentious or industry-mobilised
60 to 200+
Where time concentrates

Four bottlenecks. Each named, scoped, and quantified.

Each of the four below is a discrete cost in analyst time. Each is amenable to safe AI assistance with a defensible architecture. None of them is the scientific risk assessment work that sits outside what genAI should be doing.

Bottleneck 1 · Submission analysis and Review of Submissions drafting

Step 04 of the lifecycle. Tool 1 addresses this.
Pattern observed

After a consultation closes, an analyst must read every submission individually, identify the substantive themes, count submitters per theme, identify minority and dissenting views, verify any cited evidence, propose a disposition for each issue raised, draft the formal Response section of the Review of Submissions, prepare the table of submissions received, and arrange for full-text submissions to be appended. The published Review of Submissions document follows a fixed BIES structure: Introduction, Submissions received, WTO submissions, Response to submissions, Changes to the standard, Copy of submissions.
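The counting and minority-view flagging at the heart of this work is mechanical once themes are assigned. A minimal sketch, assuming hypothetical submission records where an LLM proposes theme tags and the analyst confirms them; everything downstream of the confirmed tags is plain, auditable code:

```python
from collections import defaultdict

# Hypothetical submission records: an ID plus analyst-confirmed theme tags.
# In a real tool the model proposes the tags; the analyst confirms them.
submissions = [
    {"id": "S01", "themes": ["treatment efficacy", "cost to importers"]},
    {"id": "S02", "themes": ["treatment efficacy"]},
    {"id": "S03", "themes": ["documentation burden"]},
]

def theme_counts(subs):
    """Map each theme to the submitters who raised it, so the RoS
    'Submissions received' table can cite exact counts per issue."""
    counts = defaultdict(list)
    for s in subs:
        for theme in s["themes"]:
            counts[theme].append(s["id"])
    return dict(counts)

counts = theme_counts(submissions)
# Minority views: a theme raised by a single submitter still needs a
# disposition in the Response section, so flag it explicitly.
minority = [t for t, ids in counts.items() if len(ids) == 1]
```

The point of keeping this layer deterministic is that every number in the published RoS traces back to a list of submission IDs the analyst can audit.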

Cost, observed range
Consultation type | Submissions | Analyst hours, indicative | Annual frequency
Narrow technical | 5 to 10 | 15 to 25 | ~8 per year
Routine refresh | 10 to 25 | 25 to 50 | ~10 per year
Significant pathway change | 25 to 60 | 50 to 100 | ~5 per year
Contentious or mobilised | 60 to 200+ | 100 to 200+ | ~2 per year

Annual analyst time, indicative central case: 950 to 1,650 hours, before any drafting of consequential IHS revisions.

Bottleneck 2 · Drafting consistency before consultation

Step 02 of the lifecycle. Tool 2 addresses this.
Pattern observed

Each draft IHS must conform to the BIES IHS template (sections including Scope, Definitions, Eligible Countries, Pre-export Requirements, Treatment, Documentation, Inspection, Certification, Equivalence, Audit). Terminology must be consistent with prior published IHSs (for example, the difference between "devitalisation", "treatment", and "sterilisation"). Cross-references to the Biosecurity Act, IPPC ISPMs, OIE codes, and prior MPI standards must be valid. Section numbering and clause hierarchy must align. Quality control catches inconsistencies, but late, after consultation has identified them publicly, as observed in the cut flowers and aquatic animal products consultations.
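The template-conformance part of this check needs no model at all. A minimal sketch, assuming the section names listed above are the required headings (the real BIES template may differ); terminology and cross-reference checks would layer on top of the same advisory pattern:

```python
import re

# Section headings assumed required by the BIES IHS template.
REQUIRED_SECTIONS = [
    "Scope", "Definitions", "Eligible Countries", "Pre-export Requirements",
    "Treatment", "Documentation", "Inspection", "Certification",
    "Equivalence", "Audit",
]

def missing_sections(draft_text):
    """Report template sections absent from a draft IHS.

    Advisory only: the drafter accepts or rejects each finding.
    Matches a heading at the start of a line, with or without numbering."""
    return [s for s in REQUIRED_SECTIONS
            if not re.search(rf"^\s*\d*\.?\s*{re.escape(s)}\b",
                             draft_text, re.MULTILINE | re.IGNORECASE)]

draft = "1. Scope\n2. Definitions\n3. Treatment\n4. Inspection\n"
gaps = missing_sections(draft)  # sections the drafter still needs to add
```

Running this before consultation, rather than relying on late-stage QC, is what moves the inconsistency cost out of the public record.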

Cost

Each round of late-stage QC rework costs 10 to 30 analyst hours. Worse, an inconsistency that surfaces in submissions carries a reputational and process cost: it must be acknowledged in the Review of Submissions and addressed in revision.

Annual analyst time: indicative 250 to 500 hours, plus the unmeasured cost of inconsistencies that reach consultation.

Bottleneck 3 · Plain-language guidance for importers

Step 06 of the lifecycle. Tool 3 addresses this.
Pattern observed

IHSs are dense legal documents, written for compliance specialists and licensed brokers. The MPI website carries plain-language guidance pages for importers, but they cover a fraction of the standard corpus, lag publication of new IHSs, and require analyst time to draft. The lag means importer enquiry volume to the front line is higher than it needs to be, particularly in the months after a new or amended IHS is published.
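The safeguard that makes generated guidance reviewable is clause-level traceability. A minimal sketch, assuming a hypothetical citation convention where each guidance sentence carries a tag like [IHS 2.3.1]; any sentence without one is surfaced for the analyst before publication:

```python
import re

# Assumed convention: every plain-language sentence cites the IHS clause
# it restates, in the form [IHS 2.3.1]. The tag format is illustrative.
CITATION = re.compile(r"\[IHS [\d.]+\]")

def uncited_sentences(guidance_text):
    """Return guidance sentences with no clause citation, for review.

    A generated page passes only when this list is empty or every
    uncited sentence has been deliberately accepted by the analyst."""
    sentences = [s.strip()
                 for s in re.split(r"(?<=\.)\s+", guidance_text)
                 if s.strip()]
    return [s for s in sentences if not CITATION.search(s)]

guidance = ("Fumigate consignments before export [IHS 2.3.1]. "
            "Keep treatment records for 12 months.")
flagged = uncited_sentences(guidance)
```

The check inverts the trust model: instead of asking whether the model got the standard right, the analyst verifies each cited clause directly.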

Cost

Hard to quantify in analyst hours alone, because the cost shows up as inbound enquiry volume to Border Clearance, Plant Imports, and Animal Imports teams. Indicative central case: 200 to 400 analyst hours per year writing or updating guidance, plus front-line query handling time that scales with IHS amendment volume.

Bottleneck 4 · Amendment notification and explanation

Step 07 of the lifecycle. Tool 4 addresses this.
Pattern observed

Every IHS amendment generates a notification cycle: an industry update or notification email, an updated guidance page, sometimes a stakeholder briefing. Each requires an analyst to articulate what changed, what it means in practice for importers, and what the operational implications are. The work is high-frequency, low-novelty, and inconsistent in tone across analysts. Inconsistency is itself a quality issue raised by submitters in the cut flowers consultation and elsewhere.
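Because the underlying diff is mechanical, the generated narrative can be held to it line by line. A sketch using the standard library, with hypothetical before/after clause text; the model's job is only to explain the lines this function returns:

```python
import difflib

# Hypothetical published vs amended clause text for one IHS amendment.
old = ["Heat treatment at 56 degrees C for 30 minutes.",
       "Certificates must accompany the consignment."]
new = ["Heat treatment at 60 degrees C for 30 minutes.",
       "Certificates must accompany the consignment."]

def mechanical_diff(old_lines, new_lines):
    """Return only the changed lines: the ground truth the drafted
    notification narrative must cite, nothing more."""
    return [line
            for line in difflib.unified_diff(old_lines, new_lines,
                                             fromfile="published",
                                             tofile="amended",
                                             lineterm="")
            if line.startswith(("-", "+"))
            and not line.startswith(("---", "+++"))]

changes = mechanical_diff(old, new)
```

Any sentence in the notification draft that does not correspond to a line in `changes` is flagged as unsupported, which is what keeps tone consistent and content verifiable across analysts.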

Cost

2 to 6 analyst hours per amendment notification, multiplied by the volume of amendments and routine updates. Indicative central case: 100 to 250 analyst hours per year on notification drafting, before stakeholder briefings.

All four cost figures are indicative orders of magnitude derived from public BIES outputs and published consultation behaviour. They are conservative working assumptions for the value model, not internal MPI data. The value model on the Value Model page lets you adjust every assumption.
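The arithmetic linking these hour ranges to the headline FTE figure is simple enough to show directly. A sketch with illustrative parameters only; the 75 percent first-draft saving and 1,840 annual hours per FTE are assumptions of mine, exactly the kind of input the value model treats as adjustable:

```python
# Illustrative value-model arithmetic. Every parameter is an assumption,
# not MPI data, and each maps to an adjustable input in the value model.
LOW_HOURS, HIGH_HOURS = 1_500, 2_800  # summed bottleneck hours from this page
DRAFT_SAVING = 0.75                   # assumed share of drafting time saved
                                      # when the analyst reviews a first draft
HOURS_PER_FTE = 1_840                 # assumed productive hours per FTE year

def fte_freed(hours, saving=DRAFT_SAVING, fte_hours=HOURS_PER_FTE):
    """Convert saved analyst hours into FTE, rounded for reporting."""
    return round(hours * saving / fte_hours, 1)

low_fte = fte_freed(LOW_HOURS)    # lower bound of the central case
high_fte = fte_freed(HIGH_HOURS)  # upper bound of the central case
```

Under these assumptions the range lands at roughly 0.6 to 1.1 FTE, consistent with the 0.6 to 1.2 FTE quoted in the executive summary.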

The genAI fit test

Where the four bottlenecks land on the genAI suitability matrix.

A concept-level test before anything is built: does the work pattern look like something current generative AI is genuinely good at, given the safeguards we can put around it? The test is binary on each row.

Bottleneck | Pattern recognition | Structured drafting | Verifiable output | Human-in-loop fits | Reversible | Verdict
1. Submission synthesis | Yes, clustering text by theme | Yes, RoS template is fixed | Yes, every claim cites a source quote | Yes, draft only, analyst signs out | Yes, nothing auto-published | Build
2. Consistency check | Yes, structural and terminological matching | Yes, redline against template | Yes, every issue cites the clause and prior IHS reference | Yes, drafter accepts or rejects each issue | Yes, advisory only | Build
3. Plain-language guidance | Lower, this is generation more than analysis | Yes, guidance template is fixed | Yes, every plain-language sentence cites the IHS clause | Yes, analyst reviews before publication | Yes, never auto-published to importers | Build
4. Amendment change explainer | Yes, diff plus implication | Yes, notification email template is consistent | Yes, diff is mechanical, narrative cites the diff | Yes, analyst owns the notification | Yes, draft only | Build
Scientific risk assessment | Yes | Variable | No, requires laboratory and field evidence | Yes, but not material at the right resolution | No, scientific judgement does not roll back | Do not build
Legal interpretation | Yes | Yes | No, requires legal authority | Yes | No, regulatory error is not cheap to reverse | Do not build
Before and after

Same lifecycle. Same standards. Same scrutiny. Different time profile.

Today

The directorate's experienced analysts spend a measurable share of their time on tasks that current genAI can first-draft to a high standard. Submission analysis, drafting consistency QC, plain-language guidance, and amendment notification together account for an indicative 1,500 to 2,800 analyst hours per year, before any technical or scientific work.

Time pressure compresses the consultation cycle, occasionally surfaces drafting inconsistencies in public, and slows the publication of importer-facing guidance.

With the Standards Companion suite

The same analysts spend their time on the work that requires their judgement: scientific risk, technical interpretation, stakeholder relationships, regulatory authority. The Companion produces the first drafts. The analyst reviews, adjusts, signs out.

Indicative central case: 0.6 to 1.2 FTE of analyst time freed annually, faster Review of Submissions cycles, fewer drafting inconsistencies surfacing in consultation, and faster publication of plain-language importer guidance.

The Value Model turns these ranges into a live calculator.

Method note

How this analysis was built.

Sources: the BIES public consultation register on mpi.govt.nz; a sample of published Review of Submissions documents (grain & seeds 2021, biological products 2023, cut flowers and foliage 2025, transitional facilities H&SW guidance 2025); the Biosecurity Act 1993 sections 22 to 24L; the MPI Annual Report 2024/25; and the Senior Adviser (AI for Standards) job description, MPI26/19077920.

Method: corpus review of public BIES outputs to identify recurring document structures and patterns; process mapping against the Biosecurity Act consultation requirements; cost ranges built bottom-up from observed document length, complexity, and submission volume; cross-checked against published genAI synthesis benchmarks for clustering and structured drafting tasks.

Limits: I do not have internal BIES time-and-motion data. The analyst hour ranges are indicative orders of magnitude, intentionally conservative, and adjustable in the value model. The aim is a defensible working case, not a precision estimate.

Where this would land in a real BIES engagement: the first 30 days of the role would be a structured time-and-motion exercise across two analysts in different streams, replacing the orders of magnitude here with measured values. The value model accepts those measured values as inputs without changing the architecture.