Five sentences.
BIES runs a continuous portfolio of IHS development and amendment work, with seven to ten active public consultations open at any time across plant, animal, biological, and forestry pathways. The slowest stages of the IHS revision lifecycle, in observed analyst time, are submission analysis (post-consultation), draft consistency review (pre-consultation), and plain-language guidance generation (post-publication). All three are pattern-recognition and structured drafting tasks, exactly the work generative AI is well suited to first-draft. Scientific risk assessment and regulatory judgement are not. A four-tool suite covering synthesis, consistency, plain language, and amendment change explanation, each operating to a strict human-in-the-loop architecture, can free a conservatively estimated 0.6 to 1.2 FTE of analyst time per year, returned to the technical work that AI cannot do.
The IHS revision lifecycle, end to end.
The lifecycle below is a simplification of the process described under sections 22 to 24L of the Biosecurity Act 1993 and the published BIES consultation policy. Steps highlighted in amber are the bottlenecks where analyst time concentrates. Steps highlighted in green are those a Standards Companion tool addresses.
What BIES actually publishes.
The figures below are drawn from a sample of the BIES consultation register and the Review of Submissions corpus published on mpi.govt.nz. They are indicative orders of magnitude, not internal MPI data.
Consultation register, indicative annual flow
Submission volume per consultation, observed range
Four bottlenecks. Each named, scoped, and quantified.
Each of the four below is a discrete cost in analyst time. Each is amenable to safe AI assistance with a defensible architecture. None of them is scientific risk assessment, which sits outside what genAI should be doing.
Bottleneck 1 · Submission analysis and Review of Submissions drafting
Pattern observed
After a consultation closes, an analyst must read every submission individually, identify the substantive themes, count submitters per theme, identify minority and dissenting views, verify any cited evidence, propose a disposition for each issue raised, draft the formal Response section of the Review of Submissions, prepare the table of submissions received, and arrange for full-text submissions to be appended. The published Review of Submissions document follows a fixed BIES structure: Introduction, Submissions received, WTO submissions, Response to submissions, Changes to the standard, Copy of submissions.
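The theme-identification step above can be sketched mechanically. This is a minimal illustration, not the proposed tool: the theme lexicon is hypothetical (a real Companion would cluster with an LLM or embeddings, with an analyst curating the theme list), but the discipline it demonstrates is the real one — every theme carries its submitter count and the verbatim quotes an analyst needs to verify each claim.

```python
from collections import defaultdict

# Hypothetical theme lexicon; a real tool would cluster with an LLM or
# embeddings, with an analyst curating the theme list.
THEMES = {
    "treatment efficacy": ["fumigation", "treatment", "devitalisation"],
    "compliance cost": ["cost", "fee", "burden"],
}

def tag_submissions(submissions):
    """Group submissions by theme. Every theme carries verbatim source
    text so an analyst can verify each claim against its submission."""
    grouped = defaultdict(list)
    for sub_id, text in submissions.items():
        lower = text.lower()
        for theme, keywords in THEMES.items():
            if any(k in lower for k in keywords):
                grouped[theme].append((sub_id, text))  # quote travels with the claim
    return {t: {"submitters": len(q), "quotes": q} for t, q in grouped.items()}

subs = {
    "S01": "Fumigation requirements will raise compliance cost for small importers.",
    "S02": "The devitalisation clause is unclear.",
}
summary = tag_submissions(subs)
# S01 matches both themes; S02 matches treatment efficacy only.
```

The fixed RoS structure then becomes a fill-in exercise: submitter counts feed "Submissions received", quotes feed "Response to submissions", and nothing reaches the draft without a traceable source.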
Cost, observed range
| Consultation type | Submissions | Analyst hours, indicative | Annual frequency |
|---|---|---|---|
| Narrow technical | 5 to 10 | 15 to 25 | ~8 per year |
| Routine refresh | 10 to 25 | 25 to 50 | ~10 per year |
| Significant pathway change | 25 to 60 | 50 to 100 | ~5 per year |
| Contentious or mobilised | 60 to 200+ | 100 to 200+ | ~2 per year |
Annual analyst time, indicative central case: 950 to 1,650 hours, before any drafting of consequential IHS revisions.
Bottleneck 2 · Drafting consistency before consultation
Pattern observed
Each draft IHS must conform to the BIES IHS template (sections including Scope, Definitions, Eligible Countries, Pre-export Requirements, Treatment, Documentation, Inspection, Certification, Equivalence, Audit). Terminology must be consistent with prior published IHSs (for example, the distinction between "devitalisation", "treatment", and "sterilisation"). Cross-references to the Biosecurity Act, IPPC ISPMs, WOAH (formerly OIE) codes, and prior MPI standards must be valid. Section numbering and clause hierarchy must align. Quality control catches inconsistencies, but late, after consultation has identified them publicly, as observed in the cut flowers and aquatic animal products consultations.
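The template and terminology checks described above are mechanically checkable. A minimal sketch, assuming a section list drawn from the template named in the text and an illustrative (not MPI-approved) preferred-term mapping; the point is the advisory-only shape, where every issue carries the evidence needed to verify it and nothing is auto-fixed.

```python
import re

# Section list drawn from the BIES IHS template named in the text;
# the terminology mapping is illustrative, not an MPI-approved list.
REQUIRED_SECTIONS = ["Scope", "Definitions", "Eligible Countries",
                     "Treatment", "Certification"]
PREFERRED_TERMS = {"sterilisation": "devitalisation"}  # hypothetical mapping

def check_draft(draft: str):
    """Advisory-only consistency check: report missing template sections
    and non-preferred terminology, each with verifiable evidence."""
    issues = []
    for section in REQUIRED_SECTIONS:
        if not re.search(rf"^{section}\b", draft, re.MULTILINE):
            issues.append(("missing section", section))
    for variant, preferred in PREFERRED_TERMS.items():
        for m in re.finditer(variant, draft, re.IGNORECASE):
            issues.append(("terminology", f"'{m.group(0)}' -> prefer '{preferred}'"))
    return issues  # drafter accepts or rejects each issue; nothing is auto-fixed

draft = "Scope\n...\nDefinitions\n...\nTreatment\nSterilisation of soil is required."
issues = check_draft(draft)
# Two missing sections and one terminology flag.
```

Running this pre-consultation moves the QC catch from the public submission stage to the drafter's desk, which is the whole value of the tool.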
Cost
Each round of late-stage QC rework costs 10 to 30 analyst hours. Worse, an inconsistency that surfaces in submissions costs reputational and process time: it must be acknowledged in the Review of Submissions and addressed in revision.
Annual analyst time: indicative 250 to 500 hours, plus the unmeasured cost of inconsistencies that reach consultation.
Bottleneck 3 · Plain-language guidance for importers
Pattern observed
IHSs are dense legal documents, written for compliance specialists and licensed brokers. The MPI website carries plain-language guidance pages for importers, but they cover a fraction of the standard corpus, lag publication of new IHSs, and require analyst time to draft. The lag means importer enquiry volume to the front line is higher than it needs to be, particularly in the months after a new or amended IHS is published.
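The safeguard that makes guidance generation tractable is the verifiability rule from the suitability matrix: every plain-language line must cite the IHS clause it explains. A minimal sketch of that gate, assuming a hypothetical `[IHS n.n]` citation format; the guidance lines are illustrative.

```python
import re

# Hypothetical citation format; a real tool would use whatever clause
# reference convention the guidance template specifies.
CITATION = re.compile(r"\[IHS \d+(\.\d+)*\]")

def uncited(lines):
    """Flag guidance lines with no clause citation. A draft with any
    uncited line fails the verifiability gate before analyst review."""
    return [ln for ln in lines if not CITATION.search(ln)]

guidance = [
    "Cut flowers must be free of soil. [IHS 2.1]",
    "A fumigation certificate must accompany the consignment. [IHS 4.3]",
    "Contact MPI before shipping if unsure.",
]
flags = uncited(guidance)
# Only the final, uncited line is flagged.
```

The gate inverts the review burden: the analyst checks cited clauses rather than hunting for unsupported statements, which is what makes generation, the weakest row in the matrix, still safe to build.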
Cost
Hard to quantify in analyst hours alone, because the cost shows up as inbound enquiry volume to Border Clearance, Plant Imports, and Animal Imports teams. Indicative central case: 200 to 400 analyst hours per year writing or updating guidance, plus front-line query handling time that scales with IHS amendment volume.
Bottleneck 4 · Amendment notification and explanation
Pattern observed
Every IHS amendment generates a notification cycle: an industry update or notification email, an updated guidance page, sometimes a stakeholder briefing. Each requires an analyst to articulate what changed, what it means in practice for importers, and what the operational implications are. The work is high-frequency, low-novelty, and inconsistent in tone across analysts. Inconsistency is itself a quality issue raised by submitters in the cut flowers consultation and elsewhere.
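The "what changed" half of this work is mechanical. A minimal sketch using a stdlib diff over clause text; the clauses are illustrative, and the discipline is the one in the suitability matrix: the narrative explanation is drafted from these diff lines, so every claim in the notification cites a real change.

```python
import difflib

# Clause texts are illustrative; a real tool would diff published IHS versions.
old = ["4.1 Consignments must be fumigated with methyl bromide.",
       "4.2 Certificates must accompany each consignment."]
new = ["4.1 Consignments must be fumigated with methyl bromide or heat treated.",
       "4.2 Certificates must accompany each consignment."]

def clause_diff(old, new):
    """Mechanical diff: keep only added/removed clause lines, dropping
    the file headers and unchanged context."""
    return [line for line in difflib.unified_diff(old, new, lineterm="")
            if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]

changes = clause_diff(old, new)
# One removed line and one added line, both for clause 4.1; clause 4.2 is untouched.
```

The analyst still writes the "what it means in practice" half, but from a fixed, consistent starting point, which also addresses the tone inconsistency submitters have raised.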
Cost
2 to 6 analyst hours per amendment notification, multiplied by the volume of amendments and routine updates. Indicative central case: 100 to 250 analyst hours per year on notification drafting, before stakeholder briefings.
All four cost figures are indicative orders of magnitude derived from public BIES outputs and published consultation behaviour. They are conservative working assumptions for the value model, not internal MPI data. The value model on the Value Model page lets you adjust every assumption.
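The arithmetic behind the value model is simple enough to show inline. The bottleneck hour ranges are the document's own; `FTE_HOURS` and `FREED_SHARE` are assumptions of mine, stated here so they can be replaced with measured values from the 30-day time-and-motion exercise.

```python
# Hour ranges are the document's indicative figures. FTE_HOURS and
# FREED_SHARE are assumptions, adjustable in the value model.
BOTTLENECK_HOURS = {
    "submission analysis": (950, 1650),
    "consistency QC": (250, 500),
    "plain-language guidance": (200, 400),
    "amendment notification": (100, 250),
}
FTE_HOURS = 1500      # assumed productive analyst hours per FTE per year
FREED_SHARE = 0.65    # assumed share of drafting time the Companion returns

def freed_fte(hours, share=FREED_SHARE, fte=FTE_HOURS):
    """Sum the low and high ends of each bottleneck range, then convert
    the freed share of those hours into FTE."""
    low = sum(lo for lo, _ in hours.values())
    high = sum(hi for _, hi in hours.values())
    return low, high, round(low * share / fte, 2), round(high * share / fte, 2)

low, high, fte_lo, fte_hi = freed_fte(BOTTLENECK_HOURS)
# Totals: 1,500 to 2,800 hours, matching the figures quoted in the text;
# under these assumptions the freed time lands near the 0.6 to 1.2 FTE case.
```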
Where the four bottlenecks land on the genAI suitability matrix.
A concept-level test before anything is built: does the work pattern look like something current generative AI is genuinely good at, given the safeguards we can put around it? The test is binary on each row.
| Bottleneck | Pattern recognition | Structured drafting | Verifiable output | Human-in-loop fits | Reversible | Verdict |
|---|---|---|---|---|---|---|
| 1. Submission synthesis | Yes, clustering text by theme | Yes, RoS template is fixed | Yes, every claim cites a source quote | Yes, draft only, analyst signs out | Yes, nothing autopublished | Build |
| 2. Consistency check | Yes, structural and terminological matching | Yes, redline against template | Yes, every issue cites the clause and prior IHS reference | Yes, drafter accepts or rejects each issue | Yes, advisory only | Build |
| 3. Plain-language guidance | Lower, this is generation more than analysis | Yes, guidance template is fixed | Yes, every plain-language sentence cites the IHS clause | Yes, analyst reviews before publication | Yes, never auto-published to importers | Build |
| 4. Amendment change explainer | Yes, diff plus implication | Yes, notification email template is consistent | Yes, diff is mechanical, narrative cites the diff | Yes, analyst owns the notification | Yes, draft only | Build |
| Scientific risk assessment | Yes | Variable | No, requires laboratory and field evidence | Partially, review cannot operate at the resolution the science requires | No, scientific judgement does not roll back | Do not build |
| Legal interpretation | Yes | Yes | No, requires legal authority | Yes | No, regulatory error is not cheap to reverse | Do not build |
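The binary gate the matrix applies can be stated in a few lines. The rows below condense two of the table's entries to booleans ("Variable" and the qualified answers become False); the rule is that a single failed criterion means do not build.

```python
# Criteria names mirror the matrix columns; a single False fails the gate.
CRITERIA = ["pattern", "structured", "verifiable", "human_in_loop", "reversible"]

def verdict(row: dict) -> str:
    """Build only if every criterion holds; any failure is disqualifying."""
    return "Build" if all(row[c] for c in CRITERIA) else "Do not build"

rows = {
    "submission synthesis": dict(pattern=True, structured=True, verifiable=True,
                                 human_in_loop=True, reversible=True),
    # "Variable" and the qualified answers condense to False.
    "scientific risk assessment": dict(pattern=True, structured=False, verifiable=False,
                                       human_in_loop=False, reversible=False),
}
# submission synthesis -> Build; scientific risk assessment -> Do not build
```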
Same lifecycle. Same standards. Same scrutiny. Different time profile.
Today
The directorate's experienced analysts spend a measurable share of their time on tasks that current genAI can first-draft to a high standard. Submission analysis, drafting consistency QC, plain-language guidance, and amendment notification together account for an indicative 1,500 to 2,800 analyst hours per year, before any technical or scientific work.
Time pressure compresses the consultation cycle, occasionally surfaces drafting inconsistencies in public, and slows the publication of importer-facing guidance.
With the Standards Companion suite
The same analysts spend their time on the work that requires their judgement: scientific risk, technical interpretation, stakeholder relationships, regulatory authority. The Companion produces the first drafts. The analyst reviews, adjusts, signs out.
Indicative central case: 0.6 to 1.2 FTE of analyst time freed annually, faster Review of Submissions cycles, fewer drafting inconsistencies surfacing in consultation, and faster publication of plain-language importer guidance.
The Value Model turns these ranges into a live calculator.
How this analysis was built.
Sources: the BIES public consultation register on mpi.govt.nz; a sample of published Review of Submissions documents (grain & seeds 2021, biological products 2023, cut flowers and foliage 2025, transitional facilities H&SW guidance 2025); the Biosecurity Act 1993 sections 22 to 24L; the MPI Annual Report 2024/25; and the Senior Adviser (AI for Standards) job description, MPI26/19077920.
Method: corpus review of public BIES outputs to identify recurring document structures and patterns; process mapping against the Biosecurity Act consultation requirements; cost ranges built bottom-up from observed document length, complexity, and submission volume; cross-checked against published genAI synthesis benchmarks for clustering and structured drafting tasks.
Limits: I do not have internal BIES time-and-motion data. The analyst hour ranges are indicative orders of magnitude, intentionally conservative, and adjustable in the value model. The aim is a defensible working case, not a precision estimate.
Where this would land in a real BIES engagement: the first 30 days of the role would be a structured time-and-motion exercise across two analysts in different streams, replacing the orders of magnitude here with measured values. The value model accepts those measured values as inputs without changing the architecture.