AI Governance Statement
As a firm that helps organisations develop and implement AI governance frameworks, we hold ourselves to the same standards we recommend to our clients. This statement covers Chase & Marshal's own AI deployments across our website and the SparkOS platform, and is reviewed at least annually.
We reference internationally recognised frameworks including ISO/IEC 42001, the NIST AI Risk Management Framework, the EU AI Act, and the New Zealand Government's AI Principles to contextualise and strengthen our approach.
AI Systems We Deploy
The following AI systems are currently in active use. For each, we document the model, what data is transmitted, how human oversight is maintained, and the known limitations users should be aware of.
Website Chat Assistant
An AI-powered conversational assistant embedded on chasemarshal.com to answer visitor questions about our services, direct enquiries, and provide information about Chase & Marshal.
Model: GPT-4o (OpenAI) via Replit AI Integrations
Data transmitted: The content of your chat messages and the current page context. No personally identifiable information is required or stored beyond the active session.
Human oversight: Responses are generated in real time; no human reviews individual conversations. Feedback mechanisms and periodic quality audits are used to assess output quality.
Known limitations: May occasionally produce inaccurate or outdated information. Not a substitute for professional advice. Users should verify important information directly with our team.
SparkOS Idea Validator & ECHO Copilot
AI-powered idea analysis and strategic coaching tools within the SparkOS platform, used by registered SparkOS users to evaluate business ideas, identify risks, and generate strategic recommendations.
Model: GPT-4o (OpenAI) via Replit AI Integrations
Data transmitted: The text of your idea submission, selected industry or context tags, and session metadata. Your idea content is passed to the AI model to generate analysis. No financial data or sensitive personal details are required.
Human oversight: Outputs are clearly labelled as AI-generated analysis and presented as exploratory insights, not definitive assessments. Users are advised to apply independent judgment before acting on recommendations.
Known limitations: Outputs reflect patterns in training data and do not account for real-time market conditions, proprietary business data, or jurisdiction-specific legal requirements. Results should be treated as a starting point, not a final answer.
SparkOS CIA (Creative Intelligence Assessment) Scoring
An AI-assisted scoring component within the Creative Intelligence Assessment (CIA) used to evaluate open-ended responses, score divergent thinking tasks, and generate candidate-level feedback narratives.
Model: GPT-4o (OpenAI) via Replit AI Integrations
Data transmitted: Candidate assessment responses (text, scenario answers, and collaborative task outputs) are submitted to the model for analysis. No candidate names, contact details, or demographic data are transmitted to the AI provider.
Human oversight: AI-generated scores are reviewed by human administrators before being released to employers. Final scoring decisions are confirmed by a trained human reviewer. Candidates may request a review of their assessment outcomes.
Known limitations: AI scoring is inherently probabilistic. Results may vary across repeated administrations and should be interpreted alongside other assessment data. The system is not designed for use as the sole basis for high-stakes employment decisions.
Our Ethical Commitments
Transparency
We clearly disclose where AI is used in our products and services. AI-generated content is labelled as such. This page exists because we believe users deserve to know.
Human Oversight
We maintain human review for high-stakes AI outputs, particularly in assessment contexts. AI assists decision-making; it does not replace human judgment in consequential situations.
Data Minimisation
We transmit only the minimum data necessary to AI providers. We do not send sensitive personal information, financial data, or unnecessary identifiers to third-party AI models.
Privacy
Our AI use complies with the New Zealand Privacy Act 2020. Data shared with AI providers is processed under their applicable data processing agreements and privacy frameworks.
Honesty About Limitations
We do not overstate the reliability of AI outputs. Known limitations are documented and communicated directly to users. We design AI features to complement, not replace, professional expertise.
Continuous Improvement
We review our AI governance practices at least annually and update them in response to new regulations, guidance from AI providers, and feedback from our users and clients.
Framework Alignment
Our AI governance practices are informed by the following internationally recognised frameworks and standards.
ISO/IEC 42001: The Artificial Intelligence Management System standard. Our governance approach aligns with its principles on risk management, transparency, and continuous improvement for AI systems.
NIST AI Risk Management Framework: We reference the NIST AI RMF's GOVERN, MAP, MEASURE, and MANAGE functions to structure our internal AI risk identification and mitigation practices.
EU AI Act: Although Chase & Marshal operates primarily in the APAC region, we monitor EU AI Act requirements as a globally applicable benchmark for AI transparency, human oversight, and prohibited practices.
New Zealand Government AI Principles: We align with the New Zealand Government's core AI principles: transparency, accountability, fairness, safety, privacy and security, reliability, and inclusion.
How to Raise Concerns
If you have concerns about how AI is being used in our products or services — including concerns about output quality, fairness, data handling, or any other matter covered in this statement — please contact us directly.
For SparkOS users, concerns related to assessment scoring or AI-generated results may be raised through the in-app feedback mechanism or directly via email. We commit to acknowledging all AI-related concerns within five business days.
This statement is reviewed at least annually. If you believe it is materially inaccurate or incomplete, we welcome your feedback.