The concern the standard is designed to address
When AI tools produce professional outputs — a cost benchmark, a programme analysis, a report section — there is a genuine question about where accountability sits. If the AI is wrong, who is responsible? If the output is challenged, who can explain how it was produced and what checks were applied?
Left unaddressed, this creates a real problem for professional services. Advice becomes harder to stand behind. Clients lose visibility of how conclusions were reached. And in a field like construction finance monitoring, where QS reports directly inform decisions worth hundreds of thousands of pounds, the stakes of an unchecked AI error are not theoretical.
The RICS Professional Standard on Responsible Use of AI in Surveying Practice, mandatory from 9 March 2026, addresses this head-on. Its answer is not to limit AI use — it is to require that a named, qualified surveyor reviews every material AI output and accepts written responsibility for it. The audit trail exists to make professional judgement visible, not to replace it. As Chris de Gruben FRICS — co-chair of the RICS working group that authored the standard — has described it, the standard places professional judgement, knowledge, skills, experience, and scepticism at the heart of any AI-assisted workflow.[Artefact, March 2026]
The core principle: AI must not replace professional skill and judgement. Instead, AI should enhance efficiency while ultimate responsibility remains with a competent surveyor. The audit trail is the mechanism that proves this is happening.[RICS standard, §4.2]
What the standard actually requires
The standard’s audit trail requirements sit across two areas: the AI system register and the reliability assessment process. Together they create a documented chain from the AI tool used to the professional decision made.
The AI system register creates a firm-level record of every AI tool in use that has a material impact on service delivery. It records what each tool is, what it is used for, when it was first adopted, and when its use will next be reviewed. This is the baseline — it establishes which AI systems the firm is accountable for, and when.
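A register of this kind can be sketched as a simple structured record. The field names and tool name below are illustrative, not prescribed by the standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    """One row in a firm-level AI system register (illustrative fields)."""
    tool_name: str    # what the tool is
    use_case: str     # what it is used for
    adopted: date     # when it was first adopted
    next_review: date # when its use will next be reviewed

# Hypothetical register with a single entry
register = [
    RegisterEntry("CostBench AI", "cost benchmarking support",
                  date(2025, 6, 1), date(2026, 6, 1)),
]

# Which tools are due for review on or before a given date?
due = [e.tool_name for e in register
       if e.next_review <= date(2026, 6, 1)]
```

Keeping the review date as a queryable field, rather than buried in prose, is what turns the register from a static list into something the firm can actually act on at review intervals.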
The reliability assessment is the output-level record. Every time AI has a material impact on a specific client deliverable, the qualified surveyor responsible must produce a written assessment of that output. The standard is specific about what this assessment must contain.
The explainability requirement sits alongside these: if a client requests a written explanation of how AI was used in their instruction — what tools, what they did, what the surveyor checked — the firm must be able to provide it. The audit trail is what makes that possible.
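If the register and reliability assessments are held as structured records, the client-facing explanation can be assembled directly from them. A minimal sketch, assuming a hypothetical record shape (the `tool`, `role`, and `checks` keys are illustrative, not drawn from the standard):

```python
def explain_ai_use(instruction_ref, entries):
    """Compose a client-facing note on AI use for one instruction.

    `entries` is a list of dicts with illustrative keys:
      tool   -- which AI tool was used
      role   -- what the tool did on this instruction
      checks -- what the responsible surveyor reviewed
    """
    lines = [f"AI use on instruction {instruction_ref}:"]
    for e in entries:
        lines.append(
            f"- {e['tool']}: {e['role']}. "
            f"Surveyor review: {e['checks']}."
        )
    return "\n".join(lines)

note = explain_ai_use("2026-014", [
    {"tool": "CostBench AI",
     "role": "produced draft cost benchmarks",
     "checks": "benchmarks verified against in-house cost data"},
])
```

The point of the sketch is that explainability costs little extra if the audit trail already exists; the explanation is a view over records the firm holds anyway, not a separate document to be reconstructed after the fact.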
What a reliability decision looks like in practice
The reliability assessment is the most operationally significant part of the audit trail for most construction finance monitoring firms. It is not a lengthy document — but it requires genuine engagement with the AI output rather than a rubber stamp.
The standard sets out seven elements that each written reliability decision must contain.
To make this concrete, consider how this applies to a construction finance monitoring firm using AI to assist with cost benchmarking. The seven elements below are not a theoretical framework — they are the actual written record a QS would produce and sign off before that output entered a client report.
This is not a lengthy exercise. For a surveyor who has genuinely reviewed the AI output, writing this down takes minutes. But it creates a record that is meaningful — both as professional protection and as the kind of transparency that clients and lenders increasingly expect.
Dip-sampling for high-volume outputs
The standard recognises that producing an individual reliability assessment for every AI output is not always practical at scale. For firms generating high volumes of automated or AI-assisted outputs — regular monitoring reports across a large portfolio, for instance — it allows for dip-sampling: randomly selecting and reviewing a subset of outputs at regular intervals.
The key conditions are that the sampling methodology must be documented, the intervals must be defined, and the firm remains accountable for all outputs regardless of whether each was individually reviewed. Dip-sampling is not a way to reduce accountability — it is a practical mechanism for maintaining oversight at scale.
What the standard is really requiring here is that accountability at scale is designed in, not assumed. A firm processing fifty monitoring reports a month cannot rely on good intentions — it needs a defined process, documented intervals, and evidence that the process is running. That is what a dip-sampling programme provides.
What this means for construction finance monitoring
The audit trail requirement has a specific resonance in construction finance. QS monitoring reports are the basis on which banks release drawdown payments. If AI contributed to a cost assessment or programme analysis in one of those reports, the bank has a legitimate interest in knowing that a named professional reviewed that contribution and takes responsibility for it.
Far from making the surveyor less relevant, the standard’s audit trail requirements formalise something that good QS practice has always assumed: that behind every piece of professional advice, there is a qualified individual who has applied their judgement and can be held accountable for it. AI introduces speed and analytical capability into the monitoring workflow. The audit trail ensures the professional responsibility remains where it should.
The firms that navigate AI adoption well in construction finance will not be the ones that use it most, or the ones that use it least. They will be the ones that use it with a clear record of what it did and what the surveyor decided about it.
For the full documentation requirements across all seven categories of the RICS standard, read the RICS AI Compliance Guide. For definitions of every term used in this article, see the RICS AI compliance glossary.