The concern the standard is designed to address

When AI tools produce professional outputs — a cost benchmark, a programme analysis, a report section — there is a genuine question about where accountability sits. If the AI is wrong, who is responsible? If the output is challenged, who can explain how it was produced and what checks were applied?

Left unaddressed, this creates a real problem for professional services. Advice becomes harder to stand behind. Clients lose visibility of how conclusions were reached. And in a field like construction finance monitoring, where QS reports directly inform decisions worth hundreds of thousands of pounds, the stakes of an unchecked AI error are not theoretical.

The RICS Professional Standard on Responsible Use of AI in Surveying Practice, mandatory from 9 March 2026, addresses this head-on. Its answer is not to limit AI use — it is to require that a named, qualified surveyor reviews every material AI output and accepts written responsibility for it. The audit trail exists to make professional judgement visible, not to replace it. As Chris de Gruben FRICS — co-chair of the RICS working group that authored the standard — has described it, the standard places professional judgement, knowledge, skills, experience, and scepticism at the heart of any AI-assisted workflow.[Artefact, March 2026]

The core principle: AI must not replace professional skill and judgement. Instead, AI should enhance efficiency while ultimate responsibility remains with a competent surveyor. The audit trail is the mechanism that proves this is happening.[RICS standard, §4.2]

What the standard actually requires

The standard’s audit trail requirements sit across two areas: the AI system register and the reliability assessment process. Together they create a documented chain from the AI tool used to the professional decision made.

The AI system register creates a firm-level record of every AI tool in use that has material impact on service delivery. It records what each tool is, what it is used for, when it was first adopted, and when its use will next be reviewed. This is the baseline — it establishes which AI systems the firm is accountable for, and when.
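To illustrate the shape of such a record, here is a minimal sketch of one register entry. The field and class names are assumptions for illustration, not anything prescribed by the standard; the point is simply that each entry pairs a tool with its purpose, adoption date, and a scheduled review date.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of one AI system register entry. Field names are
# illustrative; the RICS standard does not prescribe a data format.
@dataclass
class RegisterEntry:
    tool_name: str      # what the tool is
    purpose: str        # what it is used for
    adopted: date       # when it was first adopted
    next_review: date   # when its use will next be reviewed

    def review_due(self, today: date) -> bool:
        """True once the scheduled review date has been reached."""
        return today >= self.next_review

entry = RegisterEntry(
    tool_name="Cost benchmarking assistant",
    purpose="Compares stated costs against BCIS benchmark data",
    adopted=date(2025, 6, 1),
    next_review=date(2026, 6, 1),
)
```

A register held this way makes the review question mechanical: filtering entries where `review_due` is true yields the tools whose use is overdue for reassessment.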

The reliability assessment is the output-level record. Every time AI has material impact on a specific client deliverable, the qualified surveyor responsible must produce a written assessment of that output. The standard is specific about what this assessment must contain.

The explainability requirement sits alongside these: if a client requests a written explanation of how AI was used in their instruction — what tools, what they did, what the surveyor checked — the firm must be able to provide it. The audit trail is what makes that possible.

What a reliability decision looks like in practice

The reliability assessment is the most operationally significant part of the audit trail for most construction finance monitoring firms. It is not a lengthy document — but it requires genuine engagement with the AI output rather than a rubber stamp.

The standard requires each written reliability decision to contain:

Required elements of a reliability assessment
Assumptions: What assumptions did the AI make, or were made in using it? What data did it draw on? What inputs did it receive?
Concerns: What are the key areas of concern about the reliability of this output? What are the limitations of the underlying data?
Reason for each concern: The specific reason behind each concern — why it matters and what it affects in the context of this output.
Whether concerns can be lessened: Whether anything could be done to lessen each concern. The standard asks whether it is possible — not whether it was done.
Overall reliability conclusion: The impact of the concerns on the overall reliability of the output, including a concluding statement on whether the output can reasonably be used for its intended purpose.
Named sign-off: Prepared by, or under the supervision of, an appropriately qualified and named surveyor who accepts responsibility for its use.
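The required elements above can be mirrored as a simple record structure. This is a sketch under stated assumptions: the class and field names are invented for illustration, and the completeness check reflects only the pairing logic implied by the standard (each concern needs a stated reason and a note on whether it can be lessened), not any official validation rule.

```python
from dataclasses import dataclass

# Illustrative record of one written reliability assessment, mirroring
# the required elements. Structure and names are assumptions, not an
# RICS template.
@dataclass
class ReliabilityAssessment:
    assumptions: str        # what the AI assumed; data and inputs used
    concerns: list[str]     # key areas of concern about reliability
    reasons: list[str]      # one reason per concern
    mitigations: list[str]  # whether each concern can be lessened
    conclusion: str         # overall reliability statement
    signed_off_by: str      # named, qualified surveyor

    def is_complete(self) -> bool:
        # Every concern must carry a reason and a mitigation note,
        # and the record needs a conclusion and a named sign-off.
        return (
            len(self.concerns) == len(self.reasons) == len(self.mitigations)
            and bool(self.conclusion.strip())
            and bool(self.signed_off_by.strip())
        )
```

Keeping the concerns, reasons, and mitigation notes as parallel lists makes the most common gap — a concern recorded without its reason — detectable before sign-off.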

To make this concrete, consider how this applies to a construction finance monitoring firm using AI to assist with cost benchmarking. The seven elements below are not a theoretical framework — they are the actual written record a QS would produce and sign off before that output entered a client report.

Illustrative example — cost benchmarking output
AI system: Automated benchmarking tool comparing borrower’s stated costs against BCIS lower quartile data for the relevant construction type and region.
Assumptions: Benchmark data sourced from BCIS Q4 2025. Project classified as traditional masonry residential, 12-unit scheme, South East England. Classification confirmed by QS against architect’s drawings.
Key concerns: BCIS data reflects market conditions to Q4 2025; tender received January 2026. Specialist subcontract packages not separately benchmarked due to limited comparables at this scale.
Reason for each concern: Market movement since Q4 2025 could affect material and labour costs. Limited specialist comparables at this scheme scale reduces confidence in those line items.
Whether concerns can be lessened: QS cross-referenced specialist package rates against two recent comparable tenders. Agreed approach with lender prior to report. Market movement risk cannot be fully eliminated but is noted in the report narrative.
Overall reliability conclusion: AI benchmark output is fit for purpose as one input to the three-source cost triangulation. Sole reliance on this output would not be appropriate given the concerns noted above.
Sign-off: [Name], [credentials], [date] — I have reviewed this output and accept responsibility for its use in the monitoring report prepared for [lender].

This is not a lengthy exercise. For a surveyor who has genuinely reviewed the AI output, writing this down takes minutes. But it creates a record that is meaningful — both as professional protection and as the kind of transparency that clients and lenders increasingly expect.


Dip-sampling for high-volume outputs

The standard recognises that producing an individual reliability assessment for every AI output is not always practical at scale. For firms generating high volumes of automated or AI-assisted outputs — regular monitoring reports across a large portfolio, for instance — it allows for dip-sampling: randomly selecting and reviewing a subset of outputs at regular intervals.

The key conditions are that the sampling methodology must be documented, the intervals must be defined, and the firm remains accountable for all outputs regardless of whether each was individually reviewed. Dip-sampling is not a way to reduce accountability — it is a practical mechanism for maintaining oversight at scale.

What the standard is really requiring here is that accountability at scale is designed in, not assumed. A firm processing fifty monitoring reports a month cannot rely on good intentions — it needs a defined process, documented intervals, and evidence that the process is running. That is what a dip-sampling programme provides.
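A defined, documented selection process can be sketched in a few lines. This is an assumption about how a firm might implement dip-sampling, not a method the standard specifies: recording the seed and sample size alongside the selection makes the draw reproducible, which is what turns a random pick into auditable evidence that the process ran as documented.

```python
import random

# Sketch of a documented dip-sampling routine. A fixed, recorded seed
# makes the selection reproducible for audit; parameter names are
# illustrative assumptions.
def dip_sample(report_ids: list[str], sample_size: int, seed: int) -> list[str]:
    """Randomly select sample_size reports for individual review."""
    rng = random.Random(seed)  # seeded so the draw can be re-run and verified
    return sorted(rng.sample(report_ids, sample_size))

# e.g. five of fifty monthly monitoring reports selected for review
monthly = [f"RPT-{i:03d}" for i in range(1, 51)]
selected = dip_sample(monthly, sample_size=5, seed=202603)
```

Logging the month, seed, sample size, and resulting IDs alongside each run gives the documented methodology and defined intervals the standard asks for.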


What this means for construction finance monitoring

The audit trail requirement has a specific resonance in construction finance. QS monitoring reports are the basis on which banks release drawdown payments. If AI contributed to a cost assessment or programme analysis in one of those reports, the bank has a legitimate interest in knowing that a named professional reviewed that contribution and takes responsibility for it.

Far from making the surveyor less relevant, the standard’s audit trail requirements formalise something that good QS practice has always assumed: that behind every piece of professional advice, there is a qualified individual who has applied their judgement and can be held accountable for it. AI introduces speed and analytical capability into the monitoring workflow. The audit trail ensures the professional responsibility remains where it should.

The firms that navigate AI adoption well in construction finance will not be the ones that use it most, or the ones that use it least. They will be the ones that use it with a clear record of what it did and what the surveyor decided about it.

For the full documentation requirements across all seven categories of the RICS standard, read the RICS AI Compliance Guide. For definitions of every term used in this article, see the RICS AI compliance glossary.