AI is already in your firm’s workflow

A surveyor pastes a contractor submission into ChatGPT to get a quick summary before writing up the monitoring report. A colleague uses Copilot to draft the narrative section. Someone runs a cost schedule through an AI tool to check for anomalies. None of it appears in any register. None of it has been disclosed to the bank client. There is no written reliability decision on file.

This is not a hypothetical. It is a description of how AI is already entering construction finance monitoring workflows — not through a formal adoption decision, but through individual surveyors solving day-to-day problems with tools that are fast, free, and effective. Most firms have not made a decision to use AI. They have simply not made a decision to govern it.

This is not unusual. In the UK specifically, a survey of over 2,000 employees across sectors, conducted by market researcher Censuswide on behalf of Microsoft, found that 71% of UK workers had used shadow AI, meaning tools not approved or sanctioned by their employer.[Microsoft/Censuswide, October 2025] That survey covered the UK workforce broadly, not QS firms specifically. But in discussions with QS principals, the same pattern surfaces: the tools are being used informally, and the documentation is not in place. In most cases, the people doing this are not trying to cut corners. They are trying to do their jobs more efficiently. The problem is not the AI. The problem is the absence of documentation around it.

Under the RICS Professional Standard on Responsible Use of AI in Surveying Practice, which became mandatory on 9 March 2026, that absence is a compliance breach.


What counts as AI use under the RICS standard

The first thing to understand is that the standard casts a wide net. It applies to any AI system used in service delivery that has material impact — meaning any use where the AI output is capable of influencing the advice, recommendation, or deliverable provided to a client.

For construction finance monitoring, that threshold is crossed more readily than most firms expect.

Examples of AI use that count under the standard

- Using ChatGPT to draft or structure any section of a monitoring report.
- Using Copilot to summarise a facility agreement or contractor submission.
- Using an AI-powered cost tool to benchmark against BCIS data.
- Using a document intelligence tool to extract figures from drawings or invoices.
- Using any generative AI to check, rewrite, or improve professional text before it reaches a client.

Simple rule-based tools — a spreadsheet formula, a conditional formatting macro — are generally not AI under the standard. But tools that use machine learning, natural language processing, or generative capabilities to produce outputs that inform client deliverables are.

The standard also applies to shadow AI: tools used by staff without formal firm approval. If a surveyor uses a personal ChatGPT account to help draft a report section, the firm is responsible for that use regardless of whether it was sanctioned. The standard makes firms accountable for all AI in their service delivery, not just the tools they have formally adopted.

Crucially, the standard sets no frequency threshold. Once AI has material impact on a deliverable — even once — the documentation requirements apply. A firm that uses ChatGPT on a single monitoring report in a month is subject to the same obligations as one that uses it across its entire portfolio. Occasional use is not a defence; the trigger is material impact, not volume.


Why undocumented use creates real risk

Consider a scenario that is entirely plausible for a firm with active construction monitoring work. A surveyor uses ChatGPT to summarise a developer’s cost schedule and incorporates the summary into a drawdown recommendation. The recommendation is sent to the bank. The bank approves the drawdown on that basis.

Six months later, the project runs into financial difficulty. The bank reviews the drawdown history. Questions arise about the cost assessment behind one particular approval. It emerges that an AI tool contributed to the analysis — but there is no record of what the tool was, what it produced, or whether a named surveyor reviewed and validated the output. There is no client disclosure on file. There is no reliability assessment.

At that point, the firm faces three compounding problems.

Regulatory exposure. The RICS standard is a mandatory professional standard. Non-compliance is on the record regardless of whether a project goes wrong. If a complaint is made or a disciplinary review is triggered, the absence of documentation will be relevant.

Professional indemnity exposure. PI insurers are examining how firms manage AI risk. If a claim is made and it emerges that AI was used without the documentation the RICS standard requires, the insurer will have questions about whether appropriate professional processes were in place. That is not a theoretical risk — it is an increasingly live one as the standard creates clear benchmarks against which conduct will be assessed.

The practical problem of reconstruction. Without contemporaneous records, it becomes very difficult to demonstrate after the fact that professional judgement was applied to an AI output. Documentation does not prevent errors — but it demonstrates that appropriate oversight was exercised. Its absence suggests it wasn’t.


What you need to log

The RICS standard requires documentation across several categories. For a firm using ChatGPT or similar tools in construction finance monitoring, the core requirements are:

Documentation required per tool and per use
1. AI Usage Register entry. For each AI tool used in service delivery: system name, purpose, date first used, date of next review. Must include informal and unapproved tools.
2. Risk Register entry. For each tool: documented risks covering accuracy limitations, potential bias, data security, and consequences of failure. Reviewed quarterly.
3. Client Disclosure. Written notification in terms of engagement for each bank relationship where AI is used: which tools, what for, and the client's rights including how to contest or opt out.
4. Reliability Assessment per material output. For each AI-assisted output that informs a client deliverable: assumptions made, concerns identified, mitigations applied, fitness-for-purpose conclusion. Signed by a named, qualified surveyor.
5. Data Governance. Confirmation that client data was not uploaded to any AI system without prior written consent. For tools like ChatGPT, this is particularly important: pasting client-specific cost data or project details into a public AI tool without consent is a data governance breach alongside the compliance breach.
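The standard does not prescribe a format for these records; a spreadsheet is enough. But the fields above lend themselves to a simple structured template, and firms automating their registers may find a sketch useful. The following is a minimal, illustrative model in Python; all field and function names are assumptions for illustration, not terminology from the RICS standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UsageRegisterEntry:
    """One row in the AI Usage Register (item 1 above)."""
    system_name: str      # e.g. "ChatGPT"
    purpose: str          # what the tool is used for in service delivery
    first_used: date
    next_review: date
    firm_approved: bool   # False captures shadow-AI / personal-account use

@dataclass
class ReliabilityAssessment:
    """Per-output record (item 4 above), one per AI-assisted deliverable."""
    deliverable: str
    assumptions: list[str]
    concerns: list[str]
    mitigations: list[str]
    fit_for_purpose: bool
    signed_off_by: str    # named, qualified surveyor

def is_complete(a: ReliabilityAssessment) -> bool:
    # A fitness-for-purpose conclusion without a named surveyor's
    # sign-off is not a usable record under the standard.
    return bool(a.signed_off_by.strip())
```

A register built this way makes the gap visible: any `ReliabilityAssessment` where `is_complete` returns False is a deliverable that went out without the sign-off the standard requires.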

The volume of documentation scales with the volume of AI use. A firm running 50 active monitoring projects and using ChatGPT regularly across them faces a genuine administrative challenge if it tries to manage this manually. That is worth acknowledging honestly: the standard is demanding, and the firms that embed documentation into their workflow rather than treating it as a separate task will find it more manageable.


Practical steps to close the gap

For a firm that is using AI informally and needs to get on top of compliance, the sequence matters.

Step 1: Find out what is actually being used. The most practical first step is a straightforward survey of your team — what AI tools are people using day-to-day, including personal accounts? Build the usage register from the honest answer, not the official one.

Step 2: Make the material impact determination. Write down, formally, that your firm has assessed its AI use and determined it has material impact on service delivery. This is the threshold requirement that triggers everything else — and it needs to be a written record, not an assumption.

Step 3: Update client disclosure. Your terms of engagement with each bank client need to include written disclosure of AI use before it happens, not after. For ongoing relationships where AI is already being used, this needs to be addressed in the next communication or agreement update.

Step 4: Build the reliability assessment habit. For each monitoring report where AI contributed to the analysis, a named surveyor needs to have reviewed the AI output and reached a written fitness-for-purpose conclusion. This is the most operationally demanding requirement — but it is also the one that keeps professional judgement at the centre of the workflow. The sign-off is not a formality. It is the surveyor confirming, in writing, that they reviewed the AI’s contribution and take responsibility for what went to the client.

The firms that will handle this well are not the ones that ban AI or the ones that use it without thinking. They are the ones that build simple, consistent documentation processes and apply them every time.

For a one-page checklist covering all the documentation your firm needs, download the RICS AI Compliance Checklist from the hub. For the full requirements across all seven categories, read the RICS AI Compliance Guide.