From monitoring report drafts in ChatGPT to comparables analysis in Copilot — AI tools are part of everyday surveying workflows. The RICS Professional Standard on Responsible Use of AI became mandatory on 9 March 2026, with no grace period and no firm size threshold. This hub is the definitive resource for firms that need to understand and comply.
Your team has been using ChatGPT to draft monitoring report sections for months. One surveyor uses Copilot to summarise facility agreements. Another pastes cost schedules into an AI tool to check for anomalies.
None of it is logged. None of it is disclosed to clients. There is no reliability assessment on file. There is no AI usage register.
On 9 March 2026, the RICS Professional Standard on Responsible Use of AI took effect. Your firm is now non-compliant — not because you did something wrong, but because you didn't document what you were already doing.
RICS Professional Standard on Responsible Use of AI in Surveying Practice. 1st edition. ISBN 978 1 78321 555 3.
The AI standard explicitly states that non-compliance with a mandatory professional standard will be taken into account in any regulatory or disciplinary proceedings, and RICS has confirmed this applies from 9 March 2026.
If your firm uses AI in service delivery without documenting it and a claim is made, your PI insurer will ask what AI was used and how it was validated. No documentation means no answer. That is a coverage gap, not a theoretical one.
Because the RICS AI standard establishes clear requirements for QS firms, lenders have a legitimate basis to ask about AI governance when reviewing their monitoring panels. Firms that can demonstrate documented compliance will be better positioned. Firms that cannot may find the question is asked at the next annual review.
What happens if you don't comply — regulatory, PI, and commercial consequences explained →
Every RICS-regulated firm using AI in service delivery must have all seven requirements in place. No minimum firm size. No threshold below which the standard doesn't apply.
1. Every RICS member using AI must develop sufficient knowledge of AI types, risks, bias, and data governance before using AI tools in practice. (Active obligation · Not assumed)
2. Written policies covering data used in AI systems, plus explicit client consent before processing client data through any AI tool. (One policy · One consent per client)
3. A maintained register of every AI tool used in service delivery — including ChatGPT, Copilot, and shadow AI used without formal approval. Reviewed quarterly. (Includes unapproved tools)
4. Documented risks per tool: accuracy limitations, training data bias, data security, consequences of failure. One row per tool in your usage register. (Reviewed quarterly)
5. Written requests to AI vendors covering environmental impact, data compliance, training data quality, bias, and liability. Documented follow-ups. (Per vendor · Written record)
6. For each material AI output: a written record of assumptions, concerns, mitigations, and a fitness-for-purpose conclusion signed by a named, qualified surveyor. (Named QS · At point of review)
7. Written disclosure per client before AI processing, plus the ability to explain AI use, risk management, and reliability decisions on request. (Written · Before processing)

What your AI audit trail needs to contain — a practical guide for QS firms →
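For firms building the register and reliability records manually, the records described above can be sketched as simple structured entries. This is an illustrative sketch only — the field names and types are our own shorthand, not prescribed by the RICS standard; any spreadsheet or template capturing the same information would serve.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shapes for two of the mandatory records. Field names are
# illustrative assumptions, not wording taken from the RICS standard.

@dataclass
class RegisterEntry:
    """One row in the AI usage register (one row per tool)."""
    tool: str              # e.g. "ChatGPT", "Copilot"
    purpose: str           # what the tool is used for in service delivery
    approved: bool         # False captures shadow AI used without approval
    risks: list[str]       # accuracy limits, bias, security, failure modes
    last_reviewed: date    # supports the quarterly review cycle

@dataclass
class ReliabilityDecision:
    """Written record for one material AI output."""
    output_ref: str        # which AI output this decision covers
    assumptions: list[str]
    concerns: list[str]
    mitigations: list[str]
    fit_for_purpose: bool  # the fitness-for-purpose conclusion
    signed_by: str         # named, qualified surveyor
    decided_on: date       # captured at point of review

entry = RegisterEntry(
    tool="ChatGPT",
    purpose="Drafting monitoring report sections",
    approved=False,
    risks=["hallucinated figures", "client data leaving the firm"],
    last_reviewed=date(2026, 3, 9),
)
print(entry.tool, entry.approved)
```

The point of the sketch is the shape, not the tooling: whatever system holds these records, each tool needs its own risk row, and each material output needs a signed, dated fitness-for-purpose conclusion.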
Most QS firms will build their RICS AI compliance framework manually — spreadsheets, Word templates, email trails. It works, but it requires discipline to maintain across every project and every surveyor.
BankBuild generates your compliance documentation automatically, as a natural byproduct of the monitoring workflow. Every AI interaction is logged at the point it happens. Every reliability decision is captured when the QS reviews the output. Client disclosure is generated as a PDF appendix on every report. No separate system. No extra overhead.
See how BankBuild works for QS firms in construction finance →
Every AI interaction logged automatically per page, per project, with timestamp and system detail.
Each AI output requires named surveyor approval before reaching client reports — the reliability decision, captured at point of review.
Client disclosure is appended to every exported report automatically — the written disclosure required under §4.3 of the standard, without any manual drafting.
All seven mandatory requirements explained with worked examples from construction finance monitoring. Practitioner checklist and 15 inline FAQs covering the questions QS firms are asking right now.
More in the full FAQ · or read the complete compliance guide.
Nine sections. Twenty-one checkpoints. Work through it in a single principals' meeting. Know exactly what you have, what form it needs to take, and what's missing.
Want to talk it through? [email protected]
BankBuild is designed so that RICS AI compliance documentation — usage register, reliability decisions, client disclosure — is a natural byproduct of running a monitoring inspection, not a separate process bolted on afterwards. 15 minutes to see it live.
BankBuild is an AI-native construction finance monitoring platform connecting quantity surveyors, lenders, developers, and contractors through a single data layer. Built for full compliance with the RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice (1st edition, ISBN 978 1 78321 555 3), effective 9 March 2026. Headquartered in the UK.
Construction finance monitoring is the process lenders use to verify that construction project funds are being spent according to approved budgets before releasing drawdown payments. It involves independent quantity surveyors inspecting sites, assessing costs, and reporting to the lending bank.