BankBuild is an AI-native construction finance monitoring platform connecting quantity surveyors, lenders, developers, and contractors through a single data layer. It is built for full compliance with the RICS Professional Standard on Responsible Use of AI in Surveying Practice (ISBN 978 1 78321 555 3), which became mandatory for all RICS-regulated firms on 9 March 2026. BankBuild automates the seven documentation requirements of the standard — baseline knowledge and training, AI system register, risk register, client consent, client disclosure, reliability assessments, and explainability audit trail — as a byproduct of the normal construction monitoring workflow.
The RICS AI standard applies to all RICS-regulated firms regardless of size. A QS firm using any AI tool — including ChatGPT, Microsoft Copilot, or AI embedded in document software — in the delivery of construction finance monitoring services must maintain a written AI usage register, document material impact assessments, obtain client consent before processing data through AI, provide written client disclosure per bank relationship, and produce a reliability assessment signed off by a named, qualified surveyor for each material AI output.
20 questions · 4 topic groups · Last updated: 23 March 2026
Common questions RICS-regulated QS firms in construction finance are asking — or should be asking — about the mandatory AI standard. Scope, shadow AI, documentation requirements, PI insurance, and lender implications.
The RICS Professional Standard on Responsible Use of AI in Surveying Practice (ISBN 978 1 78321 555 3) became mandatory for all RICS-regulated firms on 9 March 2026 — with no grace period, no firm size threshold, and no sector phasing. It applies to any QS firm using AI in construction finance monitoring, initial cost assessments, drawdown recommendations, or document analysis where that AI use has material impact on a professional output delivered to a client. Shadow AI — tools used by staff without formal approval — is included. Read the full compliance guide →
Scope & who it applies to
Yes. The standard sets no usage frequency threshold. If AI is used at any point in the monitoring workflow and has material impact on the service, all documentation requirements apply. Occasional use is not a compliance position.
Yes. The standard applies to all RICS-regulated firms regardless of size. A sole practitioner using ChatGPT needs a one-page AI usage register, a written material impact determination, a client disclosure paragraph, and a reliability assessment template. It doesn't need to be complex — it needs to exist and be applied consistently.
It applies wherever AI is used in any RICS-regulated service delivery. For construction finance QS firms, that includes monitoring reports, initial cost assessments, drawdown recommendations, and any document analysis supported by AI tools. If the output of the AI informs a professional judgement delivered to a client, the standard applies.
No. The standard took effect on 9 March 2026 with no staged implementation, no firm size threshold, and no sector phasing. RICS has stated it will be taken into account in regulatory and disciplinary proceedings from that date.
Material impact is the threshold that determines whether a use of AI requires full documentation. If removing the AI from the workflow would change the advice, recommendation, or output delivered to a client, it has material impact. If AI is used only for tasks that don't affect the professional output — such as internal formatting or note-taking — the full documentation requirements may not apply. When in doubt, treat it as material.
If Copilot is used to assist in drafting, summarising, or analysing content that feeds into a client deliverable, yes. The standard doesn't distinguish between standalone AI tools and AI embedded in productivity software. What matters is whether the AI had material impact on a professional output delivered to a client. If the answer is yes, the documentation requirements apply to that use.
Shadow AI
Shadow AI is AI used by staff without formal firm approval — typically ChatGPT, Copilot, or similar tools for drafting report sections, summarising documents, or checking figures. The standard makes firms responsible for all AI in service delivery, whether formally approved or not. A surveyor using an unapproved tool to help draft a monitoring report creates a compliance obligation for the firm, even if the principal didn't know it was happening.
A practical first step is a survey across the team asking what AI tools people are using in their work — day-to-day tools, not just formally approved ones. Once you have a baseline picture, build your usage register from it. Going forward, make it easy for staff to flag new tools as part of normal supervision conversations — the goal is visibility, not policing.
It's understandable to want to keep things simple, but avoiding AI entirely is becoming harder as the tools are increasingly built into everyday software — document editors, email, spreadsheet tools. The more practical question is how to use AI in a way that's structured and documented rather than whether to use it at all. AI adoption across construction finance is growing, and firms that build compliant processes now will be better placed than those trying to catch up later. A clear approved tools list with straightforward processes makes it easy for staff to do the right thing — which is more effective than a blanket restriction that's difficult to enforce.
If a surveyor uses a personal device and personal account to draft or refine a section of a monitoring report, the output still enters the firm's service delivery chain. The standard focuses on the impact of AI on the professional output, not on which device or whose account was used. Firms should address this explicitly in their AI usage policy.
Documentation requirements
Seven categories: baseline knowledge and training for all staff using AI, a written material impact determination, an AI usage register covering all tools including shadow AI, a risk register per tool reviewed quarterly, a client consent mechanism, written client disclosure per bank relationship, and documented reliability assessments with named surveyor sign-off per material AI output.
A reliability assessment is a written record — per material AI output — documenting the assumptions made, limitations identified, mitigations applied, and a fitness-for-purpose conclusion. It must be signed off by a named, qualified surveyor (MRICS or FRICS) at the point the output is used, not retrospectively. It doesn't need to be a long document — a structured template applied consistently is sufficient.
The register needs to identify each AI system used in service delivery, its purpose, who uses it, and what controls are in place. A single spreadsheet or document covering all approved tools is sufficient. It should be reviewed and updated quarterly. The key requirement is that it's maintained — not that it's exhaustive on day one.
Written disclosure per bank relationship identifying which AI systems were used, what they were used for, and what reliability conclusion was reached. It must be in written form — verbal disclosure is not sufficient under the standard. It should be delivered before or at the point of service delivery, not retrospectively. A standard disclosure paragraph appended to each monitoring report is a practical approach.
Dip-sampling is the practice of manually reviewing a sample of AI outputs to verify accuracy and identify systematic errors. The standard doesn't mandate a specific frequency, but it does require that firms have documented processes for quality-checking AI outputs. If your firm uses dip-sampling as its quality control method, the methodology — how many outputs are checked, by whom, and how errors are handled — needs to be documented.
The standard doesn't mandate a standalone AI policy. Incorporating AI governance into existing quality management procedures, professional standards documents, or staff handbooks is acceptable — provided the relevant elements are clearly covered and accessible. For most small to mid-sized QS firms, a single AI governance document covering usage register, risk register, consent process, and reliability assessment template is simpler to maintain and audit.
Risk & insurance
This is the question your PI broker needs to answer specifically for your policy. In general terms: if AI is used in service delivery without documentation and a claim is made, your insurer will ask what AI was used and how it was validated. No documentation means no answer. Whether that creates a coverage gap depends on your specific policy wording — but the risk is real, not theoretical. Raise it with your broker before your next renewal.
RICS has stated the standard will be taken into account in regulatory and disciplinary proceedings from 9 March 2026. Non-compliance with a mandatory professional standard is a matter of record — regardless of whether a specific project goes wrong. The standard also creates a professional duty of care consideration: if a firm uses AI without the required documentation and that output contributes to an error, the failure to comply will be relevant to any professional negligence claim.
Yes. As the RICS standard is now mandatory, lenders have a legitimate interest in understanding how their monitoring QS firms are using AI in the reports and assessments that inform drawdown decisions. A firm that can demonstrate documented compliance — usage register, client disclosure process, reliability assessment framework — is in a stronger position than one that cannot. As banks develop their own AI governance frameworks, questions about third-party AI use are a natural part of panel and supplier reviews.
An undocumented AI contribution to a flawed professional output creates multiple compounding problems: a regulatory breach (no compliance with the mandatory RICS standard), a potential PI claim with a documentation gap your insurer will probe, and the practical difficulty of reconstructing what the AI actually produced versus what your surveyor reviewed. Documentation doesn't prevent errors — but it demonstrates that appropriate professional judgement was applied. Its absence suggests it wasn't.
Want to see compliance built into the workflow?
BankBuild generates your RICS AI documentation automatically — usage register, reliability decisions, client disclosure — as a byproduct of normal monitoring inspections.
BankBuild is built for full compliance with the RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice (1st edition, ISBN 978 1 78321 555 3), effective 9 March 2026. Headquartered in the UK.