The RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice became mandatory on 9 March 2026. It applies to every RICS member and regulated firm worldwide, and for quantity surveying firms working in construction finance, the compliance requirements are more operationally demanding than most firms realise.
The RICS AI in Construction 2025 report, drawing on over 2,200 professionals globally, found that 45% of organisations report no AI use and fewer than 1% have scaled it across projects — yet 70% of project managers and quantity surveyors expect AI to deliver greater value. The profession knows AI is coming. The gap between that expectation and having the documentation, processes, and infrastructure to use it compliantly is wider than most firms expect.
This is not a general overview of the standard. This is a practical guide for QS firms that monitor construction projects, prepare drawdown reports, and verify costs on behalf of UK banks. If anyone in your firm uses ChatGPT, Copilot, automated report-drafting software, or any AI tool in any part of the monitoring workflow — your firm is using AI under this standard, and you are subject to specific documentation, disclosure, and quality assurance obligations that did not exist before it was published.
Here is what the standard actually requires, what it means for construction finance monitoring, and what your firm needs to have in place now.
What the RICS AI Standard Actually Requires
The standard, published by RICS in September 2025 and effective from 9 March 2026, establishes mandatory requirements across five pillars.
The first is baseline knowledge: every RICS member who uses AI to deliver surveying services must develop and maintain sufficient knowledge to support responsible use. At minimum, this means understanding the different types and subsets of AI, how they work, their limitations and failure modes, the risk of erroneous outputs, the inherent risk of bias, and data usage risks. The standard acknowledges that knowledge across the profession is uneven, which makes this an active obligation, not an assumption.
The second is practice management: firms must implement governance policies covering both data and systems. On data governance, the standard is specific — firms must safeguard private and confidential data, restrict access to staff who strictly need it, train those staff at least annually on AI-related data risks, and must not upload private or confidential data to any AI system unless there is express written consent in advance from affected stakeholders and the firm has taken reasonable steps to ensure the upload does not pose unacceptable risk. On system governance, firms must assess in writing whether AI is the most appropriate tool for each task, and must maintain a written register recording each AI system with material impact, its purpose, the date it was first used, and the date its use will next be reviewed.
The third is risk management: firms must create and operate a risk register documenting overarching AI risks — including bias, erroneous outputs, limitations in training data quality, and data retention risks. For each risk, the register must record a description, likelihood, impact, mitigation plan, the firm's risk appetite, regular status updates, and a RAG rating or equivalent. The risk register must be reviewed and updated at least quarterly by staff responsible for decisions about the firm's use of AI.
The fourth is using AI in service delivery: this covers procurement due diligence before adopting any AI system (including written requests to vendors and documented follow-ups), reliability assessments of AI outputs, quality assurance through dip-sampling of automated or high-volume outputs, and client communication — including written disclosure of when and how AI is used, with specific contractual provisions. There is also an explainability requirement: firms must be able to provide, on request, written information about the AI systems they use, how risks are managed, and what reliability decisions were made.
The fifth is developing AI: firms that build their own AI systems must document applications, risks, and alternative approaches, carry out sustainability impact assessments, involve diverse stakeholders, and ensure compliance with data protection laws including obtaining written permissions for personal data use.
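To make the risk-register requirement under the third pillar concrete, here is a minimal sketch in Python. All field names, the RAG values, and the 92-day review threshold are our own illustrative choices — the standard mandates the register's content (description, likelihood, impact, mitigation, risk appetite, status updates, RAG rating, quarterly review), not any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical risk-register record; field names are ours, not RICS's.
@dataclass
class AIRisk:
    description: str
    likelihood: str                 # e.g. "low" / "medium" / "high"
    impact: str
    mitigation: str
    risk_appetite: str
    rag: str                        # "red", "amber", or "green"
    status_updates: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

def review_overdue(risk: AIRisk, today: date, max_days: int = 92) -> bool:
    """Flag a risk whose last review is more than roughly a quarter old."""
    return (today - risk.last_reviewed).days > max_days

risk = AIRisk(
    description="Erroneous cost figures from generative drafting tool",
    likelihood="medium", impact="high",
    mitigation="Named-surveyor reliability check on every client output",
    risk_appetite="low", rag="amber",
    last_reviewed=date(2026, 1, 10),
)
print(review_overdue(risk, date(2026, 6, 1)))  # True: more than a quarter has passed
```

However a firm chooses to store this — a spreadsheet works just as well — the point is that every mandated field is captured and the quarterly review is checkable rather than assumed.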
The language throughout the standard is deliberately flexible — using terms like "appropriate" and "reasonable" — but the obligations themselves are not optional. The standard uses the word "must" throughout its mandatory requirements, and RICS is explicit that it will be taken into account in regulatory, disciplinary, and legal proceedings.
One threshold requirement underpins everything: if your firm determines that its use of AI has a material impact on the delivery of surveying services, you must record that determination and the reasoning behind it in writing. This is the gateway — once you've crossed it, all of the requirements below apply.
For QS firms in construction finance, three requirements from across these pillars carry the most operational weight.
First, you need a written AI usage register. Every AI system used in your practice that has a material impact on service delivery must be documented. This isn't just the obvious tools — if a QS uses ChatGPT to draft sections of a monitoring report, or uses automated cost-comparison software to benchmark against BCIS data, or relies on AI-assisted document extraction to process contractor invoices, each of these must be logged. The standard specifies that the register must record the AI system used, the purpose for which it is used, the date on which it was first used, and the date on which its use will next be reviewed. For a typical QS firm handling 8–10 active monitoring projects, this register could involve documenting 5–15 separate AI tools or workflows.
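As an illustration of how lightweight that register can be, here is a sketch in Python. The entry names and example tools are our own assumptions; the standard requires the four recorded facts (system, purpose, first-use date, next review date) but prescribes no format.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical AI usage register entry; field names are ours, not RICS's.
@dataclass
class RegisterEntry:
    system: str          # the AI system used
    purpose: str         # the purpose for which it is used
    first_used: date     # the date on which it was first used
    next_review: date    # the date its use will next be reviewed

def entries_due_for_review(register: list[RegisterEntry], today: date) -> list[RegisterEntry]:
    """Return entries whose scheduled review date has passed."""
    return [e for e in register if e.next_review <= today]

register = [
    RegisterEntry("ChatGPT", "Drafting monitoring report sections",
                  date(2025, 11, 1), date(2026, 5, 1)),
    RegisterEntry("Cost-comparison tool", "Benchmarking against BCIS data",
                  date(2026, 1, 15), date(2026, 4, 15)),
]

due = entries_due_for_review(register, date(2026, 6, 1))
print([e.system for e in due])  # ['ChatGPT', 'Cost-comparison tool']
```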
Second, you must notify clients in writing, in advance, when AI is used. Before AI is deployed in any way that materially affects service delivery, your client — typically the bank — must be informed. The standard requires your terms of engagement or contractual documents to detail in writing: when AI will be involved, which parts of the process it touches, the extent of professional indemnity cover for AI use (if available), how a client can contest the use of AI, how they can seek redress if negatively affected, and how they can opt out of AI being used, if at all. For construction finance monitoring, this is particularly significant: banks rely on QS reports to make drawdown decisions worth hundreds of thousands of pounds. If AI contributed to the analysis behind that report, the bank needs to know — and needs to know their rights in relation to it.
Third, you must conduct reliability assessments and quality assurance. Every time AI has a material impact on an output, the QS must apply professional judgement to assess the reliability of that output — and that assessment must be documented in writing. The standard is specific about what this written decision must contain: any relevant assumptions made, key areas of concern regarding reliability (including the reliability of underlying datasets), the reason for each concern, whether anything could be done to lessen each concern, and a conclusion on whether the output can reasonably be used for its intended purpose. Critically, each reliability decision must be prepared by, or under the supervision of, an appropriately qualified and named surveyor who accepts responsibility for its use. For high-volume or automated outputs, the standard allows dip-sampling — randomly selecting and reviewing a subset of outputs at regular intervals — but is clear that firms remain accountable for each output regardless.
What This Means Specifically for Construction Finance Monitoring
Based on observation across our partner portfolio, the average UK construction drawdown takes roughly 23 days from site inspection to bank fund release. Of those 23 days, around 3 represent actual professional QS work — the inspection, cost assessment, and report preparation. The remaining 20 days are process: formatting reports, emailing documents between parties, chasing clarifications, waiting in review queues, and reconciling data across spreadsheets. That is 20 days of manual process on every drawdown, across a UK construction finance market worth roughly £350 billion — almost all of which still runs on spreadsheets, email, and PDFs. As AI adoption accelerates across this market, the volume of verification work flowing through AI-assisted processes will grow rapidly. The RICS standard is getting ahead of that curve.
This is precisely the workflow where AI adoption is accelerating fastest. QS firms are already using AI tools to speed up the administrative burden: drafting report templates, extracting cost data from contractor submissions, cross-referencing figures against previous valuations, and formatting outputs to match individual bank requirements. Based on what we are seeing across firms working in this space, AI-assisted report preparation can reduce preparation time from 6–8 hours to under an hour on standardised projects.
But under the new RICS standard, every one of those AI-assisted steps now requires documentation. If your firm prepares 100 monitoring reports per week across your project portfolio, and AI touches any part of that process, the compliance requirements are real. You need a register entry for each AI tool — with purpose, first-use date, and next review date. You need written client disclosure for each bank relationship — covering not just what AI does but how they can contest or opt out. You need documented reliability assessments with named surveyors taking responsibility. You need a dip-sampling programme. And you need a risk register reviewed quarterly.
Most QS firms currently have none of this in place. The gap between "we're using AI" and "we're using AI compliantly" is significant — and from 9 March 2026, it is a gap that carries regulatory, disciplinary, and professional indemnity consequences.
But compliance is only one dimension of what this standard signals. The deeper message is about where the profession is heading.
The RICS AI in Construction 2025 figures cited earlier confirm the scale of this gap: 45% of organisations report no AI use at all, and fewer than 1% have scaled it across projects — yet nearly 70% of project managers and quantity surveyors expect AI to deliver greater value in their work. The profession sees where it's going. It just hasn't built the compliance infrastructure to get there.
QS firms broadly fall into three positions right now. Some are actively using AI across their monitoring workflows — ChatGPT for drafting, automated tools for cost extraction and benchmarking — but haven't documented any of it. They have a compliance problem they may not yet recognise. Others believe they don't use AI at all, while their staff are quietly using ChatGPT, Copilot, or other tools without formal approval — what the industry is now calling "shadow AI." The RICS standard applies to them too, and the first step is simply getting visibility of what's happening inside their own practice. And a third group has genuinely not adopted AI yet and is watching from the sidelines, uncertain whether the technology is mature enough to trust with professional work.
For that third group, the risk isn't just falling behind on compliance. It's falling behind on capability. AI in construction finance verification is not a passing experiment. Automated cost extraction, AI-assisted benchmarking against BCIS data, intelligent document processing, real-time portfolio analytics — these are becoming the operational baseline, not a competitive edge. The RICS standard exists precisely because AI adoption across the profession has reached the point where governance is mandatory. Firms that haven't started are not avoiding risk by waiting. They're accumulating a different kind of risk: the risk of being unable to match the speed, consistency, and transparency that banks and developers will increasingly expect from their monitoring QS.
The firms that will be strongest through this transition are not the ones that adopted AI first. They are the ones that adopted it with structure — with documented processes, clear governance, and compliance built into the workflow from day one. That is the real opportunity the RICS standard creates.
A Practical Compliance Checklist: What Your Firm Needs Now
If your QS firm uses any form of AI in construction finance monitoring — and most now do, even if it's just ChatGPT for drafting or an automated spreadsheet tool — here is what you should have in place today.
Material Impact Determination. A written record confirming that your firm has assessed whether its use of AI has a material impact on the delivery of surveying services, together with the reasoning behind that determination. This is the threshold that triggers all other requirements.
AI Usage Register. A written record of every AI system your firm uses in service delivery that has a material impact. For each system: the AI system itself, the purpose for which it is used, the date it was first used, and the date on which its use and appropriateness will next be reviewed. The register must be maintained alongside the risk register and kept current as your AI usage evolves.
Risk Register. A documented register of AI-related risks covering bias, erroneous outputs, training data limitations, and data retention. Each risk must include a description, likelihood, impact, mitigation plan, the firm's risk appetite, status updates, and a RAG rating. The risk register must be reviewed and updated at least quarterly by staff responsible for decisions about the firm's AI use.
Client Disclosure Documents. Written notification in your terms of engagement for every bank client where AI is involved. The standard requires these documents to detail: when AI is involved, which parts of the process it touches, the extent of PI cover for AI use if available, internal processes to contest AI use, processes to seek redress if a client feels negatively affected, and how a client can opt out of AI use if at all.
Reliability Assessment Process. A documented procedure for assessing AI output reliability before any output informs a client-facing deliverable. Each written reliability decision must detail: assumptions made, key concerns including the reliability of underlying datasets, reasons for each concern, whether anything could lessen each concern, and a conclusion on whether the output can reasonably be used for its intended purpose. Each decision must be prepared by, or supervised by, an appropriately qualified and named surveyor who accepts responsibility.
Dip-Sampling Programme. For any automated or high-volume AI outputs, a documented sampling methodology with regular intervals. Firms remain accountable for each output, whether individually reviewed or not.
Data Governance Controls. Policies ensuring private and confidential data is stored securely, access is restricted to staff who need it, those staff are trained at least annually on AI data risks, and no private or confidential data is uploaded to any AI system without express written consent from affected stakeholders in advance.
Procurement Due Diligence. For every AI vendor or tool, documented evidence of due diligence conducted through written requests to the supplier and recorded follow-ups. The standard requires these requests to cover, at minimum: environmental impact, stakeholders involved in development, data law compliance, permissions for personal data, accuracy and diversity of training datasets including known gaps and bias risks, and the type and extent of the vendor's liability. Where a vendor provides limited information, the risks must be recorded in the risk register.
Explainability Readiness. The ability to provide, on request, written information about each AI system used — including its type, basic workings and limitations, the due diligence conducted, how risks are managed, and the reliability decisions made. Clients may request this information to understand or challenge AI use in relation to their instruction.
This guide is written for QS firms in construction finance monitoring — our area of deepest expertise. The RICS AI standard applies identically across all disciplines. If your firm practises in valuation, building surveying, project management, or any other RICS-regulated area, the mandatory requirements are the same. BankBuild's RICS AI Governance Centre serves all disciplines — not just construction.
How BankBuild Approaches RICS AI Compliance
BankBuild is an AI-native construction finance monitoring platform designed with the RICS AI standard's requirements in mind from the outset — not retrofitted after publication.
The principle behind BankBuild's approach is straightforward: compliance documentation should be a natural byproduct of how QS firms already do their monitoring work, not a separate administrative burden layered on top. When AI is used in the verification workflow, the interactions are logged, reliability decisions are captured at the point of QS review, and audit trails are maintained as part of the process — not assembled retrospectively.
This means QS firms using BankBuild for construction finance monitoring don't maintain separate compliance processes alongside their inspection workflow. The compliance infrastructure is embedded in the workflow itself — covering the standard's requirements for AI usage registers, reliability assessments with named surveyor sign-off, client disclosure documentation, and audit trail depth.
BankBuild is, to our knowledge, the first platform in UK construction finance to build automated RICS AI compliance documentation into the monitoring workflow from day one.
This covers AI used within BankBuild's monitoring workflow. For AI tools your firm uses outside the platform — such as ChatGPT for other tasks, or standalone cost estimation software — the standard's documentation requirements still apply, and your firm will need to maintain those records separately.
For a walkthrough of how this works in practice, reach out to us via the BankBuild platform.
Start Here: Download the RICS AI Compliance Checklist
We've distilled the requirements above into a one-page compliance checklist designed specifically for QS firms in construction finance. It covers the key areas your firm needs to have documented, with clear yes/no checkpoints you can work through in a single meeting.
Download it from the RICS AI Compliance Hub.
If you're exploring how to build compliance into your monitoring process rather than bolting it on — we're happy to walk you through the BankBuild workflow and how it could benefit your own. Reach out to us via the BankBuild platform, or connect with Laura, our CEO, on LinkedIn.
What Happens If Your Firm Doesn't Comply
The RICS standard is clear: it will be taken into account in regulatory, disciplinary, and legal proceedings. This means non-compliance doesn't just risk a warning from RICS — it creates exposure on three fronts.
Regulatory risk. RICS conducts compliance reviews of regulated firms. If your firm is using AI in construction monitoring without a documented register, client disclosures, and reliability assessments, you are in breach of a mandatory professional standard. The disciplinary process can escalate from caution to suspension to removal from RICS membership — and for firms where RICS accreditation is a condition of bank panel appointments, that's an existential threat.
Professional indemnity risk. If a bank challenges a drawdown decision and it emerges that the underlying QS report relied on AI outputs that were not disclosed, not documented, and not quality-assured under the standard's requirements — no named surveyor, no written reliability decision, no client disclosure — your professional indemnity insurer will have questions. PI insurers are already updating their risk assessments around AI usage. Firms that cannot demonstrate compliance with the RICS standard may find their premiums increasing or their coverage questioned at exactly the moment they need it most.
Commercial risk. Banks are beginning to ask QS firms about their AI practices. The firms that can demonstrate structured, compliant AI usage — complete with register, risk documentation, named accountability, and client disclosures — will win panel positions. The firms that cannot will find themselves competing on price in a market where compliance is becoming the minimum standard for serious lenders.
The RICS AI standard isn't a burden — it's a competitive opportunity. The firms that move first to embed compliance into their workflows will be the ones that banks trust with their construction lending portfolios. The ones that wait will be scrambling to retrofit documentation onto processes that were never designed to produce it.
Read the full breakdown: What happens if your QS firm doesn't comply — regulatory, PI, and commercial consequences explained.
Frequently Asked Questions
A selection of the most common questions. For the full list visit the standalone FAQ page or the RICS AI compliance glossary for definitions of every term used in this guide.
What counts as "material impact" on service delivery? The standard says an output has material impact if it is capable of influencing the delivery of the service — for example, outputs summarising documents relied on in a report, outputs composing significant parts of an opinion, or outputs recommending what to investigate. If your firm is using AI to draft monitoring reports, extract cost data, or benchmark against historical projects, that is almost certainly material.
Does the standard apply if my firm doesn't currently use AI? The baseline knowledge requirement applies to all members. Even if your firm isn't using AI today, the standard requires awareness and readiness. And it's worth auditing whether your team is using tools like ChatGPT informally — "shadow AI" use is more common than most firms realise.
Do we need a written reliability decision for every single AI output? For individual outputs with material impact, yes — including assumptions, concerns, and a named surveyor accepting responsibility. For automated or high-volume outputs, dip-sampling at regular intervals is acceptable, but firms remain accountable for each output regardless.
What must we tell clients about our AI use? In writing, in advance: when AI will be involved, which parts of the process it touches, the extent of PI cover if available, how to contest AI use, how to seek redress, and how to opt out. This must be in your terms of engagement or contractual documents.
What does the procurement due diligence involve? Written requests to the vendor covering environmental impact, development stakeholders, data law compliance, permissions for personal data, training data accuracy and diversity including known gaps and bias risks, and the vendor's liability. Follow-ups must be in writing and recorded. If the vendor provides limited information, you must log the resulting risks in your risk register.
Can my firm use ChatGPT for drafting monitoring reports under the RICS AI standard? Yes, but it must be documented. ChatGPT or any generative AI tool used in preparing client deliverables is considered AI with material impact on service delivery. Your firm must log it in your AI usage register, disclose its use to the bank client in writing before it touches their work, and have a named surveyor conduct a written reliability assessment on every output that informs a client report. Using ChatGPT is not prohibited — using it without documentation is the compliance risk.
What counts as "shadow AI" and why does it matter for compliance? Shadow AI refers to staff using AI tools — typically ChatGPT, Copilot, or similar — without formal firm approval or documentation. Under the RICS standard, the firm is responsible for all AI use in service delivery, whether approved or not. If a junior surveyor uses ChatGPT to draft a section of a monitoring report and that report informs a bank's drawdown decision, the firm has used AI with material impact — and all documentation requirements apply. The first step for most firms is auditing what tools their staff are actually using.
Does the RICS AI standard apply to automated spreadsheet tools and macros? The standard applies to AI systems, which it defines broadly. Simple rule-based macros — such as a spreadsheet formula that sums a column — are not AI. However, tools that use machine learning, natural language processing, or pattern recognition to generate outputs — including AI-powered spreadsheet add-ins that auto-categorise costs, predict values, or generate narrative text — would likely be considered AI with material impact if those outputs inform client deliverables. If in doubt, document it. Over-documenting carries no regulatory risk; under-documenting does.
How often must the AI risk register be reviewed? The standard requires the risk register to be reviewed and updated at least quarterly by staff responsible for decisions about the firm's use of AI. Each review should assess whether risks have changed, whether new AI tools have been adopted, and whether mitigation measures remain adequate. The review itself should be documented with the date, reviewer name, and any changes made.
What happens if a client asks to opt out of AI being used on their project? The standard requires firms to include in their terms of engagement how a client can opt out of AI use, if at all. If a bank client requests that no AI is used in their monitoring work, the firm must either comply or clearly explain why opt-out is not feasible for specific aspects of the workflow. This should be agreed in writing before the instruction proceeds.
Do I need separate client disclosure for each bank we work with? Yes. The standard requires written disclosure in your terms of engagement or contractual documents for each client relationship where AI is used. Different banks may have different risk appetites, different requirements for AI transparency, and different contractual terms. A single generic disclosure is unlikely to satisfy the standard's requirement that clients are informed about AI use specific to their instruction.
What qualifications does the surveyor signing off reliability decisions need? The standard requires that each reliability decision is prepared by, or under the supervision of, an appropriately qualified surveyor who accepts responsibility. For construction finance monitoring, this would typically be a chartered surveyor (MRICS or FRICS) with experience in the relevant discipline. The named surveyor's credentials should be recorded alongside each reliability decision.
Is there a grace period for compliance? The standard took effect on 9 March 2026. There is no formal grace period. RICS has stated that the standard will be taken into account in regulatory, disciplinary, and legal proceedings from its effective date. Firms that are already using AI should have compliance documentation in place now. Firms that are adopting AI should implement documentation from the point of first use.
What does "explainability" mean in practice for a QS firm? Explainability means your firm must be able to provide, on request, written information about each AI system used — including its type, basic workings and limitations, the due diligence conducted on it, how risks are managed, and the reliability decisions made. In practice, if a bank asks "how did AI contribute to this monitoring report?", you need to be able to answer specifically: which tool, what it did, what the surveyor checked, and why the output was deemed reliable. You do not need to explain the AI's internal algorithms — you need to explain your firm's process for using and validating it.
Can a firm be compliant without using any software platform? Yes, but the administrative burden is significant. A firm can maintain the AI usage register in a spreadsheet, write reliability decisions in Word documents, draft client disclosures manually, and keep a risk register in whatever format they choose. The standard does not require any specific software. However, firms handling multiple active monitoring projects will find that manual compliance documentation adds hours per week to an already admin-heavy workflow — which is the gap that platforms like BankBuild are designed to close.