Definitions of every key term in the RICS Professional Standard on the Responsible Use of AI in Surveying Practice, written for QS firms working in construction finance.
Any software or tool that uses machine learning, large language models, or automated reasoning to generate, summarise, classify, or analyse information. Under the RICS standard, this includes general-purpose tools such as ChatGPT, Microsoft Copilot, and Google Gemini, as well as AI embedded in specialist surveying or document management software.
See also: Shadow AI, AI System Register
A maintained record of every AI tool used in service delivery, including its name, purpose, the date it was first used, and the date of next review. Must include informal and unapproved tools — not just software procured by the firm. Reviewed at least quarterly. A single spreadsheet covering all tools in use is sufficient.
See also: Shadow AI, Risk Register
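The register described above maps naturally onto a simple record plus a quarterly-review check. A minimal Python sketch follows; the field and function names are illustrative, not mandated by the standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemEntry:
    """One row of the AI system register (illustrative field names)."""
    name: str          # e.g. "Microsoft Copilot"
    purpose: str       # what the tool is used for in service delivery
    first_used: date   # date the tool was first used
    next_review: date  # date of next scheduled review
    approved: bool     # False for informal / shadow-AI tools, which must still be listed

def review_overdue(entry: AISystemEntry, today: date) -> bool:
    """Flag entries whose scheduled review date has passed (quarterly cadence)."""
    return today > entry.next_review

entry = AISystemEntry(
    name="ChatGPT",
    purpose="Drafting report narrative sections",
    first_used=date(2026, 3, 9),
    next_review=date(2026, 6, 9),
    approved=False,  # unapproved tools belong on the register too
)
print(review_overdue(entry, date(2026, 7, 1)))  # True: the quarterly review is overdue
```

A spreadsheet with these columns satisfies the same requirement; the point is that unapproved tools appear alongside procured ones.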
A chronological record of each material AI interaction in service delivery. For each output: the input provided to the AI, the output generated, any corrections or adjustments made by the surveyor, the quality checks applied, and the name and credentials of the surveyor who reviewed and approved it. Must be accessible on request — not necessarily published proactively.
See also: Explainability, Reliability Assessment
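One way to capture the per-output fields listed above is a structured, chronological log. The shape below is a hypothetical sketch, not a format the standard prescribes:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuditTrailEntry:
    """One material AI interaction (illustrative structure)."""
    timestamp: datetime
    ai_input: str              # the input provided to the AI
    ai_output: str             # the output generated
    corrections: str           # adjustments made by the surveyor ("" if none)
    quality_checks: str        # checks applied before the output was used
    reviewer_name: str         # surveyor who reviewed and approved the output
    reviewer_credentials: str  # e.g. "MRICS"

# The trail is kept chronologically and produced on request, not published proactively.
audit_trail: list[AuditTrailEntry] = []

def record(entry: AuditTrailEntry) -> None:
    """Append an interaction to the trail at the point it occurs."""
    audit_trail.append(entry)
```

Whether the trail lives in a database, a log file, or a document, the same fields need to be recoverable per output.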
A proprietary 12-dimension risk assessment for construction projects, evaluating financial summary, programme, statutory consents, insurances, professional team, site investigation, contract, outstanding information, site conditions, planning compliance, CDM compliance, and developer track record. Grades from A (85–100, Excellent) to E (0–29, Critical). Generated automatically from BankBuild platform data and approved by a named QS — meeting the RICS reliability assessment requirement at point of review.
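Only the A (85–100, Excellent) and E (0–29, Critical) bands are stated above. The sketch below shows the shape of the score-to-grade mapping; the B, C, and D cut-offs are illustrative assumptions, not BankBuild's actual thresholds:

```python
def grade(score: int) -> str:
    """Map a 0-100 assessment score to a letter grade.

    A and E bands are as documented; the B/C/D boundaries
    below are hypothetical placeholders.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 85:
        return "A"  # Excellent (documented band: 85-100)
    if score >= 70:
        return "B"  # hypothetical cut-off
    if score >= 50:
        return "C"  # hypothetical cut-off
    if score >= 30:
        return "D"  # hypothetical cut-off
    return "E"      # Critical (documented band: 0-29)

print(grade(90), grade(10))  # A E
```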
Systematic skew in AI outputs caused by unrepresentative training data. In construction finance monitoring, relevant bias risks include cost benchmarks trained predominantly on certain regions or building types, or document analysis tools trained on standard contract forms that perform poorly on non-standard agreements. Must be identified and logged in the risk register for each AI system used.
See also: Risk Register
Explicit, written agreement from a client before their data is processed through an AI system. Required before uploading client documents — facility agreements, cost schedules, planning decisions — to any AI tool. Verbal consent is not sufficient. Should be obtained at engagement stage and recorded in the project file.
See also: Client Disclosure, Data Governance
A written communication, issued per bank relationship, identifying which AI systems were used in service delivery, what they were used for, and what reliability conclusion was reached. Must be in written form — verbal disclosure does not satisfy the requirement. Delivered before or at the point of service delivery. A standard paragraph appended to each monitoring report is a practical approach.
See also: Client Consent, Reliability Assessment
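The standard paragraph approach can be generated per report from the same data the register holds. The wording below is a hypothetical template, not RICS-approved text:

```python
def disclosure_paragraph(systems: list[tuple[str, str, str]]) -> str:
    """Build a written client disclosure from
    (system name, what it was used for, reliability conclusion) triples."""
    lines = ["AI disclosure: the following AI systems were used in preparing this report."]
    for name, use, conclusion in systems:
        lines.append(f"- {name}: used for {use}; reliability conclusion: {conclusion}.")
    return "\n".join(lines)

para = disclosure_paragraph([
    ("Document extraction model", "extracting cost schedule line items",
     "outputs reviewed and approved by a named QS"),
])
print(para)
```

Appending the generated paragraph to each monitoring report satisfies the written-form requirement at the point of service delivery.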
The process lenders use to verify that construction project funds are being spent according to approved budgets before releasing drawdown payments. Involves independent quantity surveyors inspecting sites, assessing costs against benchmark data, and reporting to the lending bank.
See also: Drawdown, Quantity Surveyor
A three-source benchmarking methodology comparing a borrower's stated construction costs against comparables from BankBuild's platform database and BCIS lower-quartile data. Applied per work zone to identify cost surplus or shortfall and flag where a borrower's budget benchmarks materially below market rates. Where AI is used to assist triangulation, a reliability assessment is required.
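The per-zone comparison can be sketched as follows. The benchmark midpoint, the 10% flag threshold, and all names are illustrative assumptions; the actual weighting in the methodology is not stated above:

```python
def zone_variance(stated: float, platform_comparable: float,
                  bcis_lower_quartile: float, threshold: float = 0.10) -> dict:
    """Compare one work zone's stated cost against the two benchmark sources.

    Positive variance is a surplus, negative a shortfall. A zone is flagged
    when the stated budget sits materially below market; the 10% threshold
    is an illustrative assumption, not a figure from the methodology.
    """
    benchmark = (platform_comparable + bcis_lower_quartile) / 2  # simple midpoint
    variance = stated - benchmark
    return {
        "benchmark": benchmark,
        "variance": variance,
        "flag_below_market": stated < benchmark * (1 - threshold),
    }

# A zone budgeted at £80k against ~£97.5k of benchmark evidence gets flagged.
print(zone_variance(stated=80_000, platform_comparable=100_000, bcis_lower_quartile=95_000))
```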
The policies and processes governing how client data is collected, stored, processed, and shared when AI systems are used. Firms must have written data governance policies covering where data is stored, who has access, how long it is retained, and what happens when an AI vendor's data handling practices are unclear. Staff must be trained on AI data risks at least annually.
See also: Client Consent, Procurement Due Diligence
The practice of manually reviewing a representative sample of AI outputs to verify accuracy and identify systematic errors — used where AI produces outputs at volume or high frequency. No specific sampling frequency is mandated by the RICS standard, but where dip-sampling is used as the quality assurance mechanism, the methodology must be documented: how many outputs are checked, by whom, at what intervals, and how errors are escalated.
See also: Reliability Assessment
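A documented dip-sampling pass might look like the minimal sketch below. The sample size and seed are firm-defined assumptions; the standard mandates only that the methodology be documented:

```python
import random

def dip_sample(outputs: list[str], sample_size: int, seed: int = 0) -> list[str]:
    """Draw a reproducible random sample of AI outputs for manual review.

    Fixing the seed makes the sample auditable: a reviewer can re-run the
    draw and confirm which outputs were checked. sample_size is set by the
    firm's documented methodology, not by the standard.
    """
    rng = random.Random(seed)
    return rng.sample(outputs, min(sample_size, len(outputs)))

batch = [f"output-{i}" for i in range(100)]
for item in dip_sample(batch, sample_size=5):
    print(item)  # each sampled output goes to a named reviewer for manual checking
```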
A staged release of construction loan funds by a lender to a developer, triggered by a QS inspection confirming that works have progressed to the required stage and costs are within budget. The average UK construction drawdown takes 23 days from QS inspection to bank fund release under manual processes. BankBuild's AI-assisted drawdown verification reduces this through automated cost validation and anomaly detection.
See also: Construction Finance Monitoring
An AI output that is factually incorrect, internally inconsistent, or contextually inappropriate — sometimes called a hallucination when generated by large language models. In construction finance monitoring, erroneous outputs carry heightened risk because they may directly inform drawdown recommendations or cost assessments. Risk of erroneous outputs must be logged in the risk register for each AI system used, with documented mitigations.
See also: Risk Register, Dip-Sampling
The requirement that AI outputs used in professional service delivery can be explained on request. A senior surveyor must be able to account for how the system reached its conclusions — the inputs provided, the reasoning applied, the limitations acknowledged, and the human judgement applied on top. Black-box AI tools where the reasoning cannot be interrogated are difficult to use compliantly under this requirement.
See also: Audit Trail, Reliability Assessment
BankBuild's six-step project assessment wizard, replacing the manual 2–3 day initial report process. Includes AI document extraction with per-field QS approval, cost triangulation, programme analysis, and automatic generation of the RICS AI Transparency Register. Every AI interaction is logged at the point it occurs, with QS sign-off creating a timestamped reliability decision per the RICS §4.2 requirement.
The threshold under the RICS AI standard that determines whether a use of AI triggers the full documentation requirements. If removing the AI from the workflow would change the professional advice, recommendation, or output delivered to a client, that use of AI has material impact. AI used only for internal tasks that don't affect client-facing outputs — such as formatting or internal scheduling — may not meet this threshold. When in doubt, treat it as material.
A written record confirming that a firm has assessed its use of AI and determined it has material impact on service delivery. Required before any other compliance documentation is meaningful. Does not need to be a long document — a single written statement by a principal surveyor, with the reasoning documented, is sufficient. Should be reviewed when AI usage changes significantly.
See also: Material Impact
Insurance covering QS firms against claims arising from professional negligence. AI use without documentation creates a coverage risk: if AI contributes to an error and a claim is made, the insurer will ask what AI was used and how it was validated. No documentation means no answer. Firms should raise their AI governance approach with their PI broker at renewal and confirm their policy position on AI-assisted outputs.
Written assessment of each AI vendor covering: training data quality and potential bias, environmental impact of model training and inference, data handling and retention practices, liability position if outputs cause harm, and compliance with UK data protection law. Where a vendor cannot or will not provide this information, the gaps must be logged in the risk register — not as grounds to avoid the tool altogether, but so the residual risk is documented explicitly.
See also: Risk Register, Data Governance
A construction professional who assesses, monitors, and reports on construction project costs and progress. In construction finance, QS firms act as independent monitors for lenders — inspecting sites, validating cost claims, and recommending drawdown amounts. All RICS-regulated QS firms using AI in this service delivery role are subject to the mandatory RICS AI standard from 9 March 2026.
The process by which a firm verifies that AI outputs are accurate, appropriate, and fit for the professional purpose before they are used. May include named QS review at point of use, dip-sampling of high-volume outputs, cross-referencing against alternative data sources, or structured checklists. The quality assurance approach must be documented — the standard requires evidence that human professional judgement was applied, not merely that AI was used.
See also: Dip-Sampling, Reliability Assessment
A written record — per material AI output — documenting: the assumptions made in generating the output, the limitations identified, the mitigations applied, and a fitness-for-purpose conclusion. Must be signed off by a named, qualified surveyor (MRICS or FRICS) at the point the output is used, not retrospectively. A structured template applied consistently across the firm is sufficient — it does not need to be a lengthy document.
See also: Material Impact, Audit Trail
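The per-output template described above maps onto a structured record with a credential check at sign-off. A sketch with illustrative field names:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReliabilityAssessment:
    """Per-output reliability record (illustrative structure)."""
    assumptions: str       # assumptions made in generating the output
    limitations: str       # limitations identified
    mitigations: str       # mitigations applied
    fit_for_purpose: bool  # the fitness-for-purpose conclusion
    signed_off_by: str     # named, qualified surveyor
    credentials: str       # must be MRICS or FRICS
    signed_at: datetime    # at the point of use, not retrospective

    def __post_init__(self) -> None:
        # Sign-off requires a named, qualified surveyor.
        if self.credentials not in ("MRICS", "FRICS"):
            raise ValueError("sign-off requires an MRICS or FRICS surveyor")
```

A consistent template like this, filled in per material output, is sufficient; length is not the point.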
A per-page compliance component in BankBuild that declares AI involvement in every platform output. Three states: No AI Processing (all data from structured API lookups), AI-Assisted — Approved (AI output reviewed and signed off by named QS), and AI-Assisted — Pending Review (AI output awaiting QS sign-off — blocks PDF export until resolved). Every interaction logged with timestamp, model version, prompt sent, QS decision, and any corrections made. The register is the §4.2 reliability assessment mechanism and the §4.3 client disclosure source, generated automatically as a byproduct of normal workflow.
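The three register states and the export-blocking rule can be sketched as an enum plus a guard. The names below are illustrative, not BankBuild's actual API:

```python
from enum import Enum

class AIStatus(Enum):
    NO_AI = "No AI Processing"                # all data from structured API lookups
    APPROVED = "AI-Assisted - Approved"       # reviewed and signed off by a named QS
    PENDING = "AI-Assisted - Pending Review"  # awaiting QS sign-off

def can_export_pdf(page_states: list[AIStatus]) -> bool:
    """A pending AI output on any page blocks PDF export until resolved."""
    return AIStatus.PENDING not in page_states

print(can_export_pdf([AIStatus.NO_AI, AIStatus.APPROVED]))   # True
print(can_export_pdf([AIStatus.APPROVED, AIStatus.PENDING])) # False: sign-off outstanding
```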
A documented list of risks associated with each AI system used in service delivery. Must cover: bias in outputs, risk of erroneous outputs, data quality limitations, data retention risks, and vendor-specific risks identified through procurement due diligence. Each risk requires a description, likelihood rating, impact rating, mitigation, and RAG status. Reviewed and updated at least quarterly by a responsible staff member.
See also: Bias, Erroneous Output, Procurement Due Diligence
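Each risk entry's required fields can be sketched as a record with a derived RAG status. The field names and the likelihood-times-impact RAG rule are illustrative; the standard mandates the fields, not the scoring scheme:

```python
from dataclasses import dataclass

RATINGS = ("low", "medium", "high")

@dataclass
class RiskEntry:
    """One risk per AI system (illustrative structure)."""
    description: str  # e.g. bias, erroneous output, data retention
    likelihood: str   # low / medium / high
    impact: str       # low / medium / high
    mitigation: str   # documented mitigation

    def rag(self) -> str:
        """Derive RAG from likelihood and impact (a simple illustrative rule,
        not a scheme mandated by the standard)."""
        score = RATINGS.index(self.likelihood) + RATINGS.index(self.impact)
        return ["green", "green", "amber", "amber", "red"][score]

risk = RiskEntry(
    description="LLM hallucination in cost summary",
    likelihood="medium",
    impact="high",
    mitigation="Named QS review of every summary before use",
)
print(risk.rag())  # amber
```

However the ratings are derived, each entry needs a description, likelihood, impact, mitigation, and RAG status, reviewed at least quarterly.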
AI used by staff without formal firm approval — typically consumer tools such as ChatGPT, Microsoft Copilot, or Grammarly, used to draft report sections, summarise documents, or check figures. The RICS standard makes firms responsible for all AI in service delivery whether formally approved or not. A surveyor using an unapproved tool on a monitoring report creates a compliance obligation for the firm, even if the principal was unaware.
See also: AI System Register, Material Impact
BankBuild's multi-step inspection wizard for ongoing construction monitoring visits. Builds on previous reports via copy-forward. Where AI features are active in the platform, each interaction is logged automatically, with QS sign-off creating the required reliability decision. PDF export includes the auto-generated §4.3 client disclosure appendix.
BankBuild generates your usage register, reliability decisions, and client disclosure automatically — as a byproduct of normal monitoring inspections.
BankBuild is an AI-native construction finance monitoring platform connecting quantity surveyors, lenders, developers, and contractors through a single data layer. Built for full compliance with the RICS Professional Standard on Responsible Use of Artificial Intelligence in Surveying Practice (1st edition, ISBN 978 1 78321 555 3), effective 9 March 2026. Headquartered in the UK.