Christina Lovelock MBCS explores how artificial intelligence could affect how business analysts approach non-functional requirements.
Let's be honest: most business analysts and our stakeholders find functional requirements far more interesting and worthy of discussion than non-functional requirements (NFRs).
Numbers of concurrent users, screen response times and audit logging are never going to generate much interest or debate, at least until something goes wrong. And with SaaS solutions and cloud storage, these traditional NFR areas are not necessarily the ones we should be focusing our attention on.
NFRs in the world of AI
NFRs define the guardrails that shape how systems behave. It is NFRs that will help us to develop and implement fair and robust AI solutions that meet ethical and regulatory expectations.
Some important areas analysts and architects need to add to their ‘standard NFRs’ are:
- Explainability
- Transparency
- Ethical audits
- Sustainability
- Bias detection and mitigation
Explainability
A key aspect of building trusted and effective AI solutions is ensuring that AI workflows, allocations and decisions can be understood and explained by humans. BAs writing narrative requirements, user stories or use cases (whether for development teams or to support procurement) must articulate business needs in relation to explaining AI outcomes. This is especially important for agentic AI, where systems act with a degree of autonomy and work towards specific goals.
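To make this concrete, here is a minimal Python sketch of what an explainability requirement might demand of a solution: every outcome carries a reason a human can read and repeat back. The loan scenario, function name and 40% threshold are all invented for illustration; a real system would attach explanations to model outputs rather than hand-written rules.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """An AI outcome paired with human-readable reasons."""
    outcome: str
    reasons: list[str] = field(default_factory=list)

def assess_application(income: float, existing_debt: float) -> Decision:
    """Illustrative rule-based stand-in for a model (the names and the
    40% threshold are hypothetical, not from any real policy)."""
    if income <= 0:
        return Decision("refer to human reviewer", ["No verifiable income provided"])
    ratio = existing_debt / income
    if ratio > 0.4:
        return Decision("refer to human reviewer",
                        [f"Debt-to-income ratio {ratio:.0%} exceeds the 40% threshold"])
    return Decision("approve",
                    [f"Debt-to-income ratio {ratio:.0%} is within the 40% threshold"])

print(assess_application(income=30000, existing_debt=15000))
```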
Transparency
Transparency and explainability are closely connected but distinct: transparency is about making system processes, data flows and decision logic visible, whereas explainability focuses on making those elements understandable to diverse audiences. Transparency is a prerequisite for ethical oversight, enabling scrutiny, trust building and informed consent in AI-driven environments. Business requirements in this area might include documentation of data flows, decision trees and escalation paths for issues or anomalies.
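As an illustration, a transparency requirement might translate into a structured decision log like the entry below. The field names and values are hypothetical, not a standard schema; the point is that the data flow, the decision logic version and the escalation path are all visible and reviewable.

```python
import json
from datetime import datetime, timezone

# Hypothetical decision log entry; field names are illustrative only.
log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model_version": "credit-risk-v1.3",          # which decision logic ran
    "inputs_used": ["income", "existing_debt"],   # data flow: fields consumed
    "data_sources": ["crm.customer_profile"],     # where those fields came from
    "outcome": "refer to human reviewer",
    "escalation_path": "queued for the credit operations team",
}
print(json.dumps(log_entry, indent=2))
```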
Ethical audits
Ethical audits help ensure that AI systems align with organisational values, legal obligations and societal expectations. This alignment will not happen by accident, or as a by-product of good functional requirements, which means BAs need to focus on requirements that support traceability, accountability and independent review of AI processes. Requirements should specify how ethical risks (such as unfair treatment, opacity or misuse) will be identified, assessed and addressed. By ensuring we include these elements in our specifications and procurement discussions, we send developers and suppliers of AI solutions a clear message about the importance of ethical as well as legal compliance.
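One way such requirements could surface in practice is an ethical risk register that supports independent review. The sketch below is purely illustrative: the fields, identifiers and values are invented, and a real register would live in the organisation's governance tooling rather than in code.

```python
# Hypothetical ethical risk register entry, showing how identification,
# assessment and remediation of an ethical risk can be made traceable.
risk_entry = {
    "id": "ETH-007",                    # invented identifier
    "risk": "Opaque refusal reasons for loan applicants",
    "category": "opacity",
    "assessment": "high",               # e.g. likelihood x impact
    "mitigation": "Reason codes surfaced to applicants and reviewers",
    "owner": "Product owner, lending",
    "review": "quarterly independent audit",
    "status": "open",
}
for key, value in risk_entry.items():
    print(f"{key:>12}: {value}")
```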
Sustainability
To assess the sustainability of AI solutions holistically, we have to look further than energy efficiency: sustainability encompasses long-term social, environmental and economic impacts. BAs should articulate requirements that promote responsible resource use, lifecycle awareness and alignment with sustainability goals. This might include specifying carbon footprint thresholds, data minimisation practices or the reuse of models and datasets. For agentic AI systems, sustainability considerations may also extend to goal alignment and unintended consequences over time.
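A carbon footprint threshold only works as an NFR if it is testable. The back-of-envelope sketch below shows the arithmetic such a requirement implies; every figure in it is a placeholder rather than a measured value.

```python
# Placeholder figures, chosen only to illustrate the calculation.
ENERGY_PER_REQUEST_KWH = 0.0003     # assumed energy per inference request
CARBON_INTENSITY_KG_PER_KWH = 0.2   # assumed grid carbon intensity
MONTHLY_REQUESTS = 5_000_000
BUDGET_KG_CO2E = 400                # hypothetical monthly budget from the NFR

monthly_kg = ENERGY_PER_REQUEST_KWH * CARBON_INTENSITY_KG_PER_KWH * MONTHLY_REQUESTS
print(f"Estimated monthly emissions: {monthly_kg:.0f} kg CO2e")
print("Within budget" if monthly_kg <= BUDGET_KG_CO2E
      else "Budget exceeded: review required")
```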
Bias detection and mitigation
Bias is not inevitable, and identifying and remediating it should not be an afterthought. BAs must define business needs that support proactive identification of bias in training data, model outputs and decision pathways. Requirements should include mechanisms for testing, monitoring and remediating bias across different user groups and contexts. This is especially important for agentic AI, where autonomous optimisation may inadvertently reinforce historic inequities or exclusionary patterns. Avoiding bias is about making good up-front design and data decisions, not spotting errors in implemented systems.
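As one example of a testable bias requirement, the sketch below applies a demographic parity check to decision outcomes. The data, the group labels and the 0.8 threshold (borrowed from the well-known 'four-fifths rule') are illustrative assumptions; real bias testing would use larger samples, multiple fairness metrics and the user groups that matter in context.

```python
# Illustrative outcome records; in practice these would come from
# model testing or production monitoring.
outcomes = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    records = [o for o in outcomes if o["group"] == group]
    return sum(o["approved"] for o in records) / len(records)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Approval rates: A={rate_a:.0%}, B={rate_b:.0%}, ratio={ratio:.2f}")
if ratio < 0.8:  # threshold is an illustrative assumption
    print("Potential disparate impact: investigate before release")
```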
Know the law
These aspects are not just a ‘nice to have’. Whilst laws and regulations specific to AI are still emerging, we must be alert to the existing laws that are especially relevant. In particular, UK GDPR (Article 22) restricts decisions based solely on automated processing that have legal or similarly significant effects on individuals. Business analysts must ensure that our requirements reflect the need for meaningful human oversight, clear explanation of outcomes and mechanisms for individuals to challenge decisions made by AI systems.
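In practice, an Article 22-driven requirement might be expressed as a gate that stops significant decisions being finalised automatically. The sketch below is a simplified illustration: the decision types and function names are hypothetical, and real oversight also needs explanation and challenge mechanisms around the gate.

```python
# Decisions with legal or similarly significant effects are never
# finalised automatically; the AI output becomes a recommendation.
SIGNIFICANT_EFFECTS = {"credit_refusal", "benefit_decision", "job_rejection"}

def finalise(decision_type: str, ai_outcome: str) -> str:
    if decision_type in SIGNIFICANT_EFFECTS:
        return f"queued for human review (AI recommends: {ai_outcome})"
    return ai_outcome  # lower-impact decisions may remain automated

print(finalise("credit_refusal", "decline"))
print(finalise("marketing_segmentation", "segment_b"))
```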
Conclusion
NFRs just got more interesting. They are no longer just about technical performance; they’re about trust, ethics and accountability. As AI systems become more autonomous and embedded in decision-making, BAs must expand our toolkit to include NFRs that safeguard human values and legal rights. By reframing NFRs as enablers of responsible innovation, we can ensure that AI solutions are not only effective, but also fair, transparent and sustainable.
Christina Lovelock is a digital leader, coach and author. She is active in the business analysis professional community and champions entry level roles. She is the author of the BCS books Careers in Tech, Data and Digital and Delivering Business Analysis: The BA Service Handbook.
Take it further
Interested in this and similar topics? Explore BCS' books and courses:
- The Psychology of AI Decision Making: Unpacking the ethics, biases, and responsibilities of AI
- Developing Information Systems: Practical guidance for IT professionals
- Getting Started with Tech Ethics: An introduction to ethics and ethical behaviours for IT professionals
- Innovating ethically to drive business change
- BCS Foundation Certificate in the Ethical Build of AI