Tag: AI

  • How to Validate a Software Specification

    Interview Refresher

Are we building the right product? A software specification defines the functionality of a system and the constraints under which it must operate. Requirements validation ensures that the specification is correct, consistent, complete, feasible, verifiable, and aligned with business and user needs.

    Validation is different from verification:

    • Validation asks: Are we building the right product?
    • Verification asks: Are we building the product right? [1]

    Both are essential, but validation comes first because it confirms that the defined system is actually the one stakeholders expect and that it is fit for purpose.

Below are the core techniques for validating the requirements in a technical specification.


    1. Reviewer Check for Errors and Inconsistencies

    A reviewer or team of reviewers reads through the specification to identify:

    • Errors
    • Inconsistencies
    • Contradictions
    • Unclear sentences
    • Missing details

    This is often the quickest way to uncover obvious issues before deeper validation activities begin.


    2. Prototyping

    Prototyping involves building a simple working model or partial system to help customers and engineers clarify what the final system should do. It is particularly useful when requirements are unclear, complex, or high risk. Prototypes help stakeholders “see” the system and confirm whether the written requirements reflect their expectations.
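A prototype can be as small as a script with canned data that lets stakeholders walk through one flow. The sketch below is a minimal illustration of the idea; the "order status" feature, its messages, and the fake data are all invented for this example.

```python
# A throwaway prototype need not be production code: a few lines that
# let stakeholders walk through a flow can expose missing requirements.
# The "order status" flow and canned data below are invented examples.

FAKE_ORDERS = {
    "1001": "shipped",
    "1002": "processing",
}

def order_status(order_id):
    """Prototype lookup -- canned data instead of a real database."""
    status = FAKE_ORDERS.get(order_id)
    if status is None:
        # Walking through this case with users raises a question the
        # written spec must answer: what should an unknown ID display?
        return "order not found"
    return f"order {order_id} is {status}"

print(order_status("1001"))  # order 1001 is shipped
print(order_status("9999"))  # order not found
```

Even this trivial mock surfaces a requirements question (the unknown-ID case) that a purely written specification might leave unstated.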


    3. Generating Test Cases

A powerful technique is to generate test cases directly from the specification. If a requirement cannot be tested, it is not written clearly enough.

    Test case generation helps identify:

    • Ambiguous or vague requirements
    • Missing conditions
    • Contradictory behaviours
    • Requirements that are not verifiable

    This technique links validation to quality assurance.
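As a sketch of the idea, consider a hypothetical requirement, "After three consecutive failed login attempts, the account shall be locked." Writing a test for it immediately forces the specification to answer questions the prose leaves open (does a successful login reset the counter? is a locked account permanently rejected?). The class and requirement below are invented for illustration.

```python
# Hypothetical requirement: "After three consecutive failed login
# attempts, the account shall be locked." Deriving a test forces the
# spec to say what counts as an attempt and whether success resets it.

class Account:
    MAX_FAILED_ATTEMPTS = 3  # taken directly from the requirement

    def __init__(self, password):
        self._password = password
        self._failed = 0
        self.locked = False

    def login(self, password):
        if self.locked:
            return False
        if password == self._password:
            self._failed = 0  # the spec must state whether success resets
            return True
        self._failed += 1
        if self._failed >= self.MAX_FAILED_ATTEMPTS:
            self.locked = True
        return False

def test_account_locks_after_three_failures():
    account = Account(password="secret")
    for _ in range(3):
        assert not account.login("wrong")
    assert account.locked
    # Even the correct password is rejected once locked
    assert not account.login("secret")
```

If any of those assertions cannot be written, the requirement is ambiguous and should go back to the stakeholders.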


    4. Validity Checks

    Validity checks ensure that the requirements genuinely reflect business or stakeholder needs. A specification may be formally correct yet still include requirements that do not contribute to the real objectives of the system. During validity checks, we ask:

    • Does this requirement belong in the system?
    • Does it support a real user or business need?

    5. Consistency Checks

    Consistency checks ensure that requirements do not contradict each other.
    Examples include:

    • Conflicting business rules
    • Two requirements defining different behaviours for the same scenario
    • Contradictions between functional and non-functional constraints

    A consistent specification avoids logical conflicts that could derail design or testing later.
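The second kind of conflict can even be checked mechanically if requirements are tabulated. Below is a toy sketch: requirements are listed as (ID, scenario, behaviour) triples, and any scenario assigned more than one behaviour is flagged. The requirement IDs and wording are made up for illustration.

```python
# Toy consistency check: tabulate requirements as (id, scenario,
# behaviour) triples and flag scenarios with conflicting behaviours.
# The requirement IDs and wording below are invented examples.

requirements = [
    ("REQ-12", "payment declined", "retry up to 3 times"),
    ("REQ-31", "payment declined", "cancel the order immediately"),
    ("REQ-07", "session timeout", "return user to login page"),
]

def find_conflicts(reqs):
    """Return scenarios that two or more requirements define differently."""
    by_scenario = {}
    for req_id, scenario, behaviour in reqs:
        by_scenario.setdefault(scenario, []).append((req_id, behaviour))
    return {
        scenario: entries
        for scenario, entries in by_scenario.items()
        if len({behaviour for _, behaviour in entries}) > 1
    }

conflicts = find_conflicts(requirements)
# "payment declined" is flagged: REQ-12 and REQ-31 disagree.
```

Real specifications rarely come pre-tabulated like this, but building such a table is itself a useful validation exercise.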


    6. Completeness Checks

    Completeness checks ensure that nothing important is missing from the specification.
    This includes verifying that:

    • All inputs and outputs are defined
    • All scenarios, constraints, and user types are considered
    • There are no “TBD” or “to be confirmed” placeholders
    • Non-functional requirements (performance, reliability, response time) are fully stated

    Incomplete specifications are one of the main causes of project delays and rework.
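Part of this check can be automated: a simple scan of the specification text for placeholder phrases catches the easiest gaps. The patterns and sample lines below are illustrative, not an exhaustive checklist.

```python
import re

# Minimal completeness scan: flag "TBD"-style placeholders in a
# specification. The patterns and sample lines are illustrative only.

PLACEHOLDER = re.compile(r"\b(TBD|TBC|to be confirmed|to be decided)\b",
                         re.IGNORECASE)

spec_lines = [
    "The export format shall be CSV.",
    "Maximum file size: TBD.",
    "Retention period to be confirmed with legal.",
]

def incomplete_lines(lines):
    """Return (line number, text) pairs that contain a placeholder."""
    return [(n, line) for n, line in enumerate(lines, 1)
            if PLACEHOLDER.search(line)]

for number, line in incomplete_lines(spec_lines):
    print(f"line {number}: {line}")
```

A scan like this cannot prove completeness, of course; it only catches gaps the authors already marked.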


    7. Verifiability

    A verifiable requirement can be tested. If a requirement cannot be verified, it cannot be validated during testing or acceptance.

Example of a non-verifiable requirement:

    “The system should respond quickly.”

    This must be rewritten using measurable criteria, for example: “The system shall respond to 95% of user requests within two seconds under normal load.”
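Once the requirement carries measurable criteria (the 95% and two-second figures here are invented for illustration), it becomes directly checkable against measured data:

```python
# Checkable form of the rewritten requirement: "95% of requests shall
# complete within 2.0 seconds". The threshold and percentile are
# illustrative numbers, not values from any real specification.

LIMIT_SECONDS = 2.0
PERCENTILE = 95

def meets_requirement(response_times):
    """True if at least 95% of measured responses are within the limit."""
    within = sum(1 for t in response_times if t <= LIMIT_SECONDS)
    return within / len(response_times) * 100 >= PERCENTILE

fast = [0.4, 0.9, 1.2, 1.8, 0.7]   # all within 2.0 s -> passes
slow = [0.4, 2.5, 3.1, 0.7]        # only 50% within 2.0 s -> fails
assert meets_requirement(fast)
assert not meets_requirement(slow)
```

No equivalent check can be written for "respond quickly", which is exactly why the vague form fails validation.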


    8. Realistic or Feasible Requirements

    Feasibility checks ensure the requirement can realistically be implemented. This includes checking:

    • Technical feasibility
    • Cost and budget constraints
    • Timeline constraints
    • Existing system limitations
    • Available skills and technologies

    A requirement that cannot be achieved in practice is not a valid requirement.


    Summary

By checking for validity, consistency, completeness, verifiability, and feasibility, errors and omissions can be identified early in the software development life cycle (SDLC), improving overall project outcomes. If you need to perform detailed and thorough work, there is an IEEE standard: “IEEE Standard for System, Software, and Hardware Verification and Validation”.[2]

    Footnotes

[1] Ian Sommerville, Software Engineering, 10th edition, Pearson, 2016, p. 14.
    [2] IEEE Standard 1012-2024, IEEE Standard for System, Software, and Hardware Verification and Validation. Available at: https://ieeexplore.ieee.org/document/11134780.

  • Getting the Bugs Out of Software – Saving Lives and Money with AI

$2.41 trillion. That’s the estimated cost of poor software quality in the US for 2022. $607bn of that was just finding and fixing bugs. Imagine a world where even safety-critical software systems do not fail. By safety-critical, I mean the systems that control airplanes and air traffic, 911 calls, and the power grid delivering gas and electricity supplies. All of these have had reported disasters in which a computer glitch was cited as the cause. Medical machines are controlled by software, from the heart-rate monitor on watches to the robot surgeon in the hospital. The Food and Drug Administration (FDA) has a whole database of recalled medical devices that list software issues as the root cause. Some of them are Class I, which means there is “a reasonable chance” that the product will cause health problems or death.[1]

Software engineers are constantly trying to improve how software is created at all stages of the process, not just coding. How can it be made right the first time? It is not possible to prove that there are no bugs, but engineers work hard to find them at every stage of creating and testing the software. Once it is released, engineers still have to maintain the code and fix the issues that inevitably show up. Now I’m sure you’ve heard about Artificial Intelligence (AI), ChatGPT, machine learning, and big data. My research involved looking for ways to use those machine learning algorithms to find bugs. I experimented to see if certain models can work together to make one stronger model, something like a voting system to pick the best answer. The data I fed into the models to predict where the errors are was created by NASA and is freely available for the purpose of improving software engineering.
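To make the voting idea concrete, here is a from-scratch sketch of majority voting. The three "models", their thresholds, and the toy software metrics are invented for illustration; the actual research used trained machine-learning models on NASA's defect datasets, not hand-written rules.

```python
# Majority-voting sketch: several simple "models" each predict whether
# a module is likely defective from its metrics, and the ensemble takes
# the majority answer. The thresholds and metrics below are invented;
# real defect prediction uses trained machine-learning models.

def by_size(m):       return m["loc"] > 300          # big modules are riskier
def by_complexity(m): return m["cyclomatic"] > 10    # tangled control flow
def by_churn(m):      return m["recent_changes"] > 5 # frequently edited code

MODELS = [by_size, by_complexity, by_churn]

def predict_defective(module):
    """Majority vote: flag the module if most models say defective."""
    votes = sum(model(module) for model in MODELS)
    return votes > len(MODELS) / 2

module = {"loc": 850, "cyclomatic": 14, "recent_changes": 2}
print(predict_defective(module))  # two of three vote "defective" -> True
```

The appeal of voting is that the ensemble can be right even when any single model is wrong, provided the models make different kinds of mistakes.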

[Image: High-power electricity poles in an urban area connected to a smart grid. Generated with ChatGPT.]

So, if software engineers can use AI to predict and find more of the errors while building and testing software, they won’t have to go back and fix things. That time can be used instead to innovate and create better new products and services. If they can find the problems in software that is already out there before safety-critical systems go down, it will save not only billions of dollars but also lives.

This post is adapted from my 3MT[2] (Three Minute Thesis) entry in November 2023 at East Carolina University.

1. Medical Device Recalls, U.S. Food & Drug Administration. https://www.fda.gov/medical-devices/medical-device-safety/medical-device-recalls. Accessed 1/14/2025.
    2. Three Minute Thesis (3MT), The University of Queensland. https://threeminutethesis.uq.edu.au/higher-degrees-researchstart-your-3mt-journey-here. “The competition supports their capacity to effectively explain their research in three minutes, in a language appropriate to a non-specialist audience.”