
The S-DRP Framework: Practical Data-Risk Propagation in AI Systems

Dr. Tejasvi Addagada | AI Governance · Data Risk · The Methodology Underneath Provable Governance


Risk in AI systems does not stay where you put it.

A privacy issue in a training dataset surfaces three months later as a legal exposure in a customer support transcript. A bias introduced at the feature engineering layer becomes a regulatory finding at the credit decision boundary. A drift in foundation model behaviour in March produces an unreviewed agentic action in July that nobody had thought to monitor.

This is the central operating reality of AI risk in 2026, and it is what most enterprise risk frameworks are not built to govern. They were built to assess data risk, model risk, and decision risk in isolation — as if each were a static attribute of a static artefact. AI systems do not have static artefacts. They have flows.

The Scientific Data-Risk Propagation framework — S-DRP — is what I have spent the last three years developing as a response to that gap.


What S-DRP is

S-DRP is a measurement-first framework for tracking how data risk, model risk, and decision risk propagate through the lifecycle of an AI system — from source data, through transformation, into model behaviour, through decisions, and out into downstream impact. It is regulator-neutral by design and regulator-evidenceable by construction. The artefacts it produces are intended to satisfy the evidence demands of any of the major AI governance regimes currently in force or in formation.

It extends rather than replaces the methodology I introduced in 2022 in Data Risk Management: Essentials to Implement an Enterprise Control Environment (Blue Rose Publishers). That book established the Contingency and Evolutionary Models — frameworks for governing data risk under uncertainty, and for letting governance evolve as the data environment matured. The premise of those models was that risk is not a one-time assessment; it is a property of a moving system that must be continuously re-evaluated.

S-DRP carries that premise into the AI era and asks the harder question: if risk moves through a system, can we measure where it goes, how much of it survives each transition, and what artefacts we need to capture to prove the result?

This is the methodology that sits underneath what I have called provable governance — governance whose claims about AI behaviour, oversight, and risk control can be evidenced by runtime artefacts an auditor or regulator can verify, rather than asserted in policy documents nobody operationally reads.



What "scientific" actually means here

The word "scientific" is doing real work in the name. It is not rhetoric.

Three things make S-DRP scientific in the operational sense, rather than the rhetorical sense.

It makes claims that can be checked. A traditional AI risk framework typically asserts that a system "has appropriate controls" or "meets the principle of accuracy." S-DRP asserts instead that a specific, named risk has propagated to a specific zone with a measurable residual magnitude — and that the artefacts proving this exist and are available for inspection. The claim is structured to be verifiable. If the artefacts do not support the claim, the claim is wrong.

It prefers measurement over assertion. Each zone in the propagation path carries one or more measurement instruments — concrete, repeatable methods for quantifying the risk that has crossed into that zone. Where measurement is genuinely difficult or premature (and in some areas of AI behaviour it is), S-DRP is explicit about the gap rather than papering over it with qualitative assessment.

It treats the methodology itself as fallible. The first version of S-DRP is not the last. Its zones, instruments, and metrics are subject to revision as practitioners apply it and report back what works, what fails, and what needs sharper definition. The 2022 book set the precedent — the Contingency and Evolutionary Models were built to evolve. S-DRP inherits that posture.

The bar this sets is uncomfortable for most existing governance practice. It is meant to be.


The structure of the framework

At the structural level, S-DRP has three components.

The propagation path is the canonical sequence of zones an AI system's data and decisions move through, from source to downstream impact. There are five zones, defined below.

The instruments are measurement methods specific to each zone, capturing the residual risk that has crossed into it. Different zones require different instruments; some are extensions of disciplines that already exist in mature data and risk functions, some are new.

The artefact register is the set of evidence objects — decision traces, context graphs, policy checkpoints, tool invocation records, outcome distributions — that an AI system must produce continuously for its propagation profile to be verifiable.

The propagation path is the spine. The instruments populate it. The artefact register is what makes the claim provable to an auditor or regulator. Together they form the operational backbone of provable governance.
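The shape of a propagation claim can be sketched in code. This is an illustrative sketch, not part of the framework's specification: the `Zone` enum follows the five zones named in the article, and the `PropagationClaim` fields (`risk_id`, `residual_magnitude`, `artefact_refs`) are hypothetical names chosen to mirror the claim structure described above.

```python
from dataclasses import dataclass, field
from enum import Enum

class Zone(Enum):
    """The five zones of the propagation path, in canonical order."""
    SOURCE = 1
    TRANSFORMATION = 2
    MODEL = 3
    DECISION = 4
    DOWNSTREAM_IMPACT = 5

@dataclass
class PropagationClaim:
    """A claim that a named risk has reached a zone with a measured residual."""
    risk_id: str
    zone: Zone
    residual_magnitude: float                       # measured, not asserted
    artefact_refs: list[str] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # A claim with no supporting artefacts is an assertion, not a claim.
        return len(self.artefact_refs) > 0
```

The point of the sketch is the last method: a claim is structured so that it fails, visibly, when the artefacts that should support it do not exist.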


Zone 1: Source

The source zone covers risk attached to data before it enters the AI lifecycle. Provenance, lawfulness of collection, consent state, sensitivity classification, freshness, lineage from upstream systems, and the quality characteristics that will eventually constrain what the system can reliably do.

Most enterprises know they need source-zone governance and most have invested in it under data quality, master data, or privacy programmes. What they typically do not have is a propagation-aware view: a record of which downstream models, decisions, and customer-facing actions a given source artefact eventually feeds, and therefore where its inherited risk surfaces.

The instruments at this zone are largely instruments enterprise data offices are already building — data lineage graphs, consent registers, sensitivity tags, freshness monitors. S-DRP requires that these instruments produce machine-readable artefacts that can be referenced from later zones. A consent state recorded only in a privacy compliance report cannot be inherited by a model card three zones downstream; it must be tagged into the lineage.

The artefact register at the source zone includes the lineage graph mapping sources to systems, the consent state registry, sensitivity classifications, the data quality profile, and freshness logs.

A propagation claim that begins at the source zone asserts: this input has these characteristics, and they will be inherited by downstream zones unless explicitly transformed away.
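A minimal sketch of the propagation-aware view described above, under assumed data: the dataset identifiers, consent states, and downstream system names here are all hypothetical, and a real lineage register would be far richer, but the lookup shows what it means for a consent state to travel with the lineage rather than sit in a compliance report.

```python
# Hypothetical source-zone registers: a lineage map from each source dataset
# to the downstream systems it feeds, plus a machine-readable consent state.
lineage = {
    "crm_customers_v3": ["feature_store.income_band", "credit_model_v7"],
    "support_transcripts": ["rag_index_2026q1"],
}
consent = {"crm_customers_v3": "explicit", "support_transcripts": "withdrawn"}

def downstream_exposure(source_id: str) -> dict:
    """The propagation-aware view: where a source's inherited risk surfaces,
    and under what consent state it travels there."""
    return {
        "consent_state": consent[source_id],
        "surfaces_in": lineage.get(source_id, []),
    }
```

A query like `downstream_exposure("support_transcripts")` is exactly the question most enterprises cannot answer today: which models and indexes does this withdrawn-consent source still feed?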


Zone 2: Transformation

The transformation zone covers everything that happens to data between source and model — feature engineering, encoding, augmentation, sampling, retrieval indexing, prompt assembly. In traditional ML pipelines this is the layer where bias is introduced, where information is destroyed or amplified, where representation choices set the upper bound on what any downstream model can learn. In LLM and agentic systems this layer expands to include retrieval-augmented context, prompt templates, agent memory updates, and the dynamic assembly of input sequences at inference time.

This zone is the most underspecified in conventional risk frameworks. Model risk management traditions inherited from credit and market risk treat it as preprocessing — a technical step assumed to be benign and stable. It is neither. A change to a prompt template, a swap of the retrieval index, an update to the embedding model — any of these can alter the risk profile of downstream model behaviour without triggering a single review under most existing governance regimes.

S-DRP requires explicit propagation tracking through this zone. Every transformation must be recorded with enough fidelity that its contribution to downstream risk can be reconstructed. Where the source zone asks what came in, the transformation zone asks what was done to it, by whom, under what policy, and with what verifiable record.

The instruments at this zone include feature drift detectors, prompt versioning systems, retrieval index audit trails, augmentation logs, and what S-DRP captures as ContextNode and ContextEdge records — explicit captures of each piece of context entering the model and the relationships between them. The ContextNode/Edge graph is the artefact that makes retrieval-based systems governable rather than opaque.

The artefact register at the transformation zone includes feature definitions and lineage, prompt template versions, the retrieval index manifest, augmentation and sampling logs, and the ContextNode/ContextEdge graph.

A propagation claim that crosses the transformation zone asserts: this transformation chain converted these inputs into this representation, with these residual risk characteristics, in a way that is reconstructable.
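The ContextNode/ContextEdge graph can be sketched as follows. The field names and the reverse-walk helper are illustrative assumptions, not the framework's schema; the sketch shows the property the article requires of this zone, that any assembled input can be walked backwards to everything that contributed to it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextNode:
    node_id: str
    kind: str          # e.g. "retrieved_chunk", "prompt_template", "memory"
    source_ref: str    # back-reference into the source-zone lineage

@dataclass(frozen=True)
class ContextEdge:
    src: str           # contributing node_id
    dst: str           # node_id it fed into
    relation: str      # e.g. "retrieved_for", "templated_into"

def contributors(target: str, edges: list[ContextEdge]) -> set[str]:
    """Walk edges backwards from a target to every node that fed it,
    making the assembled context reconstructable rather than opaque."""
    found: set[str] = set()
    frontier = {target}
    while frontier:
        frontier = {e.src for e in edges if e.dst in frontier} - found
        found |= frontier
    return found
```

With edges recorded at inference time, `contributors("prompt_v9", edges)` reconstructs the full ancestry of an assembled prompt, including the documents behind each retrieved chunk.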


Zone 3: Model

The model zone covers the behaviour of the AI system itself — what it computes, predicts, generates, or decides given an input. In traditional ML governance this is the zone where model risk management has spent most of its energy: validation testing, bias measurement, robustness evaluation, performance monitoring, drift detection.

S-DRP does not displace these activities. It absorbs them and extends them in two directions.

First, it requires that model-zone instruments produce artefacts referenceable by later zones — not merely retained in a model risk file. A bias measurement that lives in a validation report and is not exposed to downstream decision auditing has zero propagation value. The measurement must be part of the artefact register that travels with the model into production.

Second, S-DRP recognises that the "model" in agentic systems is not a single model. It is an assembly: a base foundation model whose behaviour shifts with each provider update, a context graph assembled at inference time, a tool registry that defines what actions are available, and a policy layer that constrains what the assembly is allowed to do. Each of these is itself a zone-3 artefact, and each must be governed.

The instruments at this zone include model performance profiles, drift detectors, behaviour-shift monitors against the reference model approved at deployment, tool invocation logs, and what S-DRP captures as PolicyCheckpoint records — explicit assertions made by the policy layer at inference time, retained for audit.

The artefact register at the model zone includes the model identifier and version, the reference behaviour profile, performance and drift metrics, the tool registry state at the time of inference, and the PolicyCheckpoint log.

A propagation claim that crosses the model zone asserts: this system produced this output, under these policies, using these tools, with these performance characteristics, in a way that is reconstructable to its constituent assemblies.
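A PolicyCheckpoint record, sketched minimally. The fields and the audit helper are assumed names for illustration; the essential properties are that each assertion is tied to the model version that ran, and that the retained log can be queried after the fact.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyCheckpoint:
    """An assertion the policy layer made at inference time, retained for audit."""
    policy_id: str
    assertion: str        # e.g. "tool 'refund' within authorised limit"
    passed: bool
    model_version: str    # ties the assertion to the assembly that ran it

def failed_checkpoints(log: list[PolicyCheckpoint], model_version: str):
    """Audit view: which policy assertions failed under a given model version?"""
    return [cp for cp in log
            if cp.model_version == model_version and not cp.passed]
```

Because the log carries the model version, a provider-side behaviour shift shows up as a change in which assertions start failing, rather than as an invisible drift.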


Zone 4: Decision

The decision zone covers how model output is used. This is the zone where regulatory exposure crystallises — because regulators do not regulate models per se. They regulate decisions: credit decisions, hiring decisions, fraud determinations, customer-eligibility decisions, content moderation decisions.

In legacy ML governance this zone is largely procedural: was a human in the loop, was an adverse action notice generated, was the decision logged. S-DRP requires that decisions be governed as first-class objects — captured with enough context that the question why was this decision made can be answered concretely six months later, by someone who was not present when it happened.

This is the zone where the DecisionTrace construct lives. A DecisionTrace is the canonical artefact S-DRP requires for any consequential AI-mediated decision: a structured record containing the model output, the inputs that produced it, the context that informed it, the policies that authorised it, the tools it invoked along the way, and the human review status (if any) at the time of issuance.

If a regulator arrives ninety days after a decision and asks why, the DecisionTrace is the answer. If the DecisionTrace cannot be produced, the system is — by S-DRP's standard — not in a state of provable governance for that decision class.

The instruments at this zone include decision logging systems, the DecisionTrace schema itself, human review attestation, escalation triggers, and reconciliation against approved policy.

The artefact register at the decision zone includes DecisionTrace records (one per consequential decision), the approval and escalation log, adverse action and disclosure records, and reconciliation against policy.

A propagation claim that crosses the decision zone asserts: this decision was made on these grounds, under these policies, with this review status, in a way that is reconstructable in regulatory time horizons.
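The DecisionTrace construct, sketched as a record. The fields follow the description in the text; the exact names and the reconstructability check are illustrative assumptions, not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """One structured record per consequential AI-mediated decision."""
    decision_id: str
    model_output: str
    input_refs: list[str] = field(default_factory=list)        # zone-1/2 artefacts
    context_node_ids: list[str] = field(default_factory=list)  # ContextNode graph
    policy_checkpoint_ids: list[str] = field(default_factory=list)
    tool_invocations: list[str] = field(default_factory=list)
    human_review_status: str = "none"   # e.g. "reviewed", "auto", "escalated"

    def is_reconstructable(self) -> bool:
        # If the "why" cannot be linked back to inputs and policies,
        # this decision class is not in a state of provable governance.
        return bool(self.input_refs) and bool(self.policy_checkpoint_ids)
```

The trace is deliberately a bundle of references into earlier zones, not a copy of their content: the answer to "why" is the path through the artefact register, reconstructed on demand.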


Zone 5: Downstream impact

The downstream-impact zone covers what happens after the decision — the actual effects on customers, counterparties, employees, markets, and the broader operating environment of the firm. This is the zone most poorly instrumented in existing governance frameworks and the zone most directly visible to regulators, customers, and journalists.

A model can have excellent zone-3 metrics. A decision can have a complete zone-4 trace. The downstream impact can still be that a class of customers is systematically denied credit, that a fraud system disproportionately flags a demographic group, that an agentic workflow consistently mishandles a customer journey in ways the firm does not see — because no individual decision is wrong, but the aggregate pattern is.

This is where principle-level AI governance breaks down most visibly. The principle of fairness is not violated by any single decision. It is violated by the pattern.

S-DRP requires zone-5 instruments that operate at the population and pattern level — outcome distributions across protected classes, complaint-rate analyses, cohort-level performance tracking, time-series of operational impact. These instruments are demanding. They require the firm to define, in advance, which outcomes it will track and at what cadence — a definition that itself becomes a governance artefact. They require sustained measurement infrastructure that is more expensive than discrete-decision logging. And they require institutional willingness to act on findings that may indict programmes the firm has invested in.

The instruments at this zone include outcome distribution monitoring, complaint and grievance correlation, cohort performance tracking, and what S-DRP frames as the ImpactBand — the range of acceptable downstream variance the firm has explicitly committed to maintaining for a given decision class.

The artefact register at the downstream-impact zone includes outcome distribution reports, complaint and grievance correlations, cohort performance time series, and ImpactBand definitions and breach logs.

A propagation claim that closes the path asserts: this system has produced these aggregate outcomes, against these committed bands, with these breaches and these remediations, in a way that is reconstructable across the time horizon the regulator cares about.
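The ImpactBand can be sketched as a committed range plus a cohort check. Metric names, thresholds, and cohort labels here are hypothetical; the sketch exists to make one point concrete: the breach test runs on aggregate rates, so it can fire even when every individual decision looks defensible.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImpactBand:
    """The range of acceptable downstream variance a firm has committed
    to maintaining for a given decision class."""
    decision_class: str
    metric: str            # e.g. "approval_rate"
    lower: float
    upper: float

def breaching_cohorts(band: ImpactBand, cohort_rates: dict[str, float]) -> list[str]:
    """Cohorts whose aggregate outcome falls outside the committed band.
    No individual decision need be wrong for a cohort to breach it."""
    return [cohort for cohort, rate in cohort_rates.items()
            if not (band.lower <= rate <= band.upper)]
```

Defining the band in advance is itself the governance act: the thresholds become an artefact the firm can be held to, and each breach entry joins the register alongside its remediation.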


The full propagation claim

When the five zones are instrumented end-to-end, a single AI system can produce, on demand, a continuous propagation profile: a verifiable record of how risk entered, how it was transformed, how it was applied, how it was decided upon, and what it caused. That continuous profile is what regulators have started to ask for. It is what boards should be demanding from management. It is what S-DRP is structured to produce.
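The continuity condition above can be stated in a few lines. This is a deliberately reductive sketch with assumed names: a real profile would carry the artefacts themselves, but the gating logic is the same — the end-to-end claim is only available when every zone has produced evidence.

```python
# The canonical zones, in propagation order. A profile is continuous only
# if every zone's artefact register contains at least one artefact.
ZONES = ["source", "transformation", "model", "decision", "downstream_impact"]

def propagation_profile(registers: dict[str, list[str]]) -> dict:
    """Summarise whether an end-to-end propagation claim can be made on demand."""
    missing = [z for z in ZONES if not registers.get(z)]
    return {"continuous": not missing, "missing_zones": missing}
```

A firm with excellent model metrics but no downstream-impact artefacts gets `continuous: False` — which is the honest answer a regulator query would surface anyway.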

What it is not: a replacement for principles. The principles a board signs — robustness, accuracy, transparency, fairness — remain necessary. The propagation profile is what makes them enforceable. Principles without artefacts are slogans. Artefacts without principles are noise. S-DRP is the bridge between them.


Regulatory alignment

S-DRP is regulator-neutral but regulator-evidenceable: the artefacts it produces are intended to satisfy the evidence demands of any of the major AI governance regimes currently in force or in formation.

  1. Under the EU AI Act, the documentation, record-keeping, and human-oversight obligations for high-risk systems under Articles 11 to 15, and the right to explanation of individual decision-making under Article 86, are directly served by the DecisionTrace, the PolicyCheckpoint log, and the ContextNode/Edge graph. A system implementing S-DRP is positioned to respond to a regulator query in days, not months.

  2. Under the NIST AI Risk Management Framework, the Govern, Map, Measure, and Manage functions are operationally instantiated by zone-by-zone instrumentation. NIST AI RMF specifies what good governance looks like in principle. S-DRP gives it a concrete artefact spine.

  3. Under the RBI FREE-AI Framework, the board-level oversight obligations on Indian financial institutions require artefacts that can be presented to risk committees with confidence and to the regulator on request. The artefact register is exactly that.

  4. Within the OECD.AI Policy Observatory, the framework is currently positioned as a reference methodology for translating principle-level commitments into measurable practice. The OECD principles are sound; the translation problem they leave open is what S-DRP is designed to solve.

A firm that implements S-DRP is not implementing one regulator's framework. It is building the evidence base that satisfies any of them.


Implementation path

S-DRP is not a software product. It is a methodology that produces requirements for software, organisation, and operating practice.

A realistic twelve-month implementation in a regulated enterprise looks roughly as follows.

In the first quarter, the propagation path is defined. AI systems in scope are identified, their current zones are documented, and the artefact register required for the highest-risk system class is specified. Executive agreement is secured on which zones will be instrumented first.

In the second quarter, zone 4 is stood up. DecisionTrace is implemented for the most consequential decision class. This delivers the most immediate regulatory value and creates a template for other classes.

In the third quarter, the framework extends to zones 3 and 5. Model-behaviour instruments are connected to the decision zone. Downstream-impact monitoring begins.

In the fourth quarter, the loop is closed. Source lineage and transformation tracking are backfilled — making the propagation claim end-to-end verifiable for the first system class.

Year two is maturity. Refinement of instruments. Extension to additional system classes. Integration with existing risk and compliance reporting cycles.

This is unglamorous work. It is also the work that distinguishes firms whose AI governance will survive regulator scrutiny from firms whose governance is a slide deck.


What to take from this

Five things.

  1. AI risk is not a static attribute of an AI system. It is a property of a flow.

  2. Provable AI governance requires evidence at every zone of the flow — not principles attached to the whole.

  3. The five zones — source, transformation, model, decision, downstream impact — are the canonical propagation path.

  4. Each zone produces named artefacts that constitute the evidence base. Without those artefacts, governance claims are assertions.

  5. S-DRP is regulator-neutral but regulator-evidenceable. The evidence base satisfies any current major regime.


The frameworks that matter in the AI era will be the ones that produce verifiable artefacts. The frameworks that do not will be remembered as documents that were signed by boards and ignored by engineers. The work of 2026 in AI governance is not writing better principles. It is producing better artefacts. The propagation path is where that work lives.

These are the personal views of the author and do not reflect those of any organisation. Tejasvi Addagada is the author of two books on data — Data Management and Governance Services: Simple and Effective Approaches (2017) and Data Risk Management: Essentials to Implement an Enterprise Control Environment (Blue Rose Publishers, 2022). The Scientific Data-Risk Propagation framework introduced here extends the methodology of the 2022 book into the AI era and is currently positioned as a reference contribution to the OECD.AI Policy Observatory. He writes on AI governance, data risk, and emerging-technology policy in financial services at tejasviaddagada.com.


Frequently asked questions

What is the Scientific Data-Risk Propagation (S-DRP) framework?

S-DRP is a measurement-first framework for tracking how risk propagates through the lifecycle of an AI system — from source data, through transformation, into model behaviour, through decisions, and out into downstream impact. It defines five zones, the measurement instruments for each, and the artefact register that makes governance claims verifiable to a regulator.


How does S-DRP differ from traditional model risk management?

Traditional model risk management treats the model as a relatively stable artefact subject to periodic review. S-DRP treats the AI system as a continuously changing flow and instruments the entire propagation path — not just the model. Where model risk management produces a validation report, S-DRP produces a continuously verifiable artefact register.


How does S-DRP relate to the NIST AI Risk Management Framework?

NIST AI RMF specifies what good governance looks like in principle through its Govern, Map, Measure, and Manage functions. S-DRP gives those functions a concrete artefact spine — the named evidence objects that make the principles operationally enforceable. The two are complementary; S-DRP is one practical instantiation of NIST AI RMF.


What is a DecisionTrace?

A DecisionTrace is the canonical artefact S-DRP requires for any consequential AI-mediated decision. It is a structured record containing the model output, the inputs that produced it, the context that informed it, the policies that authorised it, the tools it invoked, and the human review status at the time of issuance. If a regulator asks why a decision was made, the DecisionTrace is the answer.


What are the five zones in S-DRP?

Source, transformation, model, decision, and downstream impact. Each zone has its own risks, its own measurement instruments, and its own artefacts. The propagation profile of an AI system is the end-to-end record across all five zones.


Is S-DRP a software product?

No. S-DRP is a methodology that produces requirements for software, organisation, and operating practice. The instruments and artefacts it specifies can be implemented with combinations of in-house engineering, observability tooling already in the stack, and targeted purchases for the gaps.


Does S-DRP apply to agentic AI systems?

Yes. S-DRP is particularly well-suited to agentic systems, where the propagation path includes tool invocation, multi-step reasoning, and dynamic context assembly. The ContextNode/Edge graph, the PolicyCheckpoint log, and the DecisionTrace are all designed to capture the additional governance surface that agentic systems present.


How long does it take to implement S-DRP?

A realistic implementation in a regulated enterprise is twelve months for the first system class, followed by progressive extension. The order matters: zone 4 (DecisionTrace) typically delivers the most immediate regulatory value, followed by zones 3 and 5, with source and transformation tracking backfilled to close the loop.


How does S-DRP relate to the 2022 book on data risk management?

S-DRP extends the Contingency and Evolutionary Models introduced in that book into the AI era. The continuity is deliberate. The 2022 book established that data risk is a property of a moving system; S-DRP carries that premise into AI systems and asks the harder question of how to measure where the risk goes and what artefacts prove the result.


Where can I learn more about S-DRP?

Forthcoming pieces on this site will cover each of the five zones in depth, the measurement instruments and metrics, application case studies in credit decisioning and agentic workflows, the comparison with traditional model risk frameworks, and a CDO-level implementation guide.
