# Clinical Counterfactual Systemic Cascade Simulation v0.2

Sample rows:

| counterfactual_divergence_score | cascade_pressure_index | systemic_coupling_score | intervention_reversibility_index | drift_gradient | stabilization_window_score | latent_failure_load | coordination_stability_score | label_counterfactual_cascade_failure |
|---|---|---|---|---|---|---|---|---|
| 0.18 | 0.22 | 0.28 | 0.84 | -0.17 | 0.79 | 0.24 | 0.88 | 0 |
| 0.26 | 0.34 | 0.39 | 0.72 | -0.09 | 0.68 | 0.35 | 0.77 | 0 |
| 0.41 | 0.49 | 0.53 | 0.58 | 0.05 | 0.52 | 0.49 | 0.63 | 1 |
| 0.55 | 0.61 | 0.66 | 0.44 | 0.14 | 0.39 | 0.62 | 0.51 | 1 |
| 0.67 | 0.73 | 0.77 | 0.31 | 0.23 | 0.28 | 0.74 | 0.42 | 1 |
| 0.21 | 0.27 | 0.32 | 0.79 | -0.14 | 0.74 | 0.28 | 0.84 | 0 |
| 0.74 | 0.81 | 0.84 | 0.25 | 0.31 | 0.21 | 0.82 | 0.34 | 1 |
| 0.36 | 0.43 | 0.47 | 0.63 | 0.01 | 0.57 | 0.43 | 0.69 | 0 |
| 0.59 | 0.66 | 0.71 | 0.38 | 0.18 | 0.34 | 0.68 | 0.46 | 1 |
| 0.16 | 0.2 | 0.25 | 0.87 | -0.2 | 0.82 | 0.22 | 0.9 | 0 |
## What this is

A small dataset that tests one question: can you detect when a counterfactual clinical system is moving toward cascade failure, not just carrying instability?

This repo focuses on counterfactual systemic cascade simulation. It models a system where:
- counterfactual divergence may widen
- cascade pressure may rise
- systemic coupling may tighten
- reversibility may shrink before overt failure appears
## Run this first

Generate baseline predictions:

```
python baseline_heuristic.py data/tester.csv predictions.csv
```

Score them:

```
python scorer.py data/tester.csv predictions.csv
```

That is enough to see the full evaluation loop.
You will get:

- standard metrics
- trajectory detection performance
- counterfactual cascade failure detection errors
## What to try next

Replace the baseline. Build your own model. Output a file like:

```
id,prediction_score
0,0.12
1,0.81
2,0.67
```

Then run:

```
python scorer.py data/tester.csv your_predictions.csv
```
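As a minimal sketch, the prediction file above can be written with Python's `csv` module. The scores here are placeholders, not real model output:

```python
import csv

# Placeholder scores -- in practice these come from your model.
scores = {0: 0.12, 1: 0.81, 2: 0.67}

with open("your_predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "prediction_score"])
    for row_id, score in scores.items():
        writer.writerow([row_id, score])
```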
## What matters

Not just accuracy. The key signals are:

- `recall_trajectory_deterioration_detection`
- `false_stable_trajectory_rate`

These tell you:

- are you catching systems that are getting worse?
- are you missing hidden cascade failure?
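The exact definitions live in `scorer.py`; the sketch below shows one plausible reading, assuming `recall_trajectory_deterioration_detection` is recall on the failing class and `false_stable_trajectory_rate` is the fraction of truly failing systems predicted stable:

```python
def recall_deterioration(y_true, y_pred):
    """Recall on the positive (cascade-failure) class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

def false_stable_rate(y_true, y_pred):
    """Fraction of truly failing systems predicted stable (missed failures)."""
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    positives = sum(y_true)
    return fn / positives if positives else 0.0
```

Under this reading, the two metrics are complements: every missed deterioration raises the false-stable rate.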
## Data

Each row represents a counterfactual clinical system state.

Core variables:

- `counterfactual_divergence_score`
- `cascade_pressure_index`
- `systemic_coupling_score`
- `intervention_reversibility_index`
- `drift_gradient`
- `stabilization_window_score`
- `latent_failure_load`
- `coordination_stability_score`

Target:

- `label_counterfactual_cascade_failure`
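For orientation, loading and splitting the data might look like this. In practice you would call `pd.read_csv("data/train.csv")`; a two-row inline frame built from the sample table above stands in for the file here:

```python
import pandas as pd

# In practice: df = pd.read_csv("data/train.csv")
df = pd.DataFrame({
    "counterfactual_divergence_score": [0.18, 0.67],
    "cascade_pressure_index": [0.22, 0.73],
    "systemic_coupling_score": [0.28, 0.77],
    "intervention_reversibility_index": [0.84, 0.31],
    "drift_gradient": [-0.17, 0.23],
    "stabilization_window_score": [0.79, 0.28],
    "latent_failure_load": [0.24, 0.74],
    "coordination_stability_score": [0.88, 0.42],
    "label_counterfactual_cascade_failure": [0, 1],
})

target = "label_counterfactual_cascade_failure"
X = df.drop(columns=[target])  # eight feature columns
y = df[target]                 # binary cascade-failure label
```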
## Important distinction

There are two different components in this repo.

`scorer.py`:

- evaluates predictions
- domain-agnostic
- works across all v0.2 datasets
- does not generate predictions

`baseline_heuristic.py`:

- generates predictions
- domain-specific
- uses the variables in this dataset

Do not reuse the heuristic across datasets. It is only a local reference.
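For intuition only (this is not the actual `baseline_heuristic.py`), a heuristic in its spirit might fold pressure, coupling, and lost reversibility into one score:

```python
def cascade_score(cascade_pressure, systemic_coupling, reversibility):
    # Illustrative only: average of pressure, coupling, and lost reversibility.
    return (cascade_pressure + systemic_coupling + (1.0 - reversibility)) / 3.0

# Sample rows from the table above: a stable row scores low, a failing row high.
stable = cascade_score(0.22, 0.28, 0.84)   # from a label-0 row
failing = cascade_score(0.73, 0.77, 0.31)  # from a label-1 row
```

This is exactly the kind of rule that should not be reused elsewhere: it is tied to this dataset's variables and their ranges.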
## What changed from v0.1

- v0.1: static counterfactual cascade classification
- v0.2: adds direction via `drift_gradient`

This allows you to separate:

- unstable but recovering counterfactual systems
- unstable and deteriorating counterfactual systems
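One way to use the new signal, as a sketch: an unstable system with a negative `drift_gradient` is easing, while a positive gradient means it is still deteriorating. The pressure threshold of 0.4 below is made up for illustration, not a rule from this repo:

```python
def trajectory_state(cascade_pressure, drift_gradient, pressure_threshold=0.4):
    # drift_gradient < 0: instability is easing; >= 0: it is growing.
    if cascade_pressure < pressure_threshold:
        return "stable"
    if drift_gradient < 0:
        return "unstable, recovering"
    return "unstable, deteriorating"
```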
## Why this exists

Most models answer: what is happening now? This dataset tests: where is the counterfactual system going? That difference is where cascade failure appears early.
## Files

- `data/train.csv` — training data
- `data/tester.csv` — evaluation data
- `scorer.py` — canonical evaluation script
- `baseline_heuristic.py` — dataset-specific reference model
- `README.md` — dataset card
## Evaluation

Primary metric: `recall_trajectory_deterioration_detection`

Secondary metric: `false_stable_trajectory_rate`

Standard metrics are also reported: accuracy, precision, recall, f1.

The scorer supports either binary predictions or score-based predictions.
## License

MIT
## Structural note

Clarus datasets are structural instruments: they are designed to expose instability geometry, not just predict isolated outcomes. This v0.2 repo adds directional state movement so the dataset can separate static counterfactual instability from active deterioration in systemic cascade simulation.
## Intended use

This dataset can be used in:

- counterfactual intervention research
- systemic failure simulation
- escalation pathway modeling
- clinical scenario stress testing
- model benchmarking for trajectory-aware cascade reasoning

It is suitable for research and prototyping. It is not a substitute for live clinical judgment.
## Enterprise & research collaboration

Clarus builds datasets for:

- instability detection
- trajectory tracking
- intervention reasoning

These structures are not domain-bound; they apply wherever systems move toward or away from failure. Applicable domains include:

- healthcare systems
- financial markets
- energy infrastructure
- logistics networks
- artificial intelligence systems
- manufacturing systems
- supply chains
- climate systems

Any environment where:

- capacity and demand interact
- delays and coupling exist
- trajectory determines outcome

This dataset is one instance of a general stability framework.