GPU-Native Catastrophe Modeling

100× Faster
Hurricane Risk Assessment

Turn roughly two hours of CPU computation into nine seconds. TensorCat processes 25,000 hurricane events with full physics simulation on consumer GPUs.

100×
Speedup vs Traditional
9.1s
25K Events Processed
95.1%
Computational Reduction
179M
Timestep Evaluations
FOUR-KERNEL ARCHITECTURE

GPU-Native Pipeline

Optimized computational flow that reduces memory usage by 44× while maintaining full physics accuracy

1

Spatial Filtering

Pre-filters event-location pairs using GPU-accelerated haversine distance calculations. Eliminates 95%+ of irrelevant computations before they occur.

95.1% pair reduction in 1.83s
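The filtering idea can be sketched in a few lines of PyTorch. This is an illustrative stand-in, not TensorCat's actual kernel; the function names and tensor shapes here are assumptions. Each storm track is reduced to its minimum haversine distance from every location, and only pairs inside the radius survive:

```python
import torch

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between coordinate tensors (degrees)."""
    R = 6371.0  # mean Earth radius, km
    lat1, lon1, lat2, lon2 = map(torch.deg2rad, (lat1, lon1, lat2, lon2))
    a = (torch.sin((lat2 - lat1) / 2) ** 2
         + torch.cos(lat1) * torch.cos(lat2) * torch.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * torch.asin(torch.sqrt(a))

def filter_pairs(track_coords, loc_coords, radius_km=400.0):
    """Boolean (n_events, n_locs) mask of pairs whose closest track
    point lies within radius_km.

    track_coords: (n_events, n_points, 2) lat/lon per storm track
    loc_coords:   (n_locs, 2) lat/lon per exposure location
    """
    ev_lat = track_coords[:, :, 0].unsqueeze(2)   # (E, P, 1)
    ev_lon = track_coords[:, :, 1].unsqueeze(2)
    loc_lat = loc_coords[:, 0].view(1, 1, -1)     # (1, 1, L)
    loc_lon = loc_coords[:, 1].view(1, 1, -1)
    d = haversine_km(ev_lat, ev_lon, loc_lat, loc_lon)  # (E, P, L) via broadcasting
    return d.min(dim=1).values <= radius_km

```

Downstream kernels then iterate only over the surviving pairs, which is where the 95%+ reduction comes from.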
2

Temporal Hazard Streaming

Streams physics through time without materializing full spatiotemporal tensors. Captures progressive damage, storm asymmetry, and debris accumulation.

69,066 pairs/sec throughput
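The streaming pattern is simple to sketch: walk through time in fixed-size batches and fold each batch into a running statistic, so the full (timesteps × pairs) tensor is never allocated. The function below is a minimal illustration of that pattern (tracking peak wind per pair), not TensorCat's kernel, and `wind_at` is a hypothetical callback returning the hazard slice for a batch of timesteps:

```python
import torch

def stream_peak_hazard(wind_at, n_steps, n_pairs, batch_size=200):
    """Running peak wind per (event, location) pair, computed in time
    batches so the full (n_steps, n_pairs) tensor never exists at once."""
    peak = torch.zeros(n_pairs)
    for t0 in range(0, n_steps, batch_size):
        t = torch.arange(t0, min(t0 + batch_size, n_steps))
        batch = wind_at(t)                            # (len(t), n_pairs) slice
        peak = torch.maximum(peak, batch.max(dim=0).values)
    return peak

```

Progressive effects such as fatigue or debris accumulation fit the same loop: they just carry extra running state from one batch to the next instead of a single maximum.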
3

Vulnerability Assessment

Component-level damage modeling with HAZUS curves. Includes roof, walls, windows, and foundation damage with temporal fatigue effects.

0.01s for 486K assessments
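Component-level vulnerability can be sketched as a value-weighted blend of per-component fragility curves. The logistic curves and the thresholds, scales, and value shares below are illustrative placeholders, not HAZUS parameters (real HAZUS curves are tabulated per building class):

```python
import torch

def damage_ratio(peak_wind_ms, threshold_ms, scale_ms):
    """Smooth S-shaped fragility: ~0 below threshold, approaching 1 well
    above it. A logistic stand-in for a tabulated HAZUS-style curve."""
    return torch.sigmoid((peak_wind_ms - threshold_ms) / scale_ms)

def building_damage(peak_wind_ms):
    """Overall damage ratio in [0, 1] as a value-weighted sum of
    component damage. All parameters are illustrative, not HAZUS values."""
    components = {            # (threshold m/s, scale m/s, value share)
        "roof":       (35.0,  6.0, 0.30),
        "windows":    (40.0,  5.0, 0.15),
        "walls":      (50.0,  8.0, 0.35),
        "foundation": (70.0, 10.0, 0.20),
    }
    total = torch.zeros_like(peak_wind_ms)
    for thr, scale, share in components.values():
        total = total + share * damage_ratio(peak_wind_ms, thr, scale)
    return total

```

Because each curve is a pure elementwise tensor op, evaluating hundreds of thousands of assessments is a single batched GPU call, which is what makes the 486K-assessment figure plausible at this timescale.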
4

Financial Aggregation

Insurance-grade loss computation with policy terms. Generates AAL, PML, TVaR, and exceedance probability curves with sanity checks.

✓ All actuarial checks passed
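The core of financial aggregation is applying policy terms per event, rolling events up to simulated years, and reading metrics off the annual loss distribution. A minimal NumPy sketch under those assumptions (the function name and the empirical PML estimator are illustrative, not TensorCat's API):

```python
import numpy as np

def financial_metrics(event_losses, event_years, n_sim_years,
                      deductible=0.0, limit=float("inf")):
    """AAL and 100-year PML from gross event losses.
    Policy terms per event: payout = min(max(loss - deductible, 0), limit)."""
    net = np.clip(event_losses - deductible, 0.0, limit)
    # Roll event payouts up into annual losses over the simulated catalogue
    annual = np.zeros(n_sim_years)
    np.add.at(annual, event_years, net)
    aal = annual.sum() / n_sim_years
    # Empirical 100-year PML: annual loss exceeded ~1/100 of the time
    sorted_desc = np.sort(annual)[::-1]
    rank = max(int(n_sim_years / 100) - 1, 0)
    return aal, sorted_desc[rank]

```

TVaR and full exceedance-probability curves follow the same recipe: sort the annual losses once, then take tail means or read losses at the desired exceedance ranks.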
from tensorcat import TensorCatPipeline, KernelConfig
from tensorcat import load_storm_data

# Load 25,000 hurricane events
event_tracks, _, event_year_mapping, n_sim_years = load_storm_data(
    "df_all.parquet", 
    max_events=25000
)

# Configure with MAXIMUM optimization preset
config = KernelConfig(
    device='cuda',
    spatial_filter_radius_km=400.0,
    temporal_batch_size=200,
    use_mixed_precision=True
)

# Initialize pipeline
pipeline = TensorCatPipeline(config)

# Run the complete analysis; portfolio_locations and building_values hold
# your exposure coordinates and insured values, loaded separately
results = pipeline.run_pipeline(
    event_tracks=event_tracks,
    location_coords=portfolio_locations,
    building_values=building_values,
    event_year_mapping=event_year_mapping,
    policy_terms={'deductible': 100000, 'limit': 5000000}
)

# Access insurance-grade outputs
print(f"Expected Annual Loss: ${results['financial']['expected_annual_loss']/1e6:.2f}M")
print(f"100-year PML: ${results['financial']['pml_metrics']['pml_100yr']/1e6:.2f}M")

# Output:
# Expected Annual Loss: $3.56M
# 100-year PML: $15.00M
# Total time: 9.1 seconds (100× faster!)
BENCHMARK RESULTS

Real-World Performance

Actual benchmark: 25,000 hurricanes across 400 locations on a consumer-grade GPU

Traditional CPU Model
120 min
~2 hours processing time
TensorCat GPU
9.1 sec
Sub-10-second analysis
100× FASTER

Throughput

• 2,750 events/second
• 53,455 pairs/second
• 19.7M timestep evaluations/second

Memory Efficiency

• 0.73GB GPU memory used
• 44× less than traditional (32GB → 0.73GB)
• Runs on consumer GPUs

Risk Metrics

• Expected Annual Loss: $3.56M
• 100-year PML: $15M
• Burn rate: 0.17% ✓ PASS

Validation

• All actuarial sanity checks passed
• PML monotonicity: ✓ PASS
• AAL/PML ratio: 0.24 ✓ PASS
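Checks like these are cheap to reproduce. The sketch below mirrors the validation list with plain Python; the check names, thresholds, and the $2.0B total insured value are assumptions for illustration, not TensorCat's internals:

```python
def sanity_checks(aal, pml_curve, total_insured_value):
    """Illustrative actuarial sanity checks.
    pml_curve: dict mapping return period (years) -> loss."""
    periods = sorted(pml_curve)
    losses = [pml_curve[p] for p in periods]
    return {
        # Loss at longer return periods must never decrease
        "pml_monotonic": all(a <= b for a, b in zip(losses, losses[1:])),
        # AAL as a fraction of the 100-year PML
        "aal_pml_ratio": aal / pml_curve[100],
        # Burn rate: expected annual loss relative to total insured value
        "burn_rate": aal / total_insured_value,
    }

```

With the reported $3.56M AAL and $15M 100-year PML, the ratio works out to 3.56 / 15 ≈ 0.24, matching the figure above.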

GET STARTED

Installation

Install TensorCat and run your first catastrophe model in under 5 minutes

# Clone repository
git clone https://github.com/sachinra0805/tensorcat.git
cd tensorcat

# Install dependencies
pip install torch numpy pandas pyarrow

# Run demo with your own data
python examples/run_tensorcat_demo.py

# Or use in your code
from tensorcat import TensorCatPipeline
pipeline = TensorCatPipeline()
results = pipeline.run_pipeline(...)

Ready to Accelerate Your Risk Models?

Join researchers and practitioners using TensorCat for hurricane risk assessment, insurance pricing, and climate science.

View on GitHub Get Support