12  L10 — Inverse Modeling & Parameter Identification

Calibrating Constitutive Models from Experimental Data


12.1 The Inverse Problem

Forward problem: given parameters \(\boldsymbol{\theta}\), compute response \(\mathbf{y} = \mathcal{M}(\boldsymbol{\theta})\).

Inverse problem: given observed data \(\mathbf{y}^\text{obs}\), find \(\boldsymbol{\theta}^*\) such that \[ \boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}} \mathcal{J}(\boldsymbol{\theta}), \qquad \mathcal{J}(\boldsymbol{\theta}) = \|\mathbf{y}^\text{obs} - \mathcal{M}(\boldsymbol{\theta})\|^2. \]

Challenges: non-uniqueness, ill-posedness, expensive forward model.
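As a minimal sketch of the forward/inverse pair (a hypothetical one-parameter linear-elastic model with synthetic data, not from the lecture):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model M(theta): linear elasticity with a single
# parameter, Young's modulus E, so y = E * strain.
def forward(theta, eps):
    return theta[0] * eps

# Synthetic "observed" data generated with E_true = 200e3 MPa plus noise.
rng = np.random.default_rng(0)
eps = np.linspace(0.0, 2e-3, 20)
E_true = 200e3
y_obs = forward([E_true], eps) + rng.normal(0.0, 0.5, eps.size)

# Inverse problem: minimize J(theta) = ||y_obs - M(theta)||^2.
def J(theta):
    return np.sum((y_obs - forward(theta, eps)) ** 2)

result = minimize(J, x0=[100e3], method='Nelder-Mead')
E_identified = result.x[0]
```

Even this well-posed toy case hints at the listed challenges: with noisier data, correlated parameters, or an expensive \(\mathcal{M}\), the minimum flattens and each evaluation of \(\mathcal{J}\) becomes costly.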

12.2 Typical Experimental Tests

| Test | Primary measurement | Model parameters identified |
|---|---|---|
| Uniaxial tension/compression | \(\sigma\)–\(\varepsilon\) curve | \(E\), \(\sigma_y\), \(H_\text{iso}\) |
| Cyclic loading | Hysteresis loop | Kinematic hardening \(c\), \(\gamma\) |
| Creep | \(\varepsilon(t)\) at const. \(\sigma\) | \(\tau_i\), \(E_i\) (viscoelastic) |
| Biaxial / bulge | 2D stress state | Anisotropy, yield locus shape |
| Digital Image Correlation (DIC) | Full-field strain | Spatial distribution of parameters |

12.3 Sensitivity Analysis

Local sensitivity: how much does output change with a small parameter perturbation? \[ S_{ij} = \frac{\partial y_i}{\partial \theta_j} \quad (\text{sensitivity matrix } \mathbf{S}) \]

Computed by:

  • Finite differences (simple, expensive: one forward solve per parameter)
  • Adjoint method (one additional solve, independent of number of parameters)
  • Automatic differentiation (via JAX, PyTorch, Enzyme)
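The first option can be sketched as a generic forward finite-difference helper (illustrative names, not from the lecture); note the one extra forward solve per parameter:

```python
import numpy as np

def sensitivity_fd(model, theta, rel_step=1e-6):
    """Forward finite-difference sensitivity matrix S[i, j] = dy_i / dtheta_j.

    Costs one additional forward solve per parameter, as noted above.
    """
    theta = np.asarray(theta, dtype=float)
    y0 = np.asarray(model(theta))
    S = np.empty((y0.size, theta.size))
    for j in range(theta.size):
        h = rel_step * max(1.0, abs(theta[j]))  # scale step to parameter size
        theta_p = theta.copy()
        theta_p[j] += h
        S[:, j] = (np.asarray(model(theta_p)) - y0) / h
    return S

# Example: linear model y = E * strain, so dy/dE is just the strain vector.
strain = np.array([0.0, 1e-3, 2e-3])
S = sensitivity_fd(lambda th: th[0] * strain, [200e3])
```

The relative step scaling matters in calibration problems, where parameters like \(E\) (in MPa) and hardening moduli can differ by orders of magnitude.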

12.4 Optimization Algorithms

Gradient-based:

  • Levenberg-Marquardt (LM) — standard for least-squares
  • L-BFGS, BFGS — general smooth objectives
  • Both require gradients, supplied by finite differences, the adjoint method, or automatic differentiation

Gradient-free:

  • Nelder-Mead simplex — robust for small parameter spaces
  • Differential Evolution, CMA-ES — global optimization

Bayesian:

  • Markov Chain Monte Carlo (MCMC) — full posterior, expensive
  • Gaussian Process surrogate + Bayesian optimization
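As a quick sketch of the gradient-free route, SciPy's differential_evolution on a hypothetical two-parameter misfit (illustrative, not a calibration from the lecture):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical smooth misfit with global minimum at theta = (3.0, 0.5).
def misfit(theta):
    a, b = theta
    return (a - 3.0) ** 2 + 10.0 * (b - 0.5) ** 2

# Only bounds are needed: no gradient, no initial guess.
result = differential_evolution(misfit, bounds=[(0, 10), (0, 1)], seed=42)
```

The trade-off is cost: population-based methods spend many forward solves per iteration, which matters when \(\mathcal{M}\) is an expensive FE simulation.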

12.5 Formulating the Objective Function

Weighted least squares: \[ \mathcal{J}(\boldsymbol{\theta}) = \sum_k w_k\left(\frac{y_k^\text{obs} - y_k^\text{sim}(\boldsymbol{\theta})}{\sigma_k}\right)^2 \]

where \(\sigma_k\) is the measurement uncertainty at data point \(k\).

Include regularization for ill-posed problems: \[ \mathcal{J}_\text{reg} = \mathcal{J} + \lambda\|\boldsymbol{\theta} - \boldsymbol{\theta}_0\|^2 \quad (\text{Tikhonov}) \]
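With a least-squares solver, both the weighting and the Tikhonov term can be folded into a single residual vector whose squared norm equals \(\mathcal{J}_\text{reg}\) (a generic sketch; the names are illustrative):

```python
import numpy as np

def weighted_regularized_residual(theta, y_obs, y_sim_fn, sigma, theta0, lam):
    """Residual vector r with ||r||^2 = J + lam * ||theta - theta0||^2.

    Weighted misfit terms (y_obs - y_sim) / sigma are stacked with Tikhonov
    terms sqrt(lam) * (theta - theta0), so a standard least-squares solver
    minimizes the regularized objective without modification.
    """
    misfit = (y_obs - y_sim_fn(theta)) / sigma
    penalty = np.sqrt(lam) * (np.asarray(theta) - np.asarray(theta0))
    return np.concatenate([misfit, penalty])
```

Stacking the penalty as extra residual rows is how Tikhonov regularization is usually passed to solvers like scipy.optimize.least_squares, which expect a residual vector rather than a scalar objective.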

12.6 Practical Example: J2 Plasticity Calibration

from scipy.optimize import least_squares
import numpy as np

def j2_stress_strain(E, sigma_y, H, eps):
    # Uniaxial response of J2 plasticity with linear isotropic hardening
    # (one possible implementation; the lecture's own routine may differ)
    eps_y = sigma_y / E                 # yield strain
    Et = E * H / (E + H)                # elastoplastic tangent modulus
    return np.where(eps <= eps_y, E * eps, sigma_y + Et * (eps - eps_y))

def residual(params, eps_exp, sig_exp):
    E, sigma_y, H = params
    sig_sim = j2_stress_strain(E, sigma_y, H, eps_exp)
    return sig_sim - sig_exp

# eps_data, sig_data: measured strain and stress [MPa] arrays from the test
result = least_squares(
    residual,
    x0=[200e3, 300, 1000],          # initial guess [MPa]
    bounds=([100e3, 100, 0],        # lower bounds
            [300e3, 1000, 50000]),  # upper bounds
    args=(eps_data, sig_data),
    method='trf',
    verbose=2
)

12.7 Uniqueness and Identifiability

Not all parameters are simultaneously identifiable from a single experiment:

  • Uniaxial tension alone: cannot distinguish isotropic from kinematic hardening
  • Monotonic loading: cannot identify cyclic hardening parameters
  • Homogeneous tests: cannot identify spatial gradient parameters

Identifiability analysis: check the rank of \(\mathbf{S}^T\mathbf{S}\) (the Gauss-Newton approximation to the Hessian of \(\mathcal{J}\)) — rank deficiency signals parameter combinations that the data cannot distinguish.

Remedy: design multi-test campaigns, use heterogeneous stress states (notched specimens + DIC).
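The rank check can be done numerically on the singular values of \(\mathbf{S}\), which is better conditioned than forming \(\mathbf{S}^T\mathbf{S}\) explicitly (a generic sketch):

```python
import numpy as np

def identifiable(S, tol=1e-8):
    """Check identifiability via the singular values of the sensitivity matrix.

    Rank deficiency of S (equivalently of S^T S) means some parameter
    combinations leave the predicted response unchanged; the right singular
    vectors belonging to near-zero singular values show which ones.
    """
    sv = np.linalg.svd(S, compute_uv=False)
    rank = int(np.sum(sv > tol * sv[0]))
    return rank == S.shape[1], rank

# Example: two parameters entering the model only through their sum make
# the columns of S identical, so the pair is not identifiable.
S_bad = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
ok, rank = identifiable(S_bad)
```

In practice the cutoff between "small" and "zero" singular values is a judgment call tied to the measurement noise level.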

12.8 Verification and Validation

After calibration:

  1. Verification: did the algorithm work correctly? → Check convergence, residuals, sensitivity.
  2. Validation: does the model predict new experiments? → Test on a held-out dataset.

The V&V process is governed by standards (ASME V&V 10, 20) in engineering applications.

Report confidence intervals on the identified parameters, not just point estimates.
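A common way to obtain such intervals from a least-squares fit is the linearized covariance estimate \(\mathbf{cov} = s^2 (\mathbf{J}^T\mathbf{J})^{-1}\), using the Jacobian returned by the solver; a sketch on synthetic one-parameter data (illustrative, not the lecture's procedure):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic linear calibration y = E * eps with Gaussian noise.
rng = np.random.default_rng(1)
eps = np.linspace(1e-4, 2e-3, 30)
E_true = 200e3
y = E_true * eps + rng.normal(0.0, 0.5, eps.size)

res = least_squares(lambda th: y - th[0] * eps, x0=[100e3])

# Linearized covariance: cov = s^2 (J^T J)^{-1}, with s^2 the residual
# variance estimated from the fit (res.cost = 0.5 * sum(residual^2)).
J = res.jac
dof = eps.size - J.shape[1]
s2 = 2.0 * res.cost / dof
cov = s2 * np.linalg.inv(J.T @ J)
stderr = np.sqrt(np.diag(cov))
ci95 = (res.x[0] - 1.96 * stderr[0], res.x[0] + 1.96 * stderr[0])
```

The Gaussian approximation is only as good as the local linearization; for strongly nonlinear or poorly identifiable models, MCMC (Section 12.4) gives more honest uncertainty bounds.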