Addressing Explainability of Generative AI using SMILE (Statistical Model-agnostic Interpretability with Local Explanations)

About

The rapid advancement of generative artificial intelligence has enabled models capable of producing complex textual and visual outputs; however, their decision-making processes remain largely opaque, limiting trust and accountability in high-stakes applications. This thesis introduces gSMILE, a unified framework for the explainability of generative models, extending the Statistical Model-agnostic Interpretability with Local Explanations (SMILE) method to generative settings. gSMILE employs controlled perturbations of textual input, Wasserstein distance metrics, and weighted surrogate modelling to quantify and visualise how specific components of a prompt or instruction influence model outputs. Applied to Large Language Models (LLMs), gSMILE provides fine-grained token-level attribution and generates intuitive heatmaps that highlight influential tokens and reasoning pathways. In instruction-based image editing models, the same text-perturbation mechanism is applied, allowing analysis of how modifications to an editing instruction affect the resulting image. Combined with a scenario-based evaluation strategy grounded in the Operational Design Domain (ODD) framework, gSMILE enables systematic assessment of model behaviour across diverse semantic and environmental conditions. To evaluate explanation quality, we define rigorous attribution metrics, including stability, fidelity, accuracy, consistency, and faithfulness, and apply them across multiple generative architectures. Extensive experiments demonstrate that gSMILE produces robust, human-aligned attributions and generalises effectively across state-of-the-art generative models. These findings highlight the potential of gSMILE to advance transparent, reliable, and responsible deployment of generative AI technologies.
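The perturbation-and-surrogate pipeline described above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: `gsmile_attributions` and `mock_model` are hypothetical names, the Wasserstein kernel width is an assumed hyperparameter, and the toy model stands in for a real LLM or image-editing model whose output would be compared via a proper output representation.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def gsmile_attributions(prompt_tokens, model_output_fn, n_samples=200,
                        kernel_width=2.0, seed=0):
    """SMILE-style token attribution (sketch).

    model_output_fn(tokens) -> 1-D array standing in for the generative
    model's output representation (e.g. an output-embedding histogram).
    Returns surrogate coefficients: one per token, plus an intercept.
    """
    rng = np.random.default_rng(seed)
    d = len(prompt_tokens)
    base = np.asarray(model_output_fn(prompt_tokens), dtype=float)

    # Binary masks over tokens (1 = keep); row 0 is the unperturbed prompt.
    masks = rng.integers(0, 2, size=(n_samples, d))
    masks[0] = 1

    # Output shift of each perturbed prompt, measured by Wasserstein distance.
    dists = np.empty(n_samples)
    for i, m in enumerate(masks):
        kept = [t for t, keep in zip(prompt_tokens, m) if keep]
        out = np.asarray(model_output_fn(kept if kept else ["[EMPTY]"]),
                         dtype=float)
        dists[i] = wasserstein_distance(base, out)

    # Locality kernel: perturbations whose output stays close get more weight.
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))

    # Weighted least-squares surrogate; coefficients act as token attributions.
    X = np.hstack([masks.astype(float), np.ones((n_samples, 1))])
    Xw = X * weights[:, None]  # X^T W X and X^T W y via row-scaling
    coefs, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ dists, rcond=None)
    return coefs  # coefs[:-1] per-token effects, coefs[-1] intercept

# Toy stand-in model: the output shifts sharply when "snowing" is kept.
def mock_model(tokens):
    base = np.linspace(0.0, 1.0, 8)
    return base + (2.0 if "snowing" in tokens else 0.0)
```

On this toy model, the token whose removal moves the output most ("snowing") receives the largest-magnitude coefficient, which is the intuition behind the token-level heatmaps described in the abstract.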

Zeinab Dehghani • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Explainability Stability Analysis | Instruction-based Image Editing Stability Evaluation (10 prompts, 30 perturbations) | -- | 6 |
| Explainability Accuracy | Instruction-based Image Editing, different prompts scenario | ATT Accuracy 79.44 | 3 |
| Explainability Accuracy | Instruction-based Image Editing, different images scenario | ATT Accuracy 95.71 | 3 |
| Attention Accuracy Evaluation | Instruction-based text prompts and images (test) | -- | 3 |
| Consistency Analysis | Prompt "What is the meaning of life?" | -- | 3 |
| Fidelity Analysis | gSMILE Fidelity Analysis, prompt "Transform the weather to make it snowing" (test) | -- | 3 |
| Fidelity Evaluation | Prompt "What is the meaning of life?" | -- | 3 |
| Instruction-based Image Editing Consistency | "Transform the weather to make it snowing" prompt, 1000 iterations (30 perturbations) | -- | 3 |
