Deep neural networks have shown robust anomaly detection capabilities: they capture temporal and multimodal dependencies while requiring minimal manual feature engineering and domain-knowledge-independent data pre-processing. However, deep learning or "black box" models are difficult to explain. This work presents a technique to explain the output of an unsupervised deep learning model. The well-known Secure Water Treatment (SWaT) IoT dataset is used for model training and anomaly detection. Anomalies are detected by monitoring the reconstruction errors of an LSTM autoencoder (LSTM-AE). The LSTM-AE's output for detected anomalies is then explained by training a pair of Random Forest regression models as surrogates that replicate the output of the LSTM-AE. We then use SHAP plots to explain the output of the surrogate models. Each surrogate captures distinct dependencies of the deep learning model for the probed output and is interpreted using TreeSHAP. Finally, a dashboard is designed to answer the questions (when, how, what, and why) associated with a detected anomaly for different personas.
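To make the surrogate-plus-TreeSHAP step concrete, a minimal Python sketch follows. It assumes the scikit-learn and shap libraries; the feature matrix, reconstruction errors, and sensor names are synthetic placeholders standing in for the SWaT data and the LSTM-AE's actual output, not the paper's implementation.

```python
# Minimal sketch of the surrogate-model explanation step (assumed setup):
# X stands in for the sensor/actuator readings fed to the LSTM-AE, and
# recon_error for the per-sample reconstruction error it produced.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                # placeholder for SWaT features
recon_error = rng.gamma(2.0, 1.0, size=1000)  # placeholder for LSTM-AE errors

# Surrogate: a Random Forest trained to replicate the LSTM-AE's output.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X, recon_error)

# TreeSHAP attributes the surrogate's predictions to input features,
# indicating which sensors drive the reconstruction error (the anomaly score).
explainer = shap.TreeExplainer(surrogate)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, feature_names=[f"sensor_{i}" for i in range(5)])
```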