Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI)
Figure 1. Proposed explainable fault detection and diagnosis pipeline. The left-hand side represents the offline model training stage, while the right-hand side corresponds to the online fault monitoring stage.
Figure 2. Schematic diagram of an air-handling unit.
Figure 3. Confusion matrices of the different fault classes for the logistic regression model, for five fault types. The vertical axes represent the actual labels, and the horizontal axes represent the predicted labels for the corresponding fault classes. The total number of correctly predicted samples is the sum of the actual samples labeled as true and classified as true and the actual samples labeled as false and classified as false.
Figure 4. Confusion matrices of the different fault classes for the random forest model, for five fault types.
Figure 5. Confusion matrices of the different fault classes for the XGBoost model, for five fault types.
Figure 6. Summary of the effects of all the features on the XGBoost model prediction of the fault type “Heating coil valve leak”. The SHAP values show the impact of each feature on the model output. The color represents the feature value (red indicates a higher feature value, blue a lower one). For example, the higher the deltaSupplyTemp (difference between the supply air temperature and its setpoint), the more likely the sample is to indicate a heating coil valve leak.
Figure 7. Summary of the effects of all the features on the random forest model prediction of the fault type “Heating coil valve leak”.
Figure 8. Summary of the effects of all the features on the XGBoost model prediction of the fault type “Heat recovery not working”.
Figure 9. Summary of the effects of all the features on the random forest model prediction of the fault type “Heat recovery not working”.
Figure 10. Visualization of explanations using a sliding window for the fault type “Fan pressure sensor malfunction” during an observation period from 10:00 to 22:00. Tags labeled in green represent a healthy state, whereas those highlighted in red represent the presence of the fault. The sample evaluated at 18:00 shows an overall fault probability of 97.73%, with two main contributing features, ASFS and ASFPE.
Figure 11. Explanation visualized as a sliding window for the fault type “Heat recovery not working” during an observation period from 15:00 to 22:00.
Figure 12. Explanation visualized as a sliding window for the fault type “Heating coil valve leak” during an observation period from 08:00 to 16:00.
Figure 13. Explanation visualized as a sliding window for the fault type “Cooling coil valve stuck” during an observation period from 10:00 to 22:00.
Figure 14. Explanation visualized as a sliding window for the fault type “Cooling coil valve closed” during an observation period from 08:00 to 12:45.
Figure A1. Heating coil valve leakage. Standard SHAP plot for individual explanations.
Figure A2. Fan pressure sensor malfunction. SHAP explanation for the fault at a specific timestamp.
Figure A3. Fan pressure sensor malfunction. SHAP plot for the sliding window explanation, visualized from “03-11 10:00” to “03-11 22:00”; the x-axis represents 15 min time instances.
Figure A4. Fan pressure sensor malfunction. Modified SHAP plot for the sliding window explanation; the x-axis represents 15 min time instances.
Figure A5. Heating coil valve leakage. Standard SHAP plot for individual explanations.
Figure A6. Heating coil valve leakage. SHAP plot for the sliding window explanation, visualized from “03-25 08:00” to “03-25 16:00”.
Figure A7. Heating coil valve leakage. Modified SHAP plot for the sliding window explanation.
Abstract
1. Introduction
- A method to explain the fault diagnosis output of an XGBoost-based model using Shapley values for HVAC expert users. A sliding window system is used to visualize the short history of the relevant features and to explain the diagnosed faults over the observed time period. This allows users not only to understand what happens at each individual time step but also to monitor the progression of the fault. The fault detection and diagnosis pipeline is demonstrated using real-world data obtained from the air-handling unit of a commercial building.
- A method to incorporate human users into the decision-making process by allowing the selection of relevant features to be explained for each fault type. The method explains the features with high Shapley values together with the features corresponding to each fault type, which provides practical value by keeping the explained features relevant.
- An analysis of the XAI explanations of each fault type in the AHU dataset, using domain expert evaluation to obtain feedback on the generated explanations. This helps us understand how the explanations affect the users’ decision-making process and how well users can understand them.
2. Technical Background: XGBoost and SHAP
2.1. Extreme Gradient Boosting
2.2. Shapley Additive Explanation
- Local accuracy: When approximating the original model f for an input x, local accuracy is the ability of the explanatory model g to match the output of f for the simplified input x′: f(x) = g(x′) = φ₀ + Σᵢ φᵢ x′ᵢ, where the sum runs over all M simplified features.
- Missingness: Missingness requires that features missing from the input have zero impact on the model output: x′ᵢ = 0 ⇒ φᵢ = 0.
- Consistency: Let fₓ(z′) = f(hₓ(z′)) and z′ \ i denote setting z′ᵢ = 0. For any two models f and f′, if f′ₓ(z′) − f′ₓ(z′ \ i) ≥ fₓ(z′) − fₓ(z′ \ i) for all inputs z′, then φᵢ(f′, x) ≥ φᵢ(f, x).
Algorithm 1. Calculating SHAP values
Input: X: instance for which the explanations are generated;
f: classification model for fault diagnosis
Output: shapValue: SHAP values for all features
1: explainer ← TreeSHAP(f)
2: shapValue ← explainer(X)
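The SHAP values produced by Algorithm 1 are Shapley values from cooperative game theory. As a self-contained illustration (not the Tree SHAP algorithm used in this work, which computes the same values efficiently for tree ensembles), exact Shapley values of a toy model can be computed directly from the classic weighted-average formula; the model `f`, input `x`, and `baseline` below are hypothetical stand-ins:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of model f at input x against a baseline.
    Features absent from a coalition S take their baseline value."""
    n = len(x)
    phi = [0.0] * n

    def value(S):
        # Evaluate f with features in S set to x, the rest to baseline.
        z = [x[j] if j in S else baseline[j] for j in range(n)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                S = set(S)
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(S | {i}) - value(S))
    return phi

# Toy linear "model": each feature's Shapley value recovers its term.
f = lambda z: 2 * z[0] + 3 * z[1] + z[2]
phi = shapley_values(f, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```

Summing `phi` recovers f(x) − f(baseline), which is exactly the local accuracy property listed above.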
3. Methodology
- Offline model training stage:
  - (a) Data are collected for faulty and fault-free operations and labeled according to fault type. Samples that do not belong to any fault class are labeled as normal. The data are preprocessed by removing records with null or missing values. Samples collected while the unit is off and during the first hour of operation are also removed.
  - (b) Prior knowledge related to all fault types is gathered. This includes the mapping of feature sets that correspond to each type of fault. The feature sets can be input by end-users, who decide which relevant features they want to see for each type of fault; otherwise, default feature sets are chosen.
  - (c) An XGBoostClassifier model is implemented for the FDD problem. The model is a multi-class, multi-label classifier used to determine which fault class(es) each sample belongs to; one sample can belong to multiple fault classes.
  - (d) The SHAP method is used to generate explanations for the fault diagnosis model. A Tree SHAP explainer object is fit to the trained model so that explanations can be generated during the online monitoring stage.
- Online fault monitoring stage:
  - (a) Real-time measurements are obtained from the system. Each new observation is preprocessed and fed into the trained XGBoost model to distinguish between the fault classes and the normal class.
  - (b) If the sample represents faulty operation, the interpreter module is triggered to generate an explanation. If no fault is detected, the following steps are skipped.
  - (c) Using the SHAP explainer object fitted during the offline training stage, SHAP values are generated for the observed faulty sample and for the samples from a number of time steps prior to the observation, providing a short history of the fault occurrence.
  - (d) A visualization of the relevant features is created using a sliding window graph.
  - (e) The user gives feedback on the explained feature choices, which is used to update the sets of relevant features.
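Steps (a)–(c) of the online stage can be sketched as a small buffer-and-trigger loop. The class names follow the fault abbreviations used in this paper, but the `OnlineMonitor` class, the fixed 0.5 threshold, and the window length are illustrative assumptions; the actual classifier and SHAP explainer calls are omitted:

```python
from collections import deque

# Fault-class abbreviations as defined in the paper's fault table.
FAULT_CLASSES = ["FPES_M", "HR_NW", "HCV_L", "CCV_S", "CCV_C"]

class OnlineMonitor:
    """Sketch of the online stage: buffer recent samples so that, when a
    fault is detected, explanations can cover the faulty sample plus a
    short history of prior time steps."""

    def __init__(self, history_steps=8, threshold=0.5):
        self.window = deque(maxlen=history_steps)  # short fault history
        self.threshold = threshold

    def observe(self, sample, probabilities):
        """sample: preprocessed feature dict; probabilities: per-class
        fault probabilities from the trained classifier."""
        self.window.append(sample)
        detected = [c for c, p in zip(FAULT_CLASSES, probabilities)
                    if p >= self.threshold]
        if not detected:
            return None  # healthy sample: explanation steps are skipped
        # Faulty sample: hand the detected classes and the buffered
        # history to the interpreter (SHAP value generation not shown).
        return detected, list(self.window)
```

A sample with a fault probability of, say, 0.9773 for one class would trigger the interpreter for that class only, while healthy samples pass through without generating explanations.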
3.1. Description of the System
3.2. Air-Handling Unit
3.3. Data Preprocessing
3.4. Description of Faults under Observation
3.5. XGBoost and Hyperparameter Tuning
3.6. Explaining the Fault Predictions
- Features whose SHAP values are higher than the mean SHAP value.
- Features that are pre-selected by users, or features that are mapped to the corresponding fault type using prior knowledge.
Algorithm 2. Feature selection for providing explanations
Input: userSelectedFeatures: user-selected features;
shapValue: SHAP values for all features (see Algorithm 1);
features: list of all features
Output: relevantFeatures: relevant feature list
1: relevantFeatures ← ∅
2: meanShap ← mean(shapValue)
3: for each feature in features do
4:     if shapValue[feature] > meanShap then
5:         relevantFeatures ← relevantFeatures ∪ {feature}
6:     end if
7: end for
8: relevantFeatures ← relevantFeatures ∪ userSelectedFeatures
9: return relevantFeatures
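A minimal Python sketch of this feature selection step, assuming the SHAP values are given as one aggregate (e.g., mean absolute) value per feature; the feature names below are illustrative, borrowed from the paper's feature table:

```python
def relevant_features(shap_value, features, user_selected):
    """Keep features whose SHAP value exceeds the mean across all
    features, then union in the user-selected / fault-mapped ones."""
    mean_shap = sum(shap_value.values()) / len(shap_value)
    relevant = [f for f in features if shap_value[f] > mean_shap]
    for f in user_selected:  # union with user choices, preserving order
        if f not in relevant:
            relevant.append(f)
    return relevant

# Hypothetical aggregate SHAP values for four features.
shap_value = {"ASFS": 0.9, "ASFPE": 0.7, "AAT": 0.05, "ARAT": 0.02}
selected = relevant_features(shap_value, list(shap_value),
                             user_selected=["AAT"])
# AAT is below the mean but is kept because the user requested it.
```

The union with user-selected features is what keeps the displayed explanation relevant even when a domain-important feature happens to have a small SHAP value for a given sample.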
3.7. Domain Requirements
- R1: Option to choose variables: E2, E3, E5, and E6 would like to have the option to choose variables. E2 thinks that this option becomes less important if the result shows the most relevant variables by default. Important
- R2: Visualizing the short history of faults: E1, E5, E6, and E7 think that viewing the short history of faults is important. E2 thinks that its usefulness differs from one fault type to another: in practice, some faults occur very suddenly, and in those cases the short history of the fault is useful. Important
- R3: Visualizing only relevant variables: E1, E2, E3, E4, and E7 would like to view only the most important features that impact the fault likelihood. In addition to R1, they only want to view the variables that influence the fault, and to have the option to select more variables. Critical
- R4: Visualizing each feature’s attribution to the fault: E2, E4, and E5 think it is good to view how much each feature affects the fault likelihood in terms of probability. In addition, E2 also specifies that the probability of the diagnosed fault is also a convincing factor for users. For example, if the probability is 1.00, then it is more convincing than when the probability is only 0.97. E3 and E4 do not think that this is very important. E5 would like to observe the change in probability when the fault occurs. Optional
4. Results and Analysis
4.1. Performance Results
4.2. Comparative Study
4.3. Explaining Faults
4.4. Case Studies
4.4.1. Case 1: Fan Pressure Sensor Malfunction
4.4.2. Case 2: Heat Recovery Not Working
4.4.3. Case 3: Heating Coil Valve Leakage
4.4.4. Case 4: Cooling Coil Valve Stuck
4.4.5. Case 5: Cooling Coil Valve Closed
4.5. Expert Survey
- (a) Standard SHAP plot for individual instances;
- (b) Standard SHAP stacked plot for a specific time period;
- (c) Modified version of the SHAP explanation.
Expert Insights
- E2 thinks that different types of faults should have their own methods of representation. Faults are different in terms of how they develop. As an example, the heating coil fault appears gradually. Therefore, an explanation for one single instance may be enough to determine the fault at the current state. The opposite case applies to faults that occur very suddenly. For example, in the case of the malfunction of the fan pressure sensor, the fault appears abruptly, and so only a short history of the fault can help identify the problem. In the case of fan pressure sensor failure, the fan pressure measurement drops so suddenly that the users are able to see the obvious change in the pattern, where the fan speed becomes very low. In comparison, the faults that develop gradually may not always be obvious from only a short history.
- E1 would like further improvement in the visualization. The participant also mentions that the plot contains a lot of visual noise. This is not a problem with measurement values that are more static. However, it may become a problem if there are too many values that change over the short visualization history. Currently, all of the annotation labels for each variable are highlighted with the same color, either green or red. Therefore, more colors would help to distinguish between different variables and make it easier to follow.
- E5 gives positive feedback, but would also like to see more related variables that would help validate the impact of the fault. The cost impact, thermal comfort impact, and component lifespan impact would help in understanding the importance of the fault and allow the HVAC engineers to prioritize maintenance activities accordingly.
- E4 commented on the fault type “fan pressure sensor malfunction”, for which the features shown are relevant. However, the participant pointed out that the supply fan pressure should receive more weight than the supply fan speed, whereas the explanation graph shows the opposite. This weighting can stem either from the feature importance learned by the model itself or from how the samples are represented in the dataset. This is important, as users can immediately determine whether the system can be trusted: if the model gives the correct weight to the correct features, it will build trust.
- E6 commented on the “heating coil valve leakage” fault, saying that the explanations given are sufficient but that further uncertainties remain. It is difficult to say with confidence that the fault lies in the heating coil, since design problems can also cause the temperature increase. The temperature measurements may not be taken exactly before and after the coil, so the temperature increase may instead be due to heat transfer through the long ducts. For the fault type “fan pressure sensor malfunction”, E6 was also uncertain after looking at the short history and would like to view the history from the previous day in order to understand whether the fault was caused by the building’s schedule.
- Users would like to visualize relevant features. This includes the most important features impacting the fault and the features that would further aid in confirming or rejecting the fault.
- Users want to see how frequently the fault has appeared. Therefore, more flexibility in the sliding window graph is required, i.e., an option to view the history from the day before the fault occurs, or even earlier.
- Users want to have a reference point, i.e., expected values, which they can compare with the values of the faulty sample.
5. Discussion
Limitations
- The current work is limited to a dataset from a single air-handling unit. The model can be used to diagnose the specific AHU it was trained on; however, aggregating datasets from multiple AHUs would help generalize the faults and would also support studying the practicality of the data-driven method for large-scale fault diagnosis systems, so that its use is not restricted to a single machine.
- Related to the previous point, our dataset contains only a minimal set of variables. Because of sensor costs, building owners tend to install only the sensors necessary for control, so not all of the sensors required for fault detection tasks are available. Features such as meter values, which could provide valuable information on the “heating coil valve leakage” fault, are not always measured. Although the faults can be identified from the symptoms that appear, it is still difficult to pinpoint with full certainty the causes of the fault when a crucial sensor is missing. Thus, the explanation capability is limited, and users may remain unconvinced of the predicted fault even after being provided with the explanation.
- This work is limited to only one type of sensor fault, the one available in the dataset. Sensor faults are very common in building systems; examples include sensor bias and sensor drift. They can introduce uncertainty into fault diagnosis, since the system may read an incorrect measurement and classify the samples as anomalies. Identifying sensor faults would therefore mitigate the risk of incorrect diagnoses and false alarms, and further analysis of sensor faults is crucial for fault diagnosis tasks.
- In this work, the definition of a good explanation is limited to only the scores from user evaluation. However, a more systematic approach would produce a more accurate evaluation and make the tasks less manual. Human users may make mistakes in the evaluation process, and one user may also perceive the fault explanations differently from another user.
- The AHU samples used in this study were collected during the COVID-19 period. During this period, the ventilation units of the shopping mall experienced unusual loads, due to changing government regulations that impacted building occupancy rates. Therefore, the faults may have appeared more frequently and may not reflect the typical conditions of the AHU of the building.
6. Conclusions and Future Work
Future Work
- The explanation and visualization methods should be further improved to cover more types of faults in a more extended dataset. For example, different types of faults, such as gradual faults and abrupt faults, may require different methods in order to be explained. For some faults, a longer history should be visualized, while for others, an explanation from only an individual instance is sufficient. More extensive involvement from HVAC engineers is necessary in order to provide explanations that suit various types of faults and that are effective in communicating information to end-users.
- We aim to strengthen the definition of a good explanation for fault detection in the HVAC systems of buildings. In addition to user evaluation, we should adopt a further systematic approach that is less reliant on manual evaluation.
- A larger dataset is required in order to test the scalability of the fault diagnosis method. The dataset may include samples from the AHU of different buildings and under different climate conditions. Other HVAC components, such as chillers or heat pumps, may also be explored. Ideally, more types of faults will be covered, including sensor faults and component faults.
- Other challenging possibilities also include the development of an unsupervised model for fault detection tasks, and the application of an explainable method to understand the output.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
1D-CNN | One-Dimensional Convolutional Neural Network |
AHU | Air-Handling Unit |
AI | Artificial Intelligence |
API | Application Programming Interface |
BMS | Building Management System |
CNN | Convolutional Neural Network |
FDD | Fault Detection and Diagnosis |
Grad-Absolute-CAM | Absolute Gradient-weighted Class Activation Mapping |
HVAC | Heating, Ventilation and Air Conditioning |
LIME | Local Interpretable Model-Agnostic Explanations |
ML | Machine Learning |
NN | Neural Network |
SHAP | Shapley Additive Explanations |
SVM | Support Vector Machine |
XAI | Explainable Artificial Intelligence |
XGBoost | Extreme Gradient Boosting |
Appendix A. Explainable Fault Detection and Diagnosis Methods for Air-Handling Units: Research Survey
References
- Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
- Joshi, G.; Walambe, R.; Kotecha, K. A review on explainability in multimodal deep neural nets. IEEE Access 2021, 9, 59800–59821. [Google Scholar] [CrossRef]
- Nomm, S.; Bardos, K.; Toomela, A.; Medijainen, K.; Taba, P. Detailed analysis of the Luria’s alternating series tests for Parkinson’s disease diagnostics. In Proceedings of the 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1347–1352. [Google Scholar] [CrossRef]
- Machlev, R.; Heistrene, L.; Perl, M.; Levy, K.; Belikov, J.; Mannor, S.; Levron, Y. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities. Energy AI 2022, 9, 100169. [Google Scholar] [CrossRef]
- Machlev, R.; Perl, M.; Belikov, J.; Levy, K.Y.; Levron, Y. Measuring explainability and trustworthiness of power quality disturbances classifiers using XAI—Explainable artificial intelligence. IEEE Trans. Ind. Inform. 2022, 18, 5127–5137. [Google Scholar] [CrossRef]
- Srinivasan, S.; Arjunan, P.; Jin, B.; Sangiovanni-Vincentelli, A.L.; Sultan, Z.; Poolla, K. Explainable AI for chiller fault-detection systems: Gaining human trust. Computer 2021, 54, 60–68. [Google Scholar] [CrossRef]
- Guerra-Manzanares, A.; Nomm, S.; Bahsi, H. Towards the integration of a post-hoc interpretation step into the machine learning Workflow for IoT botnet detection. In Proceedings of the 18th IEEE International Conference On Machine Learning and Applications (ICMLA), Boca Raton, FL, USA, 16–19 December 2019; pp. 1162–1169. [Google Scholar] [CrossRef]
- Zhao, Y.; Li, T.; Zhang, X.; Zhang, C. Artificial intelligence-based fault detection and diagnosis methods for building energy systems: Advantages, challenges and the future. Renew. Sustain. Energy Rev. 2019, 109, 85–101. [Google Scholar] [CrossRef]
- Wang, S.; Xiao, F. AHU sensor fault diagnosis using principal component analysis method. Energy Build. 2004, 36, 147–160. [Google Scholar] [CrossRef]
- Du, Z.; Jin, X. Multiple faults diagnosis for sensors in air handling unit using Fisher discriminant analysis. Energy Convers. Manag. 2008, 49, 3654–3665. [Google Scholar] [CrossRef]
- Fan, B.; Du, Z.; Jin, X.; Yang, X.; Guo, Y. A hybrid FDD strategy for local system of AHU based on artificial neural network and wavelet analysis. Build. Environ. 2010, 45, 2698–2708. [Google Scholar] [CrossRef]
- Han, H.; Gu, B.; Hong, Y.; Kang, J. Automated FDD of multiple-simultaneous faults (MSF) and the application to building chillers. Energy Build. 2011, 43, 2524–2532. [Google Scholar] [CrossRef]
- Du, Z.; Fan, B.; Jin, X.; Chi, J. Fault detection and diagnosis for buildings and HVAC systems using combined neural networks and subtractive clustering analysis. Build. Environ. 2014, 73, 1–11. [Google Scholar] [CrossRef]
- Du, Z.; Fan, B.; Chi, J.; Jin, X. Sensor fault detection and its efficiency analysis in air handling unit using the combined neural networks. Energy Build. 2014, 72, 157–166. [Google Scholar] [CrossRef]
- Liao, H.; Cai, W.; Cheng, F.; Dubey, S.; Rajesh, P.B. An online data-driven fault diagnosis method for air handling units by rule and convolutional neural networks. Sensors 2021, 21, 4358. [Google Scholar] [CrossRef] [PubMed]
- Han, H.; Xu, L.; Cui, X.; Fan, Y. Novel chiller fault diagnosis using deep neural network (DNN) with simulated annealing (SA). Int. J. Refrig. 2021, 121, 269–278. [Google Scholar] [CrossRef]
- Zeng, Y.; Chen, H.; Xu, C.; Cheng, Y.; Gong, Q. A hybrid deep forest approach for outlier detection and fault diagnosis of variable refrigerant flow system. Int. J. Refrig. 2020, 120, 104–118. [Google Scholar] [CrossRef]
- Li, G.; Yao, Q.; Fan, C.; Zhou, C.; Wu, G.; Zhou, Z.; Fang, X. An explainable one-dimensional convolutional neural networks based fault diagnosis method for building heating, ventilation and air conditioning systems. Build. Environ. 2021, 203, 108057. [Google Scholar] [CrossRef]
- Montazeri, A.; Kargar, S.M. Fault detection and diagnosis in air handling using data-driven methods. J. Build. Eng. 2020, 31, 101388. [Google Scholar] [CrossRef]
- Kaplan, H.; Tehrani, K.; Jamshidi, M. A fault diagnosis design based on deep learning approach for electric vehicle applications. Energies 2021, 14, 6599. [Google Scholar] [CrossRef]
- Sairam, S.; Srinivasan, S.; Marafioti, G.; Subathra, B.; Mathisen, G.; Bekiroglu, K. Explainable Incipient Fault Detection Systems for Photovoltaic Panels. arXiv 2020, arXiv:2011.09843. [Google Scholar]
- Cepeda, C.; Orozco-Henao, C.; Percybrooks, W.; Pulgarín-Rivera, J.D.; Montoya, O.D.; Gil-González, W.; Vélez, J.C. Intelligent fault detection system for microgrids. Energies 2020, 13, 1223. [Google Scholar] [CrossRef]
- Yin, S.; Wang, G.; Karimi, H.R. Data-driven design of robust fault detection system for wind turbines. Mechatronics 2014, 24, 298–306. [Google Scholar] [CrossRef]
- Cheng, F.; Cai, W.; Zhang, X.; Liao, H.; Cui, C. Fault detection and diagnosis for Air Handling Unit based on multiscale convolutional neural networks. Energy Build. 2021, 236, 110795. [Google Scholar] [CrossRef]
- Madhikermi, M.; Malhi, A.K.; Främling, K. Explainable artificial intelligence based heat recycler fault detection in air handling unit. In Lecture Notes in Computer Science; Calvaresi, D., Najjar, A., Schumacher, M., Främling, K., Eds.; Springer Nature Switzerland AG: Cham, Switzerland, 2019; pp. 110–125. [Google Scholar]
- Yan, R.; Ma, Z.; Zhao, Y.; Kokogiannakis, G. A decision tree based data-driven diagnostic strategy for air handling units. Energy Build. 2016, 133, 37–45. [Google Scholar] [CrossRef]
- Belikov, J.; Meas, M.; Machlev, R.; Kose, A.; Tepljakov, A.; Loo, L.; Petlenkov, E.; Levron, Y. Explainable AI based fault detection and diagnosis system for air handling units. In Proceedings of the International Conference on Informatics in Control, Automation and Robotics, Lisbon, Portugal, 14–16 July 2022; pp. 271–279. [Google Scholar]
- Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777. [Google Scholar]
- Akhlaghi, Y.G.; Aslansefat, K.; Zhao, X.; Sadati, S.; Badiei, A.; Xiao, X.; Shittu, S.; Fan, Y.; Ma, X. Hourly performance forecast of a dew point cooler using explainable Artificial Intelligence and evolutionary optimisations by 2050. Appl. Energy 2021, 281, 116062. [Google Scholar]
- Antwarg, L.; Miller, R.M.; Shapira, B.; Rokach, L. Explaining anomalies detected by autoencoders using Shapley Additive Explanations. Expert Syst. Appl. 2021, 186, 15736. [Google Scholar] [CrossRef]
- Knapič, S.; Malhi, A.; Saluja, R.; Främling, K. Explainable artificial intelligence for human decision support system in the medical domain. Mach. Learn. Knowl. Extr. 2021, 3, 740–770. [Google Scholar]
- Yu, Y.; Woradechjumroen, D.; Yu, D. A review of fault detection and diagnosis methodologies on air-handling units. Energy Build. 2014, 82, 550–562. [Google Scholar]
Ref. | Application | AI Model | XAI Technique | Number of Fault Classes | Evaluation Metrics | Fault Classes | F1-Score | Year |
---|---|---|---|---|---|---|---|---|
This work | Detecting AHU faults | XGBoost, RF, Logistic Regression | SHAP | 5 | Accuracy, precision, recall, sensitivity, specificity, F1-score | Heating coil valve leak | 0.978 | 2022 |
Heat recovery fault | 0.997 | |||||||
Cooling coil valve stuck | 0.986 | |||||||
Fan pressure sensor failure | 0.998 | |||||||
Cooling coil valve closed | 1.00 | |||||||
[18] | Detecting AHU faults | Rule and 1D-CNN | N/A | 4 | F1-score | Cooling coil valve stuck | 0.987 | 2021 |
Fan circuits broke down | 0.995 | |||||||
Outdoor air excessive | 0.993 | |||||||
Unit air leakage | 0.985 | |||||||
[24] | Detecting AHU faults | Multiscale convolutional neural networks | N/A | 4 | F1-score | Duct air leakage | 1.00 | 2021 |
Fan efficiency decrease | 0.999 | |||||||
Cooling coil valve stuck | 0.994 | |||||||
Outdoor air excess | 1.00 | |||||||
[25] | Detecting heat recycler faults in the AHU | SVM and NN | LIME | 1 | Accuracy, recall, precision, sensitivity, specificity, F1-score | Heat recycler failure | 1.00 | 2019 |
[26] | Detecting AHU faults | Decision tree | N/A | 8 | F1-score | Heating coil valve leakage | 0.90 | 2016 |
Cooling coil valve stuck | 1.00 | |||||||
Cooling coil valve stuck at 65% | 0.98 | |||||||
Return fan fixed at 30% | 1.00 | |||||||
Return fan failure | 1.00 | |||||||
Outdoor air damper stuck | 1.00 | |||||||
Exhaust air damper stuck | 0.90 | |||||||
Duct leakage before supply fan | 1.0 |
No. | Feature | Short Description | Unit |
---|---|---|---|
x1 | AAT | Fresh air intake temperature | °C |
x2 | ACCVO | Cooling coil valve opening | % |
x3 | AHCVO | Heating coil valve opening | % |
x4 | AHRS | Heat recovery rotation speed | % |
x5 | AHRST | Supply air temperature after heat recovery | °C |
x6 | ARAT | Return air temperature | °C |
x7 | ASAT | Supply air temperature | °C |
x8 | ASATCSP | Supply air temperature calculated setpoint | °C |
x9 | ASFPE | Supply fan static pressure | Pa |
x10 | ASFS | Supply fan speed | % |
x11 | tempDiffHC | Temperature difference before and after heating coil | °C |
x12 | HREfficiency | Heat recovery efficiency | % |
x13 | deltaSupplyTemp | Difference between supply air temperature and supply air temperature setpoint | °C |
No. | Abbreviation | Fault Type | Component | Sample Size |
---|---|---|---|---|
Fault 1 | FPES_M | Fan pressure sensor malfunction | Fan pressure sensor | 894 |
Fault 2 | HR_NW | Heat recovery not working | Heat recovery | 1146 |
Fault 3 | HCV_L | Heating coil valve leakage | Heating coil | 794 |
Fault 4 | CCV_S | Cooling coil valve stuck | Cooling valve | 434 |
Fault 5 | CCV_C | Cooling coil valve closed | Control | 768 |
– | Normal | – | – | 20,925 |
x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | x10 | x11 | x12 | x13 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Fault 1 | ✓ | ✓ | |||||||||||
Fault 2 | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||
Fault 3 | ✓ | ✓ | ✓ | ✓ | |||||||||
Fault 4 | ✓ | ✓ | ✓ | ✓ | |||||||||
Fault 5 | ✓ | ✓ | ✓ | ✓ |
| Model | Fault Class | Accuracy | Precision | Recall | Sensitivity | Specificity | F1 |
|---|---|---|---|---|---|---|---|
| LR | HCV_L | 0.981 | 0.731 | 0.828 | 0.828 | 0.987 | 0.776 |
| | HR_NW | 0.995 | 0.932 | 0.971 | 0.971 | 0.996 | 0.951 |
| | CCV_S | 0.997 | 0.888 | 0.972 | 0.972 | 0.997 | 0.928 |
| | FPES_M | 0.999 | 0.996 | 1 | 1 | 0.999 | 0.998 |
| | CCV_C | 0.985 | 0.791 | 0.748 | 0.748 | 0.993 | 0.769 |
| | Normal | 0.883 | 0.954 | 0.889 | 0.889 | 0.861 | 0.920 |
| | Weighted | 0.859 | 0.943 | 0.900 | 0.900 | 0.893 | 0.887 |
| RF | HCV_L | 0.997 | 0.955 | 0.988 | 0.988 | 0.988 | 0.971 |
| | HR_NW | 0.999 | 0.997 | 0.997 | 0.997 | 0.999 | 0.997 |
| | CCV_S | 0.999 | 0.960 | 0.986 | 0.986 | 0.999 | 0.973 |
| | FPES_M | 0.999 | 0.996 | 1 | 1 | 0.999 | 0.998 |
| | CCV_C | 0.999 | 0.999 | 0.989 | 0.989 | 0.999 | 0.993 |
| | Normal | 0.995 | 0.996 | 0.994 | 0.994 | 0.998 | 0.997 |
| | Weighted | 0.993 | 0.996 | 0.994 | 0.994 | 0.998 | 0.994 |
| XGB | HCV_L | 0.998 | 0.974 | 0.982 | 0.982 | 0.998 | 0.978 |
| | HR_NW | 0.999 | 0.997 | 0.997 | 0.997 | 0.999 | 0.997 |
| | CCV_S | 0.999 | 0.973 | 1 | 1 | 0.999 | 0.986 |
| | FPES_M | 0.999 | 0.996 | 1 | 1 | 0.999 | 0.998 |
| | CCV_C | 1 | 1 | 1 | 1 | 1 | 1 |
| | Normal | 0.997 | 0.998 | 0.997 | 0.997 | 0.996 | 0.998 |
| | Weighted | 0.996 | 0.997 | 0.997 | 0.997 | 0.997 | 0.997 |
| Measure | | FPES_M (a) | FPES_M (b) | FPES_M (c) | HCV_L (a) | HCV_L (b) | HCV_L (c) |
|---|---|---|---|---|---|---|---|
| Explainability score | mean | 3 | 2.71 | 3.71 | 3.14 | 2.28 | 4.71 |
| | median | 4 | 2 | 5 | 4 | 1 | 5 |
| User satisfaction | mean | 5.71 | 6.57 | 9 | 5.71 | 5.85 | 7.85 |
| | median | 6 | 7 | 9 | 6 | 6 | 9 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Meas, M.; Machlev, R.; Kose, A.; Tepljakov, A.; Loo, L.; Levron, Y.; Petlenkov, E.; Belikov, J. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI). Sensors 2022, 22, 6338. https://doi.org/10.3390/s22176338