
EXPLAINABLE AI FOR TRANSPARENT EMISSION REDUCTION DECISION-MAKING


Volume 2, Issue 2, pp. 54-62, 2024

DOI: 10.61784/fer3005

Author(s)

Jeng-Jui Du, Shiu-Chu Chiu*

Affiliation(s)

College of Science and Engineering, Flinders University, Clovelly Park, SA 5042, Australia.

Corresponding Author

Shiu-Chu Chiu

ABSTRACT

This paper examines the critical role of Explainable AI (XAI) in enhancing transparency in emission reduction decision-making processes. As climate change poses an urgent global challenge, effective strategies for reducing greenhouse gas emissions are essential for mitigating its impacts. Artificial Intelligence has emerged as a powerful tool in environmental management, facilitating data analysis and optimizing emission reduction efforts. However, the increasing reliance on AI raises concerns about transparency and accountability, which are vital for gaining public trust. This paper defines XAI and explores its methodologies, emphasizing their potential to improve stakeholder engagement and decision-making in environmental policy. By synthesizing existing literature and case studies, we highlight the importance of explainability in fostering trust among stakeholders and ensuring effective and accountable emission reduction strategies. The findings contribute to the ongoing discourse on the ethical and practical implications of AI in environmental governance and underscore the necessity of incorporating XAI into future emission reduction initiatives.

KEYWORDS

Explainable AI; Emission reduction; Transparency

CITE THIS PAPER

Jeng-Jui Du, Shiu-Chu Chiu. Explainable AI for transparent emission reduction decision-making. Frontiers in Environmental Research. 2024, 2(2): 54-62. DOI: 10.61784/fer3005.

