EXPLAINABLE AI FOR TRANSPARENT EMISSION REDUCTION DECISION-MAKING

Authors

  • Jeng-Jui Du, College of Science and Engineering, Flinders University, Clovelly Park, SA 5042, Australia.
  • Shiu-Chu Chiu (Corresponding Author), College of Science and Engineering, Flinders University, Clovelly Park, SA 5042, Australia.

Keywords:

Explainable AI, Emission reduction, Transparency

Abstract

This paper examines the critical role of Explainable AI (XAI) in enhancing transparency in emission reduction decision-making. As climate change poses an urgent global challenge, effective strategies for reducing greenhouse gas emissions are essential for mitigating its impacts. Artificial Intelligence has emerged as a powerful tool in environmental management, facilitating data analysis and helping to optimize emission reduction efforts. However, the growing reliance on AI raises concerns about transparency and accountability, both of which are vital for earning public trust. This paper defines XAI and explores its methodologies, emphasizing their potential to improve stakeholder engagement and decision-making in environmental policy. By synthesizing existing literature and case studies, we highlight the importance of explainability in fostering trust among stakeholders and in ensuring that emission reduction strategies are both effective and accountable. The findings contribute to the ongoing discourse on the ethical and practical implications of AI in environmental governance and underscore the necessity of incorporating XAI into future emission reduction initiatives.
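
To make the methodological discussion concrete, the following minimal sketch (not taken from the paper itself) shows how a feature-attribution method such as SHAP could explain the predictions of an emission-forecasting model used in decision support. The synthetic data, feature names, and choice of model are illustrative assumptions only.

```python
# Illustrative sketch only: attributing an emission-forecast model's predictions
# to its input features with SHAP, a widely used XAI method. The data, feature
# names, and model below are hypothetical and chosen purely for demonstration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical decision-relevant features for a set of regions.
feature_names = ["renewable_share", "industrial_output", "vehicle_km", "building_efficiency"]
X = rng.random((500, 4))

# Synthetic emission response: emissions fall with renewable share and building
# efficiency, and rise with industrial output and transport activity.
y = (-3.0 * X[:, 0] + 2.5 * X[:, 1] + 1.5 * X[:, 2] - 1.0 * X[:, 3]
     + 0.1 * rng.standard_normal(500))

# Black-box model standing in for an emission-reduction decision-support model.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP attributes each individual prediction to the input features, giving
# stakeholders a per-decision explanation rather than only an aggregate score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

for i, row in enumerate(shap_values):
    contribs = ", ".join(f"{name}={v:+.2f}" for name, v in zip(feature_names, row))
    print(f"sample {i}: {contribs}")
```

Attributions of this kind let policymakers see which drivers a model credits for a projected emission change, which is the kind of per-decision transparency the paper argues is needed for stakeholder trust.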

Published

2024-01-01

Issue

Vol. 2 No. 2 (2024)

Section

Research Article

DOI:

https://doi.org/10.61784/fer3005

How to Cite

Du, J., Chiu, S. (2024). Explainable AI for Transparent Emission Reduction Decision-Making. Eurasia Journal of Science and Technology, 2(2), 54-62. https://doi.org/10.61784/fer3005