DEEP REINFORCEMENT LEARNING FOR DYNAMIC SHARDING IN UAV NETWORKS
Volume 2, Issue 3, pp. 58-65, 2024
DOI: https://doi.org/10.61784/wjit3016
Author(s)
Victoria Lee
Affiliation(s)
Department of Mechanical Engineering, National University of Singapore, Singapore.
Corresponding Author
Victoria Lee
ABSTRACT
This study investigates the application of Deep Reinforcement Learning (DRL) for dynamic sharding in Unmanned Aerial Vehicle (UAV) networks, addressing the limitations of traditional static resource management techniques. As UAV networks expand their roles across diverse sectors, including agriculture, logistics, surveillance, and disaster management, the need for efficient and adaptive resource allocation becomes increasingly critical. UAVs operate under constraints such as limited battery life, communication bandwidth, and processing power, making optimal resource management essential for mission success. Traditional static sharding methods often fail to adapt to rapidly changing operational conditions, such as shifting environmental factors or evolving mission requirements, leading to inefficiencies, increased latency, and potential mission failures.
This research proposes a DRL-based framework that dynamically allocates tasks and resources among UAVs based on real-time performance metrics and environmental conditions. By employing a learning-based approach, the DRL framework is capable of continuously improving its decision-making processes through experience, allowing it to respond effectively to the complexities inherent in UAV operations. The findings indicate that the DRL-based dynamic sharding solution significantly enhances operational efficiency, reduces latency, and improves overall resource utilization in UAV networks. The results demonstrate that the DRL approach not only optimizes task allocation but also ensures a balanced distribution of workload among UAVs, ultimately leading to increased reliability and responsiveness of the network.
This work contributes to the development of more resilient and adaptive UAV systems, addressing the challenges posed by static resource management methods. Furthermore, it lays the groundwork for future advancements in UAV network management, highlighting the potential of machine learning techniques to revolutionize resource allocation strategies in dynamic and complex environments. The implications of this research extend beyond UAV networks, offering insights into the broader applications of DRL in distributed systems and real-time decision-making scenarios.
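The abstract does not disclose the algorithmic details of the proposed framework. As a rough illustration of the idea of learning-based task sharding, the following is a minimal tabular Q-learning sketch (a deliberate simplification of deep RL, with no neural network) in which an agent learns to assign incoming tasks to the UAV whose battery level and queue load are most favorable. All names and parameters here (fleet size, reward shaping, discretization) are hypothetical and not taken from the paper.

```python
import random
from collections import defaultdict

NUM_UAVS = 3                 # hypothetical fleet size
EPISODES = 2000
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def discretize(loads, batteries):
    # Bucket each UAV's queue length (capped at 2) and battery (low/high)
    return tuple((min(l, 2), int(b > 0.5)) for l, b in zip(loads, batteries))

Q = defaultdict(float)       # Q[(state, action)] -> estimated value

def choose(state):
    # Epsilon-greedy action selection over which UAV receives the task
    if random.random() < EPS:
        return random.randrange(NUM_UAVS)
    return max(range(NUM_UAVS), key=lambda a: Q[(state, a)])

for _ in range(EPISODES):
    loads = [0] * NUM_UAVS
    batteries = [random.uniform(0.3, 1.0) for _ in range(NUM_UAVS)]
    for _ in range(10):      # ten tasks arrive per episode
        s = discretize(loads, batteries)
        a = choose(s)
        loads[a] += 1
        batteries[a] = max(0.0, batteries[a] - 0.05)
        # Reward favors assigning to well-charged, lightly loaded UAVs
        reward = batteries[a] - 0.5 * loads[a]
        s2 = discretize(loads, batteries)
        best_next = max(Q[(s2, b)] for b in range(NUM_UAVS))
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

# After training, a greedy policy should spread tasks across the fleet
loads = [0] * NUM_UAVS
batteries = [1.0] * NUM_UAVS
for _ in range(6):
    s = discretize(loads, batteries)
    a = max(range(NUM_UAVS), key=lambda b: Q[(s, b)])
    loads[a] += 1
    batteries[a] -= 0.05
print(loads)
```

In a deep variant such as the one the paper describes, the tabular `Q` would be replaced by a neural network over continuous state features, allowing the policy to generalize across conditions the agent has never seen exactly.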
KEYWORDS
Deep Reinforcement Learning; UAV networks; Dynamic sharding; Resource management; Adaptive systems
CITE THIS PAPER
Victoria Lee. Deep reinforcement learning for dynamic sharding in UAV networks. World Journal of Information Technology. 2024, 2(3): 58-65. DOI: https://doi.org/10.61784/wjit3016.
REFERENCES
[1] Zhang, X, Li, P, Han, X, et al. Enhancing Time Series Product Demand Forecasting with Hybrid Attention-Based Deep Learning Models. IEEE Access, 2024, 12, 190079-190091. DOI: 10.1109/ACCESS.2024.3516697.
[2] Sonavane, S M, Prashantha, G R, Nikam, P D, et al. Optimizing QoS and security in agriculture IoT deployments: A bioinspired Q-learning model with customized shards. Heliyon, 2024, 10(2).
[3] Li, P, Ren, S, Zhang, Q, et al. Think4SCND: Reinforcement Learning with Thinking Model for Dynamic Supply Chain Network Design. IEEE Access, 2024. DOI: 10.1109/ACCESS.2024.3521439.
[4] Kersandt, K, Munoz, G, Barrado, C. Self-training by reinforcement learning for full-autonomous drones of the future. In 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC), London, UK, 2018, 1-10. DOI: 10.1109/DASC.2018.8569503.
[5] Liu, Y, Ren, S, Wang, X, et al. Temporal Logical Attention Network for Log-Based Anomaly Detection in Distributed Systems. Sensors, 2024, 24(24): 7949.
[6] Kersandt, K. Deep reinforcement learning as control method for autonomous UAVs. Master's thesis, Universitat Politècnica de Catalunya. 2018.
[7] Mahto, R, Sood, K. Harnessing the Power of Neural Networks for Predicting Shading. In 2023 IEEE Global Humanitarian Technology Conference (GHTC), Radnor, PA, USA, 2023, 327-333. DOI: 10.1109/GHTC56179.2023.10354791.
[8] Sellami, B, Hakiri, A, Yahia, S B. Deep Reinforcement Learning for energy-aware task offloading in joint SDN-Blockchain 5G massive IoT edge network. Future Generation Computer Systems, 2022, 137, 363-379.
[9] Alamro, H, Alqahtani, H, Alotaibi, F A, et al. Deep reinforcement learning based solution for sustainable energy management in photovoltaic systems. Optik, 2023, 295, 171530.
[10] Lu, K, Zhang, X, Zhai, T, et al. Adaptive Sharding for UAV Networks: A Deep Reinforcement Learning Approach to Blockchain Optimization. Sensors, 2024, 24(22): 7279.
[11] Alam, T, Ullah, A, Benaida, M. Deep reinforcement learning approach for computation offloading in blockchain-enabled communications systems. Journal of Ambient Intelligence and Humanized Computing, 2023, 14(8): 9959-9972.
[12] Berghout, T, Benbouzid, M, Ma, X, et al. Machine learning for photovoltaic systems condition monitoring: A review. In IECON 2021–47th Annual Conference of the IEEE Industrial Electronics Society, Toronto, ON, Canada, 2021, 1-5. DOI: 10.1109/IECON48115.2021.9589423.
[13] Abou El Houda, Z, Moudoud, H, Brik, B. Federated Deep Reinforcement Learning for Efficient Jamming Attack Mitigation in O-RAN. IEEE Transactions on Vehicular Technology, 2024, 73(7): 9334-9343. DOI: 10.1109/TVT.2024.3359998.
[14] Alsamhi, S H, Almalki, F A, Afghah, F, et al. Drones’ edge intelligence over smart environments in B5G: Blockchain and federated learning synergy. IEEE Transactions on Green Communications and Networking, 2021, 6(1): 295-312.
[15] Berghout, T, Benbouzid, M, Bentrcia, T, et al. Machine learning-based condition monitoring for PV systems: State of the art and future prospects. Energies, 2021, 14(19): 6316.
[16] Zhang, X, Chen, S, Shao, Z, et al. Enhanced Lithographic Hotspot Detection via Multi-Task Deep Learning with Synthetic Pattern Generation. IEEE Open Journal of the Computer Society, 2024.
[17] Wang, X, Wu, Y C, Ji, X, et al. Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices. Frontiers in Artificial Intelligence, 2024, 7, 1320277.
[18] Alsamhi, S H, Lee, B, Guizani, M, et al. Blockchain for decentralized multi-drone to combat COVID-19 and future pandemics: framework and proposed solutions. Transactions on Emerging Telecommunications Technologies, 2021, 32(9): e4255.
[19] Moeinizade, S, Pham, H, Han, Y, et al. An applied deep learning approach for estimating soybean relative maturity from UAV imagery to aid plant breeding decisions. Machine Learning with Applications, 2022, 7, 100233.
[20] Yadav, S, Rishi, R. Joint mode selection and resource allocation for cellular V2X communication using distributed deep reinforcement learning under 5G and beyond networks. Computer Communications, 2024, 221, 54-65.
[21] Kumar, R, Kumar, P, Tripathi, R, et al. SP2F: A secured privacy-preserving framework for smart agricultural Unmanned Aerial Vehicles. Computer Networks, 2021, 187, 107819.
[22] Wang, X, Wu, Y C, Zhou, M, et al. Beyond surveillance: privacy, ethics, and regulations in face recognition technology. Frontiers in big data, 2024, 7, 1337465.
[23] Rawat, D B. Secure and trustworthy machine learning/artificial intelligence for multi-domain operations. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, SPIE, 2021, 11746, 44-54.
[24] Liu, Y, Hu, X, Chen, S. Multi-Material 3D Printing and Computational Design in Pharmaceutical Tablet Manufacturing. Journal of Computer Science and Artificial Intelligence, 2024.
[25] Wang, M. AI Technologies in Modern Taxation: Applications, Challenges, and Strategic Directions. International Journal of Finance and Investment, 2024, 1(1): 42-46.
[26] Qiu, L. Deep Learning Approaches for Building Energy Consumption Prediction. Frontiers in Environmental Research, 2024, 2(3): 11-17.