Dynamic Resource Allocation in Cloud Environments Using Large Language Models

Authors

  • William K. Kwaku, Department of Computer Science, University of Ghana, Ghana

Abstract

Dynamic resource allocation in cloud environments can be enhanced by integrating large language models (LLMs). These models predict resource demand by analyzing data such as user requests and historical usage patterns. Leveraging this capability, LLMs enable adaptive strategies that adjust resources in real time, improving performance and reducing costs. By providing predictive analytics and recommendations, LLMs help cloud service providers optimize resource allocation, preemptively address bottlenecks, and ensure seamless user experiences. This application of LLMs represents a significant step toward more efficient and scalable cloud computing.
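
To make the predict-then-allocate idea concrete, the minimal sketch below shows how a demand forecast could drive a proactive scaling target. It is an illustration only, not the paper's implementation: the function and parameter names (forecast_demand, plan_allocation, capacity_per_instance, headroom) are hypothetical, and a simple moving-average-plus-trend estimate stands in for the LLM forecast so the example stays self-contained and runnable.

    import math
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AllocationDecision:
        """Target instance count and the forecast that motivated it."""
        predicted_requests_per_min: float
        target_instances: int

    def forecast_demand(history: List[float]) -> float:
        """Hypothetical stand-in for an LLM-based demand forecaster.

        In the approach the abstract describes, an LLM would analyze user
        requests and historical usage patterns; here a moving average plus
        a linear trend keeps the sketch runnable without an external model.
        """
        window = history[-5:]
        avg = sum(window) / len(window)
        trend = (window[-1] - window[0]) / max(len(window) - 1, 1)
        return max(avg + trend, 0.0)

    def plan_allocation(history: List[float],
                        capacity_per_instance: float = 100.0,
                        headroom: float = 0.2,
                        min_instances: int = 1) -> AllocationDecision:
        """Turn a demand forecast into a proactive scaling target.

        Over-provisions by `headroom` so capacity is added before a
        bottleneck appears, mirroring the adaptive, real-time adjustment
        described in the abstract.
        """
        predicted = forecast_demand(history)
        needed = predicted * (1.0 + headroom) / capacity_per_instance
        target = max(min_instances, math.ceil(needed))
        return AllocationDecision(predicted, target)

    if __name__ == "__main__":
        # Requests per minute observed over recent intervals.
        usage_history = [220.0, 240.0, 260.0, 300.0, 340.0]
        decision = plan_allocation(usage_history)
        print(f"Predicted load: {decision.predicted_requests_per_min:.0f} req/min")
        print(f"Scale to {decision.target_instances} instances")

Swapping forecast_demand for a call to an actual LLM-backed predictor, while keeping the allocation logic unchanged, is one way such a forecaster could be integrated into an existing autoscaling loop.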

Published

2024-05-14

How to Cite

Kwaku, W. K. (2024). Dynamic Resource Allocation in Cloud Environments Using Large Language Models. MZ Journal of Artificial Intelligence, 1(1). Retrieved from http://mzjournal.com/index.php/MZJAI/article/view/251