Innovative Approach to Dynamic Cloud Workflow Management and Scheduling for Big Data Applications
Abstract
Recent technological innovations such as cloud computing, distributed file systems, in-memory computing, and parallel computing address the problems introduced by huge volumes of data. Computing resources in shared pools can be allocated on demand, and scientific computing benefits from the latest advances in virtualization technology and cloud computing. Through cloud computing, customers access abstracted and virtualized resources over the internet, including computing power, storage, platforms, and software applications. Extracting knowledge and information from big data requires efficient storage and processing. However, data-processing resources are not keeping pace with the exponential growth of data, making it difficult to manage and interpret information from enormous datasets. Computational time grows with the size of the dataset. In addition, workflows have become more complex, comprising several subtasks that must be executed simultaneously or sequentially.