Hyper-Convergence Storage Framework for EcoCloud Correlates

Abstract: Cloud computing is an emerging domain that is attracting global users from all walks of life, including the corporate sector, the government sector, and the social arena. Various cloud providers offer multiple services and facilities to this audience, and the number of providers is increasing swiftly. This enormous pace generates the requirement for a comprehensive ecosystem that provides a seamless and customized user environment, not only to enhance the user experience but also to improve security, availability, accessibility, and latency. Emerging technology provides robust solutions to many of our problems, and the cloud platform is one of them. It is worth mentioning that these solutions also amplify the complexity of, and the need to sustain, such rapidly evolving platforms. In cloud computing, new entrants such as cloud service providers, resellers, tech support, hardware manufacturers, and software developers appear on a daily basis, and these actors play their roles in the growth and sustenance of the cloud ecosystem. Our objective is to use convergence for cloud services, software-defined networks and network function virtualization for infrastructure, cognition for pattern development, and a knowledge repository. To gear up these processes, machine learning induces the intelligence needed to maintain ecosystem growth, monitor performance, and make decisions for the sustenance of the ecosystem. Storage reliability was evaluated using availability, downtime, and outage parameters. Results showed that the storage reliability in a hyper-converged system is above 92%.


Introduction
Cloud computing has become a popular technology platform that delivers various services over the internet along with an effective, low-cost "pay as you use" model. Due to its rapid growth, many large industries and organizations have moved their data to the cloud. Cloud computing has mitigated myriad issues regarding time, effort, and cost by providing services with the least possible expense and effort. As per its basic model, the cloud offers three types of services (infrastructure, platform, and software as a service) for their respective users. Though hundreds of services have been developed and offered by cloud service providers, almost all of them are linked to these three key service categories. Cloud computing provides access to cloud services, modules, and infrastructure through the internet due to its global accessibility [1].
Hyperconvergence is described as the replacement of proprietary, hardware-defined storage and physically converged infrastructure with a software-defined storage infrastructure that is virtually converged inside the hypervisor tier. Hyperconvergence is thus a software-defined virtual architecture that combines the functions of computation, storage, networking, and administration.
Cloud computing is meant for the delivery of services over a network. The cloud allows users to employ computing resources for storing data, whether in a virtual machine, an application, or a software tool. It is considered the best option where the user needs to store data not temporarily but permanently. Technologies have changed the face of the internet with computing power: the computing era has shifted from parallel computing to distributed, fog, and cloud computing. With the increase of internet traffic, people store and access data from different servers, and cloud computing appears as a technology that allows the remote storage and access of data. Cloud technology delivers computation in terms of efficiency and software as a service [2].
The use of computational resources and the provision of cloud services ease the cloud user's access to data. Cloud computing can be defined as an environment where computing resources required by one party can be outsourced from another party and accessed over the internet when required. The technology uses a distributed architecture that centralizes server resources on an accessible platform so that services can be provided to the user on demand. Cloud computing offers users several advantages, including a low pay-per-use cost for cloud resources such as storage, compute, and network in a hyper-converged setting, as shown in Fig. 1. Storage can be increased or decreased according to user requirements and adjusted flexibly. Operational cost is also low, as users pay only for the services they use. This is termed pay-per-use or subscription cost, which is relatively low compared to maintaining the actual resources [3].
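The pay-per-use model can be illustrated with a toy cost calculation. The rates below are invented for illustration only and do not reflect any real provider's pricing:

```python
# Illustrative pay-per-use cost sketch: the user pays only for the
# storage, compute, and network actually consumed. All rates are
# hypothetical assumptions, not real provider pricing.
RATES = {
    "storage_gb_month": 0.02,   # $ per GB stored per month (assumed)
    "compute_hour": 0.05,       # $ per vCPU-hour (assumed)
    "network_gb": 0.01,         # $ per GB transferred (assumed)
}

def monthly_cost(storage_gb, compute_hours, network_gb):
    """Total subscription cost under the pay-per-use model."""
    return (storage_gb * RATES["storage_gb_month"]
            + compute_hours * RATES["compute_hour"]
            + network_gb * RATES["network_gb"])

# e.g. 500 GB stored, one vCPU running the whole month, 100 GB egress:
print(monthly_cost(500, 720, 100))  # -> 47.0
```

Under this model, scaling storage up or down simply changes the bill for the next period, with no capital expense for unused capacity.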
Multiple nodes must be clustered together to generate pools of shared computing and storage resources for hyper-converged architectures to work effectively. Background services are always running to manage everything from fundamental internode communication to cluster-wide data transfer for data protection and robustness, as well as deduplication and compression for effective capacity usage. These services consume cluster resources and therefore add to the list of non-application-specific factors that might affect application performance. To avoid application impact, well-engineered systems have built-in techniques to limit resource consumption by underlying services.
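As a sketch of such a resource-limiting technique, the fragment below caps the share of node resources that background services (deduplication, compression, replication) may consume, leaving the rest for application workloads. The class name and the 20% cap are assumptions for illustration, not any vendor's implementation:

```python
# Sketch: cap background-service consumption so it cannot crowd out
# application workloads. The 20% budget is an assumed policy value.
class BackgroundThrottle:
    def __init__(self, total_cpu_shares, background_cap=0.20):
        self.total = total_cpu_shares      # shares available on the node
        self.cap = background_cap          # fraction reserved at most
        self.background_used = 0

    def request(self, shares):
        """Grant shares to a background service only while the cap holds;
        otherwise defer the work so application performance is protected."""
        if self.background_used + shares <= self.total * self.cap:
            self.background_used += shares
            return True
        return False  # deferred until earlier background work releases shares

    def release(self, shares):
        self.background_used = max(0, self.background_used - shares)

t = BackgroundThrottle(total_cpu_shares=100)
print(t.request(15))  # -> True: within the 20-share background budget
print(t.request(10))  # -> False: would exceed the cap, so it is deferred
```

Real systems apply the same idea with finer-grained schedulers (I/O bandwidth, CPU cgroups), but the principle is identical: background services get a bounded slice, never an open-ended one.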

Related Work
Reducing service response time and improving the performance of load-balancing techniques for efficient traffic management make cloud computing a major challenge for big data applications. The solution to these challenges is a simple system that requires less hardware, has only one point of management, and has all of its resources validated, tested, and installed before deployment. A flexible system is one where the addition or removal of nodes is easy, that can be easily relocated, and that is more agile [4]. Such a system is cost-effective, requires less hardware, is easy to maintain, and is cheaper to deploy. The newly proposed model of hyper-converged infrastructure meets all these requirements. Today the world is embracing digitization, so there is a need to transform IT infrastructure too. This transformation [5] will result in the adoption of new IT techniques and a focus on business agility, producing more efficient, agile, and responsive innovations. Hyper-Convergence Infrastructure (HCI) supports the business in this agility [6].
Hyper-convergence is defined as a software-defined infrastructure that integrates all compute, network, and storage resources into a single unit, supported by a single vendor, deployed on x86 systems that run applications in a virtual environment. The infrastructure is a combination of fully virtualized, scalable, and clustered components, which makes it agile. HCI comprises various standard PCs with a software layer installed that combines their resources into a single unit [7].
"Hyperconverged infrastructure combines fundamental storage, compute, and networking operations into a single software solution or service". It is essentially a more tightly integrated converged system, with compute, storage, and networking decoupled from the underlying infrastructure and configured at the software level.
A hypervisor, and a unit that manages the hypervisor, are also installed on the same PC. The hypervisor is the main administration point in HCI systems: it is software that creates and runs virtual machines, and its purpose in the HCI stack is to oversee server virtualization. The infrastructure produces more cloud-like services. In the data center environment, reliability and performance of the system are the key factors. HCI appliances provide improved reliability and performance, as they have passed through various testing and validation processes. HCI systems are easy to deploy, and they come with all the packages required for upgrading or scaling the system [8].
The infrastructure also makes the installation, purchase, and management of hardware and software much easier, as the customer does not have to spend a lot of time selecting the individual hardware and software components that meet their workload requirements, which not only consumes time but also costs considerable revenue. The HCI system is designed to meet a specified workload, which makes purchasing the appliance easy [9].
The IT industry does not need to buy the components separately. HCI appliances have the capacity to scale, and once all the components are integrated, management of the single unit becomes easy. For maintenance and upgrade purposes, these nodes can be swapped easily. Various integrated software interfaces are used to manage the operations of the infrastructure, and all these operations are virtualized, as shown in Fig. 2. HCI can be deployed in an organization in two common ways. One way is to build the infrastructure: the organization buys servers and the HCI software, and these components are merged to create an HCI solution. The second way is to purchase an HCI solution that is configured and tested before purchase [10]. The ecosystem analogy rests on an expansive theoretical foundation concerning firm relationships and collaboration, inter-organizational networks, and complexity theory. Evolution is a fundamental characteristic of ecosystems, which are capable of adapting to changes both inside the ecosystem and in its environment [11].
Microprocessor and storage technologies, computer architecture and software systems, parallel algorithms, and distributed control mechanisms have all evolved over decades, paving the path for cloud computing. Cloud computing was made possible by the interconnection provided by an ever-evolving internet, using a hyper-converged "magic box" of storage, compute, and network. The servers in a cloud architecture communicate over high-bandwidth, low-latency networks structured around a high-performance interconnect.

Proposed Methodology
The network structure is obtained by breaking down ecosystem structures in a comprehensible manner, capturing the connections, connection types, organizational characteristics, and their relationships. In a dynamic structure, both internal and external factors can originate or trigger connections between ecosystem members. In the dynamic structure of the core ecosystem, multi-sided platforms connect different members and add value for the platform players [12].
Hyper-convergence is an efficient, modern technology brought about by combining the different cloud service models through advanced hardware solutions. With an HCI solution, management and infrastructure deployment become easier and more flexible. Many benefits are associated with HCI, such as cost reduction, higher scalability, data protection, and efficient management of IT resources in terms of the virtual network, storage, and software-based architecture. The latest data center technology equipped with HCI automates data center operations such as virtual machine deployment, monitoring, and pre-defined security policies. In a virtual environment, the resources are grouped into a "magic box" that provides efficient resource pooling, high performance, and better resilience [13].
These solutions can provide a blueprint for achieving a secure hyper-converged data center. This research aims to formalize an autonomous management model to evaluate a hyper-convergent virtual cloud ecosystem using cognitive management in hyper-convergence. It is important to note that while emerging technologies such as HCI bring more robust solutions to complex problems, they are at the same time reaching a level of complexity that can hardly be handled by humans alone. Therefore, artificial intelligence, machine learning, deep learning, and neural networks are becoming vital to manage this new level of complexity.
In artificial intelligence, we increasingly engage cognitive science to make AI solutions more human-like. Our proposed model deals with the cloud ecosystem, which is a combination of multiple Cloud Service Providers (CSPs), services, security, infrastructure, and users. The complexity level is therefore very high, while our focus is to deal with this complexity in a human manner, which is done through the incorporation of cognitive correlates, along with machine learning, into our proposed ICE.
The proposed model can evaluate the hyper-convergent virtual cloud ecosystem provided by different cloud service providers. Cloud management uses the virtualized environment of versatile service providers. The focus of our proposed model ICE is on the following factors:
• Optimize the existing cloud eco-network to provide centralized management of cloud systems and service clustering through convergence services.
• Evolve the current cloud system in the heterogeneous environment using machine learning algorithms, and map the cloud services to form a complete configuration cluster for the hyper-convergent virtual cloud ecosystem.
• Create and adapt new cloud service structures and virtual memory replicas for virtual clouds.
The responsibilities of the cloud ecosystem controller are to manage different types of cloud services such as storage, compute, and network. All cloud stakeholders, such as providers, resellers, and adopters, participate in a cloud ecosystem network with many cloud services that have different naming conventions and heterogeneous characteristics. The role of the cloud ecosystem controller is therefore very important for managing cloud service clusters and heterogeneity. The controller is further divided into three sub-modules: service cluster, ecosystem network, and heterogeneous ecosystem. All clouds are physically heterogeneous and run in their own data centres. Two or more cloud service providers can virtually connect on a common platform where all services, structures, and networks combine in the form of a virtual cloud, as shown in Fig. 3.
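A minimal sketch of this controller and its three sub-modules follows. All class, method, provider, and service names are illustrative assumptions, not part of the proposed system's actual implementation:

```python
# Sketch of the cloud ecosystem controller with its three sub-modules:
# service cluster, ecosystem network, and heterogeneous ecosystem.
class ServiceCluster:
    """Groups equivalent services from different providers into clusters."""
    def __init__(self):
        self.services = {}                  # canonical name -> provider list

    def register(self, service, provider):
        self.services.setdefault(service, []).append(provider)

class EcosystemNetwork:
    """Tracks stakeholders: providers, resellers, and adopters."""
    def __init__(self):
        self.members = set()

    def join(self, stakeholder):
        self.members.add(stakeholder)

class HeterogeneousEcosystem:
    """Maps provider-specific service names onto a common convention."""
    def __init__(self):
        self.aliases = {}                   # raw provider name -> canonical

    def normalize(self, raw_name):
        return self.aliases.get(raw_name, raw_name)

class CloudEcosystemController:
    def __init__(self):
        self.cluster = ServiceCluster()
        self.network = EcosystemNetwork()
        self.hetero = HeterogeneousEcosystem()

    def add_service(self, provider, raw_name, canonical=None):
        """Admit a provider's service: join the network, resolve the
        heterogeneous name, then cluster it with its equivalents."""
        self.network.join(provider)
        if canonical:
            self.hetero.aliases[raw_name] = canonical
        self.cluster.register(self.hetero.normalize(raw_name), provider)

ctl = CloudEcosystemController()
ctl.add_service("CSP-A", "Blob Storage", canonical="object-storage")
ctl.add_service("CSP-B", "S3-compatible", canonical="object-storage")
print(ctl.cluster.services["object-storage"])  # -> ['CSP-A', 'CSP-B']
```

The point of the sketch is the division of labour: name heterogeneity is resolved before clustering, so two providers' differently named offerings land in one virtual service cluster.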

Figure 3: Proposed hyper-convergence model
Service mapping contains all the information about the basic cloud service model: the service name and its description as offered by the different service providers. There are different ways of mapping cloud services to cloud service providers. The proposed infrastructure, with its ability to make decisions based on knowledge obtained from the past, will enhance the reliability and performance of the data center. The performance management module is linked directly to the learning module and the QoS services module. HCI is a complex infrastructure based on several components that work together to achieve the reliability goals shown in Tab. 1.

Results and Discussion
The parameters taken to measure the reliability of the system's storage are availability, downtime, and outage. In the proposed infrastructure, it was first checked whether the storage services were available; after that, the downtime and outage time of the storage were measured. For this purpose, services from different service providers were taken; the list of services was obtained from the CloudHarmony website (www.cloudharmony.com). In this section, the simulation results for storage reliability are discussed [14]. The results were obtained using the Mamdani Fuzzy Inference System shown in Fig. 4. Based on the input variables, the reliability values were computed from the assigned rules; some of the rules are described in Tab. 3.
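The availability input relates to downtime in the standard way: availability is the fraction of the observation window during which the storage service answered requests. A small illustrative helper (not the paper's actual measurement code) makes the relation explicit:

```python
# Illustrative helper: availability as a percentage of the observation
# window. This is the standard uptime/(uptime+downtime) definition,
# not the paper's own measurement script.
def availability_pct(uptime_hours, downtime_hours):
    total = uptime_hours + downtime_hours
    return 100.0 * uptime_hours / total if total else 0.0

# e.g. 4 hours of downtime in a 720-hour (30-day) month:
print(round(availability_pct(716, 4), 2))  # -> 99.44
```

Such crisp percentages, together with measured downtime and outage durations, form the inputs that the fuzzy system then classifies.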
All the rules were generated using the Mamdani fuzzy inference rule-making system in MATLAB. The diagram for the rules is shown in Fig. 5.
The de-fuzzifier is one of the basic components of any decision-based autonomous system. There are various kinds of de-fuzzifier; in this study, a centroid-type de-fuzzifier is developed [16]. The rules show that, based on certain input values, the reliability of the system is classified as reliable, highly reliable, or not reliable [17]. The system is highly reliable only when the storage is highly available, the outage is low, and the downtime is low; the lookup diagrams for this case are shown in Fig. 8. The system is reliable when availability is at the available level and both downtime and outage are medium, as shown in Fig. 9.
The system's reliability is classified as not reliable when availability is at the not-available level, even though downtime and outage are low, as shown in Fig. 10.
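The Mamdani pipeline described above (fuzzify the three inputs, fire each rule with the min operator, aggregate the clipped consequents with max, defuzzify with the centroid) can be sketched compactly. All membership-function breakpoints and the three rules below are illustrative assumptions, not the exact rules of Tab. 3:

```python
# Sketch of Mamdani inference with centroid defuzzification for the
# three reliability cases. Breakpoints and rules are assumed, not the
# paper's exact MATLAB configuration.
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(availability, downtime, outage):
    # Fuzzified inputs (all universes assumed 0..100).
    high_avail = tri(availability, 60, 100, 140)
    mid_avail  = tri(availability, 30, 60, 90)
    not_avail  = tri(availability, -40, 0, 40)
    low_down, med_down = tri(downtime, -40, 0, 40), tri(downtime, 20, 50, 80)
    low_out,  med_out  = tri(outage,  -40, 0, 40), tri(outage,  20, 50, 80)

    # (firing strength via min, output membership triangle) per rule.
    fire = [
        (min(high_avail, low_down, low_out), (60, 80, 100)),  # highly reliable
        (min(mid_avail, med_down, med_out), (30, 50, 70)),    # reliable
        (min(not_avail, low_down, low_out), (0, 20, 40)),     # not reliable
    ]

    # Aggregate clipped consequents with max over a sampled universe,
    # then take the centroid of the aggregated fuzzy set.
    xs = [i * 0.5 for i in range(201)]          # reliability universe 0..100
    mu = [max(min(w, tri(x, *t)) for w, t in fire) for x in xs]
    den = sum(mu)
    return sum(x * m for x, m in zip(xs, mu)) / den if den else 0.0

print(round(infer(availability=95, downtime=5, outage=5), 1))  # -> 80.0
```

With high availability and low downtime/outage, only the "highly reliable" rule fires strongly, so the centroid lands near the peak of that consequent; swapping in a not-available input drives the crisp output toward the "not reliable" region instead.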

Conclusion
An intelligent cloud ecosystem has been proposed, built, and tested with a focus on the emerging demands of the Internet of Things, smart corporates, and smart cities. All such applications of the cloud and allied domains engage not only sophisticated networks but also data management as an integral part. Therefore, the objective of the proposed model is to provide a platform that can address these demands using artificial intelligence and virtualization techniques. In our validation, all the components provided tangible and favorable results that confirm the workability of the model. The intelligent cloud ecosystem is one contribution among many research frontiers on the horizon of cloud computing and data science. Many areas and parameters will emerge in this swiftly changing environment, and their demands become more versatile with every passing day. Our model is flexible enough to incorporate new allied parameters and to learn new structures and services, which is becoming essential for a sustainable system. The results showed more than 92% accuracy in reliability.