Kubernetes is an open-source container management tool that automates container deployment, load balancing, and (de)scaling, including the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA). HPA enables smooth operation, scaling the number of resource units, or pods, without downtime. Kubernetes monitors default resource metrics such as the CPU and memory use of host machines and pods. Cloud computing has emerged as a platform for individuals as well as the corporate sector, providing cost-effective infrastructure, platform, and software services in a shared environment. At the same time, the emergence of Industry 4.0 has brought new challenges for the adoption and infusion of cloud computing. As the global work environment adopts constituents of Industry 4.0 in terms of robotics, artificial intelligence, and IoT devices, one emerging challenge stands out: collaborative schematics, i.e., providing an autonomous mechanism that can develop, manage, and operationalize digital resources such as CoBots to perform tasks in a distributed, collaborative cloud environment, optimizing resource utilization while ensuring schedule completion. Collaborative schematics are also linked to the management of the big data produced by large-scale Industry 4.0 setups. Different use cases and simulation results showed a significant improvement in pod CPU utilization, latency, and throughput over a plain Kubernetes environment.
Emerging technologies are reshaping the digital ecosystem at a very fast pace. New platforms, tools, and hardware rapidly take the digital landscape to the next level. This swift pace, the versatility of digital entities, and increasing demand call for careful attention to autonomous and dynamic interfacing to reduce complexity. Cloud computing and allied technologies play a central role in addressing the need for autonomy and evolving interfacing. Cloud technology is growing complex and requires significant skill to configure and deploy professional solutions. On the other side, the widespread Internet of Things (IoT) has created another challenge in terms of data versatility and volume, which is also transforming into big data. The current digital ecosystem takes advantage of artificial intelligence, but it must also address the challenges brought by Industry 4.0 and robotics [
The emergence of the internet and the World Wide Web gradually transformed into a more productive, distributed, and cost-effective platform: cloud computing. It is a platform that provides IT resources on demand through internet connectivity. It has become phenomenal due to its cost-effectiveness and the global availability of its services. Instead of bearing infrastructure costs and engaging a physical data center, cloud computing provides a more straightforward managed-services solution on a pay-as-you-use basis, accompanied by easy-to-use services and professional management of the cloud structure. Many international players provide cloud computing, including IBM, Microsoft, Amazon, Google, and Oracle [
These categories provide various services with on-demand configurable computation capacity, storage facilities, and high-tech networking; the cloud is therefore becoming a highly suitable platform for individuals, small groups, small-to-medium enterprises (SMEs), and large organizations. Although such versatile configurations make cloud management significantly complex, distributed computing reaches a new level. In cloud computing, infrastructure and applications are managed simultaneously using virtualization, which makes it possible to generate multiple virtual machines that provide network, compute, and storage to a huge number of users without compromising quality or security [
Software as a service (SaaS) is owned and managed remotely by the service provider, allowing users to connect to and use cloud-based apps over the internet. The service provider manages all parts of the application stack, such as the data center, networking, and operating system; the end user focuses only on using the application. Off-the-shelf applications, such as Google apps and spreadsheets, are accessed over the internet. SaaS lets users run existing online applications, accessible from any computer via the internet, and facilitates collaborative working [
Platform as a service (PaaS) provides a development framework, allowing users to create their own on-cloud applications. PaaS helps us build, test, deploy, and manage applications, e.g., Google App Engine. Infrastructure as a service (IaaS) enables users to run applications as they wish on their own cloud hardware. IaaS allows existing applications to be run on a cloud supplier's hardware, so existing applications can be moved from the data center to the cloud environment. The fundamental unit of the cloud is a server, which can be physical or virtual. Physical servers are individual computers; in contrast, virtual server instances are software-controlled slices of physical servers [
These cloned virtual entities are virtual machines, which are generated from a common hardware resource but maintain complete isolation. Virtualization has made it possible to reduce redundant hardware, configuration, and maintenance while extending better management in a secure distributed environment. It has also become possible to configure versatile computing environments and platforms on various virtual machines and link them together transparently. Virtualization allows many users to share one physical server. It is made feasible by the hypervisor, software that runs on physical hosts or servers. There are two types of hypervisor: the first is installed directly on top of the physical server, also called a bare-metal hypervisor, and is more secure with lower latency. The second type runs on the host operating system (OS) and is called a hosted hypervisor. These are less frequent and are mainly used for desktop virtualization, e.g., Oracle VirtualBox and VMware Workstation [
Developers start by accessing Docker Hub, the cloud repository of Docker containers. Containers can be easily paused, resumed, and removed. A container generally performs a single operation, such as a PostgreSQL database or a new JavaScript (JS) application, and containers run side by side so the system can scale. Unlike VMs, containers share resources directly with the host, which allows users to run many Docker containers using less disk space [
Utilizing these facilities requires professional tools like Kubernetes. Kubernetes [
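The HPA scaling behavior mentioned in the abstract can be sketched with its documented scaling rule, desired = ceil(current × currentMetric / targetMetric), clamped to the autoscaler's bounds. The function name and the min/max bounds below are illustrative, but the formula is the one the Kubernetes HPA documentation describes.

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float,
                         min_replicas: int = 1,
                         max_replicas: int = 10) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the autoscaler's [min, max] replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Pods averaging 180 m CPU against a 100 m target: scale 4 -> 8 replicas.
print(hpa_desired_replicas(4, 180, 100))  # 8
```

When the observed metric falls back below the target, the same rule scales the deployment down, which is how HPA adapts pod counts without downtime.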
IoT in the assembly unit is the solution family that adds value to chains through analytics, cognitive computing, and operational performance. In the automotive sector, this approach has three advantages. First, finding and solving problems before they cause delays helps manufacturers plan ahead and get 100 percent productivity out of their equipment [
Third, for supply chain management beyond the factory, artificial intelligence (AI) can anticipate disruptions in the supply chain with predictive algorithms. AI is essential to reach the cutting edge of industrial automation design.
Big data is divided into structured, unstructured, and semi-structured data. Structured data refers mostly to traditional data sources saved and organized in a database. Unstructured data has no defined data model or clear storage format and is difficult to search. The line between semi-structured and structured data has always been vague: semi-structured data is a type of structured data that lacks a strict data-model structure, which makes it easier to analyze [
Machine-related data comprises facts and figures from sensors, weblogs, financial systems, medical instruments, Global Positioning System (GPS) data, usage statistics, satellite images, scientific data, and radar data. The API server provides an application programming interface through which Kubernetes exposes its functionality, as shown in
Data from different sources are routed to one system, and velocity requires faster processing of massive data. Similarly, variety refers to structured, semi-structured, and unstructured data, while veracity means accuracy and trustworthiness. All this data benefits the relevant sector through analytics, known as the value of the data [
For scientific workflows, infrastructure automation techniques reduce the complications of building repeatable infrastructures for reproducible scientific experiments. Businesses that have already adopted virtualization can consolidate database servers onto fewer domain controllers and benefit in terms of feature space, power, and requirements. Optimization is expected to increase efficiency and system responsiveness for consumers without increasing server utility demands. Virtualization enables container technology, which offers much more elasticity for capacity management on a server. Due to the overhead delay in scaling containers, a main module spread across various computers takes longer than one on a dedicated processor [
A newer term in software architecture patterns is the micro-service approach, which is gaining popularity due to its elasticity, granular design, and loosely coupled services. An API gateway system in Kubernetes, together with Prometheus, varies the number of instances during the course of operation. It can improve the use of system resources while ensuring high application availability and quality of service. The autoscaling system can effectively adjust the number of service instances according to the API gateway load: when the load exceeds the specified threshold, more instances are created dynamically to balance the workload [
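The gateway-driven scaling rule just described reduces to simple arithmetic: create enough instances that per-instance load stays under a capacity threshold. The function below is a minimal sketch of that idea; the name and the numbers are illustrative assumptions, not the cited system's implementation.

```python
import math

def instances_needed(request_rate: float, per_instance_capacity: float,
                     min_instances: int = 1) -> int:
    """Hypothetical API-gateway scaling rule: provision enough instances
    that each one stays under its capacity threshold."""
    return max(min_instances, math.ceil(request_rate / per_instance_capacity))

# 1200 req/s against a 500 req/s per-instance threshold needs 3 instances.
print(instances_needed(1200, 500))  # 3
```

In a real autoscaler this decision would also be damped (cooldown windows, hysteresis) to avoid flapping when the load hovers around the threshold.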
Distributed technology adds new application development stacks on top of asset virtualization. As the load on data centers expands in this way, facilities are not manipulated efficiently. The authors explained what Docker and containers are, allowing us to better understand the technology behind Docker Swarm and Kubernetes. Docker is a software platform that allows you to build, test, and ship applications quickly. Docker packages computation into standard units called containers that hold everything the software needs to run, such as code, libraries, configuration, and system tools [
Kubernetes is a container-orchestration innovation created from Google's experience to manage container-based applications in various circumstances, such as physical, virtual, and cloud infrastructures. It is open source and helps create and supervise containerized deployments. Kubernetes enables self-healing to enhance quality and availability through failure-recovery actions. Since our primary objective is to build configurations with Kubernetes for middleware applications to enable high availability, we investigate the consistency obtained by Kubernetes under the default settings in this document. We have carried out several studies demonstrating that service outages can be substantially longer than anticipated [
After detecting a malfunction, Kubernetes performs healing and internal communications, and we test both. For these kinds of programs, Kubernetes responds relatively strongly to errors arising from external causes, relative to their sensitivity. An example is an intelligent development device based on Industry 4.0, such as a sunroof light-source development system [
The Industry 4.0 paradigm means a reduction in the sensitivity of industrial processes, an increase in machine longevity, and an improvement in product quality. Data from the entire production process is the fundamental principle of this fourth industrial revolution. Techniques that work on the masses of data collected from the field by many sensors are usually applied automatically [
In cloud computing, another problem is the complexity of collaborative schemata given distributed task allocation. Design-structure collaboration differs from functional collaboration and collaborative task assignment. Similarly, carrying out collaborative schematics on a cloud service platform requires an action-performing schema with a feedback-structure schema. Various frameworks for decomposing collaborative schemas are shown in
The proposed model consists of four layers of multiple components: Structure, Management, Execution, and Analytics. Structuring and management are linked to the execution layer, where individual team members perform each task; this layer is cloned using virtualization and therefore executes on multiple instances. Given the problem statement, the dynamic nature of the collaborative schema must be handled; on a cloud platform, segregating the relevant structuring is therefore vital to formulate a cloud structure for an adaptable collaborative schema, including cognitive correlates, while ensuring the learning phenomenon. Similarly, collaborative tasking needs a management module that caters to multiple-task scheduling, allocation, and sub-tasking to achieve individual task goals. This module also manages the taskers/team members; notably, these team members may be devices, bots, or soft agents, which generalizes the model to any scenario involving collaborative tasking. Finally, the proposed model provides a complete analytics layer to manage neural-network modifications and suitable configurations for learning.
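The four-layer decomposition above can be summarized as a small data structure. This is an illustrative sketch only; the layer names come from the text, and the component lists follow the modules named later in the description (monitor, memory, control, tasker, swapper, project, learner), while the field shapes are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One layer of the proposed model, holding its named components."""
    name: str
    components: list = field(default_factory=list)

def build_model() -> list:
    # Component lists follow the modules named in the text.
    return [
        Layer("Structure", ["monitor", "memory", "control"]),
        Layer("Management", ["tasker", "swapper", "project"]),
        Layer("Execution", ["user modules (CoBots / IoT nodes)"]),
        Layer("Analytics", ["learner", "neural control"]),
    ]

print([layer.name for layer in build_model()])
```

Keeping the layers as explicit objects makes the cloning of the execution layer onto multiple virtual instances a matter of instantiating its user modules per node.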
The use case we employ is collaborative robots (CoBots), which are becoming an important constituent of Industry 4.0 and industrial robotics. CoBots develop a working ecosystem and collaboratively perform tasks to attain maximum productivity and coherence, which is essential in industrial robotics. With CoBots, a collaborative schema must be deployed to execute tasks in parallel, sequential, and hierarchical modes. As in human tasking, there is a need for task swaps, time management, and skill development in terms of knowledge sharing and exchange. Beyond this use case, the proposed model generalizes to IoT devices and machine-to-machine collaboration management in the cloud environment.
The proposed structure, as shown in
The management layer is divided into three main components: tasker, swapper, and project. The tasker and swapper receive information from the control module of the structure layer, while the project is an overlapping module between the structure and management layers containing the team, properties, parameters, goals, and versioning. The management layer distributes tasks through the execution layer to multiple virtual users or CoBots. The tasker splits work into assignments with a schedule containing the estimated time for completion; it also handles sub-tasking and may create groups to perform an assignment, after which team members receive allocations. A complex routine in the collaborative schema is the swapping of tasks/assignments. The swapper provides free team members/nodes and develops a sub-schedule in case of a swap; here we treat a swap as a sub-task for the target swap node/member. Every swap instance carries a swap tag that links source and target, along with the projected goal of the swap and an estimated time, which under ideal conditions does not disturb the primary task schedule; if additional time is required, the control layer adjusts it. The tasker is linked with the monitor, memory, and control modules in the structure layer and is supported by the learner module from the analytics layer.
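The swapper routine just described (pick a free node, record the swap as a tagged sub-task linking source and target, with a goal and an estimated time) can be sketched as follows. All names and field shapes here are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
import itertools

_tags = itertools.count(1)  # monotonic swap tags linking source and target

@dataclass
class Swap:
    tag: int
    source: str
    target: str
    goal: str
    estimated_time_s: float

def request_swap(free_members: list, source: str, goal: str,
                 estimated_time_s: float):
    """Hypothetical swapper step: take the first free member as the swap
    target and record the swap as a tagged sub-task. Returns None when no
    node is free, in which case the control layer must adjust the schedule."""
    if not free_members:
        return None
    target = free_members.pop(0)
    return Swap(next(_tags), source, target, goal, estimated_time_s)

free = ["cobot-2", "cobot-3"]
swap = request_swap(free, "cobot-1", "finish weld step", 120.0)
```

Treating the swap as a sub-task with its own estimated time is what lets the primary schedule stay undisturbed in the ideal case, with the control layer absorbing any overrun.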
The execution layer contains user modules that work in the environment developed by the structure layer and perform operations developed and configured by the management layer. As mentioned earlier, the execution layer and user module work virtually across multiple users, in terms of CoBots or IoT devices, which act as team members/nodes in a project. The management layer handles the collaborative schema, but we also provide a local knowledge base so that every member has its own repository to store the experience and skills it learns while performing tasks. The user module contains a task, affiliated nodes/members, and a schedule. For the interchange of tasks and communication, authenticity modulation is also available to ensure an encapsulated environment for each node, shareable only if authorized or by consensus between two nodes/members. The user module sets milestones and timelines for each task and records exceptions and constraints that may occur during execution. Therefore, if the skill set required to perform a certain task is unavailable or insufficient, it first consults the learning module or, if authorized, communicates with other nodes to share their local knowledge bases. In this manner, the collaborative schema also caters to implicit collaboration at the knowledge level and explicit collaboration at the functional level. In case of such constraints, a swapping option is available to hand over or share the task with another free node.
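The authorization-gated knowledge sharing between nodes can be sketched as below. The class, its fields, and the proficiency values are hypothetical; the point is only the rule from the text: a member's local knowledge base is shared exclusively with authorized peers.

```python
class Member:
    """Minimal sketch of a user module: a local knowledge base plus an
    authorization set that gates knowledge sharing (names illustrative)."""

    def __init__(self, name: str):
        self.name = name
        self.knowledge = {}       # local knowledge base: skill -> proficiency
        self.authorized = set()   # peers allowed to receive this knowledge

    def authorize(self, other: "Member") -> None:
        self.authorized.add(other.name)

    def share(self, other: "Member", skill: str) -> bool:
        # Knowledge flows only to authorized members, keeping each node's
        # environment encapsulated as the text requires.
        if other.name in self.authorized and skill in self.knowledge:
            other.knowledge[skill] = self.knowledge[skill]
            return True
        return False

a, b = Member("cobot-a"), Member("cobot-b")
a.knowledge["welding"] = 0.9
a.authorize(b)
a.share(b, "welding")
```

Implicit (knowledge-level) collaboration is then the `share` path, while explicit (functional-level) collaboration is the task-swapping path handled by the management layer.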
The final layer of the proposed model is analytics, which gathers data from all modules to formulate constraints and exceptions for the structure layer and to develop clusters and patterns used to build more adaptable configurations for future projects and collaborative schemas. This layer also projects the skill-threshold level of the local knowledge bases and the structure knowledge base. All these components are encapsulated in a learner module provided with a neural control, which applies machine learning to perform the analytics.
On a physical machine running an Intel(R) Core(TM) i9-12900F @ 5.10 GHz x 16 with Intel(R) Iris(R) Xe Graphics, we set up a Kubernetes cluster of five nodes: one master and four workers. Each cluster node is a virtual machine running Ubuntu 20.04.4 LTS, Docker version 20.10.13, and Kubernetes version 1.23.5. In terms of computational power, the master has four processor cores and sixteen gigabytes of random access memory (RAM), while each worker node has two cores and two gigabytes of RAM. The load generator is Gatling open-source version 3.7.6; on each worker node, HTTP requests are delivered to our application via a configured NodePort [
Our program is built to be CPU-intensive: it consumes CPU resources from the moment it receives an HTTP request until it sends back a response. Each replica's CPU request and limit are 100 m and 200 m, respectively. The number of replicas varies from 4 (an average of 1 replica per node) to 24 (an average of 6 replicas per node).
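As a sanity check on these numbers, the cluster's CPU budget can be computed directly. The sketch below assumes, optimistically, that each worker's full two cores are allocatable; real nodes reserve some CPU for system daemons, so actual headroom is slightly smaller.

```python
def cluster_cpu_budget(replicas: int, request_m: int = 100, limit_m: int = 200,
                       workers: int = 4, cores_per_worker: int = 2) -> dict:
    """Back-of-the-envelope CPU budget (in millicores) for the experiment's
    cluster, ignoring the CPU reserved for system daemons."""
    return {
        "requested_m": replicas * request_m,            # sum of CPU requests
        "burst_limit_m": replicas * limit_m,            # worst-case burst
        "worker_capacity_m": workers * cores_per_worker * 1000,
    }

# At 24 replicas the pods request 2400 m of the 8000 m worker capacity,
# but can burst up to 4800 m at their limits.
budget = cluster_cpu_budget(24)
```

Even at the maximum of 24 replicas the requested CPU is well within capacity, so scheduling pressure in the experiments comes from load, not from unsatisfiable requests.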
Every experiment runs for 300 s. Gatling's average inbound request rate is around 1800 requests/s for the first 100 s, then 600 requests/s for the next 100 s, resulting in a total of 240,000 requests. We call these two periods the high-traffic period (HTP) and the low-traffic period [
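The stated rates over the two 100 s periods reproduce the 240,000-request total, which the short check below confirms (function name is illustrative).

```python
def total_requests(htp_rate: int = 1800, ltp_rate: int = 600,
                   period_s: int = 100) -> int:
    """Arithmetic behind the load profile: 100 s at ~1800 req/s (HTP)
    followed by 100 s at ~600 req/s (LTP)."""
    return htp_rate * period_s + ltp_rate * period_s

print(total_requests())  # 240000
```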
We have a Nextcloud application deployed to Amazon Web Services (AWS) cloud compute instances: four Amazon Elastic Compute Cloud (EC2) instances with three replicas across three different availability zones. Each EC2 instance is of type m5.xlarge: 4 vCPUs, 16 GB memory, Amazon Elastic Block Store (EBS) instance storage, network bandwidth up to 10 Gbps, and EBS bandwidth up to 4750 Mbps. The Nextcloud application is deployed to these EC2 instances, with a web-based load balancer configured in front of them to manage the web traffic load.
The first five data columns (CPU use, storage write/read throughput, read latency) belong to Use case-1; the last four (CPU and memory utilization) belong to Use case-2.

| Iteration | CPU Use (Max) | CPU Use (Avg) | Storage Write | Storage Read | Read Latency | CPU Utilization (Max) | CPU Utilization (Avg) | Memory Utilization (Max) | Memory Utilization (Avg) |
|---|---|---|---|---|---|---|---|---|---|
| Compute-I | 3.0% | 55.0% | 0–11 MB/s | 0–15 MB/s | 10 s | 45.6% | 6.0% | 33.3% | 18.0% |
| Compute-II | 2.5% | 55.0% | 0–11 MB/s | 0–15 MB/s | 10 s | | | | |
| Compute-III | 3.0% | 55.0% | 0–11 MB/s | 0–15 MB/s | 10 s | 45.6% | 6.0% | 33.3% | 18.0% |
| Compute-IV | | | | | | 45.6% | 6.0% | 33.3% | 18.0% |
Container environments provide fine-grained resource sharing, lightweight performance isolation, and quick, flexible deployment. Containers are self-contained, stand-alone units that bundle software with its dependencies. The scheduler optimizes container placement at the start to enhance system utilization and minimize cost. The autoscaler meets current resource demand while shutting down underutilized or idle nodes. The rescheduler changes the initial placement of containers at runtime to eliminate fragmentation and consolidate loads for improved resource efficiency. The Kubernetes Vertical Pod Autoscaler modifies pods' CPU and memory reservations to help "right-size" applications. Our compute results showed that this change can increase cluster resource usage while freeing up CPU and memory for other pods.
We thank our teachers and families for their moral support.