Effectively Handling Network Congestion and Load Balancing in Software-Defined Networking

Abstract: The concept of Software-Defined Networking (SDN) evolved to overcome the drawbacks of traditional networks in Internet Protocol (IP) packet sending and handling. A key advantage of the SDN architecture is the clean separation of the data plane from the control plane for network configuration and management. Whenever there are multiple sending devices inside an SDN network, the OpenFlow switches are programmed to handle a limited number of requests on their interfaces; when the requests exceed a specific threshold, the load on the switches increases. This research article introduces a new approach, named LBoBS, that handles load balancing by adding a load-balancing server to the SDN network, thereby improving SDN's reliability and efficiency. The server works in coordination with the controller to apply the load-balancing policies and to manage the switches' load effectively. Results are evaluated on the NS-3 simulator for packet delivery, bandwidth utilization, latency control, and packet decision ratios on the OpenFlow switches. It has been found that the proposed method improved SDN's load balancing by 70% compared to previous state-of-the-art methods.

In addition to decoupling the control plane from the data plane, SDN offers many benefits. The virtualization of physical networks separates logical networks from the underlying hardware; as a consequence, the physical network does not constrain the corresponding logical network. Additionally, the open APIs in SDN allow more customized and manageable network services. As users interact with only the upper layer, the application layer provides a user interface for managing networks to meet their different needs. Finally, the separation of the control plane from the forwarding data plane is crucial for customers to manage their networks while retaining room for innovation and flexibility. Centralized control supports administrative operations over the network, such as upgrades and faster service configuration. The generic structure of SDN is depicted in Fig. 1.
For load balancing and traffic engineering, centralized management is desirable due to its faster convergence to the optimization objective and higher network performance. Since the SDN controller oversees the whole network, the network switches can be managed and monitored centrally. However, in a peak situation, when the traffic on a switch exceeds its threshold, backup plans are required for graceful degradation. The increase in traffic is due to the number of requests generated by switches and nodes, while the end-to-end packet delay from one host to another is upper bounded. Therefore, to tackle this challenge, we propose a new module named LBoBS that dynamically offloads overly congested nodes and maintains balance over the network.
The rest of the paper is organized as follows: Section 2 covers related studies on load balancing problems in SDN; Section 3 presents the design of the proposed solution; Section 4 exhibits the simulation results; and Section 5 concludes the paper and identifies future directions for this work.

Various attempts have been made over the past few years to address load balancing challenges in contemporary SDN networks. The Equal-Cost Multipath approach [7] distributes the data load and flows to the hops/switches without prior knowledge. William et al. [8] proposed the Valiant Load Balancing approach, which spreads the traffic across different paths by picking the next hop at random.
Handigol et al. [9], Wang et al. [10], and Wang et al. [11] proposed improved load balancing strategies and argued that the controller is the critical component for handling the load balancing problem. The controller node monitors the response time of the OpenFlow switches and updates the flow table to apply the load balancing technique specified by each system. However, one disadvantage of these strategies is that they are all static in nature; thus, they offer no real-time monitoring of traffic. Li et al. [12] proposed a novel dynamic load balancing (DLB) technique. The idea behind DLB is to apply a greedy approach that picks the next hop/switch currently carrying the least data load. The DLB technique only considers the load on the next hop without determining the load from a global perspective. Lacking this global view of the transmission, the algorithm does not find the best end-to-end path and hence does not achieve the best load balancing effect for the system.
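The greedy rule behind DLB can be sketched in a few lines. This is our illustration, not code from [12]; the switch names and load values are invented for the example. The key point is that the decision uses only the loads of the immediate neighbors, which is exactly why DLB lacks a global view:

```python
def dlb_next_hop(candidates):
    """Greedy DLB rule: among the candidate next hops, return the one
    currently carrying the least load (local view only)."""
    # Each candidate is a (switch_id, current_load) pair.
    return min(candidates, key=lambda switch: switch[1])

# Hypothetical neighbor loads as seen by one switch.
neighbors = [("s1", 40), ("s2", 12), ("s3", 27)]
print(dlb_next_hop(neighbors))  # picks the locally least-loaded hop
```

Note that a path through the locally lightest neighbor may still cross a heavily loaded switch further downstream, which is the weakness the global strategies below try to address.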
Koushika et al. [13] proposed a load balancing technique based on the Ant Colony Optimization approach. This technique finds the best path and server combinations by collecting link-usage information from the network and calculating the delays on the links. However, this approach applies the network path information to a single criterion and thus does not scale well to future networks. Guo et al. [14] proposed that the controller perform a series of Real-time Least-loaded Server selections (RLS) across multiple domains to find the load table and direct each new flow to the least loaded server; RLS is also used to compute a path leading to the target server. One problem with this approach is that whenever a new flow enters a domain, RLS must make a forwarding decision for that flow. The technique therefore suffers from a single-controller bottleneck as well as poor scalability, reliability, and responsiveness.
In a nutshell, load balancing in SDN has been studied extensively in the recent past, and different strategies have been adopted to mitigate this hurdle. However, many strategies, despite allocating the best decision path, do not offer resource optimization once the load is adequately handled. SDN has been presented as an approach that brings high programmability to network components by decomposing a network's forwarding function into an efficient fast path and a programmable slow path. The latter introduces extensibility, enabling new routing and forwarding approaches without replacing hardware components in the core network.
Load handling and balancing are significant issues in SDN networks, causing network latency and slower response times. At times, packets may lose their path while searching for a new switch for forwarding and routing. Some approaches employ SDN and virtualization [15][16][17][18], as well as contemporary methods from Internet of Things networks [19][20][21][22]. Nevertheless, state-of-the-art load balancing mechanisms are still in their infancy for modern SDN networks. This research proposes a new technique for the efficient forwarding of packets once such issues arise. We evaluate the performance parameters outlined in Tab. 1, mainly packet decision time on the server, path detection, throughput, bandwidth, and caching.

Proposed Model
The model of our proposed load balancer in SDN is shown in Fig. 2. The SDN controller takes care of issues such as load balancing, security, topology, monitoring, loading, and forwarding across the network. The addition of a load balancer to the SDN network supports intelligent decision-making, where the load balancer controls the load on every SDN switch. In the proposed technique, a dedicated server handles the load balancing problems in the SDN. As the SDN controller is responsible for managing all the switches, real-time load and path calculations govern the load balancing decisions. The controller is connected to both the load balancer and the switches, and it periodically transmits load balancing information to the load balancer regarding the load and the distribution of incoming packets to different nodes.
Load control is the primary responsibility of the load balancer. Whenever the controller needs to apply the load balancing scheme, the balancer returns the load balancing decision based on the calculated load path. The load balancer is directly in contact with the switches; due to this direct connection between the load balancer and the OpenFlow switches, the transmission overhead on the controller is reduced. The balancer on one end connects to the controller and on the other end to the OpenFlow switches. The SDN controller is responsible for deciding the best transmission path in collaboration with the load balancer; finally, it creates single-path or multi-path information based on the load balancer's input.
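The path decision described above can be sketched as follows. This is a minimal illustration under our own assumptions (the paper does not specify the selection criterion): the balancer keeps a global view of per-switch load and, when the controller asks, returns the candidate path whose most-loaded switch is lightest, i.e., a bottleneck criterion. All switch names and values are invented for the example:

```python
def best_path(paths, load):
    """Pick the candidate path with the lightest bottleneck switch.

    paths: list of paths, each a list of switch ids
    load:  mapping switch id -> current load (global view)
    """
    return min(paths, key=lambda path: max(load[s] for s in path))

# Hypothetical global load table held by the load balancing server.
switch_load = {"s1": 10, "s2": 80, "s3": 20, "s4": 15}
candidate_paths = [["s1", "s2"], ["s1", "s3", "s4"]]
print(best_path(candidate_paths, switch_load))  # avoids the s2 bottleneck
```

Unlike the per-hop greedy rule of DLB, this decision is made over whole paths, which is what the global load view of the balancer enables.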
As shown in Fig. 3, the direct links between the load balancer and the switches and between the load balancer and the controller effectively reduce the transmission between the controller and the load balancer, and ultimately between the switches and the controller.
Fig. 4 presents the flow model of our approach, in which load balancing is sustained even at very high rates by using the load balancer as the balancing mechanism.

OpenFlow Switch and Controller Data Flow
The controller in SDN exchanges information with the SDN switches to forward packets toward the destination node and ultimately deliver all concerned packets to the correct destination. Status information about each node's load is exchanged, the loaded nodes are tracked, and the algorithm is triggered to balance the load on the SDN network, forwarding packets based on the information in the SDN controller's flow table. Fig. 6 illustrates this data flow: the in-port carries the incoming packets into the SDN control, and the out-port forwards the outgoing packets along the best and most suitable path.
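The controller-side bookkeeping described above can be sketched as follows. This is our illustration, not the paper's implementation: switches report their load, the controller records it in a table, and any switch above a threshold is flagged so the balancing algorithm can be triggered. The threshold value and switch names are assumptions for the example:

```python
THRESHOLD = 100  # assumed per-interface request limit of a switch

class Controller:
    """Tracks per-switch load reports and flags overloaded switches."""

    def __init__(self):
        self.load = {}  # switch id -> last reported load

    def report(self, switch, load):
        # Called when a switch sends its status information.
        self.load[switch] = load

    def overloaded(self):
        # Switches above the threshold trigger the balancing algorithm.
        return [s for s, l in self.load.items() if l > THRESHOLD]

ctrl = Controller()
ctrl.report("s1", 60)
ctrl.report("s2", 140)
print(ctrl.overloaded())  # switches whose load exceeds the threshold
```

In the proposed architecture, this flagged list is what the controller shares with the load balancing server so that new flow entries avoid the congested switches.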

OpenFlow Switch and Controller with Load Balancing Server
To offload the load balancing strategies from the controller, we introduce a new load balancing server. This load balancer handles load balancing and network congestion in coordination with the SDN controller. The data packet rates may at times become so high that the switches and the controller cannot control the situation effectively, and network congestion occurs in some parts of the network. Fig. 7 shows how the load is divided between the load balancer and the SDN controller under an adequate load balancing scheme.

Results and Discussions
This section describes the solution scenario for information-centric networks-based SDN load balancing and traffic congestion handling strategies. The simulation environment is set up for load balancing and congestion handling. Once the simulations are performed, the results are compared with existing methods. The basic parameters discussed are the packet decision time ratio, packet delivery, and bandwidth utilization.

Simulation Environment
We have deployed the NS-3 simulation environment [35,36] alongside the existing strategies to compare the results. The network environment is hosted on an HP EliteBook 840 Pro with an Intel Core i5-5200 CPU, 8 GB of RAM, a 1 TB SATA hard drive, and a Linux operating system. Furthermore, we have deployed the SDN controller and the load balancing server and simulated traffic for a given number of hosts and SDN switches. We have taken multiple senders and multiple receivers with several SDN switches, one SDN controller, and one load balancing server for balancing the network load. In our approach, the load balancing server connects directly to the controller and the SDN switches, so the load balancing work is divided.
Tab. 2 summarizes the symbols and notation used throughout this section, along with brief explanations. In this simulation environment, we have set up all the nodes, switches, controllers, and load balancers in the existing environment to produce the results; no network congestion should occur in this case. Connections can be made using wired and wireless links. Some of the devices act as sending/source devices that transmit the packets, and others act as receiving devices in the connection environment.
The complete working mechanism is described in Fig. 5, where all nodes are illustrated with relevant scenarios. We have compared the achieved results with the relevant state-of-the-art methods shown in Tab. 3. The obtained results show that our technique improves load balancing and handles SDN network congestion quite effectively.
Algorithm 1: Load handling policy
Input: New node, data packets
Output: Forwards packets on an efficient path.
Procedure:
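The procedure body of Algorithm 1 is not spelled out above, so the following is a hedged reconstruction from the surrounding description: for each arriving packet, consult the balancer's load view and forward on the first candidate path that crosses no overloaded switch, falling back to the lightest-bottleneck path if every path is congested. The threshold and all names are illustrative assumptions, not the authors' exact procedure:

```python
THRESHOLD = 100  # assumed per-switch load limit

def handle_packet(packet, paths, load):
    """Load handling policy sketch: forward `packet` on an efficient path.

    paths: candidate paths (lists of switch ids), in preference order
    load:  global per-switch load view from the load balancing server
    """
    for path in paths:
        if all(load[s] <= THRESHOLD for s in path):
            return path  # first path with no overloaded switch
    # Every path is congested: choose the least-loaded bottleneck.
    return min(paths, key=lambda path: max(load[s] for s in path))

load_view = {"s1": 120, "s2": 30, "s3": 40}
print(handle_packet("pkt-1", [["s1"], ["s2", "s3"]], load_view))
```

In the full system, the controller would then install the corresponding flow entries on the switches along the returned path.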

Results
The results are compared with the existing techniques shown in Tab. 3. Each testing policy is further evaluated against other methodologies to assess its effectiveness. In the proposed method, the network topology matters for packet selection, and we have used the SDN packet setup for reliable result collection and generation. We have applied the SDN protocols to the simulation environment and performed several tests. At the start of the simulation, we forwarded fewer packets; over time, we increased the number of packets, the data delivery ratio, and the number of SDN switches, and consistently recorded all the results. The best results from the environment are compared with the strategies listed in Tab. 3.

Fig. 8a elucidates the network latency results for the load balancing and SDN congestion handling parameters. The x-axis represents the switches, whereas the y-axis depicts the latency in ms. Even for a massive number of switches (25 or more), the recorded latency is 41,000 ms for the proposed solution, and for the same number of switches, the other methods perform worse. Similarly, on increasing the number of switches to 150, the latency of the proposed method rises only slightly, to 45,900 ms, while in the W/S-Clustering and Clustering techniques, the latency exceeds 46,500 ms. These results imply that the proposed system outperforms the W/S-Clustering and Clustering techniques in terms of latency; a closer look shows that the proposed technique reduces the network latency of load balancing by 3.54%. Fig. 8b exhibits the bandwidth utilization of the proposed method in comparison with other approaches. The results indicate the selection of the best and most free path for incoming requests based on the load balancing policy. ANNLB performs comparably when congestion is not high.
However, for more congested routes, it tends to lose its effectiveness. In the proposed method, bandwidth utilization is on the higher side and achieves desirable results even for highly congested paths. The proposed method not only chooses the best path to reduce latency but also improves bandwidth utilization, which in turn lowers the probability of congestion in the SDN network. Fig. 8c shows the data packet flow entries inside the OpenFlow switch compared with the other three techniques. At the start of the simulation, we set all the entries to level 0; within the first second, all methods have duplicate flow entries. Every second, the flow entries increase rapidly in all methods, but in the LBoBS technique, the increase is steady because the network packets are distributed and the load is balanced accordingly. The growth continues with time, yet it remains slow compared to the other techniques, and even after reaching the threshold values, the load balancing technique still shows better results than the other approaches. Fig. 8d portrays the performance of LBoBS in terms of the packet transmission rate. From the graph, it is evident that it delivers packets more effectively than its counterpart methods. LBoBS detects congestion and load on the network early with the help of the load balancing server and applies the load balancing algorithm to identify a new path for the packets. The proposed policy is thus steady for the regular transmission of packets despite the high data rate, whereas the other methods fall short due to their lower packet delivery.

Conclusion
This article investigated load balancing in SDN-based networks where multiple servers are added and maintained across numerous domains. Load imbalance causes delays in packet delivery, and at times packets may lose their way due to heavy load on some of the paths. Early path collision detection and decisions based on path analysis are the core contributions of this work. Load balancing and network congestion handling become manageable after implementing the load balancing server in coordination with the SDN controller and the SDN OpenFlow switches. The load balancer continuously monitors the load on the SDN network and, in an undesired congestion situation, directs the controller to change the packets' path to an alternate route until the load balancing issue is resolved. It is an intelligent approach and is highly effective for numerous reasons. Simulation results showed that the proposed LBoBS contributes significantly to load balancing in terms of latency, bandwidth utilization, detection time ratio, and packet delivery ratio under different environments and scenarios. LBoBS also provides network congestion handling capabilities. In the future, this work can be extended to provide load balancing and congestion handling over ICN-based, NDN-based, and CCN-based SDN approaches.
Funding Statement: This research was supported by a grant (21RERP-B090228-08) from the Residential Environment Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.