Secure Data Sharing with Confidentiality, Integrity and Access Control in Cloud Environment

Cloud storage is an emerging technology in today's world, and the lack of security in cloud environments is one of the primary challenges it faces. This scenario poses new security issues, which form the crux of the current work. The study proposes a Secure Interactional Proof System (SIPS) to address this challenge. The methodology strengthens security through a few essential components, namely authentication, confidentiality, access control and integrity, together with the AVK scheme (Access List, Verifier and Key Generator). Every user must prove their identity to the verifier, who maintains the access list. Verification follows the Guillou-Quisquater protocol, which determines the security level of the user through a multi-step authentication process. The RSA algorithm performs key generation, while the proposed methodology provides data integrity as well as confidentiality using asymmetric encryption. Operational metrics such as time consumption were used as performance evaluators for the proposed SIPS protocol. The proposed solution provides a secure system for firm data sharing in cloud environments with confidentiality, authentication and access control. The Stochastic Timed Petri Net (STPN) evaluation tool was used to verify and prove the formal analysis of the SIPS methodology. This evidence established the effectiveness of the proposed methodology for secure data sharing in cloud environments.


Introduction
Cloud computing is a next-generation technology that finds applications across different sectors for information storage, which raises security concerns. In the cloud computing model, data privacy and the prevention of data loss are the major concerns to be addressed [1]. In this scenario, the current research work proposes a methodology to overcome data security challenges in the cloud. Cloud computing has experienced drastic growth in recent years, thanks to its wide range of applications, flexibility and cost-effective implementation. Most organizations that deploy cloud technology handle their operations in a cost-effective and flexible manner. It further reduces the total cost of ownership, a highly competitive advantage for emerging users, and provides the time flexibility needed to achieve market objectives [2]. In spite of the business benefits rendered by cloud technology, it still poses a few challenges [3]. Data residency and the security of deployed data are key concerns raised about cloud computing. The main concerns with data residency are as follows: who holds the authority to manage the data, who can access it, and, in case of a data breach, what the alternative options for data storage are and which rule of law governs recovery from the breach [4].
Data encryption and limited access rights are the key solutions to data residency concerns. Data encryption is a mathematical process that converts clear text into cipher text so that the ciphered text cannot be read by anyone other than the intended user [5]. Access rights act as a protector against external threats, and clear text data can only be accessed by a user who has permission to access the cloud database. Encryption protects the data from both internal and external threats. The proposed methodology, the Secure Interactional Proof System (SIPS), enables organizations to deploy the technology securely to improve business performance and acts as a collaborative solution for secure data sharing in the cloud environment. SIPS focuses on security concerns in the cloud environment and has four basic components: (1) key agent, (2) access list, (3) GQ authentication protocol and (4) key pair. The architecture of the proposed system is pictorially represented in Fig. 1.
The resource owner provides a data list to the key agent. The key agent forwards this list to the access list, where a user's access rights are generated and maintained during the access-listing process. A copy of the list provided by the key agent is also forwarded to the key pair database, since it helps in maintaining the key pairs that secure the data.
The generated access list and key pair records are exchanged to ensure the data integrity of the users. A multistep authentication protocol subjects users to several verifications so as to maintain data security; these verification processes also ensure user authentication and authorization. Key management is derived and monitored by the Guillou-Quisquater protocol. Direct access by the user is first ensured through the multistep authentication protocol; after passing it, the user is granted access to the requested data.

Related Works
A number of methodologies have been proposed and implemented earlier to overcome this challenge, i.e., to enable data security while sharing data in the cloud. Ali et al. [6,7] proposed CL-PRE, a certificate-less proxy re-encryption scheme, which is a worthy approach in this domain. In that study, the data owner shares the data in the cloud with parties designated as recipients. At first, the file (or data) is encrypted by the owner using a symmetric data encryption key (DEK). The data is then stored in the cloud with an Access Control List (ACL) [8]. The ACL contains the access rights and the names in the recipient group who can access the data. In the second step, the major task, re-encryption, occurs: the DEK is encrypted again using a public key, which provides high security for the data. The encrypted DEK is also stored in the public cloud [9]. The recipient holds a private key, which is managed through a proxy server [10]. The proxy server in the cloud takes the re-encrypted data sent by the data owner and applies the re-encryption algorithm to the encrypted DEK so that it is converted for decryption with the recipient's private key. With this private key, a user can download and decrypt the encrypted data from the cloud. For each recipient group, different DEK keys are produced to ensure confidentiality. The major advantage of this work is the re-encryption key, which is generated from the data owner's private key and the recipient's public key. Certificate-less encryption security properties such as unidirectionality, non-interactivity, non-transitivity and single use were obtained in this research, which paved the way for data security in the cloud.
Seo et al. [11,12] conducted research on a mediated certificateless encryption (or double encryption) scheme. This work was applied to achieve confidentiality and security performance in the cloud. Authorization played a vital role in increasing the applicability and success of this scheme. The researchers proposed the CL-PKE scheme to overcome existing certificateless encryption schemes, which are not only expensive due to pairing operations but also vulnerable to decryption attacks [13]. The proposed scheme works without pairing operations for sensitive information shared in the cloud. Based on access control policies, the sensitive data is encrypted using cloud-generated user public keys and uploaded to the cloud. The cloud performs partial decryption and encryption for authorized users [14]. In the subsequent process, the user fully encrypts or decrypts the data using their own private keys. This method proved to be an efficient approach to avoiding pairing operations. Further, certificateless cryptography was also supported with several theorems and explanations. The scheme was established as an efficient and practical method for achieving the intended outcome.
To overcome certain drawbacks of the past two approaches, an earlier study [15] implemented a special feature for new users of cloud security through another proposal, which introduced identity-based auditing for data sharing in the cloud. This method promoted an identity-based auditing scheme with information hiding and was promising in terms of hiding information to provide security. It is a distinctive method, since the scheme allows the user to share their plaintext without any encryption while making the sensitive data invisible [16]. To overcome the failures of previously constructed approaches, this method implemented an identity-based auditing scheme to hide sensitive information from malicious attackers. Integrity and authenticity were effectively achieved [17,18]. A novel mechanism for hiding sensitive information was proposed with a signing process unique to the user. The responsibility of the manager remains the same as that of a computer network gateway, with the right to check whether a file contains sensitive information. In another earlier study [19], an efficient identity-based auditing scheme was proposed for a shared-data model to achieve high concurrency. The main aim of that approach was to conceal the organization's sensitive data from both senders and receivers. This was achieved by distributing centralized computing tasks, which are redundant for the manager, to the users. A portion of the user's private key is used to hide sensitive information, instead of selecting a random variable. The authors applied Herss's efficient identity-based signature scheme to overcome some disadvantages of this method, especially during the signature algorithm process [20,21]. Data processing and integrity are the major disadvantages found in this approach.
As per the review of literature, earlier methods have disadvantages that are yet to be overcome, concerning security, integrity, confidentiality, access control and authorization. The current study proposes and implements a novel method to overcome the challenges faced in this domain. The experimentation procedure is conducted with performance data sets, and the output is discussed in detail. The following section details the advantages of the proposed scheme.

SIPS Methodology
The proposed methodology that supports authentication is briefly discussed in this section. The method encrypts the data before it reaches the cloud and performs secure data sharing in the cloud environment.
The following realities are applied in SIPS methodology.

Realities Part: I
Cloud Storage: The storage service is provided by the cloud to users. All information stored in the cloud should be secured against internal and external threats [22]. Both the confidentiality and the integrity of the information should be protected by storing the data in encrypted form [23,24]. Cloud storage in the SIPS methodology plays a vital role in basic cloud operations such as data uploading and downloading, during which both data integrity and data confidentiality are fully accomplished.

SIPS: SIPS remains the heart of the secure system and brings out the desired objectives, for instance, authentication (GQ key management, key generation and key pair storage), whereas the AL provides access rights to the users. A user is required to register with the AL in order to obtain the security service. The SIPS methodology thus realizes the security requirement of authentication, which is mainly provided to avoid data loss and to ensure data integrity. SIPS can be implemented by any organization or maintained by a trusted private party; however, SIPS generates more trust in an organizational setting.
Resource owner: The resource owner, or data owner, is the one who provides the data to users. The data provided by resource owners is encrypted and stored in cloud storage. Access permission is given by the resource owner to the cloud through the access list, which contains the rules defining which users can access the data, as determined by the resource owner. The access list is maintained in the SIPS methodology to restrict access control to legitimate users, to confirm each user as a competent person and to achieve owner satisfaction.
Asymmetric Key (R_k, U_k): Two large primes, P_L and Q_L, generated by the key agent, are selected for each key request made by the resource owner. To be secure, the recommended size for each prime P_L or Q_L is 512 bits (almost 154 decimal digits), which makes the size of the modulus T 1024 bits (309 digits). The pair (R_k, U_k) is calculated in a step-by-step process. In the first step, two unique large prime numbers P_L and Q_L of length 512 bits are selected such that P_L ≠ Q_L. In the next step, T is obtained by multiplying the two primes, T = P_L × Q_L, giving a 1024-bit output. This asymmetric key encryption secures the data.
Key Agent/Key Generation (R_ki, U_ki): For each user in the group, the key agent generates (R_ki, U_ki) such that R_ki, U_ki ∈ {0, 1}^512. (R_ki, U_ki) serves as the key agent's portion and is used to compute (R, U) whenever a key request is received by the key agent. Furthermore, distinctness is ensured by contrasting the values (R_ki, U_ki) generated for every key request.

Algorithmic Representation of SIPS Methodology
Algorithm: Key generation process
Input: AL, Key_req, 512 bits
Compute:
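The key-generation step described above (select two distinct 512-bit primes P_L and Q_L, form the modulus T = P_L × Q_L, and derive the asymmetric key pair) can be sketched in Python. This is a minimal illustration under our own function names, not the paper's implementation; a production system would use a vetted cryptographic library rather than hand-rolled primality testing.

```python
import math
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits: int) -> int:
    """Draw random odd candidates of exactly `bits` length until one is prime."""
    while True:
        cand = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand

def generate_key_pair(bits: int = 512, e: int = 65537):
    """RSA-style key pair over modulus T = P_L * Q_L, with P_L != Q_L.

    Returns ((R_k, T), (U_k, T)): the public and private halves."""
    while True:
        p_l = random_prime(bits)
        q_l = random_prime(bits)
        while q_l == p_l:                  # the scheme requires P_L != Q_L
            q_l = random_prime(bits)
        phi = (p_l - 1) * (q_l - 1)
        if math.gcd(e, phi) == 1:          # retry if e is not invertible
            break
    t = p_l * q_l                          # 1024-bit modulus T for 512-bit primes
    u_k = pow(e, -1, phi)                  # private exponent U_k
    return (e, t), (u_k, t)
```

A smaller `bits` value can be passed for quick experiments; 512 matches the size recommended in the text.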

File Upload
There should be a secure way to protect sensitive data, which further needs to be stored and shared among several users or within a group. A key request is sent by the resource owner to the key agent (KA). Figs. 2 and 3 show the processes involved in uploading a file.
The AL database contains the key request and the access list granting the user access to the resource file. There are different types of access rights with which a user can access a file, and many other constraints can also be set to control access to the data. The key agent generates the key according to the process defined in section (iii). To generate the ACL for the respective data, the AL is used by the KA. The resource owner, after receiving the encryption key, encrypts the data and stores it in the cloud. For each file, an ACL is separately maintained; it holds major information about the file such as the file ID, size, owner information (ID) and the list of user IDs, along with other metadata.
The decryption key is stored in the key pair database. Subsequently, the key agent generates R_ki and U_ki for every user, and the information is stored in the AL database for later use.
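The per-file ACL record described above can be captured in a small data structure. The field and method names below are illustrative choices of ours; the paper only specifies that the ACL holds the file ID, size, owner ID and the list of authorized user IDs with other metadata.

```python
from dataclasses import dataclass, field

@dataclass
class AclEntry:
    """Per-file access-control record of the kind kept in the AL database."""
    file_id: str
    size_bytes: int
    owner_id: str
    user_ids: list = field(default_factory=list)   # authorized user IDs
    metadata: dict = field(default_factory=dict)   # other per-file metadata

    def grant(self, user_id: str) -> None:
        """Resource owner adds a user to the file's access rights."""
        if user_id not in self.user_ids:
            self.user_ids.append(user_id)

    def revoke(self, user_id: str) -> None:
        """Resource owner removes a user from the file's access rights."""
        if user_id in self.user_ids:
            self.user_ids.remove(user_id)

    def can_access(self, user_id: str) -> bool:
        """Owners always have access; others only if listed."""
        return user_id == self.owner_id or user_id in self.user_ids
```

One such record would be maintained per file, consistent with the separately maintained ACLs described above.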

File Download
The authorized user requests the key to decrypt the file. Before that, the user must prove their identity to SIPS. The GQ (Guillou-Quisquater) protocol selects two numbers for every user, 'PU' (public) and 'SE' (secret). In this case, the relationship between 'PU' and 'SE' is SE^r · PU ≡ 1 (mod T). The GQ protocol constitutes three exchanges. Verification is repeated several times with a random challenge value between 1 and r, and the user must pass several rounds of tests. If a user fails a single round of authentication, the process is aborted and the user is not authenticated. After authentication, the user receives the session key and the decryption key (U), downloads the file from cloud storage and decrypts it.

File Update
The method of updating data is similar to uploading a file to the cloud. The key difference is that when updating only access-list-related activities, the key generation activities are not carried out. When a resource owner downloads the file and makes any changes, they have to encrypt the file again and store it in the cloud. If the resource owner wants to change the access list, they can ask the key agent to regenerate the key pair and update the access list. Ultimately, the resource owner has the right to add or delete users from the access rights of the file.

Discussion on SIPS
The SIPS methodology is proposed in this study to provide the following services for electronic records.
• Authorization and integrity
• High confidentiality
• Secure data sharing among the group
• Securing data from unauthorized access
• Providing access control to the user.
The following discussion briefly describes the working principle of the SIPS methodology and how these services are achieved. The proposed methodology has a few main components: the Access List, the Key Agent and the GQ Protocol. Together, these components act as the Secure Interactional Proof System that enables users to interact securely in the cloud.

Access List
Access control is provided to the user based on the access list. The access list plays an important role by interacting with the Guillou-Quisquater protocol and ensuring the user's access to data in the cloud. The access list is generated and provided by the resource owner and establishes the users' authorization. Its ultimate goal is to provide access to cloud information only to the correct (authorized) users. Access rights are granted by the data owner to the authorized users.

Key Agent/Generator
The goal of the key agent is to generate keys. A pair of keys is generated using the key pair data, which is encrypted and stored securely in the cloud. Data confidentiality and integrity are achieved through this encryption method.

GQ Protocol
The GQ protocol is a multistep authentication protocol that verifies the user in multiple steps. Through multistep authentication, fraudulent users can be weeded out. It is an identification protocol that provides authentication by processing 'n' rounds.

Authentication system through GQ Protocol
For the user authentication process, the GQ protocol runs numerous rounds in SIPS.
One-time setup: SIPS chooses two unique primes, S and R, and generates the modulus T = S·R. SIPS specifies a public exponent P_U > 4 with gcd(P_U, (S − 1)(R − 1)) = 1, which allows SIPS to compute the secret exponent s = P_U^(−1) mod (S − 1)(R − 1). These parameters are defined by SIPS.
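The one-time setup above amounts to forming the modulus, checking the coprimality condition, and inverting P_U modulo (S − 1)(R − 1). A short Python sketch (our function name; illustrative only):

```python
import math

def gq_setup(s: int, r: int, p_u: int):
    """One-time GQ setup: modulus T = S*R, public exponent P_U > 4 with
    gcd(P_U, (S-1)(R-1)) = 1, and secret exponent P_U^-1 mod (S-1)(R-1)."""
    t = s * r
    phi = (s - 1) * (r - 1)
    if p_u <= 4 or math.gcd(p_u, phi) != 1:
        raise ValueError("P_U must exceed 4 and be coprime to (S-1)(R-1)")
    secret = pow(p_u, -1, phi)   # modular inverse (Python 3.8+)
    return t, secret
```

For example, with the toy primes S = 101, R = 103 and P_U = 7, the setup yields T = 10403 and a secret exponent satisfying P_U · s ≡ 1 mod (S − 1)(R − 1).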

Selection of parameters for each user: Each user has a unique identifier ID(A) that is used to determine the value J(A) = f(ID(A)) mod T [redundant identity].
SIPS provides private data to each user, which can be determined using the protocol below. The user proves their identity to SIPS over 'N' rounds, each composed of the following steps:
i) The user chooses a random private value R_P and sends ID(A) and X = R_P^(P_U) mod T to SIPS.
ii) SIPS chooses a random challenge e in {1, 2, …, r}.
iii) The user calculates and replies to SIPS: Y = R_P · (private_user)^e mod T.
iv) SIPS receives Y, constructs J(user) = f(ID(user)) mod T, and verifies that Y^(P_U) · J(user)^e ≡ X (mod T).
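The round structure above follows the standard Guillou-Quisquater identification scheme, in which the user's secret SE and public identity value J satisfy the accreditation relation J · SE^P_U ≡ 1 (mod T). A self-contained sketch of one round and of the abort-on-first-failure loop (our function names; the challenge range and verification check follow the standard GQ form, which we assume matches the paper's intent):

```python
import secrets

def gq_round(t: int, p_u: int, j_user: int, se_user: int) -> bool:
    """One GQ identification round; True when the verifier's check passes.

    Assumes the accreditation relation J * SE^P_U = 1 (mod T)."""
    # i) Prover commits: random R_P, X = R_P^P_U mod T
    r_p = secrets.randbelow(t - 2) + 1
    x = pow(r_p, p_u, t)
    # ii) Verifier picks a random challenge e in {1, ..., P_U - 1}
    e = secrets.randbelow(p_u - 1) + 1
    # iii) Prover responds: Y = R_P * SE^e mod T
    y = (r_p * pow(se_user, e, t)) % t
    # iv) Verifier checks Y^P_U * J^e = X (mod T); this holds because
    #     Y^P_U * J^e = R_P^P_U * (SE^P_U * J)^e = X * 1^e
    return (pow(y, p_u, t) * pow(j_user, e, t)) % t == x

def authenticate(t, p_u, j_user, se_user, rounds: int = 5) -> bool:
    """Multistep authentication: abort on the first failed round."""
    return all(gq_round(t, p_u, j_user, se_user) for _ in range(rounds))
```

With a valid (J, SE) pair every round passes, while a prover holding the wrong secret fails the verifier's check, matching the abort behaviour described in the file-download section.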

Formal Analysis
TimeNET is a software tool used in the modelling and analysis of Stochastic Timed Petri Nets (STPNs). The following section briefly introduces STPNs prior to discussing the analysis.

Stochastic Petri Nets (STPN)
The TimeNET tool is used in the evaluation of STPNs in which transition firing times can be exponentially distributed. A Graphical User Interface (GUI) is used to specify the models, and the results are defined with a special-purpose syntax. Both continuous- and discrete-time-scale models are supported.
The analysis is conducted based on Markov regenerative theory, and the supplementary variable method is used for transient analysis. The tool provides different techniques for simulation experiments.

Analysis Theme of STPN's
An STPN consists of a five-tuple, STPN = (P, T, R, M_0, Λ), where P denotes the set of states, said to be places; T denotes the set of transitions; R is the flow-relation set, called the arcs; M_0 denotes the initial marking; and Λ is the firing-rate array associated with the transitions. The function λ(m) denotes the firing rate of the random variable for the current marking m. An STPN's reachability graph can be directly mapped to a Markov process: each state of the graph maps to a state of the Markov process, and each firing in the graph corresponds to a Markov state transition with some probability.
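The five-tuple can be represented directly as a small data structure. The sketch below is a toy token-game implementation of ours (not the TimeNET model format) that shows how places, transitions, arcs, a marking and firing rates fit together:

```python
from dataclasses import dataclass

@dataclass
class STPN:
    """STPN = (P, T, R, M0, Lambda): places, transitions, arcs,
    current marking (initially M0), and per-transition firing rates."""
    places: set
    transitions: set
    arcs: set            # subset of (P x T) union (T x P)
    marking: dict        # place -> token count
    rates: dict          # Lambda: transition -> firing rate

    def enabled(self, t) -> bool:
        """A transition is enabled when every input place holds a token."""
        return all(self.marking[p] > 0 for (p, q) in self.arcs if q == t)

    def fire(self, t) -> None:
        """Consume a token from each input place, add one to each output."""
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        for (a, b) in self.arcs:
            if b == t:
                self.marking[a] -= 1   # input arc: place -> transition
            elif a == t:
                self.marking[b] += 1   # output arc: transition -> place
```

Enumerating the markings reachable by such firings yields the reachability graph that the text maps onto a Markov process.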
Step 1: The key agent generates the asymmetric key K; a formula on the transition gen_key describes this process.
Step 2: The process is carried out to the next level, encryption. The data owner encrypts the file (F), which is then uploaded to the cloud in a secured manner.
Step 3: The key agent generates a pair of keys, of which one key is shared with the user and the other is stored in the key pair database.

SIPS gen key
Step 4: Authentication is the primary step in this approach. It is performed using the Guillou-Quisquater protocol (GQP), which conducts the multistep authentication process after a one-time setup that fixes several parameters, such as the user ID and file name, thereby preserving data privacy. The steps and procedure are discussed in detail under section (iii).

SIPS Authen user
Step 5: After the data is uploaded and authentication is successfully achieved, the next step is processed on the user's side, i.e., file downloading, which involves the decryption process. Key generation receives a decryption request from the user. After verifying the user's authentication and authorization status using GQP, the key is derived through the predefined steps. Key generation decrypts the data and replies to the user. To ensure privacy and to secure the generated keys, they are deleted subsequently.

Properties for Verification
• Unauthorized users are not allowed: a user cannot generate a valid key by acting as another user or by presenting a random key.
• An authorized user can access the data by generating a valid key contributed by the key manager (or generator). A malicious party cannot access the data, since the proposed methodology is strongly secured and authenticated.
Performance Evaluation

Experimental Setup
To assess the performance of the proposed SIPS methodology, the current approach was implemented using Code Dx, which provides a set of correlated results. The main goal of Code Dx is to prioritize and manage attack findings through interactive visualization of metrics, which suits the current scenario since it covers features such as system security, authentication and data integrity. The Code Dx API uses a RESTful design built on HTTP verbs such as GET, POST and DELETE. HTTP 200 OK is used to communicate success status from the server. Authentication relies on passing an API-Key HTTP header, which must be present in all API requests. HTTP 403 Forbidden is returned, generally with an empty response, for any invalid user or invalid endpoint.
E.g., the API-Key header looks as follows:
API-Key: 650e8300-e286-40d4-a617-557744550000
In general, UUIDs are used to generate API keys in Code Dx. To upload data, a new analysis is created in Code Dx as follows:
POST /api/project/:pid/analysis
All cryptographic operations were implemented with the RSA algorithm.
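Assembling such a request with the API-Key header can be sketched with Python's standard library. The helper name is ours, and the exact route and payload format are assumptions drawn from the description above, to be checked against the deployed server's API documentation; the sketch only builds the request object without sending it.

```python
from urllib.request import Request

def build_analysis_request(base_url: str, project_id: int, api_key: str,
                           payload: bytes) -> Request:
    """Build a POST to /api/project/:pid/analysis with the API-Key header.

    Per the description above, a valid key yields HTTP 200 OK while an
    invalid user or endpoint yields HTTP 403 Forbidden."""
    url = f"{base_url}/api/project/{project_id}/analysis"
    return Request(url, data=payload, method="POST",
                   headers={"API-Key": api_key})
```

The request would then be sent with `urllib.request.urlopen` (or any HTTP client) against a live server.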

Result Analysis
The proposed SIPS methodology was evaluated under different scenarios.

1) Key generation
The asymmetric keys were generated for every file as discussed earlier, and key sharing was done separately for every user. The proposed SIPS methodology was evaluated with specific reference to the time taken for key generation.
The researchers analyzed the time consumption for different numbers of users: 20, 40, 60, 80 and 100. Fig. 4 shows the results attained, i.e., the time consumed to generate keys. With an increase in the number of users, the time consumed for key generation also increased; however, the increase in time consumption was not proportional to the increase in the number of users. A slight decline was observed at the time of data submission.

Encryption and Decryption
The researchers analyzed the time taken for the encryption and decryption processes with varying data (or file) sizes; the file sizes used were 1, 10, 50, 100 and 500 MB. As defined earlier, key generation plays a vital role in this methodology before encrypting and decrypting the data, so the time required for key generation was compared with the total encryption and decryption times. The main purpose was to check and contain the overhead of key computation across the total number of encryption and decryption processes. Figs. 5 and 6 show the results of the encryption and decryption analysis: the time for encryption and decryption grows as expected with increasing file size. This shows that the proposed SIPS methodology was highly helpful in containing the computational time; the results infer that the key computation time was almost constant, with only negligible change during processing. The comparative analysis results infer that small files had a high percentage of key computation time relative to total encryption time. As per the comparison, a 200 kB file took 15% additional computation time over the total encryption time. When the file size increased to 2 MB, the time proportion reduced to 10%; at 20 MB, the percentage of time consumption reduced to 4%; and at 1000 MB, it remained at 0.54%. It is to be noted that the overall key computation time was in the range of 0.010 to 0.015 s. The decryption results were in line with the trends observed in the encryption process: the key computation share was around 15% for a 200 kB file and 2% for a 1000 MB file.
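The percentages quoted above compare key-computation time against total encryption time. A small measurement harness of the shape that could produce such figures is sketched below; the helper names are ours and the timed operations would be the actual key-generation and encryption routines.

```python
import time

def timed(fn, *args, repeats: int = 5) -> float:
    """Median wall-clock time of fn(*args) over several runs."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]

def key_share_of_total(key_time: float, enc_time: float) -> float:
    """Percentage of total time spent on key computation."""
    return 100.0 * key_time / (key_time + enc_time)
```

For instance, a key-computation time of 0.015 s against an encryption time of 0.085 s gives a 15% share, matching the small-file case reported above.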

File Upload/Download
The researchers evaluated the proposed SIPS methodology for the total time consumed to upload and download a file from the cloud. Fig. 4 shows the results for the time taken to upload the data, and Fig. 5 shows the results for downloading the data from the cloud, followed by the subsequent decryption process. The time consumed for uploading and downloading the data was the same. Tab. 1 presents the comparison of key generation times, and Tab. 2 compares the turnaround times. The proposed SIPS methodology is compared in Fig. 4 for key generation, Fig. 6 for file uploading and Fig. 5 for file downloading. These comparisons were based on the time consumed during key generation and the turnaround time taken for both encryption and decryption. To conclude, the comparison reveals that the SIPS methodology performed far better than other techniques due to its small overhead time.

Conclusion
The current study proposed and designed a novel methodology, SIPS, for secure data sharing in the cloud. The proposed methodology achieves data confidentiality, authentication, authorization and integrity, and performs secure data sharing without a double encryption process. The main aim of the proposed methodology is to ensure access control over the data so as to ward off malicious attackers. Moreover, the SIPS methodology assures the integrity of the data by confirming that it remains unmodified. Both encryption and decryption were performed with the help of the key generator, which acted as a trusted third party in the SIPS methodology. The proposed methodology can also be implemented in mobile cloud computing. The working of SIPS was formally analyzed using STPN and Code Dx, and the performance was evaluated based on time consumption in three scenarios: key generation, uploading and downloading data from the cloud. The results infer that the proposed SIPS methodology can be implemented in the cloud for secure data sharing. In future, the proposed model can be incorporated into real-time application areas. Besides, the presented model can be extended with lightweight cryptographic techniques.