Fotiou et al. Journal of Internet Services and Applications (2015) 6:11 DOI 10.1186/s13174-015-0026-4



RESEARCH Open Access

Access control as a service for the Cloud

Nikos Fotiou*, Apostolis Machas, George C Polyzos and George Xylomenos


Abstract

Cloud computing has become the focus of attention in the computing industry. However, security concerns still impede the widespread adoption of this technology. Most enterprises are particularly worried about the lack of control over their outsourced data, since the authentication and authorization systems of Cloud providers are generic and they cannot be easily adapted to the requirements of each individual enterprise. An adaptation process requires the creation of complex protocols, often leading to security problems and "lock-in" conditions. In this paper we present the design of a lightweight access control solution that overcomes these problems. With our solution, access control is offered as a service by a third trusted party, the Access Control Provider. Access control as a service enhances end-user privacy, eliminates the need for developing complex adaptation protocols, and offers data owners flexibility to switch among Cloud providers, or to use multiple, different Cloud providers concurrently. As a proof of concept, we have implemented and incorporated our solution in the popular open-source Cloud stack OpenStack. Moreover, we have designed and implemented a Web application that enables the incorporation of our solution into Google Drive.

Keywords: Authorization; Authentication; Delegation; Security; Policies

1 Introduction

Cloud computing is a technology that offers a cost-effective way for outsourcing data storage and computation. Nevertheless, despite its intriguing properties, enterprises are reluctant to fully adopt it, since they are concerned, among other things, about losing the governance of their outsourced assets, i.e., losing the ability to enforce their own, enterprise-specific, security policies. According to PwC's Global State of Information Security Survey 2012 [1], the largest perceived Cloud security risk is the "uncertain ability to enforce provider security policies," whereas according to the survey of Subashini and Kavitha [2], one of the biggest security challenges for providing Cloud-based services is the "adherence of the Cloud provider to the security policies of its clients," as well as "the administration of user authorization systems." This mismatch between provider and enterprise security policies severely impedes Cloud adoption, and further research on effective solutions for this problem is required. Indeed, "effective models for managing and enforcing data access policies, regardless of whether the data is stored in the Cloud or cached locally on client devices" was identified back in 2010 as a top research priority by the European Network and Information Security Agency (ENISA) [3].

Mobile Multimedia Laboratory, Department of Informatics, School of Information Sciences and Technology, Athens University of Economics and Business, Evelpidon 47A, 113 62 Athens, Greece

One question that may arise is how likely loss of governance of the outsourced data is, and what its impact would be. According to ENISA's Cloud Computing Security Risk Assessment report [4], the loss of governance is a risk with very high probability and very high impact. The same report states that two of the vulnerabilities that may expose an enterprise to that risk are "unclear roles and responsibilities" and "poor enforcement of role definition." This outcome comes as no surprise, since the organizational structure and the security policies of an individual enterprise cannot be easily captured by a Cloud provider. Moreover, interoperability between an enterprise and a Cloud provider requires the development of complex communication protocols; this, however, increases the chances of a security breach due to implementation errors, according to the Cloud Security Alliance [5]. Armando et al. [6] exploited such implementation errors in order to bypass the SAML-based single sign-on system of Google Apps. Similarly, Somorovsky et al. [7] gained access to multiple SAML-based systems by exploiting implementation bugs. Nevertheless, even if the developed protocol is implemented correctly, it will be Cloud provider specific, thus hindering the migration of an enterprise to another Cloud provider; this condition is known as lock-in, and has been identified as a high-probability risk by ENISA [4].

© 2015 Fotiou et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

In this paper, we propose a novel solution that enables a trusted entity to store enterprise-specific security policies and make access control decisions on behalf of a Cloud provider: the Cloud provider then only has to respect the access control decision. This trusted entity, which is referred to as the Access Control Provider (ACP), may be operated by the enterprise itself, for example by leveraging its user management system, or by a third party. Compared to existing systems, our solution offers better end-user privacy and requires a much simpler communication protocol.

This paper extends our previous work presented in [8], with a more detailed system description, an additional proof of concept implementation, more extensive overhead evaluation, and further comparison with existing systems. The paper is organized as follows. In Section 2 we discuss related work in this area. In Section 3 we detail our scheme. In Section 4 we present our prototype that implements a secure private Cloud file storage service using OpenStack, an open source Cloud stack, as well as a Web application that enables the incorporation of our solution in Google Drive. In Section 5 we evaluate the security properties of our solution and analyze its performance. Finally, in Section 6 we discuss further extensions to our solution and we conclude in Section 7.

2 Related work

Many legacy systems rely on Role Based Access Control (RBAC) for controlling access to resources stored by third parties (e.g., Cloud providers, web servers). These systems (e.g., [9-12]) usually adopt one of the following approaches for enforcing access control policies: (a) they either employ an existing language (such as XACML [13]) or define their own to specify the access control policy, which is then interpreted and enforced by the Cloud, or (b) they use cryptographic solutions (such as attribute based encryption [14]) to encrypt data in such a way that only authorized users can decrypt them. RBAC is orthogonal to our system: RBAC policy definition languages and roles can be used by the ACPs, whereas data stored in the Cloud can be encrypted based on roles. Our system is concerned with access control delegation rather than access control enforcement; for the latter, an RBAC solution may be used.

Single Sign-On (SSO) systems, such as Kerberos and, more recently, OpenID 2.0 [15] and OAuth 2.0 [16], have similar goals to our scheme. Kerberos has been widely used for controlling access to network resources. In a Kerberos system, a Ticket Granting Service (TGS) provides a "ticket" to an authenticated user that enables her to use a resource. The TGS and the resource, however, have to belong to the same administrative domain or be pre-configured with a shared secret. Our system requires neither common administrative domains nor pre-shared secrets.

OpenID is an identity management system that allows identity management delegation to a third trusted party, known as the Identity Provider (IdP). IdPs authenticate users and provide them with an "authentication token," which they can use to access a resource. OpenID has been studied in the context of Cloud computing. Nunez et al. [17] used OpenID in conjunction with proxy re-encryption in order to provide Cloud-based identity management services. Similarly, Khan et al. [18] have implemented OpenID-based authentication mechanisms for the OpenStack platform. OpenID provides only user authentication; therefore, in an OpenID-based access control system, the Cloud provider is responsible for evaluating the access control policies. Moreover, the authentication token is unique per user; therefore, user activity can be tracked. In our system, access control policies are evaluated by ACPs and not by the Cloud providers. In addition, in our system tokens are ephemeral, therefore they cannot be used to track the long-term activity of a specific user.

OAuth 2.0 is an IETF standard for authorizing access to resources over HTTP. OAuth 2.0 requires the resource owner to be online during the user authorization procedure (Section 1.2 of [16]), and implicitly requires the development of a communication protocol between the resource server and the authorization server in order to exchange an access token whose form, as mentioned in Section 1.4 of [16], is not specified. This vagueness impedes implementations of systems where the resource server and the authorization server belong to different administrative domains. An approach for implementing access control using OAuth 2.0 is the following: an access control policy based on attributes that can be provided by an authorization server (e.g., user age, as provided by a social network) is defined and stored in the Cloud; the Cloud provider then accesses the required attributes using OAuth 2.0 and uses them to evaluate the access control policy. In this scenario, the Cloud provider not only learns some information about the user (in this example, his age), but is also able to interpret it. In our system, Cloud providers neither learn anything about users nor do they have to understand any enterprise-specific semantics.

Policy Based Admission Control [19] is a framework that allows a Policy Enforcement Point (PEP) to delegate access control policy decisions to a Policy Decision Point (PDP). Each Cloud provider can operate a PEP, whereas PDPs can be implemented by third trusted parties, or even the enterprises themselves. A PEP is responsible for collecting all the information required by a PDP, which includes information about the user that requests access.

Moreover, a PEP and a PDP should agree on a (usually complex) communication protocol (e.g., COPS [20]). With our solution, Cloud providers are completely oblivious about access control policies. Moreover, Cloud providers neither collect nor learn any information about users. Finally, our communication protocol is much simpler, and therefore less prone to implementation errors.

The Security Assertion Markup Language (SAML) is an XML-based security assertion language [21], used for exchanging authentication and authorization statements about subjects. Being a language and not a system, SAML is orthogonal to our work. As a matter of fact, messages in our scheme can be exchanged via SAML, using the Authentication Request Protocol (Section 3.4 of [21]).

3 System design

3.1 Overview

Our scheme is composed of the following entities: the data owner (owner), the data consumer (consumer), the Cloud provider (CP), and the access control provider (ACP). The goal of an owner is to store some data in a CP and allow authorized consumers to perform operations over this data. Each operation is protected by an access control policy. An access control policy is stored in an ACP and maps the identity of a consumer to a boolean output (true, false). When the output of an access control policy is true, the consumer that provided the identification data is considered authorized.

In our scheme, the following trust relationships are considered: owners trust ACPs to authorize consumers, and owners and consumers trust CPs to respect the decisions of ACPs. The first type of trust relationship can be trivially established if the ACP is implemented by the owner (e.g., the ACP leverages the enterprise's user management system). The second type of trust relationship is a relaxed form of the trust relationship that currently exists between an owner and a Cloud provider: in a contemporary Cloud system where access control is implemented in the Cloud, an owner trusts a Cloud provider (i) to securely store some enterprise-specific security policies, (ii) to use these policies correctly, i.e., to understand their semantics, and (iii) to enforce the outcome of the access control decision.

As illustrated in Figure 1, a typical transaction in our system takes place as follows. Initially, an owner stores an access control policy in an ACP (step 1) and obtains a URI for that policy (step 2). As a next step, she implements an operation over some data in a CP and stores the URI of the policy that protects this operation (step 3). When a consumer tries to perform a protected operation for the first time (step 4), she receives in response the URI of the access control policy that protects the operation and a unique token (step 5). Then, the consumer authenticates herself to a suitable ACP by providing some form of identification data and requests authorization for the access control policy specified in the obtained URI (step 6). If the consumer "satisfies" the access control policy, the ACP signs the token and sends it back to the consumer (step 7). The consumer repeats her request to the CP, including this time the signed token (step 8). The CP checks the validity of the token and, if the token is valid, executes the desired operation and returns its output (step 9).
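The nine steps above can be sketched end to end as follows. This is a toy simulation: names such as acp.example/policy1 are ours, and an HMAC with a shared verification key stands in for the ACP's RSA signature (the actual scheme uses asymmetric keys).

```python
import hmac, hashlib, secrets, time

ACP_KEY = secrets.token_bytes(32)            # stands in for the ACP's private key

def acp_sign(message: bytes) -> bytes:
    # HMAC as a stand-in for SignACP(); a real ACP produces an RSA signature
    return hmac.new(ACP_KEY, message, hashlib.sha256).digest()

# Steps 1-2: the owner stores a policy at the ACP and obtains its URI
policies = {"acp.example/policy1": lambda c: c["dept"] == "sales"}

# Step 3: the owner binds the protected operation to the policy URI at the CP
access_table = {"update_records": "acp.example/policy1"}

# Steps 4-5: first (unauthorized) request; the CP answers with URIap and a token
token = secrets.token_hex(16)
token_table = {token: {"authenticated": False,
                       "expires": time.time() + 10,
                       "uri_ap": access_table["update_records"]}}

# Steps 6-7: the consumer authenticates to the ACP, which evaluates the policy
# and signs (Token, Lifetime, URIap, PubCP)
consumer = {"dept": "sales"}
uri_ap = access_table["update_records"]
assert policies[uri_ap](consumer)            # the policy outputs true
lifetime = time.time() + 300
auth = f"{token}|{lifetime}|{uri_ap}|pub-cp".encode()
signature = acp_sign(auth)

# Steps 8-9: the CP reconstructs auth from its own tables and checks the signature
rebuilt = f"{token}|{lifetime}|{token_table[token]['uri_ap']}|pub-cp".encode()
ok = hmac.compare_digest(acp_sign(rebuilt), signature)
if ok:
    token_table[token].update(authenticated=True, expires=lifetime)
```

Note that the CP never evaluates the policy itself; it only verifies that the signed message it reconstructs matches what the ACP signed.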

3.2 Goals

Our goal is to build a system in which the following properties hold:

• The system is secure: Provided that all system entities respect the trust relationships described previously, it shall not be possible for an unauthorized user to perform a protected operation.

• Consumer privacy is preserved: A CP shall gain minimal information about the identity of a consumer. Ideally, it will only learn that a consumer can be authorized by a specific ACP. Moreover, an ACP should not be able to tell which operation a consumer wants to perform or which data she accesses.

• Data can be easily migrated among different Cloud providers: The only entities that should be aware of an access control policy and its implementation details are the ACP and the owner. CPs shall be oblivious about the access control policy implementation details. Therefore, if two CPs implement our solution, moving data from one CP to another shall be almost as trivial as copy-pasting it.

• An access control policy does not reveal anything about the data and the operations it protects: Access control policies should be decoupled from the data and the operations they protect. An access control policy should be defined taking into account solely consumer attributes.

• Access control policies are re-usable: An access control policy should not be bound to a particular operation. It should be possible to use the same policy to protect many and diverse data items, stored in multiple CPs.

• An access control policy can be easily modified: The modification of an access control policy shall not involve CPs: the only entity involved in the modification of an access control policy should be the ACP where the policy is stored.

3.3 Detailed system description

We now detail our system design (Figure 2). We have made the following assumptions: (i) ACPs and CPs have a public-private key pair, (ii) the ACPs' and CPs' public keys are known to the consumers, and (iii) all messages are exchanged over a secure channel. Throughout this section the notation of Table 1 is used.

Figure 2 System procedures.

Table 1 Notation

PubCP         The public key of a CP

PubACP        The public key of an ACP

URIap         The URI of an access control policy

SignACP(Y)    The digital signature of plaintext Y, generated using the private key of an ACP

Our system consists of the following procedures:

3.3.1 Access control policy creation and data storage

With this procedure an owner creates and stores an access control policy in an ACP. The ACP in return provides a URIap. For each protected operation implemented in a CP, the owner defines the URIap of the policy that protects it and the PubACP of the ACP where the policy is stored. This information is maintained in the CP's Access Table that contains tuples of the form:

[ operation, URIap, PubACP]

A URIap is re-usable, i.e., it can be used for protecting multiple operations stored in various CPs. The mechanisms for creating an access control policy and for updating an Access Table are ACP specific and CP specific, respectively.
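Since the prototype of Section 4 keeps its tables in SQLite, the Access Table described above might be sketched as follows; the schema and the inserted values are illustrative, not the actual implementation.

```python
import sqlite3

# Access Table: one [operation, URIap, PubACP] tuple per protected operation
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE access (operation TEXT PRIMARY KEY, uri_ap TEXT, pub_acp TEXT)")
db.executemany("INSERT INTO access VALUES (?, ?, ?)", [
    ("update_records",       "acp.example/policy1", "PubACP"),
    ("calculate_statistics", "acp.example/policy2", "PubACP"),
    ("view_statistics",      "acp.example/policy3", "PubACP"),
])

# Lookup performed when a request for an operation arrives:
uri_ap, pub_acp = db.execute(
    "SELECT uri_ap, pub_acp FROM access WHERE operation = ?",
    ("calculate_statistics",)).fetchone()
```

The same uri_ap value could appear in the Access Tables of several CPs, reflecting the re-usability of policies.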

3.3.2 Unauthorized request

This procedure is executed by a consumer in order to perform an operation for the first time. The consumer sends an operation request message to the CP. Upon receiving the request the CP creates a unique token (i.e., an adequately large random number) and sends it back to the consumer, along with the corresponding URIap. Therefore, the following exchange of messages takes place:

(1): Consumer → CP : operation request

(2): CP → Consumer : URIap, Token

In order to keep track of the generated tokens, a CP maintains a Token Table that contains entries of the form:

[ Token, authenticated, expires, URIap]

When a new token is generated, a new entry is added to this table. The value of the authenticated field of this entry is set to false and the value of the expires field to the generation time plus a very small amount of time, sufficient to obtain an authorization.
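A minimal sketch of this token issuance, with our own choice of data structures and a hypothetical 10-second grace period standing in for the "very small amount of time":

```python
import secrets, time

TOKEN_GRACE_SECONDS = 10      # our choice; just long enough to obtain authorization

token_table = {}

def issue_token(uri_ap: str) -> str:
    token = secrets.token_hex(16)        # an adequately large random number
    token_table[token] = {"authenticated": False,
                          "expires": time.time() + TOKEN_GRACE_SECONDS,
                          "uri_ap": uri_ap}
    return token

t = issue_token("acp.example/policy1")
```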

3.3.3 Consumer authentication and authorization request

This procedure is executed by a consumer upon receiving a response to an unauthorized request. Firstly, the consumer sends her identification data, PubCP, URIap, and the token to an ACP responsible for evaluating the access control policy identified by URIap. If the consumer satisfies URIap, the ACP creates an authorization message that contains the token, the amount of time that the token should be valid (i.e., its lifetime), URIap, and PubCP. Then it signs this message and sends it back to the consumer. Therefore, the following messages are exchanged:

(3): Consumer → ACP : ID, PubCP, URIap, Token

(4): ACP → Consumer : auth, SignACP(auth)

auth = Token, Lifetime, URIap, PubCP

3.3.4 Authorized request

This procedure is executed by an ACP-authorized consumer in order to perform an operation. The consumer sends a message that includes the operation request, the token, the token's lifetime, and the signature of the authorization message (i.e., of message (4)). Therefore the following message is sent:

(5): Consumer → CP : operation request, Token, Lifetime, SignACP(auth)

Upon receiving this message, a CP should decide if the consumer is allowed to perform the requested operation. Therefore, it executes the following algorithm (Figure 3):

1. Retrieve the entry of the Token Table that contains the token and check if the token has expired. If it has expired, return an error

2. If the authenticated field of the corresponding record in the Token Table is false, then

(a) Retrieve the PubACP that corresponds to the operation from the Access Table

(b) Retrieve the URIap that corresponds to the token from the Token Table

(c) Reconstruct the authorization message

(d) Verify SignACP(auth), using PubACP

(e) If the signature verification succeeds, update the Token Table entry as follows: set the expires field equal to the Lifetime field of the authorization message and set the authenticated field to true. Proceed to Step 3a below.

(f) If the signature verification fails, return an error

3. If the authenticated field of the corresponding record in the Token Table is true, then

(a) Find the URIap that corresponds to the token from the Token Table

(b) Find the URIap of the requested operation from the Access Table

(c) Check if the retrieved values match. If they match, return success; otherwise, return an error
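The three-step algorithm above can be sketched as a single function. The verify_sig callback and the table shapes are our own assumptions, and a toy equality check stands in for RSA signature verification.

```python
import time

def decide(op, token, lifetime, signature, token_table, access_table, verify_sig):
    entry = token_table.get(token)
    if entry is None or entry["expires"] < time.time():
        return "error"                              # step 1: unknown or expired token
    if not entry["authenticated"]:                  # step 2: token not yet authenticated
        uri_ap, pub_acp = access_table[op]          # step 2a
        auth = (token, lifetime, entry["uri_ap"], "pub_cp")   # steps 2b-2c
        if not verify_sig(pub_acp, auth, signature):
            return "error"                          # step 2f
        entry.update(expires=lifetime, authenticated=True)    # step 2e
    # step 3: compare the URIap from the Token Table with the one in the Access Table
    return "success" if access_table[op][0] == entry["uri_ap"] else "error"

# Demonstration with a toy signature scheme (a real CP verifies an RSA signature):
verify = lambda pub, auth, sig: sig == ("sig", auth)
tt = {"tok": {"authenticated": False, "expires": time.time() + 10,
              "uri_ap": "acp.example/policy2"}}
at = {"calculate_statistics": ("acp.example/policy2", "PubACP")}
life = time.time() + 300
sig = ("sig", ("tok", life, "acp.example/policy2", "pub_cp"))
first = decide("calculate_statistics", "tok", life, sig, tt, at, verify)
second = decide("calculate_statistics", "tok", life, sig, tt, at, verify)
```

The second call exercises step 3 only, since the first call has already marked the token as authenticated.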

Figure 3 Authorized request decision process.

If this procedure is successful, then any subsequent authorized request may include only the token. Moreover, the same token can be used multiple times, even for invoking different operations protected by the same URIap.

3.4 Use case

Let us now illustrate our scheme through a use case. Enterprise A has outsourced sales records storage and analysis to Cloud provider CPA. The operations implemented in CPA are: update sales records, calculate statistics, and view statistics. Enterprise A has the following access control policies:

• Policy 1: All sales department employees can update sales records

• Policy 2: Only the sales department director can calculate statistics

• Policy 3: All shareholders can view the statistics

Enterprise A implements the above access control policies in an ACP owned by itself. The public key of this ACP is denoted by PubACP. For each policy the ACP generates a URI, and CPA's Access Table is updated as shown in Table 2.

The sales department director issues an unauthorized request for the calculate statistics operation. CPA generates a token, namely Token1, and responds by sending the URIap of Policy 2 together with Token1. CPA's Token Table is then updated with the entry shown in Table 3.

Table 2 CPA access table new entries

Operation              URIap              ACP public key
Update records         URI of Policy 1    PubACP
Calculate statistics   URI of Policy 2    PubACP
View statistics        URI of Policy 3    PubACP

Table 3 CPA token table new entries

Token     Authenticated   Expires       URIap
Token1    false           timestamp1    URI of Policy 2

Table 5 CPA access table using level extension

Operation              URIap              Level
Update records         URI of Policy 1    100
Calculate statistics   URI of Policy 2
View statistics        URI of Policy 3

The ACP public key column is not shown.

As a next step, the sales department director authenticates himself to the ACP, which responds with the following, digitally signed, authorization message: (Token1, timestamp2, URIap, PubCPA), where URIap is the URI of Policy 2. Then, the sales department director issues the following authorized request: ("calculate statistics", Token1, timestamp2, SignACP(auth)). CPA checks if Token1 has expired. Then, it reconstructs the authorization message by retrieving the URIap associated with the calculate statistics operation from the Access Table and verifies SignACP(auth) using PubACP (also found in the Access Table). Finally, CPA checks if the URIap found in the Access Table matches the URIap included in the entry for Token1 in the Token Table. If all these steps are successful, CPA executes the calculate statistics operation and modifies the entry for Token1 in the Token Table as shown in Table 4.

Since Token1 is now marked as authenticated, the sales department director can use it in all subsequent requests, until it expires. Moreover, as long as Token1 remains valid, SignACP(auth) does not have to be included in subsequent requests.

3.5 The "level" extension

In the above use case, it can be observed that if the sales department director wishes to invoke the update records operation, he has to re-authenticate himself, since this operation is protected by a different URIap. The level extension mitigates this shortcoming by adding a new field to an Access Table: the consumer level. The consumer level is a number that denotes the minimum level that a consumer should have in order to invoke an operation. Using this extension, the Access Table of the Cloud provider considered in the use case of Section 3.4 can be modified as shown in Table 5.

With this extension, an ACP has to include the consumer level in the authorization messages. Moreover, a CP now takes part in the access control decision, since it has to check if the level included in the authorization message is greater than or equal to the level included in the Access Table. Finally, if the level extension is used, Token Tables should additionally include the level that corresponds to a token.

Suppose that the level of the sales department director in the previous use case is 200. Then, he would be able to successfully invoke the update records operation, using Token1, without re-authenticating himself.
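The additional CP-side check that the level extension introduces reduces to a single comparison; a minimal sketch:

```python
def level_allows(consumer_level: int, required_level: int) -> bool:
    # The level carried in the signed authorization message must reach the
    # minimum level stored in the Access Table for the requested operation.
    return consumer_level >= required_level

# The director (level 200) may reuse his token for update records (level 100),
# while a level-100 employee could not invoke a level-200 operation.
director_ok = level_allows(200, 100)
employee_ok = level_allows(100, 200)
```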

Table 4 CPA token table modified entry

Token     Authenticated   Expires       URIap
Token1    true            timestamp2    URI of Policy 2

4 Implementation

As a proof of concept we implemented a secure file storage service using OpenStack [22], a popular open-source Cloud stack, as well as a Web application that allows the incorporation of our solution in Google Drive [23]. The ACP and the consumer software used in both implementations are the same. Our implementation supports the level extension. As a public-key encryption system we use RSA. Public keys are encoded in JSON format using the keyCzar [24] Python library. The keyCzar library is also used for generating digital signatures.

4.1 ACP and consumer software

The ACP of our proof of concept is implemented as a PHP application hosted on an Apache web server. An SQLite database is used for storing username-password pairs, as well as username to URIap-level mappings. Usernames are unique and a username can be mapped to many URIap-level pairs (e.g., Table 6). The consumer software implements the authentication and authorization request by encoding the username, the password, and the request parameters in a JSON object and POSTing this object to a particular URL, using HTTPS. The response to this request is again encoded in a JSON object. The consumer software has been pre-configured with the public keys of the CP and the ACP components.
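The consumer-side request described above might be assembled as follows; the field names of the JSON object are our assumption, not the actual wire format used by the prototype.

```python
import json

def build_auth_request(username, password, pub_cp, uri_ap, token):
    # All credentials and request parameters travel in a single JSON object,
    # which is POSTed to the ACP's endpoint over HTTPS (e.g., with urllib.request)
    return json.dumps({"username": username, "password": password,
                       "pub_cp": pub_cp, "uri_ap": uri_ap, "token": token})

payload = build_auth_request("fotiou", "12345", "PubCP", "mmlab/Policy1", "tok123")
```

The ACP's JSON response would carry auth and SignACP(auth), which the consumer then forwards to the CP.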

Table 6 An instance of the user management system

Username    Password
fotiou      12345
machas      12345
polyzos     12345
xylomenos   12345

Username    URIap            Level
fotiou      mmlab/Policy1    100
fotiou      mmlab/Policy2    200
machas      mmlab/Policy1    200
machas      mmlab/Policy3    300
polyzos     mmlab/Policy3    100
polyzos     mmlab/Policy4    200








4.2 OpenStack-based implementation

For our OpenStack-based CP (Figure 4), we leveraged the functionality of the OpenStack component Swift, which is used for building object storage systems. A Swift-based object storage system is composed of two networks: the internal (private) network, which consists of storage nodes, and the external (public) network, which consists of a proxy server and (optionally) an authentication server. The proxy server accepts HTTP(S) requests and processes them using the Web Server Gateway Interface (WSGI). The parameters used in each request are encoded in HTTP headers. Each request is pipelined through a number of add-ons, each of which may transform it, forward it, or respond on behalf of the system to the user.

Objects stored in a Swift-based system are organized in a three-level hierarchy. The topmost level of this hierarchy is the accounts level, followed by the containers level (second level) and the objects level (third level). The accounts level contains user accounts. Each user account is associated with many containers from the containers level. A container is used for organizing objects; therefore, a container is associated with many objects from the objects level. An object may be a file or a folder (that contains other objects). Every object within a container is identified by a container-unique name. Each request for an operation over an object contains a URI that denotes the account, the container, and the name of the object in question, i.e., it is of the form "https://CPHostName/accountname/containername/objectname".
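Splitting such a URI into its three hierarchy levels can be sketched as follows (host and path are illustrative values):

```python
from urllib.parse import urlparse

def split_swift_path(url: str):
    # account/container/object are the three hierarchy levels described above
    account, container, obj = urlparse(url).path.lstrip("/").split("/", 2)
    return account, container, obj

split_swift_path("https://cp.example/acct1/photos/cat.jpg")
# -> ("acct1", "photos", "cat.jpg")
```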

We implemented our system as a Swift add-on, added to the pipeline of add-ons that process incoming requests. This add-on replaces Keystone, the default OpenStack component that handles user authentication. Our implementation allows file storage and retrieval, as well as the following operations over the stored files: organizing files in containers, listing the files of a container, copying a file, moving a file, and deleting a file. Token and Access Tables are implemented as SQLite tables. An owner hard-codes in the Access Table records of the form: [path, URIap, level, PubACP]. A path may be account-wide, container-wide, or object-wide.

Initially, the consumer software sends an unauthorized GET/POST request over HTTPS. The desired operation is specified in an HTTP header and the URL of the request denotes the object (or the container, or the account) that will be used as input to the operation. When an unauthorized request is pipelined through our add-on, the add-on checks if a URIap exists in the Access Table for the URL specified in the request: if such a URIap exists, the add-on generates a new token, using the token generation mechanism provided by Swift, and creates a response (as described in Section 3.3); each part of the response is encoded in an HTTP header. The add-on then creates a new entry in the Token Table. The initial expiration time of a token is set equal to the current time plus 10 sec. Upon receiving the response, the consumer software initiates the authentication and the authorization process described in Section 4.1. As a next step, the consumer software sends an authorized request, encoding all request parameters in HTTP headers. The add-on executes the authorized request decision algorithm and produces the appropriate output.
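A skeletal WSGI filter in the spirit of this add-on might look as follows; the header names and the issue_token/decide callbacks are our own illustrations, not the actual Swift middleware.

```python
def access_control_filter(app, access_table, issue_token, decide):
    def middleware(environ, start_response):
        op = environ.get("HTTP_X_OPERATION", "")
        token = environ.get("HTTP_X_AUTH_TOKEN")
        uri_ap = access_table.get(op)
        if uri_ap is None:                       # operation not protected
            return app(environ, start_response)
        if token is None:                        # unauthorized request (Section 3.3.2)
            start_response("401 Unauthorized",
                           [("X-URIap", uri_ap), ("X-Token", issue_token(uri_ap))])
            return [b""]
        if decide(environ, token) == "success":  # authorized request (Section 3.3.4)
            return app(environ, start_response)
        start_response("403 Forbidden", [])
        return [b""]
    return middleware

# Exercising the filter with stub components:
captured = {}
def start(status, headers):
    captured["status"], captured["headers"] = status, dict(headers)
def storage_app(environ, start_response):
    start_response("200 OK", [])
    return [b"stored object"]

mw = access_control_filter(storage_app, {"upload": "acp.example/p1"},
                           lambda uri: "tok1", lambda env, tok: "success")
mw({"HTTP_X_OPERATION": "upload"}, start)                               # no token: 401
mw({"HTTP_X_OPERATION": "upload", "HTTP_X_AUTH_TOKEN": "tok1"}, start)  # token: 200
```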

Figure 4 OpenStack-based implementation.

4.3 Google Drive-based Web application

Google Drive is a popular Cloud-based storage service. Google Drive provides a rich API that can be used for building applications that interact with the service over HTTPS. In our implementation we used this API and built a Web application that extends (part of) the Google Drive API, thus providing support for our protocol (Figure 5). Our application is built using the Google App Engine [25] and the Python language. Access Tables and Token Tables have been implemented using the Google App Engine Datastore. Currently, our application supports operations for uploading and downloading files. Each operation can be invoked by making an HTTPS call to the operation-specific URL. All call parameters are encoded in HTTP headers.

Our application has been configured with a Google Drive account which is kept secret. Instead of interacting with the "drive" directly, the consumer software interacts with the application, which acts as middleware, ensuring that only an authorized consumer can perform the implemented operations. The consumer learns no information about the Google Drive account.

The owner hard-codes in the Web application a URIap that controls who can invoke the upload file operation. A consumer initially performs an unauthorized request for uploading a file (the file is not included in this request). The Web application generates a token using the UUID Python function, responds to the consumer by encoding the token in an HTTP header, and updates the Token Table. The consumer software initiates the authentication and the authorization process described in Section 4.1. Then, it issues an authorized request, by encoding the request parameters in HTTP headers and the file as raw POST data. The Web application executes the authorized request decision algorithm and, if the consumer is allowed to upload the file, stores it in Google Drive. When uploading files, consumers are able to specify a URIap that controls who can invoke the download file operation for that specific file.

5 Evaluation

5.1 Security evaluation

It can be easily observed that our system enhances consumer privacy. The only information that a CP learns about a consumer is his trust relationship with a particular ACP; if the level extension is used, the CP also learns his level. Of course, the latter can be encoded in a way that reveals no meaningful information. Any other sensitive information is stored in a (trusted) ACP. Moreover, regardless of the lifetime of a token, a consumer may drop it and request a new one in order to avoid CP profiling. Finally, an ACP gains no information about the operations a consumer invokes and the data he accesses: the only information that an ACP learns is the public key of the CP with which the consumer interacts.

Another security feature of our system is that access control policies can be easily modified. Access control policies are stored in a single point (i.e., the ACP) and all CPs have pointers to policies. Therefore, the modification of an access control policy does not involve communication with any CP. When an access control policy is modified, all new consumers will be authorized using the new policy, whereas all already authorized consumers will be re-authorized with the new policy when their token expires.

We now proceed to the security analysis of our system using the threat model proposed by Wang et al. [26], adapted to our system. In our analysis we consider three different attack scenarios. In all scenarios we assume that messages are exchanged over a secure channel and that communication endpoints cannot lie about their identity. We do not consider the case in which a malicious entity acts as an ACP and steals the credentials of a consumer, since this attack is out of the scope of our system.

Figure 5 Google Drive-based web application.

5.1.1 Malicious entity acting as a consumer

In this attack scenario a malicious entity, ConM, tries to perform an operation protected by an access control policy URIleg stored in ACPA. ConM can only be authorized for the access control policy URImal, also stored in ACPA. ConM's goal is to obtain an authorization message of the form (Token, Level, Lifetime, URIleg, PubCP). By following our protocol, ConM will receive an authorization message of the form (Token, Level, Lifetime, URImal, PubCP). If ConM includes the signature of this message in his authorized request, the authorized request decision algorithm will result in an error, since the CP will generate a different authorization message for which this signature is not valid (Figure 6). The only way for ConM to obtain a valid signature is to include URIleg in the authentication and authorization request, i.e., ConM should send to ACPA an authentication and authorization request of the form (ID, PubCP, URIleg, Token). However, since ConM does not abide by URIleg, this message will result in an error.
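The failing check can be sketched as follows. The authorization message layout (Token, Level, Lifetime, URIap, PubCP) is taken from the text, but everything else is an assumption of this sketch: the paper uses RSA signatures generated by the ACP, while an HMAC is used here only to keep the example self-contained.

```python
import hashlib
import hmac

# Stand-in for the ACP's signing key; the paper uses RSA signatures, but an
# HMAC keeps this sketch self-contained.
ACP_KEY = b"demo-acp-key"

def auth_message(token, level, lifetime, uri_ap, pub_cp):
    # Authorization message layout from the text:
    # (Token, Level, Lifetime, URI_AP, PubCP)
    return "|".join([token, str(level), str(lifetime), uri_ap, pub_cp]).encode()

def acp_sign(msg):
    return hmac.new(ACP_KEY, msg, hashlib.sha256).hexdigest()

def authorized_request_decision(token, level, lifetime, uri_ap, pub_cp, sig):
    """CP side: rebuild the authorization message from the CP's own state
    and verify the ACP's signature against it."""
    expected = acp_sign(auth_message(token, level, lifetime, uri_ap, pub_cp))
    return hmac.compare_digest(expected, sig)

# ConM is only authorized for URI_mal, so the ACP signs a message naming
# URI_mal; the CP, which protects the operation with URI_leg, rebuilds the
# message with URI_leg and rejects that signature.
sig_mal = acp_sign(auth_message("tok1", 1, 3600, "URI_mal", "PubCP"))
assert not authorized_request_decision("tok1", 1, 3600, "URI_leg", "PubCP", sig_mal)
```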

5.1.2 Malicious entity acting as a CP

In this attack scenario the attacker's goal is to perform an operation in CPA, protected by an access control policy URIA stored in ACPA. The attacker is able to pretend to be a Cloud provider, CPmal, as well as to lure a consumer ConL, who can be authorized for URIA, into performing this operation. Therefore, this is a man-in-the-middle type of attack.

The attacker initially sends an unauthorized request to CPA and receives TokenA and URIA. In order for this attack to be successful, the attacker has to obtain an authorization message of the form (TokenA, Level, Lifetime, URIA, PubCPA). ConL is lured into sending an unauthorized request to CPmal (i.e., to the attacker), which responds with a message of the form (URIA, TokenA). Subsequently, ConL sends an authentication and authorization request to ACPA of the form (ID, PubCPmal, URIA, TokenA), and receives the authorization message (TokenA, Level, Lifetime, URIA, PubCPmal). If the attacker sends an authorized request using the signature of this message, the authorized request decision algorithm will result in an error, since CPA will generate an authorization message that includes PubCPA and not PubCPmal (Figure 7).

5.1.3 Malicious entity co-located with a consumer

This attack scenario is applicable when a CP maintains a user management system and associates operations over

Signature verification failed

Figure 6 Malicious entity acting as a consumer.

Signature verification failed

Figure 7 Malicious entity acting as a CP.

protected data with particular users (e.g., for charging reasons). In tis scenario a CP also maintains in its Token Table the identifier of the (CP) user for whom the token has been generated. The goal of an attacker in this scenario is to make a CP believe that a consumer ConL wants to perform a protected operation. In this scenario the attacker is a valid CP user and he is eligible to perform the same operations as ConL. Moreover, the attacker is able to inject messages on behalf of ConL.

In this attack scenario, the attacker requests to perform an operation OPA and proceeds through all the steps until he receives the authorization message. At this point, instead of sending an authorized request on his own behalf, he sends it on behalf of ConL. It can be easily observed that this attack is trivially mitigated, since the CP also maintains the identifiers of the users that correspond to each token; therefore, this message will be rejected (Figure 8). It should be noted, however, that this is possible due to our design choice to have the CP generate the tokens, which is not always the case in similar systems. This attack, for example, was successfully exploited by Wang et al. [26] against three popular websites that were using Facebook Connect and Twitter OAuth for associating their user accounts with their corresponding Facebook and Twitter profiles.
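The mitigation amounts to binding each token to the CP user it was issued to. The following sketch illustrates this check; the Token Table field names and function names are hypothetical, not the paper's implementation.

```python
# Sketch of a Token Table that also records the CP user each token was issued
# to (field names are hypothetical). An authorized request arriving under a
# different user identity is rejected, which defeats the message-injection
# attack described above.
TOKEN_TABLE = {}

def issue_token(token, uri_ap, user_id):
    TOKEN_TABLE[token] = {"uri_ap": uri_ap, "user": user_id}

def accept_authorized_request(token, user_id):
    entry = TOKEN_TABLE.get(token)
    return entry is not None and entry["user"] == user_id

issue_token("tok-attacker", "URI_AP", user_id="attacker")
# The attacker injects the authorized request on behalf of ConL:
assert not accept_authorized_request("tok-attacker", "ConL")
```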

5.2 Overhead

In our implementation, HTTP methods are used for invoking the desired operation. As a public-key encryption system we use RSA. The size of an RSA public key is 2048 bits, whereas the size of a JSON-encoded public key is 400 bytes. Tokens are encoded as 32-byte hex strings, digital signatures as 512-byte hex strings, and token lifetimes as 8-byte hex strings. Finally, a single byte is used for representing access levels. When a consumer wants to invoke an operation in a CP protected by a URIap, a number of messages have to be exchanged. If an ACP has already generated an authorization message for the consumer for URIap and the corresponding token has not expired, then only a single message, from the consumer to the CP, has to be sent. In any other case five messages have to be exchanged: three between the consumer and the CP, and two between the consumer and the ACP.
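A back-of-the-envelope payload estimate follows from the encodings listed above. The assumptions here are ours: the fields are simply concatenated, the signature travels together with the authorization message, framing and HTTP overhead are ignored, and the 64-byte URI length is an arbitrary example.

```python
# Field sizes, in bytes, as listed above (hex-string encodings).
FIELD_BYTES = {
    "token": 32,          # 32-byte hex string
    "level": 1,           # single byte
    "lifetime": 8,        # 8-byte hex string
    "pub_key_json": 400,  # JSON-encoded RSA public key
    "signature": 512,     # 512-byte hex string
}

def authorization_payload_bytes(uri_len):
    """Approximate payload of an authorization message plus its signature,
    assuming plain concatenation of the fields (framing ignored)."""
    return sum(FIELD_BYTES.values()) + uri_len

print(authorization_payload_bytes(64))  # 1017 bytes with a 64-byte URI_AP
```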

Figure 8 Malicious entity co-located with a consumer.

It can therefore be observed that an ACP and a consumer have a strong incentive to use long-lasting tokensb: the longer the lifetime of a token, the lower the communication overhead for the ACP and the consumer. On the other hand, long-lasting tokens increase the state that a CP has to maintain in its Token Table. In order to illustrate this tradeoff, we simulate the following scenario: we consider a CP that hosts files of 100 different enterprises. Each enterprise has defined a single protected operation. Moreover, each enterprise has 100 employees who invoke the operation stored in the CP following a Poisson process with rate 0.1/min. We simulate a usage period of 8 hours and every 5 min we measure the average network load of each enterprise (caused by the messages exchanged with the ACP), as well as the size of the CP's Token Table (the measured size is the average of all the sizes the Token Table had within the 5 min measurement period). We consider two types of tokens: a token with a short lifetime (20 min) and a token with a long lifetime (2 hours). Figure 9 illustrates the average Token Table size of the CP throughout the simulation period, whereas Figure 10 illustrates the average number of messages transmitted inside each enterprise's network throughout the simulation period.
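The tradeoff can be reproduced with a small discrete-event sketch. This is a simplification of the setup above, not the paper's simulator: it models a single enterprise (100 employees, rate 0.1/min, 8 hours), counts two consumer-ACP messages per token fetch, and samples the Token Table size every 5 minutes.

```python
import random

def simulate(lifetime_min, n_consumers=100, rate_per_min=0.1, hours=8, seed=1):
    """Simulate one enterprise: return (average Token Table size, sampled
    every 5 min, and the number of messages exchanged with the ACP)."""
    random.seed(seed)
    events = []
    for consumer in range(n_consumers):       # Poisson arrivals per employee
        t = random.expovariate(rate_per_min)
        while t < hours * 60:
            events.append((t, consumer))
            t += random.expovariate(rate_per_min)
    events.sort()
    expiry, acp_msgs = {}, 0
    samples, next_sample = [], 5.0
    for t, consumer in events:
        while next_sample <= t:               # sample table size every 5 min
            samples.append(sum(1 for e in expiry.values() if e >= next_sample))
            next_sample += 5.0
        if expiry.get(consumer, -1.0) < t:    # no valid token: contact the ACP
            expiry[consumer] = t + lifetime_min
            acp_msgs += 2                     # two consumer<->ACP messages
    return sum(samples) / len(samples), acp_msgs

short_tokens = simulate(20)    # 20-minute tokens
long_tokens = simulate(120)    # 2-hour tokens
# Expect: a larger average Token Table but fewer ACP messages with long tokens.
```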

5.3 Comparison with existing systems

We now compare our solution with two popular related systems: Google Drive and Amazon S3.

5.3.1 Google drive

The Google Drive Cloud-based storage service enables users to access, share, and organize their files in the Cloud. The Google Drive API provides a limited set of policies, namely "full access", "read only access", "metadata only access", and "specific file access". These policies are not applied per stored item; instead, they are granted in the form of "permissions" to applications that want to access a specific drive. Before using a "drive", an application requests from the drive owner one of the aforementioned permission types; the drive owner authenticates himself using a Google account and grants permissions using OAuth 2.0. In most cases, the user that executes the application that requests permissions and the owner of the drive are the same entity. Permissions are granted in the form of a token that never expires: in order for a drive owner to remove permissions from a specific application, she has to revoke the token manually. Google Drive does not support integration with enterprise-specific authentication and authorization systemsc.

In order for an application to perform an operation the following messages have to be exchanged (here we consider that the user executing the application is the drive owner, referred to as the consumer):

1. Consumer → Google Auth: Request permission
2. Consumer → Google Auth: Authenticate
3. Consumer → Google Auth: Grant permission
4. Google Auth → Consumer: Token
5. Consumer → Google Drive: Operation, Token

Compared to our system, the same number of messages is required. Nevertheless, messages 1 to 4 are usually sent only once, since tokens never expire. It should also be noted that the entity that performs the authorization is the drive owner herself (the consumer); therefore, authorization is a manual process.

5.3.2 Amazon S3

Amazon Simple Storage Service, or S3 for short, is a well-known Cloud-based file storage service. S3 provides Web services that allow users to store and organize their files in the Cloud. Files are organized in "buckets". A user may set Access Control Lists (ACLs) that define the permissions that a user or a group of users have over a specific bucket, or over a specific file. ACLs are encoded in XML and the permissions that can be granted are "read", "write", "read ACL", "modify ACL", and "full control". For more fine-grained access control, S3 provides an "access control policy language" that allows users to create bucket-specific policies. These policies can control access to a bucket and its objects based on user identities, source IP addresses, time and date, and some other parameters.

S3 provides an API that allows users (consumers) to be authenticated using their own (enterprise specific) identity provider. In order for an operation to be performed the following messages have to be exchanged:

1. Consumer → Identity Provider: Authenticate
2. Identity Provider → Amazon Token Service: Request Token
3. Amazon Token Service → Identity Provider: Token
4. Identity Provider → Consumer: Token
5. Consumer → Amazon S3: Operation, Token

It can be seen that the same number of messages is required as in our system. Nevertheless, in the S3 system the authorization is performed by Amazon and not by the identity provider; therefore, access control policies have to be stored in an Amazon server. This, combined with the fact that policies are defined using Amazon's own policy definition language, creates a "lock-in" risk.

Moreover, all the users who are identified by their own identity provider are considered to have the same role (i.e., "federated users"), limiting the flexibility of the access control policies. Finally, a secret has to be shared between the identity provider of the user and Amazon's token service, in order for steps 2 and 3 to take place successfully.

6 Discussion

So far we have explored the possibilities that our solution offers in a "traditional" usage model: an enterprise that uses Cloud computing for outsourcing data storage and computations. However, the introduction of a new role, that of the ACP, and the decoupling of the data storage and access control assessment functions create many new business opportunities.

One area that can benefit from our solution is that of B2B applications. Suppose that enterprise A wants to offer access to some of its (Cloud-based) services to a department of enterprise B. Enterprise B can expose a URIap that authenticates and authorizes the users of that particular department. Enterprise A can use this URIap in order to protect the shared services. With this, enterprise A can perform access control without learning anything about the internal user management system of enterprise B. Enterprise A may also offer services for the customers of enterprise B using a similar approach.

Our solution also creates a new business opportunity. We envision that a new market can arise due to our solution, that of the access control providers. In addition to the enterprise specific ACPs there can be independent ACPs that offer security services to end-users. Existing security companies can utilize their expertise to offer cutting edge access control services without investing in the Cloud market. Moreover, existing social networks may leverage their services and act as ACPs. To this end, future work for our scheme includes support for ACP federations and support for multiple URIACP definitions per single data item.

7 Conclusions

In this paper we proposed a solution to a thorny problem that prevents Cloud technology adoption: that of access control. The proposed solution enables data owners to outsource data storage and computation without losing governance of their assets. In our solution, access control is provided as a service by a new entity, the Access Control Provider (ACP). Access control as a service relieves Cloud providers from the burden of implementing complex security solutions and enables enterprises to deploy their own specific access control mechanisms. We demonstrated the feasibility of our scheme through proof-of-concept implementations. In particular, we implemented our system as an add-on for the open-source Cloud stack OpenStack and we developed a Web application that allows the incorporation of our system into Google Drive. We showed that our scheme is secure and has significant privacy properties. The proposed system adds minimal overhead and does not require any particular Cloud implementation or ACP structure; therefore, it constitutes a realistic solution to the problem. Finally, we believe that the proposed solution can open the floor for new exciting applications and business opportunities.


Endnotes

aSAML is a generic XML language used for exchanging security assertions between different entities.

bProvided that this does not jeopardize the security of the scheme.

cGoogle provides a SAML based SSO system that can be used to integrate enterprise specific authentication systems, but only in Web applications.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

NF, AM, GCP and GX have made substantial contributions to the conception and design of the access control as a service architecture, and reviewed the final manuscript version. All authors read and approved the final manuscript.


Acknowledgements

This research was supported in part by a grant from the Greek General Secretariat for Research and Technology, financially managed by the Research Center of AUEB.

Received: 29 October 2014 Accepted: 8 April 2015 Published online: 01 June 2015


References

1. PwC (2012) Global State of Information Security Survey

2. Subashini S, Kavitha V (2011) A survey on security issues in service delivery models of cloud computing. J Netw Comput Appl 34(1):1-11

3. Gorniak S (ed) (2010) Priorities for research on current and emerging network trends. ENISA

4. Catteddu D, Hogben G (eds) (2009) Cloud computing: benefits, risks and recommendations for information security. ENISA

5. Cloud Security Alliance (2013) The Notorious Nine Cloud Computing Top Threats in 2013.

6. Armando A, Carbone R, Compagna L, Cuellar J, Tobarra L (2008) Formal analysis of SAML 2.0 web browser single sign-on: breaking the SAML-based single sign-on for google apps. In: Proc. of the 6th ACM Workshop on Formal Methods in Security Engineering. ACM, New York, NY. pp 1-10

7. Somorovsky J, Mayer A, Schwenk J, Kampmann M, Jensen M (2012) On breaking SAML: Be whoever you want to be. In: Proc. of the 21st USENIX Security Symposium. USENIX Association, Berkeley, CA Vol. 12. pp 21-21

8. Fotiou N, Machas A, Polyzos GC, Xylomenos G (2014) Access control delegation for the cloud. In: Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference On. IEEE, Canada. pp 13-18

9. Wang G, Liu Q, Wu J (2010) Hierarchical attribute-based encryption for fine-grained access control in cloud storage services. In: Proceedings of the 17th ACM Conference on Computer and Communications Security. CCS '10. ACM, New York, NY, USA. pp 735-737

10. Zhou L, Varadharajan V, Hitchens M (2011) Enforcing role-based access control for secure data storage in the cloud. Comput J. doi:10.1093/comjnl/bxr080

11. Li J, Zhao G, Chen X, Xie D, Rong C, Li W, Tang L, Tang Y (2010) Fine-grained data access control systems with user accountability in cloud computing. In: Cloud Computing Technology and Science (CloudCom), 2010 IEEE Second International Conference On. IEEE Computer Society, Washington, DC. pp 89-96

12. Yu S, Wang C, Ren K, Lou W (2010) Achieving secure, scalable, and fine-grained data access control in cloud computing. In: INFOCOM, 2010 Proceedings IEEE. IEEE Press, Piscataway, NJ. pp 1-9

13. OASIS (2013) extensible Access Control Markup Language (XACML) Version 3.0.22.

14. Goyal V, Pandey O, Sahai A, Waters B (2006) Attribute-based encryption for fine-grained access control of encrypted data. In: Proceedings of the 13th ACM Conference on Computer and Communications Security. CCS '06. ACM, New York, NY, USA. pp 89-98

15. Recordon D, Reed D (2006) OpenID 2.0: a platform for user-centric identity management. In: Proc. of the 2nd ACM Workshop on Digital Identity Management. ACM, New York, NY. pp 11-16

16. Hardt D (ed) (2012) The OAuth 2.0 authorization framework. RFC 6749.

17. Nunez D, Agudo I, Lopez J (2012) Integrating OpenID with proxy re-encryption to enhance privacy in cloud-based identity services. In: Proc of the IEEE 4th International Conference on Cloud Computing Technology and Science. IEEE Computer Society, Washington, DC, USA

18. Khan RH, Ylitalo J, Ahmed AS (2011) OpenID authentication as a service in OpenStack. In: Proc. of the 7th International Conference on Information Assurance and Security. IEEE. pp 372-377. doi:10.1109/ISIAS.2011.6122782

19. Yavatkar R, Pendarakis D, Guerin R (2000) A framework for policy-based admission control. RFC 2753.

20. Durham D (ed) (2000) The COPS (Common Open Policy Service) Protocol. RFC 2748.

21. Cantor S, Kemp J, Philpott R, Maler E (eds) (2005) Assertions and protocols for the OASIS Security Assertion Markup Language (SAML) v2.0. OASIS.

22. OpenStack homepage, last accessed 27 Apr. 2015

23. Google Drive homepage, last accessed 27 Apr. 2015

24. Google Keyczar homepage, last accessed 27 Apr. 2015

25. Google App Engine homepage, last accessed 27 Apr. 2015

26. Wang R, Chen S, Wang X (2012) Signing me onto your accounts through Facebook and Google: a traffic-guided security study of commercially deployed single-sign-on web services. In: Proc. of the IEEE Symposium on Security and Privacy. IEEE Computer Society, Washington, DC, USA. pp 365-379
