
Available online at SciVerse ScienceDirect

Procedia Engineering 38 (2012) 3005 - 3018


ACDP: Prediction of Application Cloud Data center Proficiency using Fuzzy modeling

M. Jaiganesh a,*, A. Vincent Antony Kumar b, R. Sivasankari c

a Associate Professor, PSNA College of Engineering and Technology, Dindigul, Tamilnadu, India
b Professor, PSNA College of Engineering and Technology, Dindigul, Tamilnadu, India
c Student, PSNA College of Engineering and Technology, Dindigul, Tamilnadu, India


Cloud computing, the notion of outsourcing hardware and software to Internet service providers, is showing the classic signs of a constructive technology: it is good enough for the masses, and it has clear potential to shake things up. Cloud computing enables clients to utilize services on demand. A data center is a sophisticated high-end server facility that runs applications virtually. We propose a novel approach to find the proficiency of the data center in cloud computing. The goal is to optimize data center utilization with three major factors: Bandwidth, Memory, and Virtual Machine (VM). We construct a fuzzy model for these factors and obtain the Application Cloud Datacenter Proficiency (ACDP) in cloud computing environments. The benefit of ACDP is that it provides an estimate for application cloud architecture considerations. The fuzzy modeling optimization yields maximum gain in the application cloud, governed by data center proficiency, across a wide variety of workloads using a fuzzy toolbox.

©2012 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of Noorul Islam Centre for Higher Education

Key words: Data center; Management; Fuzzification; Defuzzification; Cloud Computing; Optimization.

1. Introduction

Cloud Computing is an evolving paradigm [1] for accessing an assortment of data pools through the Internet from any connected device such as a PDA, workstation [4], or mobile phone. It is a utility-based computing method with the capability to deliver services over the Internet on demand, without requiring human intervention. Cloud Computing services [14] are pooled and supplied using multiple tuning models with many virtual resources. The term refers both to the applications delivered as services over the Internet and to the hardware and system software in the data center that provide those services. Cloud

* Corresponding author. Tel.: +91 9894025367. E-mail address:

1877-7058 © 2012 Published by Elsevier Ltd. doi:10.1016/j.proeng.2012.06.351

computing is an emerging technology [11] that increases efficiency, capability, and scalability and reduces cost in favor of the customer. Cloud Computing incorporates virtualization [12], on-demand deployment, Internet delivery, and open source services. The standard deployment object used in Cloud Computing is the virtual machine (VM). It enhances flexibility and enables a dynamic data center. One can hire a server, or a thousand servers, and run a geophysical [18] modeling application anywhere. The cloud can store and serve immense amounts of data that can be accessed only by authorized users. These applications are used over the Internet to store and protect data while providing a service. A main feature of a Cloud Computing system is the capability to utilize a variety of physical machines as virtual machines, depending on the task or application run by the system. Cloud computing manages these tasks and applications by altering the software, platform, and infrastructure, and by organizing third-party data centers such as Yahoo! [1, 6], Amazon, and Google. Google has already launched Google Apps, whose Google Documents can be accessed by clients virtually. A virtual client requires services from cloud service providers (e.g., Amazon). This reduces the client's capital expenditures for components, computation, and maintenance [3, 8]. Cloud computing is classified into three major deployment models: Private, Public, and Hybrid. Companies weigh a number of considerations in choosing which cloud computing model to employ, and they may select more than one model to solve different types of problems.

Private Cloud: Private clouds are run for the exclusive use of one client, providing control over data, security, and quality of service. A private cloud can be built and operated by the company itself, with the process controlled by the enterprise. Because private clouds are implemented and managed with internal resources, cost can be reduced. The disadvantage of this model is that the company cannot provision for the worst case across all the applications that share the infrastructure.

Public Cloud: Public clouds are operated by third parties, and applications from different customers are likely to be mixed together on the cloud servers. They serve to reduce customer risk and cost. One highlight of the public cloud is that it can be much larger than a company's private cloud. A public cloud provides services to many customers [5] and is typically deployed at a collocation facility.

Hybrid Cloud: A hybrid cloud combines the Private and Public cloud models. It provides on-demand, externally provisioned scale, can be used to handle "surge computing", and can perform periodic tasks easily. The design determines how to distribute applications across both a public and a private cloud [5], and the relationship between data and processing resources must be considered [12]. Implementing a hybrid cloud requires coordination between the private and public service management systems. A hybrid cloud is less successful if large amounts of data must be transferred into a public cloud for minimal processing; it is most effective when both clouds are located in the same facility.

Different Levels of Cloud Computing: There are three levels of service ordering:

1) Software as a Service (SaaS)

2) Platform as a Service (PaaS)

3) Infrastructure as a Service (IaaS)

Software as a Service (SaaS): A SaaS provider hosts and manages a given application in its data center and provides it to multiple clients and users over the Internet. SaaS offerings also run on other cloud service providers, such as Oracle CRM On Demand.

Platform as a Service (PaaS): PaaS is an application development and deployment platform delivered as a service to users over the web. It consists of infrastructure, database, middleware, and development tools. The basis for this infrastructure software is virtualized and clustered grid computing. Google App Engine is a PaaS on which developers write in the Python or Java programming languages.

Infrastructure as a Service (IaaS): IaaS is the delivery of hardware and associated software [5] as a service. It requires no long-term commitment and allows users to make use of resources on demand. Users perform only the minimum level of management that they would in their own data center. An example of IaaS is Amazon Web Services [19].

A data center is a collection [10, 13] of data pools for storing and retrieving distributed data. A data center may be deployed as an individual server hosted in the organization, running an application on a single server. In cloud computing, the data center provides a higher service level that covers the maximum number of users, so cloud service providers are prepared with better tolerance to manage and update their data centers. Data centers vary in size across TIER 1, TIER 2, TIER 3, and TIER 4. Cloud computing needs a myriad of services, so a data center is costly to build and manage. The challenges of data centers are:

1) Irrefutable cost

2) Workload Utilization

3) Optimization of services

Irrefutable cost: Constructing even a low-cost data center is unaffordable for a single company, and building a centralized data center requires increasing expenditure on servers and storage.

Workload Utilization: Cloud computing needs new servers to be installed in the data center. Virtualization lets many applications run on a single server or a couple of servers. Key factors affecting utilization are storage, power, cooling, response time, and capacity.

Optimization of services: Numerous data center services provide meticulous service performance. Optimizing data center services, on the other hand, remains an open task.

Fuzzy logic was introduced by Zadeh [25, 26]. It is a problem-solving methodology that lends itself to systems ranging from the simple to the sophisticated: embedded, networked, and distributed systems, from multi-channel personal computers to workstation-based data acquisition and control systems. A fuzzy set is a set whose elements carry a measure of uncertainty, expressed as varying degrees of membership in the interval [0, 1]. Let X be a discrete universal crisp set and let A be a finite fuzzy set on X, denoted

A = { μ_A(x_1)/x_1 + μ_A(x_2)/x_2 + ... + μ_A(x_i)/x_i }

When A is a continuous crisp input, each element of the fuzzy set is mapped to a universe of membership values using a function-theoretic form: for an element x of the universe X that is a member of the fuzzy set A, the mapping is given by μ_A(x) ∈ [0, 1], where μ_A(x) is called the grade of membership.
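As an illustrative sketch (the element names and grades below are hypothetical, not values from the paper), a discrete fuzzy set and its grade-of-membership lookup can be written as:

```python
# Illustrative sketch: a discrete fuzzy set A over a crisp universe X,
# represented as a mapping from elements to membership grades in [0, 1].
A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}

def grade(fuzzy_set, x):
    """Return the grade of membership mu_A(x); elements outside the
    support have grade 0."""
    return fuzzy_set.get(x, 0.0)

def support(fuzzy_set):
    """The support of A: elements with strictly positive membership."""
    return {x for x, mu in fuzzy_set.items() if mu > 0}
```

The dictionary representation mirrors the summation notation above: each term μ_A(x_i)/x_i becomes a key-value pair.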

This paper focuses on the implementation of fuzzy modeling to optimize crisp parameter values for finding the Application Cloud Datacenter Proficiency (ACDP) in a cloud computing environment. The paper is organized as follows. Sect. 2 gives background on cloud computing. Sect. 3 predicts the factors affecting data center performance. Sect. 4 presents the proposed fuzzy modeling system. The final Results and Evaluation section demonstrates the ACDP obtained by the method.

2. Background

A typical web application deployed into the cloud has all of the usual potential capacity constraints, such as the bandwidth into the load balancer and the CPU and RAM of the load balancer. The ability of the load balancer depends upon:

a. Bandwidth [20] between the load balancer and the application server.

b. CPU and RAM of the application server.

c. Bandwidth between application server and network storage devices.

d. Data storage and Disk I/O of database server.

Verizon's cloud computing platform [22], Computing as a Service (CaaS), delivers a highly resilient computing infrastructure and tackles tough data center challenges. Data centers are the hub of any organization, and keeping them running efficiently is important to its success. With the right group, processes, and technologies in place, the data center serves as an enabling force ready to take on the next big challenge. Verizon Business recognizes that data centers are not mere rooms with servers and rack space; they represent the epicenter of the organization. Keeping them running at peak efficiency requires a keen understanding of more than just technology. Verizon proposed a distinctive model of data center management and rationalization that helps organizations control data center costs, improve efficiency and agility, strengthen security, and become better environmental stewards.

Cloud infrastructures can introduce unpredictable performance behaviors [23]. While sharing a large infrastructure, it is difficult to predict the exact performance characteristics of an application at any particular time: as in any shared infrastructure, varying individual workloads can impact the available CPU, network, and disk I/O resources, resulting in unpredictable behavior of the combined applications. Data centers must leverage wide area networks, which can introduce bandwidth and latency issues; multi-peered networks, encryption loading, and compression are necessary design considerations. To overcome many of these challenges, a cloud can proactively scale resources to increase capacity in anticipation of load. Cloud computing promises to boost the velocity with which applications are deployed, increase innovation, and lower costs, all while increasing business agility. Sun takes a complete view of cloud computing that allows it to support every layer, including the server, storage, network, and virtualization technology. Sun cloud computing environments run virtual appliances [2] that can be used to assemble applications in minimal time.

In data processing management it is difficult to get as many machines as an application needs, and performing a large-scale job across different machines (running, distributing, and coordinating processes) is hard. Cloud architectures [9] solve such difficulties. Cloud administrators usually worry about hardware procurement (when they run out of capacity) and better infrastructure utilization (when they have excess, idle capacity). Lower network bandwidth and inherently lower hardware dependability force enterprises to reorganize cloud application architecture [17]. Cloud providers give customers not only direct control over data movement and storage but also flexibility in composition, so that applications can achieve higher performance. Cloud development that takes into account memory allocation, communication delays, VM overhead costs, and licensing costs of software replicas appears to be feasible and is the subject of ongoing cloud projects. A cloud computing environment may have complex expectations of a virtual machine's behavior, such as compliance with network access control criteria or limits on the type and quantity of network traffic generated by the virtual machine. These varied requirements are too often specified, communicated, and managed with non-portable, site-specific, loosely coupled, out-of-band processes. Virtual Machine Contracts (VMCs) are a simple idea but a crucial step towards automated control and management of large data centers and cloud computing environments; VMCs help VMs migrate safely and naturally within a data center, between on- and off-premise capacity, and between multiple cloud providers. New techniques need to be investigated for allocating resources to cloud computing applications depending on the quality-of-service expectations of users and the service contracts established between consumers and providers. Several challenges must be addressed to realize this vision.

3. Prediction of factors affecting Data Center Performance

The objective of our work is to maximize data center utilization when many clients and several requests are running on the same server. Cloud computing brings private-party services into the data center, which creates difficulty in storage and adds complexity to the optimization task [21]. We propose the following major factors in cloud computing:

a) High network traffic( Bandwidth)

b) Disk Usage (Memory)

c) Virtual Client Establishment (Virtual Machine)

a) High Network Traffic:

A cloud system can serve any application with large input datasets. A user prepares an analysis virtual machine and submits batch jobs to a central scheduler (hypervisor), which runs the jobs and returns the output to the user. Usually network traffic increases with time, and jobs with high I/O rates generate high network traffic.

An abnormal increase in network traffic at a certain period of time indicates unauthorized activity running in the network: an unusually high volume of outgoing bandwidth suggests that the virtual system has been compromised. Predicting high outgoing bandwidth is therefore an important factor for the hypervisor to note down.
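The hypervisor-side check described above can be reduced to a tiny sketch; the baseline window and the threshold factor below are illustrative assumptions, not values from the paper:

```python
# Illustrative only: flag a virtual machine whose outgoing bandwidth
# spikes far above its recent baseline, the signal the text says the
# hypervisor should note down. The factor of 3 is an assumption.
def is_traffic_anomalous(history, current, factor=3.0):
    """Return True if the current outgoing bandwidth exceeds `factor`
    times the mean of the recent history of measurements."""
    if not history:
        return False  # no baseline yet, cannot judge
    baseline = sum(history) / len(history)
    return current > factor * baseline
```

A real deployment would use a sliding window and per-VM baselines; this sketch only shows the shape of the decision.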

b) Disk Usage and CPU utilization:

The virtual system runs the application on the host or virtual client, which is allotted data channel components for disk utility and activity. These components organize their disk utilization with the hypervisor. An intruder inside the system will run large scans over these data [15] channel components and root directories to find files containing passwords, logins, or e-payment accounts. Some eavesdroppers search the disk for files containing mail addresses to use for broadcast. The storage needed for processing increases with the number of accesses, that is, more bytes of data with respect to time. Disk usage varies over time, but CPU utilization must be monitored: a peak rate at a certain point indicates an unauthorized person in the network.

c) Virtual Client establishment:

Third, a massive number of Internet clients require cloud services delivered by virtual machine provisioning, so maintaining multiple images running in a data center is an important factor. Virtualization is the ability to deploy more than one operating system on a single physical machine, sharing the underlying hardware. The IBM System/370 mainframe introduced this architectural concept in 1972; today it appears on 32-bit and 64-bit x86 hardware. The essential elements of virtualization are: 1) the hypervisor (software); 2) the computer; 3) the operating system and applications; and 4) the virtual machine. The hypervisor is the software used to install and initiate new virtual machines. A single computer has physical hardware, an operating system, and applications. For example, a college running a poster-designing competition can load the software on a single networked system, and it can then be run from any computer on which the hypervisor is installed; otherwise one would need more systems, each committed to an individual application.

4. Prediction of Application Cloud Data center Proficiency using Fuzzy modeling

This model identifies the relevant input variables (Bandwidth, Memory, Virtual Machine) and the output variable (Application Cloud Data center Proficiency) of the fuzzy system, together with the ranges of their values. We select significant linguistic states for each variable and express them by appropriate fuzzy sets. The crisp input is converted into fuzzy form using a fuzzification [7] method. After fuzzification the rule base is formed; the rule base and the database are jointly referred to as the knowledge base. Defuzzification converts the fuzzy value back to a real-world value, which is the output.
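The membership functions in Figs. 1-3 are trapezoids. A minimal sketch of a trapezoidal membership function follows; the breakpoints used in the test call are illustrative, not the paper's calibrated values:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function: rises on [a, b], is flat at 1
    on [b, c], and falls on [c, d] (the shape shown in Figs. 1-3)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)  # rising edge
    return (d - x) / (d - c)      # falling edge
```

Each linguistic state (Low, Medium, High, ...) would be one such function over the variable's range.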

In general, fuzzy theory [16] provides a mechanism for representing linguistic constructs such as low, medium, high and small, medium, large. Assume that the ranges of the input variables x and y are [-a, a] and [-b, b] respectively, and that the range of the output variable z is [-c, c].

Variable                                   Linguistic value   Notation
Bandwidth                                  Low                L
                                           Medium             M
                                           High               H
Memory                                     Small              S
                                           Medium             M
                                           Large              LA
Virtual machine                            Low                L
                                           Medium             M
                                           High               H
Application Cloud Data center Proficiency  Minimum            MN
                                           Moderate           MD
                                           Maximum            MX

Fig. 1. Fuzzy trapezoid view of Bandwidth

Fig. 2. Fuzzy trapezoid view of Memory

Fig. 3. Fuzzy trapezoid view of Virtual Machine

4.1. Fuzzification

In this first step, a fuzzification function is introduced for each input variable to express the associated measurement uncertainty. The purpose of the fuzzification function is to interpret measurements of input variables, each expressed by a real number, as more realistic fuzzy approximations of the respective real numbers. Consider, as an example, a fuzzification function f_x applied to the variable x. Then the fuzzification function has the form

f_x : [-a, a] → R,

where R denotes the set of all fuzzy numbers and f_x(x0) is a fuzzy number chosen by f_x as a fuzzy approximation of the measurement x = x0. A possible definition of this fuzzy number is the triangular membership function

f_x(x0)(x) = max(0, 1 - |x - x0| / ε)

for any x ∈ [-a, a], where ε denotes a parameter that has to be determined in the context of each particular application. If desirable, other shapes of membership function may be used to represent the fuzzy number f_x(x0). For each measurement x = x0, the fuzzy set f_x(x0) enters the inference process as a fact.
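The fuzzification function can be sketched as a higher-order function returning a triangular membership curve around the measurement; `eps` stands for the application-specific parameter ε mentioned in the text:

```python
def fuzzify(x0, eps):
    """Return the triangular fuzzy approximation f_x(x0) of a crisp
    measurement x0, as a membership function over x. eps is the
    application-specific spread parameter; the triangular shape is
    one possible choice, as the text notes."""
    def mu(x):
        return max(0.0, 1.0 - abs(x - x0) / eps)
    return mu
```

Calling `fuzzify(0.5, 0.2)` yields a membership function that peaks at 1 for x = 0.5 and reaches 0 at x = 0.3 and x = 0.7.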

4.2. Fuzzy inference rule

The knowledge required to find data center efficiency is formulated as a set of fuzzy inference rules. There are two principal ways in which the relevant inference rules can be determined. One is to elicit them from experienced human operators; the other is to obtain them from empirical data by suitable learning methods. We adopt the latter. In our example with variables x, y, and z, the inference rules have the canonical form

If x = A and y = B, then z = C,

where A, B, C are fuzzy numbers chosen from the set of fuzzy numbers representing the linguistic states Low, Medium, High, Small, Large, Minimum, Moderate, and Maximum. The total number of possible non-conflicting fuzzy inference rules is 3² = 9. A small subset of all possible fuzzy inference rules is often sufficient to obtain acceptable performance of the fuzzy controller, and appropriate pruning of the fuzzy rule base may be guided accordingly.
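The 3² = 9 possible rules can be enumerated mechanically. The consequent-assignment heuristic below is purely illustrative (the paper obtains its rules from empirical data, not from any such formula):

```python
from itertools import product

# With three linguistic states per input there are 3**2 = 9
# non-conflicting rules, as noted in the text.
states_x = ["Low", "Medium", "High"]
states_y = ["Small", "Medium", "Large"]
states_z = ["Minimum", "Moderate", "Maximum"]

def default_consequent(a, b):
    """Illustrative heuristic only: consequent index is the rounded
    mean of the antecedent state indices."""
    i, j = states_x.index(a), states_y.index(b)
    return states_z[round((i + j) / 2)]

rules = {(a, b): default_consequent(a, b)
         for a, b in product(states_x, states_y)}
```

In practice only a pruned subset of these rules would be kept, as the text describes.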

To determine proper fuzzy inference rules experimentally, we need a set of input output data

{ (P_k, Q_k, R_k) | k ∈ K }    (4.3)

where R_k is a desirable value of the output variable z for given values P_k and Q_k of the input variables x and y, respectively, and K is an appropriate index set. Let A(P_k), B(Q_k), C(R_k) denote the largest membership grades in the fuzzy sets representing the linguistic states of the variables x, y, z respectively. Then it is reasonable to define the degree of relevance of a rule by the formula

t2[ t1( A(P_k), B(Q_k) ), C(R_k) ],

where t1 and t2 are t-norms. This degree, when calculated for all rules activated by the input-output data (4.3), allows us to avoid conflicting rules in the fuzzy rule base: among rules that conflict with one another, we select the one with the largest degree of relevance. We convert the fuzzy inference rules
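With min as both t-norms (a common choice; the paper does not fix a particular pair), the degree of relevance and the conflict-resolution step can be sketched as:

```python
def degree_of_relevance(mu_A, mu_B, mu_C, t1=min, t2=min):
    """Degree of relevance of rule 'if x is A and y is B then z is C'
    for one data triple, given the largest membership grades
    A(P_k), B(Q_k), C(R_k) and t-norms t1, t2 (min by default)."""
    return t2(t1(mu_A, mu_B), mu_C)

def pick_rule(conflicting):
    """Among conflicting rules, keep the one with the largest degree
    of relevance; each entry is (rule, degree)."""
    return max(conflicting, key=lambda rc: rc[1])
```

Product t-norms can be substituted by passing `t1=lambda a, b: a * b` (and similarly for `t2`).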

of the form "if x = A and y = B, then z = C" into equivalent simple fuzzy conditional propositions of the form

If (x, y) is A × B, then z is C

for all x ∈ [-a, a] and y ∈ [-b, b]. Similarly, we express the fuzzified input measurements f_x(x0) and f_y(y0) as a single joint measurement, f_x(x0) × f_y(y0).

Then the problem of inference regarding the output variable z becomes a problem of approximate reasoning with several conditional fuzzy propositions. When the fuzzy rule base consists of n fuzzy inference rules, the reasoning schema has, in our case, the form

Rule 1: IF (x, y) is A1 × B1, THEN z is C1
Rule 2: IF (x, y) is A2 × B2, THEN z is C2
...
Rule n: IF (x, y) is An × Bn, THEN z is Cn

Fact: (x, y) is f_x(x0) × f_y(y0)

Conclusion: z is C

The symbols A_j, B_j, C_j (j = 1, 2, ..., n) denote fuzzy sets that represent the linguistic states of the variables x, y, z respectively.
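The reasoning schema above corresponds to Mamdani-style approximate reasoning: each rule's firing strength clips its consequent, and the clipped consequents are unioned. A sketch over a discretized output axis (the discretization itself is an assumption for illustration):

```python
def mamdani_infer(rules, x, y, z_grid):
    """Approximate Mamdani reasoning over a discretized output axis.
    Each rule is a triple (mu_A, mu_B, mu_C) of membership functions;
    the firing strength min(A(x), B(y)) clips the consequent C, and
    the rules, interpreted disjunctively, are combined by max."""
    out = [0.0] * len(z_grid)
    for mu_A, mu_B, mu_C in rules:
        w = min(mu_A(x), mu_B(y))            # firing strength
        for k, z in enumerate(z_grid):
            out[k] = max(out[k], min(w, mu_C(z)))  # clip, then union
    return out
```

The returned list is the membership function of the conclusion C, sampled on `z_grid`, ready for defuzzification.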

In our example, let x be the bandwidth (BW) and y be the memory (MEM). From the fuzzy inference rule,

Rule 1: IF (BW, MEM) is A1 × B1, THEN z is C1

Let x be the number of CPUs (CPU) and y be the memory (MEM); the fuzzy inference rule is Rule 2: IF (CPU, MEM) is A2 × B2, THEN z is C2. Finally we arrive at

Rule n: IF (x, y) is An × Bn, THEN z is Cn

From (4.3), Fact: (x, y) is f_x(x0) × f_y(y0)

Conclusion: z is C

For each rule in the fuzzy rule base there is a corresponding relation R_j. Since the rules are interpreted as disjunctive, we may conclude that the state of the variable z is characterized by the fuzzy set obtained as the union of the individual rule conclusions.

4.3. Defuzzification

In this last step of the design process, the designer of a fuzzy controller must select a suitable defuzzification method. The purpose of defuzzification is to convert each conclusion obtained by the fuzzy inference, which is expressed as a fuzzy set, into a single real number. The set to be defuzzified in our example is, for any input measurements x = x0 and y = y0, the set C defined by (4.4).
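As a concrete example, the widely used center-of-gravity (centroid) defuzzifier can be sketched as follows; the paper does not state which defuzzification method it selected, so this is only one standard option:

```python
def centroid(z_grid, mu):
    """Center-of-gravity defuzzification: turn an inferred fuzzy set
    (membership values mu sampled over z_grid) into one crisp value."""
    num = sum(z * m for z, m in zip(z_grid, mu))
    den = sum(mu)
    return num / den if den else 0.0  # empty set: fall back to 0
```

Other standard choices (mean-of-maxima, bisector) would replace only this final step of the pipeline.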

Fig. 4. Degree of membership

5. Results and Evaluation

The experimental research for our proposed system uses MATLAB 6.0. We simulate the system by passing crisp input values to the fuzzifier. Using the fuzzy inference system, we formulate the mapping from input to output with fuzzy logic; from this mapping the optimized decision is made to estimate Application Cloud Datacenter Proficiency through membership functions, logical operations, and if-then rules. We implement this fuzzy inference system as a Mamdani-type model.
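Putting the pieces together, a plain-Python, Mamdani-style ACDP sketch might look like the following. All membership breakpoints and rules here are illustrative assumptions, not the calibrated model built in the MATLAB toolbox:

```python
# Minimal end-to-end sketch of an ACDP-style Mamdani system.
# Breakpoints and rules are illustrative assumptions only.

def tri(a, b, c):
    """Triangular membership function with peak 1 at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

LOW, MED, HIGH = tri(-0.5, 0.0, 0.5), tri(0.0, 0.5, 1.0), tri(0.5, 1.0, 1.5)
Z = [k / 100 for k in range(101)]  # discretized output axis [0, 1]

# Each rule: (mu_bw, mu_mem, mu_vm) antecedents -> consequent over ACDP.
RULES = [
    ((HIGH, HIGH, LOW),  HIGH),  # ample bandwidth/memory, few VMs
    ((MED,  MED,  MED),  MED),
    ((LOW,  LOW,  HIGH), LOW),   # starved resources, many VMs
]

def acdp(bw, mem, vm):
    """Crisp inputs in [0, 1] -> crisp ACDP estimate in [0, 1]."""
    agg = [0.0] * len(Z)
    for (f, g, h), cons in RULES:
        w = min(f(bw), g(mem), h(vm))            # firing strength
        for k, z in enumerate(Z):
            agg[k] = max(agg[k], min(w, cons(z)))  # clip and union
    num = sum(z * m for z, m in zip(Z, agg))
    den = sum(agg)
    return num / den if den else 0.0             # centroid defuzzifier
```

With these assumed rules, high bandwidth and memory with few VMs yields a high proficiency estimate, and the reverse yields a low one, qualitatively matching the trends reported in Graphs 1-3.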

Fig. 5. Graph 1

Graph 1 (Bandwidth / Memory / Application Cloud Datacenter Proficiency) shows that with Memory constant at lower values (0.1), the data center load stays almost the same until bandwidth reaches 0.4 and increases gradually above 0.4. With Memory at mid values, data center load proficiency increases up to a bandwidth of 0.2 and is constant after that. With Memory at higher values, data center load proficiency rises until the bandwidth is 0.3, at which point it reaches its peak and is maintained.

Fig. 6. Graph 2

Graph 2 (Memory / Virtual machine / Application Cloud Datacenter Proficiency) shows that, keeping the number of virtual machines between 0 and 0.2, there is no abrupt change in data center load for lower memory values (0 to 0.4); for memory values > 0.4 the data center load starts to increase gradually. Keeping the number of virtual machines above 0.4, the application data center load is at its peak for all values of memory and is maintained.

Graph 3 (Bandwidth / Virtual machine / data center load) shows that with Virtual machine = 0, the data center load increases steadily until bandwidth reaches 0.4, after which it peaks and is maintained. For Virtual machine > 0.4, as the number of virtual machines increases the data center load decreases for lower values of bandwidth; for higher values of bandwidth it decreases by one unit and is maintained.

6. Conclusion

The most important task in the successful delivery of Internet services is achieving maximum Application Cloud Data Center Proficiency. The proposed system is designed according to the service layers of cloud computing, and cloud service providers can use it to estimate their strategy. The data center maintains a chart to monitor the big three factors suggested in this work. This work can be extended toward security for the data center and the cloud computing environment.

7. References

[1] Francesco Maria Aymerich, Gianni Fenu and Simone Surcis, An Approach to a Cloud Computing Network, International Conference on Applications of Digital Information and Web Technologies, IEEE Xplore (2008), 113-118.

[2] George Reese, Cloud Application Architectures Building Applications and Infrastructure in the Cloud, O'Reilly Media Publications, USA April (2009).

[3] Huan Liu, Dan Orban, Grid Batch: Cloud Computing for Large-Scale Data-Intensive Batch Applications, IEEE International Symposium on Cluster Computing and the Grid (CCGRID), IEEExplore (2008), 295-305.

[4] Ian Foster, Yong Zhao, Ioan Raicu and Shiyong Lu, Cloud Computing and Grid Computing 360-Degree Compared, Grid Computing Environments Workshop, IEEE Xplore (2008).

[5] Jason Carolan and Steve Gaede, Introduction to the Cloud Computing Architecture, A Sun Microsystems White Paper, Sun Microsystems Inc., USA (2009), 1-33.

Fig. 7. Graph 3

[6] Jinesh Varia, Cloud Architectures, White paper by Amazon web services, Amazon Company (2008), 1-14.

[7] Mamdani, E.H. and Assilian, S., An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller, International Journal of Man-Machine Studies (1975), 1-13.

[8] Marco Descher, Philip Masser and Thomas Feilhauer, Retaining data control to the client in the infrastructure clouds, Journal of Information System Security,5(2009), 27 - 46.

[9] Marin Litoiu, Murray Woodside, Johnny Wong, Joanna Ng and Gabriel Iszla, Proceedings of the ACM Symposium on Applied Computing, ACM New York, USA (2010), 632-638.

[10] Matt Beckert, Bradley A. Ellison and Shesha Krishnapura, Intel IT Data Center Solutions: Strategies to Improve Efficiency, IT@ Intel White Paper, Intel Information Technology, Intel corporation, USA,(2009), 1-11.

[11] Michael Armbrust, Armando Fox and Rean Griffith, Above the Clouds: A Berkeley View of Cloud Computing, Technical Report UCB/EECS-2009-28, University of California, Berkeley (2009).

[12] Nezih Yigitbasi, Alexandru Iosup and Dick Epema, C-Meter: A Framework for Performance Analysis of Computing Clouds, International Symposium on Cluster Computing and the Grid, IEEE /ACM (2009), 472-477.

[13] Paul Stryer, Understanding Data Centers and Cloud Computing, Expert Reference Series of White Papers, Global Knowledge Training LLC, USA (2010), 1-7.

[14] Rajkumar Buyya, Market-Oriented Cloud Computing: Vision, Hype and Reality of Delivering Computing as the 5th Utility, International Symposium on Cluster Computing and the Grid, IEEE /ACM (2009), 1-13.

[15] Ryan Stutsman, The Case for RAM Clouds: Scalable High-Performance Storage Entirely in DRAM. Newsletter ACM SIGOPS Operating Systems Review, Volume 43(4) .ACM New York, USA (2010), 1-14.

[16] Saleem Khan.M, Fuzzy Time Control Modeling Of Discrete Event Systems, Proceedings of the World Congress on Engineering and Computer Science, International Association of Engineers (IAENG), (2008), 683-688.

[17] Shih-wei Liao, Tzu-Han Hung, Hucheng Zhou, Donald Nguyen, Chinyen Chou, Chiaheng Tu, Optimizing Memory System Performance for Data Center Applications via Parameter Value Prediction, International Conference on Supercomputing, Yorktown Heights, ACM New York, USA (2009).

[18] Steve Bennett, Mans Bhuller, Robert Covington, Architectural Strategies for Cloud Computing, An Oracle White Paper in Enterprise Architecture, Oracle Corporation, USA (2009), 1-17.

[19] Tim Dornemann, Ernst Juhnke, Bernd Freisleben, On Demand Resource Provisioning for BPEL Workflows Using Amazon's Elastic Compute Cloud, International Symposium on Cluster Computing and the Grid, Germany (2009).

[20] Thomas A. Paine, Tyler J. Griggs, Directing Traffic: Managing Internet Bandwidth Fairly, Educause Quarterly, Volume 31(3), EDUCAUSE, USA (2008), 66-70.

[21] M. Tsangaris, G. Kakaletris, H. Kllapi, G. Papanikos, F. Pentaris, P. Polydoras, E. Sitaridi, V. Stoumpos, Y. Ioannidis, Dataflow Processing and Optimization on Grid and Cloud Infrastructures, Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, Volume 32(1), IEEE (2009), 67-74.

[22] Verizon, The Case for Cloud Computing, Verizon Business white paper.

[23] Wlodzislaw Duch, Rafal Adamczak and Krzysztof Grabczewski, A New Methodology of Extraction, Optimization and Application of Crisp and Fuzzy Logical Rules (2000), 277-306.

[24] Yogesh Simmhan, Ed Lazowska, Alex Szalay, On Building Scientific Workflow Systems for Data Management in the Cloud, IEEE (2008).

[25] Zadeh, L.A., Fuzzy Sets and Systems, International Journal of General Systems, Taylor & Francis, USA (1990),