CLOUD COMPUTING SECURITY FROM SINGLE TO MULTI CLOUDS




Chapter 1
INTRODUCTION

The use of cloud computing has increased rapidly in many organizations. Small and medium companies use cloud computing services for various reasons, including fast access to their applications and reduced infrastructure costs. Cloud providers should address privacy and security issues as a matter of high and urgent priority. Dealing with "single cloud" providers is becoming less popular with customers due to potential problems such as service availability failure and the possibility of malicious insiders in the single cloud. In recent years, there has been a move towards "multi-clouds", "inter-clouds" or "clouds-of-clouds".

This project focuses on the issues related to the data security aspect of cloud computing. As data and information will be shared with a third party, cloud computing users want to avoid an untrusted cloud provider. Protecting private and important information, such as credit card details or a patient's medical records, from attackers or malicious insiders is of critical importance. In addition, the potential for migration from a single cloud to a multi-cloud environment is examined, and research related to security issues in single and multi-clouds in cloud computing is surveyed.

1.1 Objective
The objective of the system is to block attackers on cloud servers automatically using an automated audit protocol, to perform cloud computation securely, to provide secret sharing that tolerates Byzantine failures, and to prove data integrity through batch auditing by the data owners.

1.2 Organization Profile
Impact Solutions is an IT solutions provider for a dynamic environment where business and technology strategies converge. Their approach focuses on new ways of doing business, combining IT innovation and adoption while also leveraging an organization's current IT assets. They work with large global corporations on new products and services, and implement prudent business and technology strategies in today's environment.

Their range of expertise includes:
- Software Development Services
- Engineering Services
- Systems Integration
- Customer Relationship Management
- Product Development
- Electronic Commerce
- Consulting
- IT Outsourcing

We apply technology with innovation and responsibility to achieve two broad objectives:
- Effectively address the business issues our customers face today.
- Generate new opportunities that will help them stay ahead in the future.

This approach rests on:
- A strategy where we architect, integrate and manage technology services and solutions, which we call AIM for success.
- A robust offshore development methodology and reduced demand on customer resources.
- A focus on the use of reusable frameworks to provide cost and time benefits.

They combine the best people, processes and technology to achieve excellent results, consistently. We offer customers the following advantages:

Speed
They understand the importance of timing, of getting there before the competition. A rich portfolio of reusable, modular frameworks helps jump-start projects, and a tried and tested methodology ensures that we follow a predictable, low-risk path to achieve results. Our track record is testimony to complex projects delivered within and even before schedule.

Expertise
Our teams combine cutting-edge technology skills with rich domain expertise. What's equally important, they share a strong customer orientation that means they actually start by listening to the customer. They're focused on coming up with solutions that serve customer requirements today and anticipate future needs.

A Full Service Portfolio
They offer customers the advantage of being able to architect, integrate and manage technology services. This means that customers can rely on one fully accountable source instead of trying to integrate disparate multi-vendor solutions.

Services
Impact Solutions provides its services to companies in fields such as production and quality control. With their rich expertise, experience and information technology, they are in the best position to provide software solutions for distinct business requirements.

Chapter 2
LITERATURE SURVEY

The literature survey is the most important step in the software development process. Before developing the tool it is necessary to determine the time factor, economy and company strength. Once these things are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books or from websites. Before building the system, the above considerations are taken into account in developing the proposed system. We have to analyze knowledge and data engineering and the cloud.

2.1 Data & Knowledge Engineering (DKE)
Data & Knowledge Engineering (DKE) is a journal on database systems and knowledge base systems, published by Elsevier. It was founded in 1985 and is held in over 250 academic libraries. The editor-in-chief is P. P. Chen (Department of Computer Science, Louisiana State University, USA). The journal publishes 12 issues a year. All articles from the Data & Knowledge Engineering journal can be viewed on indexing services such as Scopus.

2.2 Knowledge Engineering (KE)
KE is an engineering discipline that involves integrating knowledge into computer systems in order to solve complex problems normally requiring a high level of human expertise. At present, it refers to the building, maintenance and development of knowledge-based systems. It has a great deal in common with software engineering, and is used in many computer science domains such as artificial intelligence, including databases, data mining, expert systems, decision support systems and geographic information systems. Knowledge engineering is also related to mathematical logic, and is strongly involved in cognitive science and socio-cognitive engineering, where knowledge is produced by socio-cognitive aggregates (mainly humans) and is structured according to our understanding of how human reasoning and logic work.

Various activities of KE specific to the development of a knowledge-based system are:
- Assessment of the problem
- Development of a knowledge-based system shell/structure
- Acquisition and structuring of the related information, knowledge and specific preferences (IPK model)
- Implementation of the structured knowledge into knowledge bases
- Testing and validation of the inserted knowledge
- Integration and maintenance of the system
- Revision and evaluation of the system

Knowledge engineering principles
Since the mid-1980s, knowledge engineers have developed a number of principles, methods and tools to improve knowledge acquisition and ordering. Some of the key principles are that there are different:
- Types of knowledge, each requiring its own approach and technique.
- Types of experts and expertise, such that methods should be chosen appropriately.
- Ways of representing knowledge, which can aid the acquisition, validation and re-use of knowledge.
- Ways of using knowledge, so that the acquisition process can be guided by the project aims (goal-oriented).

Structured methods increase the efficiency of the acquisition process. Knowledge engineering is the process of eliciting knowledge for any purpose, be it expert system or AI development.

2.3 Introduction to Data Mining and Cloud
Data mining (also known as Knowledge Discovery in Databases, KDD) has been defined as "the nontrivial extraction of implicit, previously unknown, and potentially useful information from data". It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form which is easily comprehensible to humans. As data and information will be shared with a third party, cloud computing users want to avoid an untrusted cloud provider. Protecting private and important information, such as credit card details or a patient's medical records, from attackers or malicious insiders is of critical importance. In addition, the potential for migration from a single cloud to a multi-cloud environment is examined, and research related to security issues in single and multi-clouds in cloud computing is surveyed.

2.4 System Architecture

Figure 2.4.1: System Architecture

2.5 Project Methodology
The different phases of project development that have actually been put to use are as follows:
- Analysis
- Design
- Coding
- Testing

Analysis Phase
The analysis phase defines the requirements of the system, independent of how these requirements will be accomplished. We gain a thorough understanding of the objectives, determine the available options and determine how the new system will integrate into existing systems and workflow. This is a very critical phase in the development of the project and serves as the blueprint for the development of the system. The deliverable at the end of this phase is a requirements document.

Design Phase
We transform the information obtained in the analysis phase (the system specification) into a detailed technical design for the new system. This phase requires further study of the needed functionality and the graphical user interface. The design has been done keeping one thing in mind: it should be user friendly. The design of the project is robust, and any further changes or improvements can be made easily. The output of the design phase is a design document.

Coding Phase
During this phase, we construct and develop the system, including integration with the existing technology. The code written for the development of this application follows the rules and guidelines mentioned by the company.

Testing Phase
Testing is the most important phase for identifying and recovering from the bugs introduced during the coding phase. This phase includes both unit and acceptance testing. Since the project requirements have been defined and the system design is underway, test objectives and strategies are identified and included in the project scope document, project plan, and project cost estimate.

2.6 Overview of Tools

Technology Description
Java technology is used as both a programming language and a platform.

2.6.1 The Java Programming Language
Java is a high-level programming language that is all of the following:
- Simple
- Architecture-neutral
- Object-oriented
- Portable
- Distributed
- High-performance
- Interpreted
- Multithreaded
- Robust
- Dynamic
- Secure

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes; this platform-independent code is then passed to and run on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.

Java Program → Compiler → Interpreter → My Program

Fig 2.6.1.1: Java Interpreter

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it is a Java development tool or a web browser that can run Java applets, is an implementation of the Java VM. The Java VM can also be implemented in hardware. Java byte codes help make "write once, run anywhere" possible. You can compile your Java program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.
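As a minimal, self-contained illustration of this compile-once, run-anywhere cycle (the class name is ours, not taken from the project code), the following program is compiled once with javac HelloCloud.java, and the resulting byte codes run unchanged on any Java VM with java HelloCloud:

    // HelloCloud.java: compiled once into platform-independent byte codes,
    // then interpreted by any Java VM on any operating system.
    public class HelloCloud {
        public static void main(String[] args) {
            System.out.println("Same byte codes run on Windows NT, Solaris and Macintosh.");
        }
    }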

2.6.2 The Java Platform
A platform is the hardware or software environment in which a program runs. The Java platform differs from other platforms in that it is a software-only platform that runs on top of other, hardware-based platforms. The Java platform has two components:
1. The Java Virtual Machine (Java VM)
2. The Java Application Programming Interface (API)

The Java Virtual Machine is the base of the Java platform and is ported onto various hardware-based platforms. The API is a large collection of ready-made software components that provide many useful capabilities. It is grouped into libraries of related classes and interfaces; these libraries are known as packages. The API and the JVM insulate the program from the underlying hardware. As a platform-independent environment, the Java platform can be a bit slower than native code.

2.6.3 Java Database Connectivity (JDBC)
JDBC provides uniform access to a wide range of relational databases. The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more.
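A minimal sketch of JDBC usage is shown below. The database name, table, column and credentials are hypothetical placeholders, assuming a local MySQL server and the MySQL Connector/J driver on the classpath; this is an illustration of the API, not the project's actual data access code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class JdbcSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details for a local MySQL database.
            String url = "jdbc:mysql://localhost:3306/multicloud";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT file_name FROM files WHERE owner = ?")) {
                ps.setString(1, "owner1");        // parameterized query avoids SQL injection
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("file_name"));
                    }
                }
            }
        }
    }

The try-with-resources blocks close the connection and statement automatically, which is the idiomatic way to manage JDBC resources.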

The following figure depicts what is included in the Java 2 SDK.

Fig 2.5.3.1: Java 2 SDK

2.6.4 Eclipse IDE for Java
The Eclipse IDE for Java Developers provides superior Java editing with validation, incremental compilation, cross-referencing, code assist, an XML editor and much more.

Fig 2.5.4.1: Eclipse IDE (Helios)

The Eclipse JDT (Java Development Tools) provides the tool plug-ins that implement a Java IDE, supporting the development of any Java application, including Eclipse plug-ins. It adds a Java project nature and a Java perspective to the Eclipse Workbench, as well as a number of views, editors, wizards, builders, and code merging and refactoring tools.

2.6.5 SQL Server
A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, Paradox, IMS and SQL Server. These systems allow users to create, update and extract information from their databases. A database is a structured collection of data. Data refers to the characteristics of people, things and events. SQL Server stores each data item in its own field. In SQL Server, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence). Each record is made up of a number of fields, and no two fields in a record can have the same field name. During an SQL Server database design project, the analysis of your business needs identifies all the fields or attributes of interest. If your business needs change over time, you define any additional fields or change the definition of existing fields.

2.6.6 Tomcat 6.0 Web Server
Tomcat is an open source web server developed by the Apache Group. Apache Tomcat is the servlet container that is used in the official Reference Implementation for the Java Servlet and JavaServer Pages technologies. The Java Servlet and JavaServer Pages specifications are developed by Sun under the Java Community Process. Web servers like Apache Tomcat support only web components, while an application server supports web components as well as business components (BEA's WebLogic is one of the popular application servers). To develop a web application with JSP/servlets, install a web server such as JRun or Tomcat to run the application.
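As a minimal sketch of the kind of web component Tomcat hosts, the servlet below responds to an HTTP GET request. The class name and URL mapping are illustrative assumptions, not taken from the project code; it uses the javax.servlet API shipped with Tomcat 6:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative servlet; it would be mapped to a URL such as /hello in web.xml.
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            out.println("<html><body><h1>Hello from Tomcat</h1></body></html>");
        }
    }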

Fig 2.5.6.1: Tomcat Web Server

Chapter 3
HARDWARE AND SOFTWARE REQUIREMENTS

3.1 SOFTWARE REQUIREMENTS
- Operating System: Windows 95/98/2000/XP
- Application Server: Tomcat 5.0/6.x/7.0
- Front End: HTML, Java, JSP
- Scripts: JavaScript
- Server-side Script: Java Server Pages
- Database Connectivity: MySQL

3.2 HARDWARE REQUIREMENTS
- Processor: Pentium III or above
- Speed: 1.1 GHz
- RAM: 4 GB (256 MB minimum)
- Hard Disk: 20 GB
- Floppy Drive: 1.44 MB
- Keyboard: Standard Windows keyboard
- Mouse: Two- or three-button mouse
- Monitor: SVGA

Chapter 4
SOFTWARE REQUIREMENT SPECIFICATION

A software requirement specification (SRS) is a comprehensive description of the intended purpose and environment for software under development. The SRS fully describes what the software will do and how it will be expected to perform. The introduction of the SRS provides an overview of the entire SRS with purpose, scope, definitions, acronyms, abbreviations, and references. The aim of the document is to gather, analyze and give an in-depth insight into the complete system by defining the problem statement in detail.

4.1 SRS for Single to Multi Cloud

Functional         | Control the file access at the cloud server; data integrity proof at the TPA; file privacy management
Non-Functional     | Cloud servers never monitor or control the TPA
External Interface | LAN, routers
Performance        | Finding file hacker information; file sharing efficiency and fairness
Attributes         | File management, TPA, cloud server, owner, remote users, blocked users, multi cloud

Table 3.1: Summary of the SRS

4.1.1 Functional Requirements
A functional requirement defines a function of a software system and how the system must behave when presented with specific inputs or conditions. These may include calculations, data manipulation and processing, and other specific functionality. In this system the following are the functional requirements:
- The owner divides the file into N blocks and uploads each block to an individual cloud server (a sketch of this block-splitting step follows this list).
- The cloud server has to authorize valid remote users; if a remote user is a hacker, he has to be blocked at the cloud server.
- The data should be integrity-checked by the cloud server.
- The third party auditor (TPA) has to perform error localization and monitor the cloud server's activities.
- The remote user has to use the correct secret key and file name; if either is wrong, he is detected as an attacker.
- The attributes are file management, TPA, cloud server, owner, remote user and blocked user.
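A minimal sketch of the first requirement, splitting a file into N blocks so that each block can be uploaded to a different cloud server, is given below. The file name and the number of servers are illustrative assumptions, and the actual upload step is omitted; this is not the project's production code:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class FileSplitter {

        // Split the file's bytes into at most n nearly equal blocks,
        // one block per cloud server.
        public static List<byte[]> split(Path file, int n) throws IOException {
            byte[] data = Files.readAllBytes(file);
            int blockSize = (data.length + n - 1) / n;  // ceiling division
            List<byte[]> blocks = new ArrayList<>();
            for (int i = 0; i < data.length; i += blockSize) {
                blocks.add(Arrays.copyOfRange(data, i,
                        Math.min(i + blockSize, data.length)));
            }
            return blocks;
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical file; block i would then be uploaded to cloud server i.
            List<byte[]> blocks = split(Paths.get("patient_record.txt"), 3);
            for (int i = 0; i < blocks.size(); i++) {
                System.out.println("Block " + i + " -> cloud server " + i
                        + " (" + blocks.get(i).length + " bytes)");
            }
        }
    }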

4.1.2 Non-Functional Requirements
Non-functional requirements, as the name suggests, are those requirements that are not directly concerned with the specific functions delivered by the system. They may relate to emergent system properties such as reliability, response time and store occupancy. Alternatively, they may define constraints on the system, such as the capability of the input/output devices and the data representations used in system interfaces. Many non-functional requirements relate to the system as a whole rather than to individual system features; this means they are often more critical than individual functional requirements. The key non-functional requirements are:
- Security: The system should allow secured communication between the cloud server (CS) and the TPA, and between the user and the file owner.
- Energy Efficiency: The energy consumed by the users to receive the file information from the cloud server.
- Reliability: The system should be reliable, must not degrade the performance of the existing system, and should not lead to the system hanging.

4.1.3 Performance
Network performance can be characterized by a few terms such as cloud server busy time, file utilization level, efficiency, fairness and imbalance. The amount of time the cloud server (CS) allocates for file transmission and reception is called the CS busy time. Similarly, the channel is sometimes idle during communication; the unit of time that delays the transmission of a packet is called the channel access delay time. The channel or medium utilization level can be defined as the average rate of reliable packets delivered through the channel. The MAC layer utilization level can be determined by noticing whether the medium is busy or idle; binary values are used to indicate it, with 1 and 0 indicating that the channel is busy or idle respectively. The main factor deciding buffer overflow is the interface queue length, when the queue length is limited in the network. The main terms to be calculated to determine the network performance are efficiency, fairness and imbalance. The efficiency of the communication is calculated as the number of hops the successful packets travelled divided by the total number of packets placed in the network (dropped and retransmitted packets included).
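Under that definition, the efficiency calculation reduces to a single ratio; a minimal sketch with illustrative variable names and example figures (not measurements from the project) is:

    public final class NetworkMetrics {

        // Efficiency as defined above: hops travelled by successfully delivered
        // packets, divided by the total packets placed in the network
        // (dropped and retransmitted packets included in the denominator).
        public static double efficiency(long successfulPacketHops, long totalPacketsPlaced) {
            if (totalPacketsPlaced == 0) {
                return 0.0; // no traffic: define efficiency as zero
            }
            return (double) successfulPacketHops / totalPacketsPlaced;
        }

        public static void main(String[] args) {
            // Example: 480 hops travelled by delivered packets, 600 packets placed.
            System.out.println("Efficiency = " + efficiency(480, 600)); // prints 0.8
        }
    }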

4.1.4 Problem Definition
The system incorporates the advantages of the previous system and extends it to find unauthorized users and to prevent unauthorized data access, thereby preserving data integrity. The proposed system monitors user requests according to user-specified parameters, and it checks the parameters for both new and existing users. The system accepts an existing validated user, and prompts new users for the parameters that must match the requirements specified during user creation. If a new user's parameters match those held at the cloud server, the user is given the privilege to access the audit protocol; otherwise the system automatically blocks the audit protocol for that specific user.
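A minimal sketch of this request-checking and automatic-blocking behaviour is shown below. The class, method and field names are illustrative assumptions, not the project's actual implementation:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class AccessMonitor {

        // Parameters registered at user creation (user -> secret key).
        private final Map<String, String> registeredKeys = new HashMap<>();
        // Users blocked from the audit protocol.
        private final Set<String> blocked = new HashSet<>();

        public void register(String user, String secretKey) {
            registeredKeys.put(user, secretKey);
        }

        // Accept the request only if the supplied parameter matches the one
        // specified during user creation; otherwise block the user automatically.
        public boolean requestAuditAccess(String user, String secretKey) {
            if (!blocked.contains(user) && secretKey.equals(registeredKeys.get(user))) {
                return true;        // validated user: audit protocol allowed
            }
            blocked.add(user);      // mismatch: block the audit protocol for this user
            return false;
        }
    }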

4.1.5 Objective
The objective of the system is to block attackers on cloud servers automatically using an automated audit protocol, to perform cloud computation securely, to provide secret sharing that tolerates Byzantine failures, and to prove data integrity through batch auditing by the data owners.

Chapter 5
SYSTEM DEFINITION

5.1 UML Diagrams

5.1.1 Use Case Diagram
The use case diagram mainly captures the actors who interact with the system. An actor is a person, organization or external system that plays a role in one or more interactions with the system. A use case diagram is a graphical notation for summarizing actors and use cases. The first step in a typical development effort is to analyze the description of the system and produce a model of the system's requirements. The diagram consists of the system, actors and use cases:
- System: The system is depicted as a rectangle.
- Actor: Each actor is shown as a stick man.
- Use Case: Each use case is shown as a solid-bordered oval labeled with the name of the use case.

Figure 5.1.1.1: Use Case Diagram of multi cloud

5.1.2 Activity Diagram
The activity diagram captures the dynamic behavior of the system. An activity is a particular operation of the system. Activity diagrams are not only used for visualizing the dynamic nature of a system; they are also used to construct the executable system by using forward and reverse engineering techniques. The only thing missing from an activity diagram is the message part.

Figure 5.1.2.1: Activity Diagram for multi cloud

5.1.3 Sequence Diagram
Sequence diagrams are used primarily to design, document and validate the architecture, interfaces and logic of the system by describing the sequence of actions that need to be performed to complete a task or scenario. UML sequence diagrams are useful design tools because they provide a dynamic view of the system behavior, which can be difficult to extract from static diagrams or specifications.

Figure 5.1.3.1: Sequence Diagram for multi cloud

5.1.4 Class Diagram
The purpose of the class diagram is to model the static view of an application. Class diagrams are the only diagrams which can be directly mapped to object-oriented languages, and they are thus widely used at the time of construction. The class diagram is the most popular UML diagram in the coder community.

Figure 5.1.4.1: Class Diagram for multi cloud

5.2 SOFTWARE DEVELOPMENT LIFE CYCLE
The six stages of the Software Development Life Cycle (SDLC) are designed to build on one another, taking the outputs from the previous stage, adding additional effort, and producing results that leverage the previous effort and are directly traceable to the previous stages. This top-down approach is intended to result in a quality product that satisfies the original intentions of the customer.

Fig 5.2.1: SDLC Phases

5.2.1 Planning Phase
The planning phase defines what, when and how the project will be carried out. This phase expands on the high-level project outline and provides a specific and detailed project definition. The most critical section of the project plan is the listing of high-level product requirements, also referred to as goals. All of the software product requirements to be developed during the requirements definition stage flow from one or more of these goals.

5.2.2 Requirement Phase
The requirement gathering process takes as its input the goals identified in the high-level requirements section of the project plan. Each goal is refined into a set of one or more requirements. These requirements define the major functions of the intended application, define operational data areas and reference data areas, and define the initial data entities. Major functions include critical processes to be managed, as well as mission-critical inputs, outputs and reports.

5.2.3 Design Phase
The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements is produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudo code, and a complete entity-relationship diagram with a full data dictionary.

5.2.4 Development Phase
The development stage takes as its primary input the design elements described in the approved design document. For each design element, a set of one or more software artifacts is produced. Software artifacts include, but are not limited to, menus, dialogs, data management forms, data reporting formats, and specialized procedures and functions. Appropriate test cases are developed for each set of functionally related software artifacts, and an online help system is developed to guide users in their interactions with the software.

5.2.5 Integration and Test Phase
During the integration and test stage, the software artifacts, online help, and test data are migrated from the development environment to a separate test environment. At this point, all test cases are run to verify the correctness and completeness of the software. Successful execution of the test suite confirms a robust and complete migration capability.

5.2.6 Installation and Acceptance Phase
During the installation and acceptance stage, the software artifacts, online help and initial production data are loaded onto the production server. At this point, all test cases are run to verify the correctness and completeness of the software. Successful execution of the test suite is a prerequisite to acceptance of the software by the customer. After customer personnel have verified that the initial production data load is correct and the test suite has been executed with satisfactory results, the customer formally accepts delivery of the software. The primary outputs of the installation and acceptance stage include a production application, a completed acceptance test suite, and a memorandum of customer acceptance of the software.

Conclusion
The structure imposed by this SDLC is specifically designed to maximize the probability of a successful software development effort. To accomplish this, the SDLC relies on four primary concepts:
- Scope restriction
- Progressive enhancement
- Pre-defined structure
- Incremental planning

These four concepts combine to mitigate the common risks associated with software development efforts. A software engineering paradigm refers to the development strategy that encompasses the process, methods and tools applied by the software engineer or a team of engineers; these are also termed process models.

Chapter 6
DETAILED DESIGN

Detailed design of the system is the last design activity before implementation begins. The hardest design problems must be addressed by the detailed design, or the design is not complete. The detailed design is still an abstraction compared to source code, but it should be detailed enough to ensure that translation to source is a precise mapping instead of a rough interpretation. Detailed design artifacts contain a large amount of detail which, if included in full, would obscure the point of this section. The detailed design should represent the system design in a variety of views, where each view uses a different modeling technique. By using a variety of views, different parts of the system can be made clearer: some views are better at elaborating a system's state, whereas other views are better at showing how data flows within the system. Other views are better at showing how different system entities relate to each other through class taxonomies, for systems designed using an object-oriented approach. A template for detailed design would not be of much use, since each detailed design is likely to be unique and quite different from other designs.

6.1 Input Design
Input design encompasses internal and external program interfaces and the design of the user interfaces. Internal and external interface designs are guided by the information obtained from the analysis model, which defines user tasks and actions using either an elaborative or an object-oriented approach. Various input forms are designed according to the particular needs of the user. Inaccurate input data are the most common cause of errors in data processing; errors found at data entry can be controlled by proper input design. The input validations are performed at field level. The following are some constraints used in input design:
- Specifying the maximum length for each field
- Specifying the format of the data fields to be entered
- Specifying the field sequence

6.2 Output Design
The output design minimizes the intellectual distance between the software and the problem as it exists in the real world. The design is uniform and integrated, and the output generated is clear and optimized. Output design builds a coherent, well-planned representation of programs that concentrates on the interrelationships of parts at the higher level and the logical operations involved at the lower level. The output is the most important and most direct source of information for the user and helps in decision making.

6.3 Code Design
The purpose of a code is to facilitate the identification and retrieval of items of information. A code is an ordered collection of symbols designed to provide unique identification of an entity or an attribute. Codes are built with mutually exclusive features. Codes in all cases specify an object's physical or performance characteristics, can show interrelationships among different items, and are used for identifying, accessing and matching records. The code design ensures that only one value of a code, with a single meaning, is correctly applied to a given entity or attribute.

6.4 Data Flow Diagram
The Data Flow Diagram (DFD, also called a Data Flow Graph) shows the flow of data through the system. It views the system as a function that transforms the input into the desired output. The DFD provides a mechanism for functional modeling. A DFD may be partitioned into levels that represent increasing information flow and functional detail. The context-level DFD represents the entire software element as a single process; the detailed DFDs are used in the design phase. The basic notations are described below.

- BUBBLE (PROCESS): The agent that performs the transformation of data from one state to another; a process is shown as a named circle.
- RECTANGLE: Represents a source or sink and is a net originator or consumer of data.
- ARROW: Represents the flow of data.
- DOUBLE LINES: Represent a data store.

Table 6.4.1: Basic DFD Diagram Notations

The special character * is used to represent an AND relationship, the need for multiple data flows by a process; + is used to represent an OR relationship between data flows.

6.4.1 Context Diagram
A context flow diagram is the top-level (also known as level 0) data flow diagram. It contains only one process node (known as process 0), which generalizes the function of the entire system in relationship to external entities. The symbols used in a context diagram are:
- A circle to represent the system in terms of a single process.
- Arrows to represent data flow.
- A rectangle to represent any external entity affecting the system; there can be numerous external entities.
- A double line to represent the data store.

Figure 6.4.1: DFD Diagram for multi cloud

6.5 Entity-Relationship Diagram (ER Diagram)
ER modeling is widely used for designing databases. The main focus of ER modeling is the data items in the system and the relationships between them. It aims to create a conceptual schema (also called the ER model) for the data from the users' perspective. The basic notations are described below.

- ENTITY TYPE: Defines a collection (or set) of entities that have the same attributes. Each entity type in the database is described by its name and attributes. The entity set (table) is usually referred to using the same name as the entity type.
- ATTRIBUTE: Represents the structure of the entity type. If an attribute is composite (an attribute having sub-attributes), then its sub-attributes are shown.
- RELATIONSHIP: Represents the relationship between the entity types. Relationship types may also have attributes.
- LINE (Partial Participation): Represents the participating entity types of a relationship.
- KEY ATTRIBUTE: Represents the key of the entity type.

Table 6.5.1: Basic ER Diagram Notations

6.5.1 Key Terms Used in the ER Diagram

Primary Key (Key Attribute): In ER diagrammatic notation, each key attribute has its name underlined inside the oval.

Degree of relationship type: The degree of a relationship type is the number of participating entity types. A relationship type of degree two is called binary, and one of degree three is called ternary.

Cardinality ratio: Specifies the number of relationship instances an entity can participate in: one-to-one, one-to-many or many-to-many. In ER modeling, the main focus is on the data in the problem and the relationships between the data items. Through the ER model, the analyst can expect to gain complete knowledge of all the data that exist in the system and of how the data are related.

Figure 6.5.1: ER Diagram for multi cloud

6.6 Table Design
A database is a collection of interrelated data stored with minimum redundancy to serve many users quickly and efficiently. Database designs are intended to manage large bodies of information and to allow easy and flexible retrieval of data. Every system requires not only the data but also the structure of that data. A Database Management System (DBMS) manages the structured, related files so that many users can retrieve, manipulate and store data. Here MySQL Server is used as the DBMS.

Table: Expense_Summary (table contents not reproduced)


6.7 Algorithm Used
Step 1: When the username and password are entered, the user is redirected to the admin welcome page.
Step 2: A user can register their details in the profile before logging in.
Step 3: The user can create a user id and password, and confirm the password.
Step 4: The user can then upload new files to the multi cloud and download files from the multi cloud.
Step 5: The owner can verify the user's file.
Step 6: The employee can send messages to and receive messages from another employee.
Step 7: The user can then view how to apply for a job and the number of job vacancies.
Step 8: The user can apply for the job online and then participate in the e-test.
Step 9: Finally, the e-test results are viewed and the new employee can register their particular details.

Chapter 7
IMPLEMENTATION

Implementation is one of the most important tasks in the project. It has one key activity: deploying the new system in its target environment. Supporting actions include training end-users and preparing to turn the system over to maintenance personnel. After this phase, the system enters the Operations and Maintenance Phase for the remainder of its operational life. Multiple-release projects require multiple iterations of the implementation phase, one for each release.

7.1 Implementation Modules
Module description:
1. Data Integrity
2. Data Intrusion
3. Service Availability
4. DepSky System Model

Data Integrity: One of the most important issues related to cloud security risks is data integrity. The data stored in the cloud may suffer from damage during transition operations from or to the cloud storage provider. Cachin et al. give examples of the risk of attacks from both inside and outside the cloud provider, such as the recently attacked Red Hat Linux distribution servers. One of the solutions they propose is to use a Byzantine fault-tolerant replication protocol within the cloud. Hendricks et al. state that this solution can avoid data corruption caused by some components in the cloud. However, Cachin et al. claim that using a Byzantine fault-tolerant replication protocol within the cloud is unsuitable, because the servers belonging to cloud providers use the same system installations and are physically located in the same place.

Data Intrusion: According to Garfinkel, another security risk that may occur with a cloud provider, such as the Amazon cloud service, is a hacked password or data intrusion. If someone gains access to an Amazon account password, they will be able to access all of the account's instances and resources. The stolen password thus allows the hacker to erase all the information inside any virtual machine instance for the stolen user account, modify it, or even disable its services. Furthermore, there is a possibility for the user's email (the Amazon user name) to be hacked (see for a discussion of the potential risks of email), and since Amazon allows a lost password to be reset by email, the hacker may still be able to log in to the account after receiving the new reset password.

Service Availability: Another major concern in cloud services is service availability. Amazon mentions in its licensing agreement that it is possible that the service might be unavailable from time to time. The user's web service may terminate for any reason at any time if any user's files break the cloud storage policy. In addition, if any damage occurs to any Amazon web service and the service fails, there will be no charge to Amazon for this failure. Companies seeking to protect services from such failure need measures such as backups or the use of multiple providers.

DepSky System Model: The DepSky system model contains three parts: readers, writers, and four cloud storage providers, where readers and writers are the clients' tasks. Bessani et al. explain the difference between readers and writers for cloud storage: readers can fail arbitrarily (for example, they can fail by crashing, or they can fail from time to time and then display any behavior), whereas writers only fail by crashing.
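To make the multi-cloud idea concrete, the sketch below shows a simple n-out-of-n XOR secret-sharing scheme of the general kind that motivates DepSky: the file bytes can only be reconstructed when the shares from all clouds are combined, so no single (possibly malicious) provider ever sees the data. This is an illustrative sketch only, not the project's or DepSky's actual algorithm, which uses more elaborate erasure coding and key sharing:

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;

    public class XorSecretSharing {

        // Split 'secret' into n shares: any n-1 shares reveal nothing,
        // while XORing all n shares together reconstructs the secret.
        public static byte[][] split(byte[] secret, int n) {
            SecureRandom rnd = new SecureRandom();
            byte[][] shares = new byte[n][secret.length];
            byte[] last = secret.clone();
            for (int i = 0; i < n - 1; i++) {
                rnd.nextBytes(shares[i]);            // random share for cloud i
                for (int j = 0; j < secret.length; j++) {
                    last[j] ^= shares[i][j];         // fold randomness into the last share
                }
            }
            shares[n - 1] = last;
            return shares;
        }

        // XOR all shares back together to recover the secret.
        public static byte[] combine(byte[][] shares) {
            byte[] secret = new byte[shares[0].length];
            for (byte[] share : shares) {
                for (int j = 0; j < secret.length; j++) {
                    secret[j] ^= share[j];
                }
            }
            return secret;
        }

        public static void main(String[] args) {
            byte[] secret = "credit card 1234".getBytes(StandardCharsets.UTF_8);
            byte[][] shares = split(secret, 4);      // e.g. one share per cloud provider
            System.out.println(new String(combine(shares), StandardCharsets.UTF_8));
        }
    }

Note that with an n-out-of-n scheme like this, availability suffers if any one provider fails; DepSky therefore combines secret sharing with erasure codes so that only a subset of the clouds is needed to read the data.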

7.2 Implementation Process
The system is developed in such a way that the existing facilities are enough for implementation. The hardware facilities are sufficient to implement the newly developed system. The first step in implementation is approval from the users.

The workflow of the developed application is as follows:

The application screens (screenshots not reproduced) are:
- Welcome Page
- Client Register
- Client Login
- File Upload
- File Stored in Multi-Cloud
- File Upload to Multi Cloud
- Cloud Owner Login
- User File
- File Verify by Owner
- File Verified
- Provider Login
- File Verify
- Adding Information to Client File
- Error Shown While Verifying the File
- After Verify
- Client Verify File with Key
- Client Verify Server 1
- Client Verify Server 2
- Client Verify Server 3
- View Original File and Download

Chapter 8
TESTING AND RESULTS

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

8.1 Testing Methodologies
The entire process can be divided into five phases:
- Functional Testing
- System Testing
- Unit Testing
- Integration Testing
- Acceptance Testing

8.1.1 Functional Testing
Functional tests provide a systematic demonstration that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:
- Valid Input: Identified classes of valid input must be accepted.
- Invalid Input: Identified classes of invalid input must be rejected.
- Functions: Identified functions must be exercised.
- Output: Identified classes of application outputs must be exercised.
- Systems/Procedures: Interfacing systems or procedures must be invoked.

The organization and preparation of functional tests is focused on requirements, key functions, and special test cases. In addition, systematic coverage of business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.

8.1.2 System Testing
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

8.1.3 Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

8.1.4 Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform, intended to produce failures caused by interface defects. The task of the integration test is to check that components or software applications (e.g. components in a software system or, one step up, software applications at the company level) interact without error.

8.1.5 Acceptance Testing
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

8.2 Test Strategy and Approach
Field testing will be performed manually, and functional tests will be written in detail.

8.3 Test Objectives
All field entries must work properly. Pages must be activated from the identified links. The entry screens, messages and responses must not be delayed.

8.4 Features to Be Tested
Verify that the entries are of the correct format and that no duplicate entries are allowed. All links should take the user to the correct page.

Test Results
All the test cases mentioned below passed successfully. No defects were encountered.

Sl No | Scenario              | Expected Result                  | Actual Result                    | Status
1     | User Registration     | User Registration Successful     | User Registration Successful     | Success
2     | User Login            | Login Successful                 | Login Successful                 | Success
3     | Provider Registration | Provider Registration Successful | Provider Registration Successful | Success
4     | Provider Login        | Provider Login Successful        | Provider Login Successful        | Success
5     | File Upload           | Successful Upload                | Successful Upload                | Success
6     | File Verify           | Successful File Verify           | File Verified Successfully       | Success
7     | View File Status      | Successful                       | Successful                       | Success
8     | Download File         | Successful Download              | File Downloaded Successfully     | Success
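As an illustration of how one of these scenarios could be automated as a unit test, a minimal JUnit 4 sketch is shown below. The UserService class and its login method are hypothetical stand-ins for the project's actual login logic, not code from the project itself:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class LoginTest {

        // Hypothetical stand-in for the application's login check.
        static class UserService {
            boolean login(String user, String password) {
                return "owner1".equals(user) && "secret".equals(password);
            }
        }

        @Test
        public void validCredentialsAreAccepted() {
            // Scenario 2: login with correct credentials succeeds.
            assertTrue(new UserService().login("owner1", "secret"));
        }

        @Test
        public void invalidCredentialsAreRejected() {
            // A remote user with a wrong secret key must be rejected.
            assertFalse(new UserService().login("owner1", "wrong"));
        }
    }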

Chapter 9
CONCLUSION

It is clear that although the use of cloud computing has increased rapidly, cloud computing security is still considered the major issue in the cloud computing environment. Customers do not want to lose their private information as a result of malicious insiders in the cloud. In addition, the loss of service availability has caused many problems for a large number of customers recently. Furthermore, data intrusion leads to many problems for the users of cloud computing. The purpose of this work is to survey the recent research on single clouds and multi-clouds to address the security risks and solutions. We have found that much research has been done to ensure the security of the single cloud and cloud storage, whereas multi-clouds have received less attention in the area of security. We support the migration to multi-clouds due to their ability to decrease the security risks that affect the cloud computing user.

Chapter 10
FUTURE ENHANCEMENTS

When we develop a project, we try our level best to include all the options to make it work efficiently and to meet all the client requirements. But as time goes on, technology develops and the client requirements change, so the application must be designed in such a way that we are always able to make the required changes whenever necessary. For future work, we aim to provide a framework to supply a secure cloud database that will guarantee the prevention of the security risks facing the cloud computing community. This framework will apply multi-clouds and the secret sharing algorithm to reduce the risk of data intrusion and the loss of service availability in the cloud, and to ensure data integrity.

Chapter 11
BIBLIOGRAPHY

11.1 Books Referred
- Software Engineering, Roger S. Pressman, McGraw Hill
- The Unified Modeling Language User Guide, Grady Booch, James Rumbaugh, Ivar Jacobson
- Software Project Management, Walker Royce

11.2 Websites
- http://java.sun.com
- http://www.sourcefordgde.com
- http://www.networkcomputing.com/
- http://www.roseindia.com/
- http://www.java2s.com/
- http://stackoverflow.com/

NMAMIT, Nitte, Department of MCA, 2014