Collaboration with Cloud Computing || Cloud Computing


Posted on 19-Feb-2017






<ul><li><p>CHAPTER 2: Cloud Computing</p><p>Collaboration with Cloud Computing. © 2014 Elsevier Inc. All rights reserved.</p><p>INFORMATION INCLUDED IN THIS CHAPTER: Protocols used; Defining the cloud; Software as a service; Infrastructure as a service; Platform as a service</p><p>INTRODUCTION</p><p>It's nothing new to suggest that more and more of life revolves around our interactions with computers and, more specifically, with other computers and people over the Internet. We are finding more and more ways to get things done using the Internet connections that have become nearly ubiquitous. Coffee shops, fast-food places, more traditional restaurants, health professionals: what do they all have in common? Often, they offer complimentary wireless access, providing us with a way to get online. Additionally, people are getting so-called smartphones, which also provide them with a way to stay online from nearly anywhere they might be. We can complain about texting and driving, but it's not just SMS. How many people do you know who Facebook and drive?</p><p>So, what does all this have to do with the cloud? Well, we are more and more connected, and many of the ways we are connected these days involve some sort of "in the cloud" technology. Businesses are also finding ways to streamline their operations and reduce infrastructure, operations footprint, and overall cost. They are doing this in a number of ways because the cloud (and I'll stop putting it in quotes soon) has a number of facets. It's not simply a place to stick all of your iTunes library or a bunch of documents you want to share with someone else. It has become a lot more, and a number of technologies have allowed all of this to happen, finally.</p><p>What makes me say finally? Well, the reality is that technology has been going back and forth between local and remote for a while now. Life started with everything centralized, since all we had was very large systems. There was no way to have local storage, so it had to be centralized. In the 1980s, the arrival of PCs allowed for local storage and a movement away from the exclusively centralized, mainframe approach, although it took some time for local storage to get big enough and cheap enough to really be effective. Applications were a mix of local and centralized, though.</p><p>In the 1990s, there was again a movement toward centralized, remote applications as the Web took off. There was a lot of talk about thin clients and dumb terminals making use of applications living on remote servers, using Web interfaces or even remote procedures or methods. The thin client and dumb terminal approach never really took off, and there was still a preponderance of local storage and local applications. The mix of financials and functionality was never really able to make the push to using remote applications over local applications. In the late 1990s and early 2000s, I was working at a large network service provider, and we started talking about offering services "in the cloud," which was the first time I had heard that term. When the dot-com collapse really started taking everyone down, the potential clients and the resources necessary to create those services and applications were just no longer available. Again, services and applications remained local.</p><p>Now, we're into the 2010s, and you'll hear "in the cloud" everywhere you go. There is an enormous movement of applications and functionality to a remote model, with remote functionality and remote storage. What is it that has made that possible? Some of it has been the rise in virtualization technology, some has been an improvement in the protocols, and there are also the economies of scale that come with enormous storage and processing power.
On top of that, last-mile network services like cable modem, Digital Subscriber Line, and cellular data networking have provided the ability to have Internet services nearly everywhere you go. All of that storage and processing can be shared across a large number of clients, and it makes sense as a business opportunity as well as a way to cut costs and reduce internal risk to the organization, especially when you consider the large number of potential customers and their ability to consume services from anywhere.</p><p>The protocols that made it possible</p><p>Cloud services didn't just happen overnight, of course, and a number of incremental advances in how we offer services and applications over networks have made cloud services possible. While the origins of many of these concepts go back decades, at least in their conceptual stages, the concrete introduction of some of the really important ones began in the late 1980s, just as the Internet itself was really starting to come together, with all of the disparate research and educational networks being pulled together into a larger, single network that was being called the Internet.</p><p>HTTP and HTML</p><p>Hypertext has been a concept since at least the 1960s, and it was made possible in the late 1980s with the introduction of a protocol, a language, and a set of applications that brought it all together. Initially, this was done simply to share information between scientists and researchers around the world. Eventually, the rest of the world started to figure out all of the cool and interesting things that could be done using a couple of very simple protocols. The implementation of hypertext in language form is the HyperText Markup Language (HTML). A basic example of HTML would be something like</p><p><html>
<head>
<title>My page</title>
</head>
<body text="#000000" bgcolor="#ffffff">
Hello, everybody!
</body>
</html></p><p>You can see HTML is a reasonably easy-to-read, structured formatting language.
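As a sketch of how software sees that structure, Python's standard-library parser can walk a page like the one above and report each tag together with its parameters. The page text is embedded in the code so it runs standalone; the colors are illustrative values, not prescribed by HTML.

```python
from html.parser import HTMLParser

# A tiny page, embedded as a string so the example is self-contained.
PAGE = """<html><head><title>My page</title></head>
<body text="#000000" bgcolor="#ffffff">Hello, everybody!</body></html>"""

class TagDumper(HTMLParser):
    """Collect every opening tag together with its parameters."""
    def __init__(self):
        super().__init__()
        self.seen = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as (name, value) pairs; a dict is easier to read.
        self.seen.append((tag, dict(attrs)))

dumper = TagDumper()
dumper.feed(PAGE)
print(dumper.seen)
```

The parser reduces the markup to exactly the structure the text describes: elements named by tags, each optionally carrying parameters.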
It uses tags to indicate elements, and each tag can have parameters that alter the behavior of the tag. An example is the body tag, which has two parameters here. The first, text, sets the color of the text, while the second, bgcolor, sets the color of the background of the page. It's not challenging to learn basic HTML, but basic HTML wasn't enough, so it evolved into Dynamic HTML (DHTML), because static text wasn't particularly interesting and it was thought that people wanted movement and interaction in their Web pages. DHTML is a combination of HTML with a client-side scripting language like JavaScript, providing a more interactive experience with what was previously a static page.</p><p>It's important to note here that HTML continues to go through a lot of evolution, and the latest version, HTML5, is completely different from previous versions. The parameters mentioned above are no longer supported in HTML5. HTML5 is part of the continued evolution toward more interactive Web pages. HTML5 has not yet stabilized as a standard at the time of this writing, though candidate recommendations are being released. The expectation is that a final, stable standard will be available in 2014. In addition to markup, HTML5 has support for application programming interfaces (APIs), which previous versions of HTML did not have.</p><p>The HTML is the visual representation, but we still need a way to get the HTML to us. Certainly, existing protocols like FTP could have been used, but they didn't offer the right level of information necessary to provide functions like detailed errors or redirection. When you are interacting with a Web server, you may generate errors on the server for a number of reasons, including the page not being found or a more serious error with the server itself. You might also generate a server-side error if the programmatic components on the server side had an error in them.
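That richer error signaling is carried by HTTP's numeric status codes, each of which has a standard meaning; as a quick illustration, Python's standard library ships the full code-to-phrase table:

```python
from http.client import responses

# http.client.responses maps every registered HTTP status code to its
# standard reason phrase: success, redirection, client and server errors.
for code in (200, 301, 404, 500):
    print(code, responses[code])
```

A 404 distinguishes "page not found" from a 500 "something broke on the server," which is exactly the kind of detail older retrieval protocols could not express.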
These types of interactions would be impossible with previous document retrieval protocols like FTP. Additionally, FTP was designed in a way that might cause problems on the application side. As a result of all of this, Tim Berners-Lee and his team at the European Organization for Nuclear Research (CERN) developed a new protocol, the HyperText Transfer Protocol (HTTP), designed to transfer HTML documents from a server to a client. The initial version of HTTP was very simple, providing a way to make a request to the server and get a response back. Later, the protocol was altered to include more information, like metadata, extended negotiation, and extended operations.</p><p>Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Connection: keep-alive
Host: www.washere.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36</p><p>The listing shows a sample HTTP request using a current version of the protocol. The original published version was 0.9; version 1.0 was published in 1996, followed by 1.1 in 1997, which is where we are today, after some modifications in 1999. HTTP is one of the fundamental building blocks of the way applications are delivered today. The methods that HTTP allows, in addition to the data that can be conveyed using HTTP, make it a very flexible protocol for the delivery of information and functionality in interacting with users. Being reasonably simple and extensible has made it very useful.</p><p>XML</p><p>At this point, we have a way of describing what a page of information should look like and a way of delivering pages to clients so they can be displayed. There are even programming languages that can be used. One thing missing from what we've been talking about so far is a way of transmitting complex data from the client to the server and vice versa.
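Plain HTTP does carry name=value form parameters, but nothing more structured than that; a minimal sketch with Python's standard library (the field names here are illustrative):

```python
from urllib.parse import urlencode, parse_qs

# HTTP form/query parameters: each value gets a name, but there is no way
# to express nesting, types, or relationships between the fields.
params = {"firstname": "Zoey", "lastname": "Messier", "zip": "33992"}
body = urlencode(params)
print(body)            # the flat wire format sent to the server
print(parse_qs(body))  # every value comes back as an untyped list of strings
```

Everything is a flat string pair on the wire, which is the gap the next section's data format fills.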
HTTP allows for passing parameters but, while you can provide a variable name for each parameter, you can't describe the data at all. The even bigger problem with just using plain HTTP and being stuck with form parameters is that you need user action to trigger the content being sent off to the Web server in order to get data back.</p><p>In 1999, Microsoft came up with a way to handle interaction behind the scenes and send data to the server while loading the response back into the script, rather than having the browser handle it and render it as it saw fit. The script could update components in the page based on the Document Object Model (DOM), which provided programmatic access to different parts of the page. You may have a frame on your page that you want to update with data you've received from the server. You could access the frame and elements within it using the DOM. The data being sent back and forth is formatted using the eXtensible Markup Language (XML), which descends from the Standard Generalized Markup Language (SGML), as does HTML. By way of a historical note of interest, SGML was preceded by the Generalized Markup Language, developed in the late 1960s by researchers at IBM.</p><p>XML is written in such a way that you can present data while also describing it in a detailed and structured way. Data can be sent back and forth with neither side needing to know what comes next, as long as both sides understand the language used to describe the data. Let's take a look at some XML and then we can pull it apart from there.</p><p><?xml version="1.0"?>
<!-- A simple contact record -->
<contact>
<firstname>Zoey</firstname>
<lastname>Messier</lastname>
<street>Somewhere Street</street>
<city>Wubble</city>
<state>WV</state>
<zip>33992</zip>
</contact></p><p>You can see from the code that each piece of data has the type of data associated with it. Also, the beginning and end are clearly marked with a beginning tag and an ending tag.
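Because every field is named by its tags, a consumer can pull values out by name rather than by position; a minimal sketch with Python's standard library (the record is embedded so the code runs standalone, and the element names are illustrative):

```python
import xml.etree.ElementTree as ET

# An XML contact record, embedded as a string for self-containment.
RECORD = """<?xml version="1.0"?>
<!-- A simple contact record -->
<contact>
  <firstname>Zoey</firstname>
  <lastname>Messier</lastname>
  <street>Somewhere Street</street>
  <city>Wubble</city>
  <state>WV</state>
  <zip>33992</zip>
</contact>"""

root = ET.fromstring(RECORD)
# Fields are looked up by element name, so the order in which the
# elements appear in the document does not matter to the consumer.
print(root.find("zip").text, root.find("firstname").text)
```

This is the order-independence the surrounding text describes: the parser never needs the fields to arrive in a fixed sequence.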
XML also provides a way to describe the version of XML in use, as well as comments, which you can see in the first and second lines of the listing. One advantage of using XML is that each of the entries is described, meaning that it doesn't matter which order the data is presented in; it's not a question of getting a stream of data and needing it to be in the correct order so that it can be parsed correctly. XML can also provide something of a rudimentary database, where data can be stored in a way that it can be retrieved by data field, and you can store a number of records in a single file with all of the records fully described.</p><p>The way XML is sent back and forth is through JavaScript. Google took the approach that Microsoft was using with XML and HTTP and changed it slightly. Rather than using an ActiveX control to send the data back and forth, Google opted to use JavaScript to implement its Gmail and Maps services. Microsoft had been using XML and HTTP to implement Outlook Web Access, among other things, for several years by that point, but since they were using ActiveX controls, their approach wasn't as standards compliant as the one Google used, since not all browsers or operating systems supported ActiveX controls.</p><p>RESTful services</p><p>One of the challenges with HTTP has always been that it is an entirely stateless protocol, meaning it has no internal mechanisms to keep track of whether a particular endpoint had previously requested anything from the server, who that endpoint actually was, or whether it was trusted. All of those factors had to be sorted out outside of HTTP. One of the early mechanisms for maintaining that sort of data was something called a cookie, which is a small piece of data that is stored on the client at the request of the server.
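Both halves of that cookie exchange can be sketched with Python's standard library; the "cart" name and value here are hypothetical, chosen to match the shopping-cart example:

```python
from http.cookies import SimpleCookie

# Server side: ask the client to remember a (hypothetical) cart identifier
# by sending down a Set-Cookie header.
server_cookie = SimpleCookie()
server_cookie["cart"] = "item-42"
header = server_cookie.output()
print(header)

# Client side: store the value and replay it on later requests, so the
# otherwise-stateless server can recognize the client again.
client_cookie = SimpleCookie()
client_cookie.load("cart=item-42")
print(client_cookie["cart"].value)
```

The server never stores the context itself; the client carries it back on every request.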
When necessary, the client will transmit the data in the cookie to the server so that information about the client can be retrieved, including, for example, the list of items in a shopping cart.</p><p>There was a need for an architectural model that allowed clients and servers to share state information, meaning they both needed to be aware of the current transactional state of their connection. In 2000, Roy Fielding defined the term representational state transfer in his doctoral dissertation, and it became the specification for an architectural style called REST. REST specifies a way of developing an application in a general way by outlining some core concepts that are critical, including scalability of interactions, generalization of interfaces, and independent deployment of components. A system using a RESTful architecture has the following constraints but allows the remainder of the system to be developed and deployed as needed.</p><p>Client-server: Clients and servers are defined, providing a clear delineation between responsibilities. Clients are responsible for the interface, and servers are responsible for data storage.</p><p>Stateless: The server isn't responsible for storing any context information about the client. The client is responsible for storing state information and providing the context to the server when it generates and transmits a request.</p><p>Cacheable: The client should be able to cache responses. This requires that any response from the server be tagged as either cacheable or not cacheable.</p><p>Layered system: This a...</p></li></ul>
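A sketch of what the first of those constraints look like in practice: every request carries its own context, so any server replica can answer it without stored session state. The endpoint path, token, and header choices below are illustrative assumptions, not taken from the text.

```python
# Each REST-style request is self-contained: the client, not the server,
# holds the session context (stateless) and states which representation
# it wants (generalized interface).
def build_rest_request(resource, token):
    path = f"/api/{resource}"          # hypothetical resource path
    headers = {
        "Accept": "application/xml",          # desired representation
        "Authorization": f"Bearer {token}",   # client-held context
    }
    return "GET", path, headers

method, path, headers = build_rest_request("orders/17", "abc123")
print(method, path, headers["Authorization"])
```

Because the server keeps nothing between calls, the same request can be routed to any layer or replica, which is what makes RESTful systems scale.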

