
International Journal of Ethics in Engineering & Management Education, Website: www.ijeee.in (ISSN: 2348-4748, Volume 1, Issue 8, August 2014)

    Method for reducing network traffic over low bandwidth network file system

Mohammed Issaq, M.Tech (CSE) Scholar, Dept. of CSE, TITS, JNTUH, AP, India, [email protected]

Vangipuram Sravan Kiran, Assistant Professor, Dept. of CSE, TITS, JNTUH, AP, India, [email protected]

Abstract: The main aim of this project is PACK, which is based on a novel TRE technique that allows the client to use newly received chunks to identify previously received chunk chains, which in turn can be used as reliable predictors of future transmitted chunks. Cloud computing offers its customers an economical and convenient pay-as-you-go service model, known also as usage-based pricing. Cloud customers pay only for the actual use of computing resources, storage, and bandwidth, according to their changing needs, utilizing the cloud's scalable and elastic computational capabilities. In particular, data transfer costs (i.e., bandwidth) are an important issue when trying to minimize costs. Cloud customers, applying a judicious use of the cloud's resources, are motivated to use various traffic reduction techniques, in particular traffic redundancy elimination (TRE), for reducing bandwidth costs.

    Key Words: TRE, PACK, Cloud, TCP, TPA

    1. INTRODUCTION

In this paper, we present PACK (Predictive ACKs), a novel end-to-end traffic redundancy elimination (TRE) system designed for cloud computing customers. Cloud-based TRE needs to apply a judicious use of cloud resources so that the bandwidth cost reduction, combined with the additional cost of TRE computation and storage, is optimized. PACK's main advantage is its capability of offloading the cloud-server TRE effort to end clients, thus minimizing the processing costs induced by the TRE algorithm. Unlike previous solutions, PACK does not require the server to continuously maintain clients' status. This makes PACK very suitable for pervasive computation environments that combine client mobility and server migration to maintain cloud elasticity. PACK is based on a novel TRE technique, which allows the client to use newly received chunks to identify previously received chunk chains, which in turn can be used as reliable predictors of future transmitted chunks. We present a fully functional PACK implementation, transparent to all TCP-based applications and network devices. Finally, we analyze PACK benefits for cloud users, using traffic traces from various sources.

Traffic redundancy stems from common end-user activities, such as repeatedly accessing, downloading, uploading (i.e., backup), distributing, and modifying identical or similar information items (documents, data, Web pages, and video). TRE is used to eliminate the transmission of redundant content and, therefore, to significantly reduce the network cost. In most common TRE solutions, both the sender and the receiver examine and compare signatures of data chunks, parsed according to the data content, before their transmission. When redundant chunks are detected, the sender replaces the transmission of each redundant chunk with its strong signature. Commercial TRE solutions are popular in enterprise networks and involve the deployment of two or more proprietary-protocol, state-synchronized middle-boxes at both the intranet entry points of data centers and branch offices, eliminating repetitive traffic between them. We present a novel receiver-based end-to-end TRE solution that relies on the power of predictions to eliminate redundant traffic between the cloud and its end-users.
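To make this signature-based exchange concrete, the following Python sketch shows the generic sender/receiver behaviour described above, not the PACK protocol itself: the sender ships a 20-byte SHA-1 signature instead of any chunk the receiver is already known to hold. The function names, the shared set of signatures, and the tuple-based message framing are illustrative assumptions only.

import hashlib

def sha1(chunk: bytes) -> bytes:
    # Strong chunk signature used by both ends.
    return hashlib.sha1(chunk).digest()

def encode_stream(chunks, known_signatures):
    # Sender side: replace any chunk the receiver is already known to
    # hold with its 20-byte signature; send new chunks in full.
    messages = []
    for chunk in chunks:
        sig = sha1(chunk)
        if sig in known_signatures:
            messages.append(("SIG", sig))        # redundant chunk suppressed
        else:
            messages.append(("DATA", chunk))     # new content sent in full
            known_signatures.add(sig)            # both ends remember it now
    return messages

def decode_stream(messages, chunk_cache):
    # Receiver side: expand signatures back into data from the local cache.
    data = b""
    for kind, payload in messages:
        if kind == "DATA":
            chunk_cache[sha1(payload)] = payload
            data += payload
        else:
            data += chunk_cache[payload]
    return data

PACK, described next, moves most of this bookkeeping to the receiver, so that the cloud server does not have to keep per-client state.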

    2. PROPOSED SYSTEM

In this paper, we present a novel receiver-based end-to-end TRE solution that relies on the power of predictions to eliminate redundant traffic between the cloud and its end-users. In this solution, each receiver observes the incoming stream and tries to match its chunks with a previously received chunk chain or a chunk chain of a local file. Using the long-term chunk metadata kept locally, the receiver sends the server predictions that include chunks' signatures and easy-to-verify hints of the sender's future data. On the receiver side, we propose a new computationally lightweight chunking (fingerprinting) scheme termed PACK chunking. PACK chunking is a new alternative to Rabin fingerprinting, traditionally used by RE applications.
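The exact PACK chunking parameters are not given in this paper, so the sketch below only illustrates the general idea of a computationally lightweight, content-defined chunking scheme: a cheap shift-and-XOR rolling state is updated per byte, and a chunk boundary (anchor) is declared whenever its low bits are all zero. The 13-bit mask (an anchor roughly every 8 kB of content) and the minimum/maximum chunk bounds are assumptions for illustration, not the published PACK values.

ANCHOR_MASK = 0x1FFF                 # assumed 13-bit mask: an anchor about every 8 kB
MIN_CHUNK, MAX_CHUNK = 2048, 65536   # assumed bounds, not taken from the paper

def chunk_stream(data: bytes):
    # Content-defined chunking: boundaries depend on the bytes themselves,
    # so an insertion or deletion only disturbs nearby chunks.
    chunks = []
    start, state = 0, 0
    for i, byte in enumerate(data):
        # Shift-and-XOR update; bytes older than ~32 positions have shifted
        # out of the 32-bit state, so this acts like a short sliding window.
        state = ((state << 1) ^ byte) & 0xFFFFFFFF
        length = i - start + 1
        if (length >= MIN_CHUNK and (state & ANCHOR_MASK) == 0) or length >= MAX_CHUNK:
            chunks.append(data[start:i + 1])
            start, state = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks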

Cloud computing offers its customers an economical and convenient pay-as-you-go service model, known also as usage-based pricing. Cloud customers pay only for the actual use of computing resources, storage, and bandwidth, according to their changing needs, utilizing the cloud's scalable and elastic computational capabilities. In particular, data transfer costs (i.e., bandwidth) are an important issue when trying to minimize costs. Cloud customers, applying a judicious use of the cloud's resources, are motivated to use various traffic reduction techniques, in particular traffic redundancy elimination (TRE), for reducing bandwidth costs. The main aim of this project is PACK, which is based on a novel TRE technique that allows the client to use newly received chunks to identify previously received chunk chains, which in turn can be used as reliable predictors of future transmitted chunks.

Advantages of the Proposed System:

Our approach can reach processing speeds over 3 Gb/s, at least 20% faster than Rabin fingerprinting.

The receiver-based TRE solution addresses mobility problems common to quasi-mobile desktop/laptop computing environments.

One of them is cloud elasticity, due to which servers are dynamically relocated around the federated cloud, thus causing clients to interact with multiple changing servers.

We implemented, tested, and performed realistic experiments with PACK within a cloud environment. Our experiments demonstrate a cloud cost reduction achieved at a reasonable client effort while gaining additional bandwidth savings at the client side.

Our implementation utilizes the TCP Options field, supporting all TCP-based applications such as Web, video streaming, P2P, e-mail, etc.

We demonstrate that our solution achieves 30% redundancy elimination without significantly affecting the computational effort of the sender, resulting in a 20% reduction of the overall cost to the cloud customer.

    3. FUNCTIONAL REQUIREMENTS

Cloud computing is the long-dreamed vision of computing as a utility, where users can remotely store their data in the cloud so as to enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources. By outsourcing data, users can be relieved of the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large amount of outsourced data makes data integrity protection in cloud computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective Third Party Auditor (TPA), the following two fundamental requirements have to be met:

1. The TPA should be able to efficiently audit the cloud data storage without demanding a local copy of the data, and introduce no additional online burden to the cloud user.

2. The third-party auditing process should bring in no new vulnerabilities toward user data privacy. In this paper we extend the previous system by using an automatic blocker for privacy-preserving public auditing for data storage security in cloud computing. We utilize the public-key-based homomorphic authenticator and uniquely integrate it with the random masking technique and an automatic blocker, so as to achieve a privacy-preserving public auditing system for cloud data storage security while keeping all of the above requirements in mind. Extensive security and performance analysis shows that the proposed schemes are provably secure and highly efficient. A simplified sketch of the resulting challenge-response flow is given below.
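The actual scheme rests on public-key homomorphic authenticators combined with random masking, which cannot be reproduced faithfully in a few lines. Purely to make the challenge-response structure concrete, the toy Python sketch below lets the auditor hold only per-block SHA-256 digests and spot-check a random sample of blocks; the block size, sample size, and all function names are illustrative assumptions, and the sketch deliberately omits the homomorphic authenticators, the random masking, and therefore the privacy guarantee of the real protocol.

import hashlib
import os
import random

BLOCK_SIZE = 4096      # assumed block size for the illustration

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def owner_preprocess(data: bytes):
    # Data owner, before outsourcing: one digest per block.
    # The auditor later keeps only these digests, never the data itself.
    return [hashlib.sha256(block).digest() for block in split_blocks(data)]

def tpa_challenge(num_blocks: int, sample_size: int = 10):
    # TPA: challenge a random subset of block indices.
    return random.sample(range(num_blocks), min(sample_size, num_blocks))

def server_respond(stored_data: bytes, indices):
    # Cloud server: return the challenged blocks.  A real protocol would
    # return one aggregated, masked proof instead of raw blocks.
    blocks = split_blocks(stored_data)
    return {i: blocks[i] for i in indices}

def tpa_verify(digests, response):
    # TPA: accept only if every challenged block matches its digest.
    return all(hashlib.sha256(block).digest() == digests[i]
               for i, block in response.items())

# One audit round over randomly generated sample data.
data = os.urandom(BLOCK_SIZE * 50)
digests = owner_preprocess(data)
indices = tpa_challenge(len(digests))
print("audit passed:", tpa_verify(digests, server_respond(data, indices)))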

    Fig.1. Functional Requirements

    4. NON FUNCTIONAL REQUIREMENTS

To enable privacy-preserving public auditing for cloud data storage under the aforementioned model, our protocol design should achieve the following security and performance guarantees:

1. Public auditability: to allow the TPA to verify the correctness of the cloud data on demand without retrieving a copy of the whole data or introducing additional online burden to the cloud users.

2. Storage correctness: to ensure that there exists no cheating cloud server that can pass the audit from the TPA without indeed storing users' data intact.

3. Privacy preserving: to ensure that the TPA cannot derive users' data content from the information collected during the auditing process.

4. Batch auditing: to enable the TPA with secure and efficient auditing capability to cope with multiple auditing delegations from a possibly large number of different users simultaneously.

5. Lightweight: to allow the TPA to perform auditing with minimum communication and computation overhead.


    Fig.2. Non Functional Requirements

    5. IMPLEMENTATION

5.1. Receiver Chunk Store: PACK uses a new chains scheme, in which chunks are linked to other chunks according to their last received order. The PACK receiver maintains a chunk store, which is a large cache of chunks and their associated metadata. Chunk metadata includes the chunk's signature and a (single) pointer to the successive chunk in the last received stream containing this chunk. Caching and indexing techniques are employed to efficiently maintain and retrieve the stored chunks, their signatures, and the chains formed by traversing the chunk pointers. When new data is received and parsed into chunks, the receiver computes each chunk's signature using SHA-1. At this point, the chunk and its signature are added to the chunk store. In addition, the metadata of the previously received chunk in the same stream is updated to point to the current chunk. The unsynchronized nature of PACK allows the receiver to map each existing file in the local file system to a chain of chunks, saving in the chunk store only the metadata associated with the chunks.
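A minimal in-memory sketch of such a chunk store is given below, assuming SHA-1 signatures and a single next-chunk pointer per chunk as described; the class and field names, the dictionary-based indexing, the prediction limit of 8 chunks, and the absence of any eviction policy are simplifications for illustration.

import hashlib
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ChunkMeta:
    # Per-chunk metadata: its signature and one pointer to the chunk that
    # followed it in the last received stream (the chain link).
    signature: bytes
    next_signature: Optional[bytes] = None

class ChunkStore:
    # Simplified receiver chunk store; the real store is a large cache
    # with indexing and eviction, which are omitted here.
    def __init__(self):
        self.chunks: Dict[bytes, bytes] = {}    # signature -> chunk data
        self.meta: Dict[bytes, ChunkMeta] = {}  # signature -> metadata

    def add(self, chunk: bytes, prev_signature: Optional[bytes]) -> bytes:
        sig = hashlib.sha1(chunk).digest()
        self.chunks[sig] = chunk
        self.meta.setdefault(sig, ChunkMeta(signature=sig))
        if prev_signature is not None and prev_signature in self.meta:
            # Link the previously received chunk to the current one so the
            # chain can later be followed for predictions.
            self.meta[prev_signature].next_signature = sig
        return sig

    def chain_after(self, signature: bytes, limit: int = 8):
        # Walk the chain that starts after the given chunk, up to `limit`
        # chunks; these are the candidates the receiver will predict.
        result, current = [], self.meta.get(signature)
        while current is not None and current.next_signature is not None and len(result) < limit:
            result.append(current.next_signature)
            current = self.meta.get(current.next_signature)
        return result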

5.2. Receiver Algorithm: Upon the arrival of new data, the receiver computes the respective signature for each chunk and looks for a match in its local chunk store. If the chunk's signature is found, the receiver determines whether it is part of a formerly received chain, using the chunk's metadata. If so, the receiver sends a prediction to the sender for several next expected chain chunks. Upon a successful prediction, the sender responds with a PRED-ACK confirmation message. Once the PRED-ACK message is received and processed, the receiver copies the corresponding data from the chunk store to its TCP input buffers, placing it according to the corresponding sequence numbers. At this point, the receiver sends a normal TCP ACK with the next expected TCP sequence number. In case the prediction is false, or one or more predicted chunks are already sent, the sender continues with normal operation, e.g., sending the data, without sending a PRED-ACK message.
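Reusing the ChunkStore and chunk_stream sketches above, the receiver logic can be outlined as follows; the send_prediction callback, the single-byte hint, and the prediction window of a few chunks are assumptions made only for this sketch, since the paper does not fix those details here.

import hashlib

def on_data_received(data: bytes, store, send_prediction, prev_sig=None):
    # Receiver sketch: chunk the incoming data, look every chunk up in the
    # local store, and when a chunk belongs to a known chain, predict the
    # next expected chunks to the sender via the send_prediction callback.
    for chunk in chunk_stream(data):
        sig = hashlib.sha1(chunk).digest()
        if sig in store.chunks:
            for predicted_sig in store.chain_after(sig):
                predicted = store.chunks[predicted_sig]
                send_prediction(signature=predicted_sig,
                                length=len(predicted),
                                hint=predicted[-1])  # cheap hint (assumed: last byte)
        # Store the chunk and extend the chain from the previous chunk.
        prev_sig = store.add(chunk, prev_sig)
    return prev_sig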

5.3. Sender Algorithm: When a sender receives a PRED message from the receiver, it tries to match the received predictions to its buffered (yet to be sent) data. For each prediction, the sender determines the corresponding TCP sequence range and verifies the hint. Upon a hint match, the sender calculates the more computationally intensive SHA-1 signature for the predicted data range and compares the result to the signature received in the PRED message. Note that in case the hint does not match, a computationally expensive operation is saved. If the two SHA-1 signatures match, the sender can safely assume that the receiver's prediction is correct. In this case, it replaces the corresponding outgoing buffered data with a PRED-ACK message.
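A matching sender-side sketch is shown below for a single prediction; it assumes the prediction carries the predicted TCP sequence number and length together with the hint and SHA-1 signature produced by the receiver sketch above, and it returns either the original data or a PRED-ACK marker.

import hashlib

def handle_pred(buffered: bytes, seq_start: int, pred: dict):
    # pred is assumed to hold: "seq" (predicted TCP sequence number),
    # "length", "hint" (last byte of the predicted data), and "signature".
    lo = pred["seq"] - seq_start          # offset of the predicted range
    hi = lo + pred["length"]
    if lo < 0 or hi > len(buffered) or hi <= lo:
        return ("DATA", buffered)         # prediction outside the send buffer
    candidate = buffered[lo:hi]
    # Cheap hint check first: a mismatch here saves the SHA-1 computation.
    if candidate[-1] != pred["hint"]:
        return ("DATA", buffered)
    # Only now pay for the computationally intensive strong signature.
    if hashlib.sha1(candidate).digest() == pred["signature"]:
        return ("PRED_ACK", pred["signature"])   # suppress the actual bytes
    return ("DATA", buffered)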

5.4. Wire Protocol: In order to be transparent to existing firewalls and to minimize overheads, we use the TCP Options field to carry the PACK wire protocol. It is clear that PACK can also be implemented above the TCP level while using similar message types and control fields. The PACK wire protocol operates under the assumption that the data is redundant. First, both sides enable the PACK option during the initial TCP handshake by adding a PACK-permitted flag to the TCP Options field. Then, the sender sends the (redundant) data in one or more TCP segments, and the receiver identifies that a currently received chunk is identical to a chunk in its chunk store. The receiver, in turn, triggers a TCP ACK message and includes the prediction in the packet's Options field. Last, the sender sends a confirmation message (PRED-ACK) replacing the actual data.
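The paper does not specify the exact on-the-wire layout, so the sketch below only illustrates how PACK-style messages could be packed into the TCP Options field; the experimental option kind 253 (a value reserved for experimentation by RFC 4727), the message type codes, and the field order are all assumptions.

import struct

OPT_KIND = 253            # assumed: experimental TCP option kind (RFC 4727)
MSG_PACK_PERMITTED = 1    # exchanged during the initial TCP handshake
MSG_PRED = 2              # receiver -> sender prediction
MSG_PRED_ACK = 3          # sender -> receiver confirmation

def encode_pred_option(seq: int, length: int, hint: int, signature: bytes) -> bytes:
    # Pack one PRED message as a TCP option: kind, total length, payload.
    payload = struct.pack("!BIIB20s", MSG_PRED, seq, length, hint, signature)
    return struct.pack("!BB", OPT_KIND, 2 + len(payload)) + payload

def decode_option(option: bytes) -> dict:
    kind, opt_len = struct.unpack("!BB", option[:2])
    assert kind == OPT_KIND and opt_len == len(option)
    msg, seq, length, hint, signature = struct.unpack("!BIIB20s", option[2:])
    return {"msg": msg, "seq": seq, "length": length,
            "hint": hint, "signature": signature}

A 32-byte option of this form fits within the 40-byte TCP Options space, though it would leave little room for other options in the same segment.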

    6. INPUT DESIGN

Input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:

What data should be given as input?

How should the data be arranged or coded?

The dialogue to guide the operating personnel in providing input.

Methods for preparing input validations and the steps to follow when errors occur.


    Objectives

1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for obtaining correct information from the computerized system.

2. It is achieved by creating simple screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record viewing facilities.

3. When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when required, so that the user is never left confused. Thus, the objective of input design is to create an input layout that is easy to follow.

    7. OUTPUT DESIGN

A quality output is one that meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need and also as hard-copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use effectively. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create documents, reports, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following objectives:

Convey information about past activities, current status, or projections of the future.

Signal important events, opportunities, problems, or warnings.

Trigger an action.

Confirm an action.

    8. CONCLUSION

Cloud computing is expected to trigger high demand for TRE solutions, as the amount of data exchanged between the cloud and its users is expected to increase dramatically. The cloud environment redefines the TRE system requirements, making proprietary middle-box solutions inadequate. Consequently, there is a rising need for a TRE solution that reduces the cloud's operational cost while accounting for application latencies, user mobility, and cloud elasticity.

    REFERENCES

[1]. E. Zohar, I. Cidon, and O. Mokryn, "The power of prediction: Cloud bandwidth and cost reduction," in Proc. SIGCOMM, 2011, pp. 86-97.

[2]. M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A view of cloud computing," Commun. ACM, vol. 53, no. 4, pp. 50-58, 2010.

[3]. U. Manber, "Finding similar files in a large file system," in Proc. USENIX Winter Tech. Conf., 1994, pp. 1-10.

[4]. N. T. Spring and D. Wetherall, "A protocol-independent technique for eliminating redundant network traffic," in Proc. SIGCOMM, 2000, vol. 30, pp. 87-95.

[5]. A. Muthitacharoen, B. Chen, and D. Mazières, "A low-bandwidth network file system," in Proc. SOSP, 2001, pp. 174-187.

[6]. E. Lev-Ran, I. Cidon, and I. Z. Ben-Shaul, "Method and apparatus for reducing network traffic over low bandwidth links," US Patent 7636767, Nov. 2009.

[7]. S. McCanne and M. Demmer, "Content-based segmentation scheme for data compression in storage and transmission including hierarchical segment representation," US Patent 6828925, Dec. 2004.

[8]. R. Williams, "Method for partitioning a block of data into subblocks and for storing and communicating such subblocks," US Patent 5990810, Nov. 1999.

[9]. Juniper Networks, Sunnyvale, CA, USA, "Application acceleration," 1996 [Online]. Available: http://www.juniper.net/us/en/products-services/application-acceleration/

[10]. Blue Coat Systems, Sunnyvale, CA, USA, "MACH5," 1996 [Online]. Available: http://www.bluecoat.com/products/mach5

    About the authors:

Mohammed Issaq is pursuing his M.Tech at Turbo Machinery Institute of Technology & Sciences, JNTU Hyderabad. He completed his B.Tech at HI Point College of Engineering, Moinabad, RR District, under JNTUH, in 2011, and is now an M.Tech scholar at Turbo Machinery Institute of Technology & Sciences, JNTU Hyderabad.

Vangipuram Sravan Kiran obtained his M.Tech at Ramappa Engineering College, Warangal, JNTU Hyderabad. He is an Assistant Professor at Turbo Machinery Institute of Technology & Sciences, JNTU Hyderabad. His areas of research are neural networks and data mining. He has published a couple of papers in image mining.