Java and .NET IEEE 2012
DESCRIPTION
IEEE 2012 Java and .NET final-year BE and ME projects with 100% concept implementation.
IEEE 2012 Titles & Abstract
FOR REGISTER: www.finalyearstudentsproject.com
CONTACT NO.: 91-9176696486.
Address: No.73, Karuneegar street, Adambakkam, Chennai-88
Networks and Network Security
1. A Distributed Control Law for Load Balancing in Content Delivery Networks
ABSTRACT
In this paper, we face the challenging issue of defining and implementing an effective law
for load balancing in Content Delivery Networks (CDNs). We base our proposal on a formal
study of a CDN system, carried out through the exploitation of a fluid flow model
characterization of the network of servers. Starting from such characterization, we
derive and prove a lemma about the network queues equilibrium. This result is then
leveraged in order to devise a novel distributed and time-continuous algorithm for load
balancing, which is also reformulated in a time-discrete version. The discrete formulation of
the proposed balancing law is eventually discussed in terms of its actual implementation in a
real-world scenario. Finally, the overall approach is validated by means of simulations.
2. TAM: A Tiered Authentication of Multicast Protocol for Ad-Hoc Networks
ABSTRACT
Ad-hoc networks are becoming an effective tool for many mission critical applications such
as troop coordination in a combat field, situational awareness, etc. These applications are
characterized by the hostile environment that they serve in and by the multicast-style of
communication traffic. Therefore, authenticating the source and ensuring the integrity of the
message traffic become a fundamental requirement for the operation and management of
the network. However, the limited computation and communication resources, the large
scale deployment and the unguaranteed connectivity to trusted authorities make known
solutions for wired and single-hop wireless networks inappropriate. This paper presents a
new Tiered Authentication scheme for Multicast traffic (TAM) for large-scale dense
ad-hoc networks. TAM combines the advantages of the time asymmetry and the secret
information asymmetry paradigms and exploits network clustering to reduce overhead and
ensure scalability. Multicast traffic within a cluster employs a one-way hash function chain in
order to authenticate the message source. Cross-cluster multicast traffic includes message
authentication codes (MACs) that are based on a set of keys. Each cluster uses a unique
subset of keys to look for its distinct combination of valid MACs in the message in order to
authenticate the source. The simulation and analytical results demonstrate the performance
advantage of TAM in terms of bandwidth overhead and delivery delay.
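The one-way hash chain that TAM applies to intra-cluster traffic can be sketched in Java as below; this is a minimal illustration of the hash-chain primitive only, with illustrative names, not the TAM protocol itself.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class HashChain {
    // Apply SHA-256 n times to a seed. The sender publishes the chain
    // anchor (element n) first and later discloses elements in reverse
    // order; one-wayness means nobody else can produce earlier elements.
    static byte[] element(byte[] seed, int n) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] v = seed;
            for (int i = 0; i < n; i++) v = md.digest(v);
            return v;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // A receiver holding the anchor verifies a disclosed element k
    // by hashing it forward n-k more times and comparing.
    static boolean verify(byte[] disclosed, int k, byte[] anchor, int n) {
        return Arrays.equals(element(disclosed, n - k), anchor);
    }
}
```

A receiver needs only the authentic anchor; each later disclosure authenticates itself by hashing back to it.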
3. Privacy- and Integrity-Preserving Range Queries in Sensor Networks
Abstract—The architecture of two-tiered sensor networks, where storage nodes serve as an
intermediate tier between sensors and a sink for storing data and processing queries, has
been widely adopted because of the benefits of power and storage saving for sensors as well
as the efficiency of query processing. However, the importance of storage nodes also makes
them attractive to attackers. In this paper, we propose SafeQ, a protocol that prevents
attackers from gaining information from both sensor collected data and sink issued queries.
SafeQ also allows a sink to detect compromised storage nodes when they misbehave. To
preserve privacy, SafeQ uses a novel technique to encode both data and queries such that a
storage node can correctly process encoded queries over encoded data without knowing
their values. To preserve integrity, we propose two schemes—one using Merkle hash trees
and another using a new data structure called neighborhood chains—to generate integrity
verification information so that a sink can use this information to verify whether the result of
a query contains exactly the data items that satisfy the query. To improve performance, we
propose an optimization technique using Bloom filters to reduce the communication cost
between sensors and storage nodes.
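The Bloom-filter optimization mentioned above can be illustrated with a minimal filter in Java; the size, hash count, and double-hashing construction are generic choices, not SafeQ's parameters.

```java
import java.util.BitSet;

public class BloomFilter {
    private final BitSet bits;
    private final int size, hashes;

    BloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive the i-th probe position from two base hashes
    // (a Kirsch-Mitzenmacher-style construction).
    private int position(String item, int i) {
        int h1 = item.hashCode();
        int h2 = h1 >>> 16 | h1 << 16;
        return Math.floorMod(h1 + i * h2, size);
    }

    void add(String item) {
        for (int i = 0; i < hashes; i++) bits.set(position(item, i));
    }

    // May return a false positive, never a false negative: exactly the
    // property that lets sensors summarize data items compactly.
    boolean mightContain(String item) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(position(item, i))) return false;
        return true;
    }
}
```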
Wireless Networks
4. Adaptive Opportunistic Routing for Wireless Ad Hoc Networks
ABSTRACT
A distributed adaptive opportunistic routing scheme for multihop wireless ad hoc networks is
proposed. The proposed scheme utilizes a reinforcement learning framework to
opportunistically route the packets even in the absence of reliable knowledge about channel
statistics and network model. This scheme is shown to be optimal with respect to an
expected average per-packet reward criterion. The proposed routing scheme jointly
addresses the issues of learning and routing in an opportunistic context, where
the network structure is characterized by the transmission success probabilities. In
particular, this learning framework leads to a stochastic routing scheme that optimally
“explores” and “exploits” the opportunities in the network.
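The explore/exploit tradeoff the abstract mentions can be caricatured with an epsilon-greedy relay selector; this is a generic reinforcement-learning sketch with hypothetical names, not the paper's optimal scheme.

```java
import java.util.Random;

public class RelaySelector {
    private final double[] estimatedReward; // running estimate per relay
    private final int[] tries;
    private final double epsilon;
    private final Random rng = new Random(42);

    RelaySelector(int numRelays, double epsilon) {
        this.estimatedReward = new double[numRelays];
        this.tries = new int[numRelays];
        this.epsilon = epsilon;
    }

    // With probability epsilon pick a random relay (explore),
    // otherwise the relay with the best reward estimate (exploit).
    int chooseRelay() {
        if (rng.nextDouble() < epsilon)
            return rng.nextInt(estimatedReward.length);
        int best = 0;
        for (int i = 1; i < estimatedReward.length; i++)
            if (estimatedReward[i] > estimatedReward[best]) best = i;
        return best;
    }

    // Incrementally update the running average reward of a relay,
    // e.g. after observing whether a transmission through it succeeded.
    void observe(int relay, double reward) {
        tries[relay]++;
        estimatedReward[relay] += (reward - estimatedReward[relay]) / tries[relay];
    }
}
```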
5. Local Greedy Approximation for Scheduling in Multihop Wireless Networks
ABSTRACT
In recent years, there has been a significant amount of work done in developing low-
complexity scheduling schemes to achieve high performance in multihop wireless networks.
A centralized suboptimal scheduling policy, called Greedy Maximal Scheduling (GMS) is a
good candidate because its empirically observed performance is close to optimal in a variety
of network settings. However, its distributed realization requires high complexity, which
becomes a major obstacle for practical implementation. In this paper, we develop simple
distributed greedy algorithms for scheduling in multihop wireless networks. We reduce the
complexity to near zero by relaxing the global ordering requirement of GMS. Simulation
results show that the new algorithms approximate the performance of GMS, and outperform
the state-of-the-art distributed scheduling policies.
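A centralized GMS pass can be sketched as follows: repeatedly pick the link with the largest backlog and discard links that conflict (interfere) with it. The conflict-set representation below is an assumption for illustration.

```java
import java.util.*;

public class GreedyMaximalScheduling {
    // queue[i]: backlog of link i; conflicts.get(i): links that cannot
    // be scheduled in the same slot as link i.
    static List<Integer> schedule(double[] queue, Map<Integer, Set<Integer>> conflicts) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < queue.length; i++) order.add(i);
        // The global ordering step that makes distributed GMS costly.
        order.sort((a, b) -> Double.compare(queue[b], queue[a]));
        List<Integer> chosen = new ArrayList<>();
        Set<Integer> blocked = new HashSet<>();
        for (int link : order) {
            if (blocked.contains(link)) continue;
            chosen.add(link);
            blocked.addAll(conflicts.getOrDefault(link, Set.of()));
        }
        return chosen;
    }
}
```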
Mobile Computing
6. Toward Reliable Data Delivery for Highly Dynamic Mobile Ad Hoc Networks
ABSTRACT
This paper addresses the problem of delivering data packets for highly dynamic mobile ad
hoc networks in a reliable and timely
manner. Most existing ad hoc routing protocols are susceptible to node mobility,
especially for large-scale networks. Driven by this issue, we propose an efficient Position-
based Opportunistic Routing (POR) protocol which takes advantage of the stateless property
of geographic routing and the broadcast nature of wireless medium. When a data packet is
sent out, some of the neighbor nodes that have overheard the transmission will serve as
forwarding candidates, and take turns forwarding the packet if it is not relayed by the specific
best forwarder within a certain period of time. By utilizing such in-the-air backup,
communication is maintained without being interrupted. The additional latency incurred by
local route recovery is greatly reduced and the duplicate relaying caused by packet reroute
is also decreased. In the case of a communication hole, a Virtual Destination-based Void
Handling (VDVH) scheme is further proposed to work together with POR. Both theoretical
analysis and simulation results show that POR achieves excellent performance even under
high node mobility with acceptable overhead and the new void handling scheme also works
well.
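The in-the-air backup idea, where ranked candidates take turns, reduces to a simple rule: the best-ranked surviving candidate forwards. A hypothetical sketch (names and the failure model are ours, not POR's):

```java
import java.util.*;

public class CandidateForwarding {
    // candidatesByRank: forwarding candidates ordered best-first; in POR
    // lower-ranked candidates wait longer before relaying, so the packet
    // is forwarded by the first candidate that did not fail.
    static int forwarder(List<Integer> candidatesByRank, Set<Integer> failed) {
        for (int node : candidatesByRank)
            if (!failed.contains(node)) return node;
        return -1; // no candidate survived: local recovery is needed
    }
}
```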
7. Distributed Throughput Maximization in Wireless Networks via Random
Power Allocation
ABSTRACT
We develop a distributed throughput-optimal power allocation algorithm in wireless
networks. The study of this problem has been limited due to the nonconvexity of the
underlying optimization problems that prohibits an efficient solution even in a centralized
setting. By generalizing the randomization framework originally proposed for input-queued
switches to an SINR rate-based interference model, we characterize the throughput-optimality
conditions that enable efficient and distributed implementation. Using a gossiping algorithm,
we develop a distributed power allocation algorithm that satisfies the optimality conditions,
thereby achieving (nearly) 100 percent throughput. We illustrate the performance of our
power allocation solution through numerical simulation.
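The pairwise gossip primitive underlying the distributed implementation can be shown in a few lines; this is the generic averaging step, not the paper's power-allocation protocol.

```java
public class Gossip {
    // One gossip round: two neighbors average their local values.
    // The global sum (hence the mean) is preserved, so repeated
    // random exchanges drive every node toward the network-wide mean.
    static void exchange(double[] values, int i, int j) {
        double avg = (values[i] + values[j]) / 2.0;
        values[i] = avg;
        values[j] = avg;
    }
}
```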
8. Hop-by-Hop Routing in Wireless Mesh Networks with Bandwidth Guarantees
ABSTRACT
Wireless Mesh Network (WMN) has become an important edge network to provide Internet
access to remote areas and wireless connections in a metropolitan scale. In this paper, we
study the problem of identifying the maximum available bandwidth path, a fundamental
issue in supporting quality-of-service in WMNs. Due to interference among links, bandwidth,
a well-known bottleneck metric in wired networks, is neither concave nor additive in wireless
networks. We propose a new path weight which captures the available path bandwidth
information. We formally prove that our hop-by-hop routing protocol based on the new path
weight satisfies the consistency and loop-freeness requirements. The consistency property
guarantees that each node makes a proper packet forwarding decision, so that a data
packet does traverse over the intended path. Our extensive simulation experiments also
show that our proposed path weight outperforms existing path metrics in identifying high-
throughput paths.
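For contrast with the paper's interference-aware weight, the classical wired-network notion of bottleneck (widest-path) bandwidth can be computed with a Dijkstra-style sweep; the adjacency-matrix form below is illustrative.

```java
public class WidestPath {
    // bw[u][v]: link bandwidth between u and v (0 if no link).
    // Returns the best achievable bottleneck bandwidth from src to dst:
    // the path maximizing the minimum link bandwidth along it.
    static double widest(double[][] bw, int src, int dst) {
        int n = bw.length;
        double[] best = new double[n];
        best[src] = Double.POSITIVE_INFINITY;
        boolean[] done = new boolean[n];
        for (int iter = 0; iter < n; iter++) {
            int u = -1;
            for (int i = 0; i < n; i++)
                if (!done[i] && (u == -1 || best[i] > best[u])) u = i;
            if (best[u] == 0) break; // remaining nodes unreachable
            done[u] = true;
            for (int v = 0; v < n; v++)
                best[v] = Math.max(best[v], Math.min(best[u], bw[u][v]));
        }
        return best[dst];
    }
}
```

In wireless networks this metric breaks down, as the abstract notes, because interfering links share capacity, which is what motivates the new path weight.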
Wireless Sensor Networks
9. On the Throughput Capacity of Wireless Sensor Networks with Mobile Relays
ABSTRACT
In wireless sensor networks (WSNs), it is difficult to achieve a large data collection rate
because sensors usually have limited energy and communication resources. This issue is
becoming increasingly challenging with the emergence of information-intensive
applications that require a high data collection rate. To address this issue, in this paper, we
investigate the throughput capacity of WSNs where multiple mobile relays are deployed to
collect data from static sensors and forward them to a static sink. To facilitate the
discussion, we propose a new mobile relay assisted data collection (MRADC) model.
Based on this model, we analyze the achievable throughput capacity of large-scale WSNs
using a constructive approach, which can achieve a certain throughput by choosing
appropriate mobility parameters. Our analysis illustrates that, if the number of relays is less
than a threshold, then the throughput capacity can be increased linearly with more
relays. On the other hand, if the number is greater than the threshold, then the throughput
capacity becomes a constant, and the capacity gain over a static WSN depends on two
factors: the transmission range and the impact of interference. To verify our analysis, we
conduct extensive simulation experiments, which validate the selection of mobility
parameters, and which demonstrate the same throughput behaviors obtained by analysis.
Knowledge and Data Mining
10. Publishing Search Logs—A Comparative Study of Privacy Guarantees
ABSTRACT
Search engine companies collect the “database of intentions,” the histories of their users'
search queries. These search logs are a gold mine for researchers. Search engine
companies, however, are wary of publishing search logs in order not to disclose sensitive
information. In this paper, we analyze algorithms for publishing frequent keywords,
queries, and clicks of a search log. We first show how methods that achieve variants of k-
anonymity are vulnerable to active attacks. We then demonstrate that the stronger
guarantee ensured by ε-differential privacy unfortunately does not provide any utility for this
problem. We then propose an algorithm ZEALOUS and show how to set its parameters to
achieve (ε, δ)-probabilistic privacy. We also contrast our analysis of ZEALOUS with an
analysis by Korolova et al. [17] that achieves (ε',δ')-indistinguishability. Our paper concludes
with a large experimental study using real applications where we compare
ZEALOUS and previous work that achieves k-anonymity in search log publishing. Our results
show that ZEALOUS yields comparable utility to k-anonymity while at the same time
achieving much stronger privacy guarantees.
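The two ingredients of a ZEALOUS-style publisher, suppressing rare keywords and adding Laplace noise to counts, can be sketched as below; the scale and threshold values are illustrative, not the paper's parameter settings.

```java
import java.util.*;

public class NoisyCounts {
    // Draw Laplace(0, scale) noise by inverse-transform sampling.
    static double laplace(Random rng, double scale) {
        double u = rng.nextDouble() - 0.5;
        return -scale * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }

    // Publish keyword -> noisy count only for keywords whose noisy
    // count clears the threshold; rare keywords are suppressed.
    static Map<String, Double> publish(Map<String, Integer> counts,
                                       double scale, double threshold, long seed) {
        Random rng = new Random(seed);
        Map<String, Double> out = new TreeMap<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            double noisy = e.getValue() + laplace(rng, scale);
            if (noisy >= threshold) out.put(e.getKey(), noisy);
        }
        return out;
    }
}
```

The noise protects individual contributions, while the threshold keeps infrequent (potentially identifying) keywords out of the published log.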
11. Efficient Multi-dimensional Fuzzy Search for Personal Information
Management Systems
ABSTRACT
With the explosion in the amount of semistructured data users access and store in personal
information management systems, there is a critical need for powerful search tools to
retrieve often very heterogeneous data in a simple and efficient way. Existing tools typically
support some IR-style ranking on the textual part of the query, but only consider structure
(e.g., file directory) and metadata (e.g., date, file type) as filtering conditions. We propose a
novel multidimensional search approach that allows users to perform fuzzy searches for
structure and metadata conditions in addition to keyword conditions. Our techniques
individually score each dimension and integrate the three dimension scores into a
meaningful unified score. We also design indexes and algorithms to efficiently identify the
most relevant files that match multidimensional queries. We perform a thorough
experimental evaluation of our approach and show that our relaxation and scoring
framework for fuzzy query conditions in noncontent dimensions can significantly improve
ranking accuracy. We also show that our query processing strategies perform and scale well,
making our fuzzy search approach practical for everyday usage.
12. Prediction of User's Web-Browsing Behavior: Application of Markov Model
ABSTRACT
Web prediction is a classification problem in which we attempt to predict the next set of Web
pages that a user may visit based on the knowledge of the previously visited pages.
Predicting users' behavior while surfing the Internet can be applied effectively in various
critical applications. Such applications involve traditional tradeoffs between modeling
complexity and prediction accuracy. In this paper, we analyze and study the Markov
model and the all-Kth Markov model in Web prediction. We propose a new modified Markov
model to alleviate the issue of scalability in the number of paths. In addition, we present a
new two-tier prediction framework that creates an example classifier EC, based on the
training examples and the generated classifiers. We show that such a framework can improve
the prediction time without compromising prediction accuracy. We have used standard
benchmark data sets to analyze, compare, and demonstrate the effectiveness of our
techniques using variations of Markov models and association rule mining. Our experiments
show the effectiveness of our modified Markov model in reducing the number of paths
without compromising accuracy. Additionally, the results support our analysis conclusions
that accuracy improves with higher orders of the all-Kth model.
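The first-order Markov baseline that the modified and all-Kth models build on can be sketched as a transition-count table; the session data and names here are invented.

```java
import java.util.*;

public class MarkovPredictor {
    // transitions.get(p): how often each page followed page p.
    private final Map<String, Map<String, Integer>> transitions = new HashMap<>();

    // Count page-to-page transitions from one browsing session.
    void train(List<String> session) {
        for (int i = 0; i + 1 < session.size(); i++)
            transitions.computeIfAbsent(session.get(i), k -> new HashMap<>())
                       .merge(session.get(i + 1), 1, Integer::sum);
    }

    // Predict the most frequent successor of the current page,
    // or null if the page was never seen in training.
    String predict(String page) {
        Map<String, Integer> next = transitions.get(page);
        if (next == null) return null;
        return Collections.max(next.entrySet(), Map.Entry.comparingByValue()).getKey();
    }
}
```

An all-Kth model keeps such tables for contexts of every length up to K, which is exactly where the path-count scalability issue the abstract addresses comes from.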
13. A Probabilistic Scheme for Keyword-Based Incremental Query Construction
ABSTRACT
Databases enable users to precisely express their informational needs using structured
queries. However, database query construction is a laborious and error-prone process, which
cannot be performed well by most end users. Keyword search alleviates the usability
problem at the price of query expressiveness. As keyword search algorithms do not
differentiate between the possible informational needs represented by a keyword query,
users may not receive adequate results. This paper presents IQP, a novel approach to bridge
the gap between usability of keyword search and expressiveness of database queries.
IQP enables a user to start with an arbitrary keyword query and incrementally refine it into a
structured query through an interactive interface. The enabling techniques of IQP include: 1)
a probabilistic framework for incremental query construction; 2) a probabilistic model to
assess the possible informational needs represented by a keyword query; 3) an algorithm to
obtain the optimal query construction process. This paper presents the detailed design of
IQP, and demonstrates its effectiveness and scalability through experiments over real-world
data and a user study.
14. Ranking Model Adaptation for Domain-Specific Search
Abstract—With the explosive emergence of vertical search domains, applying the broad-
based ranking model directly to different domains is no longer desirable due to domain
differences, while building a unique ranking model for each domain is both laborious for
labeling data and time-consuming for training models. In this paper, we address these
difficulties by proposing a regularization based algorithm called ranking adaptation SVM (RA-
SVM), through which we can adapt an existing ranking model to a new domain, so that the
amount of labeled data and the training cost is reduced while the performance is still
guaranteed. Our algorithm only requires the prediction from the existing ranking models,
rather than their internal representations or the data from auxiliary domains. In addition, we
assume that documents similar in the domain-specific feature space should have consistent
rankings, and add some constraints to control the margin and slack variables of RA-SVM
adaptively. Finally, ranking adaptability measurement is proposed to quantitatively estimate
if an existing ranking model can be adapted to a new domain. Experiments performed over
Letor and two large scale datasets crawled from a commercial search engine demonstrate
the applicabilities of the proposed ranking adaptation algorithms and the ranking
adaptability measurement.
15. Slicing: A New Approach to Privacy Preserving Data Publishing
ABSTRACT
Several anonymization techniques, such as generalization and bucketization, have been
designed for privacy preserving microdata publishing. Recent work has shown that
generalization loses considerable amount of information, especially for high-dimensional
data. Bucketization, on the other hand, does not prevent membership disclosure and does
not apply for data that do not have a clear separation between quasi-identifying attributes
and sensitive attributes. In this paper, we present a novel technique called slicing, which
partitions the data both horizontally and vertically. We show that slicing preserves better
data utility than generalization and can be used for membership disclosure protection.
Another important advantage of slicing is that it can handle high-dimensional data. We show
how slicing can be used for attribute disclosure protection and develop an efficient algorithm
for computing the sliced data that obey the ℓ-diversity requirement. Our workload
experiments confirm that slicing preserves better utility than generalization and is more
effective than bucketization in workloads involving the sensitive attribute. Our experiments
also demonstrate that slicing can be used to prevent membership disclosure.
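The ℓ-diversity requirement that the sliced data must obey can be checked per bucket: each bucket needs at least ℓ distinct sensitive values. A generic check follows (this is the requirement only, not the slicing algorithm itself).

```java
import java.util.*;

public class LDiversity {
    // buckets: for each bucket of the anonymized table, the list of
    // sensitive values it contains (with repetitions). The table
    // satisfies l-diversity when every bucket has >= l distinct values.
    static boolean satisfies(List<List<String>> buckets, int l) {
        for (List<String> sensitiveValues : buckets)
            if (new HashSet<>(sensitiveValues).size() < l) return false;
        return true;
    }
}
```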
16. Data Mining Techniques for Software Effort Estimation: A Comparative
Study
A predictive model is required to be accurate and comprehensible in order to inspire
confidence in a business setting. Both aspects have been assessed in a software effort
estimation setting by previous studies. However, no univocal conclusion as to which
technique is the most suited has been reached. This study addresses this issue by
reporting on the results of a large scale benchmarking study. Different types of techniques
are under consideration, including techniques inducing tree/rule-based models like M5 and
CART, linear models such as various types of linear regression, nonlinear models (MARS,
multilayered perceptron neural networks, radial basis function networks, and least squares
support vector machines), and estimation techniques that do not explicitly induce a model
(e.g., a case-based reasoning approach). Furthermore, the aspect of feature subset selection
by using a generic backward input selection wrapper is investigated. The results are
subjected to rigorous statistical testing and indicate that ordinary least squares regression in
combination with a logarithmic transformation performs best. Another key finding is that by
selecting a subset of highly predictive attributes such as project size, development, and
environment related attributes, typically a significant increase in estimation accuracy can be
obtained.
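The winning technique, ordinary least squares regression combined with a logarithmic transformation, amounts to fitting log(effort) = a + b·log(size); a self-contained sketch with invented data follows.

```java
public class LogLinearEffort {
    // Fit log(effort) = a + b * log(size) by ordinary least squares.
    // Returns {a, b}.
    static double[] fit(double[] size, double[] effort) {
        int n = size.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            double x = Math.log(size[i]), y = Math.log(effort[i]);
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double a = (sy - b * sx) / n;
        return new double[]{a, b};
    }

    // Predict effort on the original scale by exponentiating.
    static double predict(double[] coef, double size) {
        return Math.exp(coef[0] + coef[1] * Math.log(size));
    }
}
```

The log transformation handles the strong right skew typical of effort data, which is a plausible reason this simple model outperformed more complex learners in the study.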
17. Ranking and Clustering Software Cost Estimation Models through a Multiple
Comparisons Algorithm
Software Cost Estimation can be described as the process of predicting the most realistic
effort required to complete a software project. Due to the strong relationship of accurate
effort estimations with many crucial project management activities, the research community
has been focused on the development and application of a vast variety of methods and
models trying to improve the estimation procedure. From the diversity of methods emerged
the need for comparisons to determine the best model. However, the inconsistent results
brought to light significant doubts and uncertainty about the appropriateness of the
comparison process in experimental studies. Overall, there exist several potential sources of
bias that have to be considered in order to reinforce the confidence of experiments. In this
paper, we propose a statistical framework based on a multiple comparisons algorithm in
order to rank several cost estimation models, identifying those which have significant
differences in accuracy and clustering them in non-overlapping groups. The proposed
framework is applied in a large-scale setup of comparing 11 prediction models over 6
datasets. The results illustrate the benefits and the significant information obtained through
the systematic comparison of alternative methods.
18. Using Linked Data to Annotate and Search Educational Video Resources for
Supporting Distance Learning
Abstract—Multimedia educational resources play an important role in education, particularly
for distance learning environments. With the rapid growth of the multimedia web, large
numbers of educational video resources are increasingly being created by several different
organizations. It is crucial to explore, share, reuse, and link these educational resources for
better e-learning experiences. Most of the video resources are currently annotated in an
isolated way, which means that they lack semantic connections. Thus, providing the facilities
for annotating these video resources is highly demanded. These facilities create the
semantic connections among video resources and allow their metadata to be understood
globally. Adopting Linked Data technology, this paper introduces a video annotation and
browser platform with two online tools: Annomation and SugarTube. Annomation enables
users to semantically annotate video resources using vocabularies defined in the Linked
Data cloud. SugarTube allows users to browse semantically linked educational video
resources with enhanced web information from different online resources. In the prototype
development, the platform uses existing video resources for the history courses from the
Open University (United Kingdom). The result of the initial development demonstrates the
benefits of applying Linked Data technology in the aspects of reusability, scalability, and
extensibility.
Cloud Computing
19. Scalable and Secure Sharing of Personal Health Records in Cloud
Computing using Attribute-based Encryption
ABSTRACT
Personal health record (PHR) is an emerging patient-centric model of health information
exchange, which is often outsourced to be stored at a third party, such as cloud providers.
However, there have been wide privacy concerns as personal health information could be
exposed to those third party servers and to unauthorized parties. To assure the patients'
control over access to their own PHRs, it is a promising method to encrypt the PHRs before
outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management,
flexible access and efficient user revocation, have remained the most important challenges
toward achieving fine-grained, cryptographically enforced data access control. In this paper,
we propose a novel patient-centric framework and a suite of mechanisms for data access
control to PHRs stored in semi-trusted servers. To achieve fine-grained and scalable data
access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt
each patient's PHR file. Different from previous works in secure data outsourcing, we
focus on the multiple data owner scenario, and divide the users in the PHR system into
multiple security domains, which greatly reduces the key management complexity for
owners and users. A high degree of patient privacy is guaranteed simultaneously by
exploiting multi-authority ABE. Our scheme also enables dynamic modification of access
policies or file attributes, supports efficient on-demand user/attribute revocation and break-
glass access under emergency scenarios. Extensive analytical and experimental results are
presented which show the security, scalability and efficiency of our proposed scheme.
20. Enabling Secure and Efficient Ranked Keyword Search over Outsourced
Cloud Data
Cloud computing economically enables the paradigm of data service outsourcing. However,
to protect data privacy, sensitive cloud data have to be encrypted before being outsourced
to the commercial public cloud, which makes effective data utilization a very challenging
task. Although traditional searchable encryption techniques allow users to securely search
over encrypted data through keywords, they support only Boolean search and are not yet
sufficient to meet the effective data utilization need that is inherently demanded by the
large number of users and huge amount of data files in the cloud. In this paper, we define
and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked
search greatly enhances system usability by enabling search result relevance ranking
instead of sending undifferentiated results, and further ensures the file retrieval accuracy.
Specifically, we
explore the statistical measure approach, i.e., relevance score, from information retrieval to
build a secure searchable index, and develop a one-to-many order-preserving mapping
technique to properly protect the sensitive score information. The resulting design is able
to facilitate efficient server-side ranking without losing keyword privacy. Thorough analysis
shows that our proposed solution enjoys “as-strong-as-possible” security guarantee
compared to previous searchable encryption schemes, while correctly realizing the goal
of ranked keyword search. Extensive experimental results demonstrate the efficiency of the
proposed solution.
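The relevance-score measure borrowed from information retrieval is essentially TF × IDF; a plain (unencrypted) sketch is below, with the order-preserving encoding layer of the actual scheme omitted.

```java
public class RelevanceScore {
    // tf: occurrences of the keyword in the file; fileLen: total terms
    // in the file; n: number of files; df: files containing the keyword.
    // Higher scores mean the file is more relevant to the keyword.
    static double score(int tf, int fileLen, int n, int df) {
        double tfWeight = (double) tf / fileLen;     // term frequency
        double idf = Math.log((double) n / df);      // inverse document frequency
        return tfWeight * idf;
    }
}
```

In the scheme above, these scores are stored in the searchable index under a one-to-many order-preserving mapping, so the server can rank by score without learning the score values.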
21. An Efficient and Secure Dynamic Auditing Protocol for Data Storage in Cloud Computing
ABSTRACT
In cloud computing, data owners host their data on cloud servers and users
(data consumers) can access the data from cloud servers. Due to the data outsourcing,
however, this new paradigm of data hosting service also introduces new security challenges,
which require an independent auditing service to check the data integrity in the cloud.
Some existing remote integrity checking methods can only serve for static archive data and
thus cannot be applied to the auditing service since the data in the cloud can be
dynamically updated. Thus, an efficient and secure dynamic auditing protocol is desired to
convince data owners that the data are correctly stored in the cloud. In this paper, we first
design an auditing framework for cloud storage systems and propose an efficient and
privacy-preserving auditing protocol. Then, we extend our auditing protocol to support
the data dynamic operations, which is efficient and provably secure in the random oracle
model. We further extend our auditing protocol to support batch auditing for both multiple
owners and multiple clouds, without using any trusted organizer. The analysis and
simulation results show that our proposed auditing protocols are secure and efficient, and
in particular reduce the computation cost of the auditor.
22. Towards Secure and Dependable Storage Services in Cloud Computing
Cloud storage enables users to remotely store their data and enjoy the on-demand high
quality cloud applications without the burden of local hardware and software management.
Though the benefits are clear, such a service is also relinquishing users' physical possession
of their outsourced data, which inevitably poses new security risks toward the correctness of
the data in cloud. In order to address this new problem and further achieve a secure and
dependable cloud storage service, we propose in this paper a flexible distributed storage
integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-
coded data. The proposed design allows users to audit the cloud storage with very
lightweight communication and computation cost. The auditing result not only ensures
strong cloud storage correctness guarantee, but also simultaneously achieves fast data error
localization, i.e., the identification of the misbehaving server(s). Considering the cloud data are
dynamic in nature, the proposed design further supports secure and efficient dynamic
operations on outsourced data, including block modification, deletion, and append. Analysis
shows the proposed scheme is highly efficient and resilient against Byzantine failure,
malicious data modification attack, and even server colluding attacks.
23. Ensuring Distributed Accountability for Data Sharing in the Cloud
Abstract—Cloud computing enables highly scalable services to be easily consumed over
the Internet on an as-needed basis. A major feature of the cloud services is that users’ data
are usually processed remotely in unknown machines that users do not own or operate.
While enjoying the convenience brought by this new emerging technology, users’ fears of
losing control of their own data (particularly, financial and health data) can become a
significant barrier to the wide adoption of cloud services. To address this problem, in this
paper, we propose a novel highly decentralized information accountability framework to
keep track of the actual usage of the users’ data in the cloud. In particular, we propose an
object-centered approach that enables enclosing our logging mechanism together with
users’ data and policies. We leverage the JAR programmable capabilities to both create a
dynamic and traveling object, and to ensure that any access to users’ data will trigger
authentication and automated logging local to the JARs. To strengthen users' control, we
also provide distributed auditing mechanisms. We provide extensive experimental studies
that demonstrate the efficiency and effectiveness of the proposed approaches.
Information Forensics and Security
24. A Novel Data Embedding Method Using Adaptive Pixel Pair Matching
This paper proposes a new data-hiding method based on pixel pair matching (PPM). The
basic idea of PPM is to use the values of a pixel pair as a reference coordinate, and to
search for a coordinate in the neighborhood set of this pixel pair according to a given
message digit. The pixel pair is then replaced by the searched coordinate to conceal the
digit. Exploiting modification direction (EMD) and diamond encoding (DE) are two
data-hiding methods recently proposed based on PPM. The maximum capacity of EMD is 1.161
bpp and DE extends the payload of EMD by embedding digits in a larger notational system.
The proposed method offers lower distortion than DE by providing more compact
neighborhood sets and allowing embedded digits in any notational system. Compared with
the optimal pixel adjustment process (OPAP) method, the proposed method always has
lower distortion for various payloads. Experimental results reveal that the
proposed method not only provides better performance than those of OPAP and DE, but also
is secure under the detection of some well-known steganalysis techniques.
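The EMD embedding that the proposed method improves on can be sketched concretely: a pixel pair (p1, p2) carries one base-5 digit via f(p1, p2) = (p1 + 2·p2) mod 5, changed by at most ±1 in a single pixel. This illustrates EMD only, not the paper's adaptive PPM scheme.

```java
public class EmdEmbedding {
    // The extraction function: which base-5 digit a pair encodes.
    static int extract(int p1, int p2) {
        return Math.floorMod(p1 + 2 * p2, 5);
    }

    // Returns the adjusted pair {p1', p2'} hiding digit d (0..4).
    // Since f shifts by +-1 when p1 changes by +-1 and by +-2 when
    // p2 changes by +-1, every digit is reachable with one +-1 change.
    static int[] embed(int p1, int p2, int d) {
        int diff = Math.floorMod(d - extract(p1, p2), 5);
        switch (diff) {
            case 1: return new int[]{p1 + 1, p2};
            case 4: return new int[]{p1 - 1, p2};
            case 2: return new int[]{p1, p2 + 1};
            case 3: return new int[]{p1, p2 - 1};
            default: return new int[]{p1, p2}; // digit already encoded
        }
    }
}
```

Each pair thus carries log2(5) ≈ 2.32 bits across two pixels, giving the 1.161 bpp capacity quoted above.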