
Efficient Key Management for Threshold-Multi Signature in Distributed System


ABSTRACT

Threshold-multisignature schemes combine the properties of threshold group-

oriented signature schemes and multisignature schemes to yield a signature scheme

that allows a threshold (t) or more group members to collaboratively sign an arbitrary

message. In contrast to threshold group signatures, the individual signers do not

remain anonymous, but are publicly identifiable from the information contained in the

valid threshold-multisignature. The main objective of this paper is to propose such a

secure and efficient threshold-multisignature scheme. The paper uniquely defines the

fundamental properties of threshold multisignature schemes and shows that the

proposed scheme satisfies these properties and eliminates the latest attacks to which

other similar schemes are subject. The efficiency of the proposed scheme is analyzed

and shown to be superior to its counterparts. The paper also proposes a discrete

logarithm based distributed-key management infrastructure (DKMI), which consists

of a round optimal, publicly verifiable, distributed-key generation (DKG) protocol

and a one round, publicly verifiable, distributed-key redistribution/updating (DKRU)

protocol. The round optimal DKRU protocol solves a major problem with existing

secret redistribution/updating schemes by giving group members a mechanism to

identify malicious or faulty share holders in the first round, thus avoiding multiple

protocol executions.


CONTENTS

INTRODUCTION: About Project, Benefits of Multi-Signature

SYSTEM ANALYSIS: Existing System, Limitations, Proposed System, Advantages of Proposed System

PROBLEM FORMULATION: Objectives, Hardware Specification, Software Specification, Software Descriptions

SYSTEM DESIGN: Design Overview, Context Analysis Diagram, Data Flow Diagram, Architecture Design

SYSTEM TESTING: Unit Testing, Integration Testing, Acceptance Testing

SYSTEM IMPLEMENTATION

CONCLUSION

FUTURE ENHANCEMENTS

BIBLIOGRAPHY

APPENDICES: APPENDIX A: SAMPLE SCREENS


INTRODUCTION

About Project

The main objective of this paper is to propose a secure and efficient threshold-multisignature scheme. The paper uniquely defines the fundamental

properties of threshold multisignature schemes and shows that the proposed scheme

satisfies these properties and eliminates the latest attacks to which other similar

schemes are subject.

Threshold-multisignature schemes combine the properties of threshold group-oriented

signature schemes and multisignature schemes to yield a signature scheme that allows

a threshold or more group members to collaboratively sign an arbitrary message.

1.2 Benefits of Multi-Signature

The main benefits of using threshold-multisignature schemes are listed below:

Trust is distributed: no single group member can produce a valid group signature alone.

Compromise of fewer than t members does not allow a valid signature to be forged.

Individual signers remain publicly identifiable, so they can be held accountable.

Reduced administrative costs.

Lower error rates.

Increased productivity.

Better services to clients.


SYSTEM ANALYSIS


EXISTING SYSTEM:

In many applications, a threshold or more shareholders are required to

cooperatively generate a digital signature, in contrast to the conventional single

signer. This may also be seen as a distribution of trust since the shareholders must

collaborate and contribute equally to produce a valid multiparty signature.

Threshold-multisignature schemes combine the properties of threshold group-

oriented signature schemes and multisignature schemes. In the literature, threshold

multisignature schemes are also referred to as threshold signature schemes with

traceability. The combined properties guarantee the signature verifier that at least t

members participated in the generation of the group-oriented signature and that the

identities of the signers can be easily established. The majority of the existing threshold-multisignature schemes are variants of single-signer, generalized ElGamal signatures extended to a group/multiparty setting.


PROPOSED SYSTEM:

This paper proposes a new threshold-multisignature scheme without a trusted third party (TTP), based on a round optimal, publicly verifiable DKG protocol.

The proposed scheme can be easily adapted to incorporate a TTP; a version of the

proposed scheme with the assistance of a TTP will therefore not be presented.

The proposed discrete logarithm-based threshold-multisignature scheme is

also proactively secure, allowing for DKR to a new access structure and periodic

DKU to mitigate attacks from an active/mobile adversary. The proposed discrete

logarithm-based threshold-multisignature scheme is made proactively secure by

periodically updating secret shares and facilitating changes in group membership by

allowing an authorized subset of existing group members to redistribute secret shares

to a new access structure.
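The redistribution and updating of secret shares builds on threshold secret sharing. As a rough, self-contained illustration (not the paper's DKG/DKRU protocol, which is dealerless and publicly verifiable; the prime modulus and parameters below are toy values), plain Shamir (t, n) sharing over a prime field looks like this:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class ShamirSketch {
    static final BigInteger P = BigInteger.valueOf(2089); // toy prime field modulus

    // Deal n shares of 'secret' so that any t of them reconstruct it.
    static BigInteger[][] share(BigInteger secret, int t, int n) {
        SecureRandom rnd = new SecureRandom();
        BigInteger[] coef = new BigInteger[t];
        coef[0] = secret.mod(P); // the constant term hides the secret
        for (int i = 1; i < t; i++) {
            coef[i] = new BigInteger(P.bitLength() - 1, rnd).mod(P); // random coefficients
        }
        BigInteger[][] shares = new BigInteger[n][2];
        for (int x = 1; x <= n; x++) {
            BigInteger bx = BigInteger.valueOf(x);
            BigInteger y = BigInteger.ZERO, pow = BigInteger.ONE;
            for (int i = 0; i < t; i++) { // evaluate the polynomial at x
                y = y.add(coef[i].multiply(pow)).mod(P);
                pow = pow.multiply(bx).mod(P);
            }
            shares[x - 1][0] = bx;
            shares[x - 1][1] = y;
        }
        return shares;
    }

    // Lagrange interpolation at x = 0 recovers the constant term (the secret).
    static BigInteger reconstruct(BigInteger[][] shares) {
        BigInteger secret = BigInteger.ZERO;
        for (int j = 0; j < shares.length; j++) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (int m = 0; m < shares.length; m++) {
                if (m == j) continue;
                num = num.multiply(shares[m][0].negate()).mod(P);
                den = den.multiply(shares[j][0].subtract(shares[m][0])).mod(P);
            }
            secret = secret.add(shares[j][1].multiply(num).multiply(den.modInverse(P))).mod(P);
        }
        return secret;
    }

    public static void main(String[] args) {
        BigInteger secret = BigInteger.valueOf(1234);
        BigInteger[][] all = share(secret, 3, 5); // t = 3, n = 5
        BigInteger[][] anyThree = { all[0], all[2], all[4] };
        System.out.println(reconstruct(anyThree).equals(secret)); // prints "true"
    }
}
```

Redistribution to a new access structure then amounts to each authorized member re-sharing its share with the new group; the paper's contribution is making that step verifiable in a single round.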

PROBLEM FORMULATION

A secure threshold-multisignature scheme must satisfy the following five main properties.

Correctness:

All threshold-multisignatures on an arbitrary message m, generated by an

honest authorized subset of group members, forming subgroup P, can be verified by

any outsider V (with respect to the group). This implies that the group-oriented

signature is publicly verifiable.

Threshold property:

Only a threshold of t or more authorized group members are able to

collaboratively generate a valid threshold-multisignature. This property thus

incorporates unforgeability.


Traceability:

Any outsider V can learn the identities of the individual signers belonging to P

from the threshold multisignature on m without interaction with any of the group

members and/or a group manager. This implies that the signers are publicly traceable

with public information. Traceability implies accountability; the individual signers

participating in the threshold-multisignature scheme can be held accountable for their

contribution to the group oriented signature.

Coalition-resistance: No colluding subset of group members can generate a valid

threshold-multisignature not satisfying the traceability property. Coalition-resistance

subsumes framing-resistance, i.e., no subset of group members can sign on behalf of

any other subset of group members.

Break-resistance:

An adversary in possession or control of the group secret key and/or the

individual secret shares of any number of group members cannot generate a valid

threshold-multisignature and/or partial/ individual signatures; thus, although the

underlying threshold cryptosystem has been broken, the threshold-multisignature

signature scheme should not be breakable.


HARDWARE SPECIFICATION

Processor : Any processor above 500 MHz.

RAM : 128 MB.

Hard Disk : 10 GB.

Compact Disk : 650 MB.

Input device : Standard Keyboard and Mouse.

Output device : VGA and High Resolution Monitor.

SOFTWARE SPECIFICATION

Operating System : Windows 2000 Server family.

Techniques : JDK 1.5

Databases : MS SQL Server 2000


Software Environment

Java Technology

Initially the language was called “Oak”, but it was renamed “Java” in 1995. The

primary motivation of this language was the need for a platform-independent (i.e.,

architecture neutral) language that could be used to create software to be embedded in

various consumer electronic devices.

Java is a programmer’s language.

Java is cohesive and consistent.

Except for those constraints imposed by the Internet environment, Java gives the programmer full control.

Finally, Java is to Internet programming what C was to systems programming.

Importance of Java to the Internet

Java has had a profound effect on the Internet because it expands the universe of objects that can move about freely in cyberspace. In a network, two categories of objects are transmitted between the server and the personal computer: passive information and dynamic, active programs. Dynamic, self-executing programs cause serious problems in the areas of security and portability. But Java addresses those concerns and, by doing so, has opened the door to an exciting new form of program called the applet.

Java can be used to create two types of programs

Applications and Applets: An application is a program that runs on our computer under the operating system of that computer. It is more or less like one created using C or C++. Java’s ability to create applets makes it important. An applet is an application designed to be transmitted over the Internet and executed by a Java-compatible web browser. An applet is actually a tiny Java program,


dynamically downloaded across the network, just like an image. But the difference is,

it is an intelligent program, not just a media file. It can react to the user input and

dynamically change.
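At its simplest, a standalone application of the kind contrasted with applets above is just a class with a main method (the class name and message here are illustrative):

```java
// A minimal standalone Java application: compiled with javac and run
// directly on the local JVM, unlike an applet, which is downloaded and
// executed inside a Java-compatible browser.
public class HelloApp {
    // a trivial method so the behavior can be exercised outside main
    static String greet(String who) {
        return "Hello, " + who;
    }

    public static void main(String[] args) {
        System.out.println(greet("network")); // prints "Hello, network"
    }
}
```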

Features of Java Security

Every time you download a “normal” program, you are risking a viral infection. Prior to Java, most users did not download executable programs frequently, and those who did scanned them for viruses prior to execution. Most users still worried

about the possibility of infecting their systems with a virus. In addition, another type

of malicious program exists that must be guarded against. This type of program can

gather private information, such as credit card numbers, bank account balances, and

passwords. Java answers both these concerns by providing a “firewall” between a

network application and your computer.

When you use a Java-compatible Web browser, you can safely download Java applets

without fear of virus infection or malicious intent.

Portability

For programs to be dynamically downloaded to all the various types of platforms

connected to the Internet, some means of generating portable executable code is

needed. As you will see, the same mechanism that helps ensure security also helps

create portability. Indeed, Java’s solution to these two problems is both elegant and

efficient.

The Byte code

The key that allows Java to solve the security and portability problems is that the

output of Java compiler is Byte code. Byte code is a highly optimized set of

instructions designed to be executed by the Java run-time system, which is called the

Java Virtual Machine (JVM). That is, in its standard form, the JVM is an interpreter

for byte code.

Translating a Java program into byte code makes it much easier to run a program in a wide variety of environments. The reason is that once the run-time package

exists for a given system, any Java program can run on it.


Although Java was designed for interpretation, there is technically nothing about Java

that prevents on-the-fly compilation of byte code into native code. Sun has just

completed its Just In Time (JIT) compiler for byte code. When the JIT compiler is part of the JVM, it compiles byte code into executable code in real time, on a piece-by-piece, demand basis.

executable code all at once, because Java performs various run-time checks that can

be done only at run time. The JIT compiles code, as it is needed, during execution.

Java Virtual Machine (JVM)

Beyond the language, there is the Java virtual machine. The Java virtual machine is an

important element of the Java technology. The virtual machine can be embedded

within a web browser or an operating system. Once a piece of Java code is loaded

onto a machine, it is verified. As part of the loading process, a class loader is invoked and performs byte code verification, which makes sure that the code generated by the compiler will not corrupt the machine it is loaded on. Byte code verification takes place at the end of the compilation process to make sure that everything is accurate and

correct. So byte code verification is integral to the compiling and executing of Java

code.

Overall Description:

Picture showing the development process of JAVA Program

Java programs are compiled into byte codes, which are then executed. The first box indicates that the Java source code is located in a .java file that is processed with a Java compiler called javac. The Java compiler produces a file called a .class file, which contains the byte code. The .class file is then loaded across the network or loaded locally on your machine into the execution environment, the Java Virtual Machine, which interprets and executes the byte code.
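The javac → .class → JVM pipeline just described can even be driven from inside a running program via the standard javax.tools API (this requires a JDK rather than a JRE; the class and file names below are illustrative):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

// Drives the same pipeline the text describes: a .java source file is
// compiled by javac into a .class file of byte code, which the JVM then
// loads and executes.
public class CompilePipeline {
    public static String compileAndRun() throws Exception {
        Path dir = Files.createTempDirectory("bytecode-demo");
        Path src = dir.resolve("Greeter.java");
        Files.writeString(src,
            "public class Greeter { public static String msg() { return \"byte code\"; } }");

        JavaCompiler javac = ToolProvider.getSystemJavaCompiler(); // null on a JRE-only install
        javac.run(null, null, null, src.toString());               // emits Greeter.class next to the source

        // Load the freshly produced byte code into this JVM and call it.
        try (URLClassLoader loader = new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
            Class<?> greeter = loader.loadClass("Greeter");
            return (String) greeter.getMethod("msg").invoke(null);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compileAndRun()); // prints "byte code" on a JDK
    }
}
```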

Java Architecture

Java architecture provides a portable, robust, high performing environment for development.

Java provides portability by compiling the byte codes for the Java Virtual Machine, which is

then interpreted on each platform by the run-time environment. Java is a dynamic system,

able to load code when needed from a machine in the same room or across the planet.

(Figure: Java source → javac → Java byte code (.class) → Java VM)

Compilation of code

When you compile the code, the Java compiler creates machine code (called byte code) for a

hypothetical machine called Java Virtual Machine (JVM). The JVM is supposed to execute

the byte code. The JVM is created for overcoming the issue of portability. The code is written

and compiled for one machine and interpreted on all machines. This machine is called Java

Virtual Machine.

Compiling and interpreting Java Source Code

During run-time the Java interpreter tricks the byte code file into thinking that it is running on

a Java Virtual Machine. In reality this could be an Intel Pentium running Windows 95, a Sun SPARC station running Solaris, or an Apple Macintosh running Mac OS, and all could receive code from any computer through the Internet and run the applets.

Simple

Java was designed to be easy for the Professional programmer to learn and to use effectively.

If you are an experienced C++ programmer, learning Java will be even easier, because Java inherits the C/C++ syntax and many of the object-oriented features of C++. Most of the

confusing concepts from C++ are either left out of Java or implemented in a cleaner, more

approachable manner. In Java there are a small number of clearly defined ways to accomplish

a given task.

The Java Platform


A platform is the hardware or software environment in which a

program runs. We’ve already mentioned some of the most popular platforms

like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be

described as a combination of the operating system and hardware. The Java

platform differs from most other platforms in that it’s a software-only platform

that runs on top of other hardware-based platforms.

The Java platform has two components:

The Java Virtual Machine (Java VM)

The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java

platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that

provide many useful capabilities, such as graphical user interface (GUI)

widgets. The Java API is grouped into libraries of related classes and

interfaces; these libraries are known as packages. The next section, What Can

Java Technology Do?, highlights what functionality some of the packages in

the Java API provide.

The following figure depicts a program that’s running on the Java

platform. As the figure shows, the Java API and the virtual machine insulate

the program from the hardware.

Native code is code that, after it is compiled, runs on a specific hardware platform. As a platform-independent environment, the

Java platform can be a bit slower than native code. However, smart compilers,

well-tuned interpreters, and just-in-time bytecode compilers can bring

performance close to that of native code without threatening portability.

What Can Java Technology Do?

The most common types of programs written in the Java programming

language are applets and applications. If you’ve surfed the Web, you’re


probably already familiar with applets. An applet is a program that adheres to

certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute,

entertaining applets for the Web. The general-purpose, high-level Java

programming language is also a powerful software platform. Using the

generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform.

A special kind of application known as a server serves and supports clients on

a network. Examples of servers are Web servers, proxy servers, mail servers,

and print servers. Another specialized program is a servlet. A servlet can

almost be thought of as an applet that runs on the server side. Java Servlets are

a popular choice for building interactive web applications, replacing the use of

CGI scripts. Servlets are similar to applets in that they are runtime extensions

of applications. Instead of working in browsers, though, servlets run within

Java Web servers, configuring or tailoring the server.

How does the API support all these kinds of programs? It does so with

packages of software components that provide a wide range of functionality.

Every full implementation of the Java platform gives you the following

features:

The essentials: Objects, strings, threads, numbers, input and output,

data structures, system properties, date and time, and so on.

Applets: The set of conventions used by applets.

Networking: URLs, TCP (Transmission Control Protocol), UDP (User

Datagram Protocol) sockets, and IP (Internet Protocol) addresses.

Internationalization: Help for writing programs that can be localized

for users worldwide. Programs can automatically adapt to specific

locales and be displayed in the appropriate language.

Security: Both low level and high level, including electronic

signatures, public and private key management, access control, and

certificates.

Software components: Known as JavaBeans™, these can plug into existing

component architectures.

Object serialization: Allows lightweight persistence and

communication via Remote Method Invocation (RMI).


Java Database Connectivity (JDBC™): Provides uniform access to a

wide range of relational databases.

Object-Oriented

Java was not designed to be source-code compatible with any other language. This allowed

the Java team the freedom to design with a blank slate. One outcome of this was a clean, usable, pragmatic approach to objects. The object model in Java is simple and easy to extend,

while simple types, such as integers, are kept as high-performance non-objects.

Robust

The multi-platform environment of the Web places extraordinary demands on a program,

because the program must execute reliably in a variety of systems. The ability to create robust

programs was given a high priority in the design of Java. Java is a strictly typed language; it

checks your code at compile time and run time.

Java virtually eliminates the problems of memory management and de-allocation, which is

completely automatic. In a well-written Java program, all run-time errors can, and should, be managed by your program.

Java is two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

Simple, Architecture-neutral, Object-oriented, Portable, Secure, Distributed, High-performance, Interpreted, Multithreaded, Robust, and Dynamic.

Java is also unusual in that each Java program is both compiled and

interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes: platform-independent code that is interpreted and run on the computer.

Compilation happens just once; interpretation occurs each time the

program is executed. The figure illustrates how this works.


You can think of Java byte codes as the machine code instructions for

the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a

Java development tool or a Web browser that can run Java applets, is an

implementation of the Java VM. The Java VM can also be implemented in

hardware.

Java byte codes help make “write once, run anywhere” possible. You can compile your Java program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.

Java Platform

A platform is the hardware or software environment in which a program runs. The Java platform differs from most other platforms in that it is a software-only platform that runs on top of other, hardware-based platforms. Most other platforms are described as a combination of hardware and operating system.

(Figure: Java Program → Compiler → Interpreter → My Program)

The Java platform has two components:

The Java Virtual Machine (Java VM)

The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the

Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software

components that provide many useful capabilities, such as graphical user

interface (GUI) widgets.

The Java API is grouped into libraries (packages) of related components. The next section, What Can Java Do?, highlights each area of functionality provided by the packages in the Java API.

How does the Java API support all of these kinds of programs? With packages of software components that provide a wide range of functionality. The core API is included in every full implementation of the platform.

The core API gives you the following features:

The Essentials: Objects, Strings, threads, numbers, input and output, data

structures, system properties, date and time, and so on.

Applets: The set of conventions used by Java applets.

Networking: URLs, TCP and UDP sockets, and IP addresses.

Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.


(Figure: a Java Program runs on the Java API and Java Virtual Machine, which sit on top of the hardware)

The Java API and the Virtual Machine insulate the Java program from hardware dependencies. As a platform-independent environment, Java can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring Java’s performance close to that of native code without threatening portability.


SWING:

The Swing toolkit includes a rich set of components for building GUIs and

adding interactivity to Java applications. Swing includes all the components you

would expect from a modern toolkit: table controls, list controls, tree controls,

buttons, and labels. Swing is far from a simple component toolkit, however. It

includes rich undo support, a highly customizable text package, integrated

internationalization and accessibility support. To truly leverage the cross-platform

capabilities of the Java platform, Swing supports numerous look and feels, including

the ability to create your own look and feel. The ability to create a custom look and

feel is made easier with Synth, a look and feel specifically designed to be customized.

Swing wouldn't be a component toolkit without the basic user interface primitives

such as drag and drop, event handling, customizable painting, and window

management.

Swing is part of the Java Foundation Classes (JFC). The JFC also include

other features important to a GUI program, such as the ability to add rich graphics

functionality and the ability to create a program that can work in different languages

and by users with different input devices.

The Swing toolkit includes a rich array of components: from basic components, such

as buttons and check boxes, to rich and complex components, such as tables and text.

Even deceptively simple components, such as text fields, offer sophisticated

functionality, such as formatted text input or password field behavior. There are file

browsers and dialogs to suit most needs, and if not, customization is possible. If none

of Swing's provided components are exactly what you need, you can leverage the

basic Swing component functionality to create your own.
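As a small illustration of the component model described above (the component names and strings below are our own, not from the project), a Swing button can be created and inspected entirely in memory:

```java
import javax.swing.JButton;
import javax.swing.JPanel;

// Builds a tiny Swing component tree in memory. Lightweight components
// such as JPanel and JButton can be created and inspected without a
// visible window, which is also how Swing GUI code is usually unit-tested.
public class SwingSketch {
    static JPanel buildPanel() {
        JPanel panel = new JPanel();
        JButton signButton = new JButton("Sign message"); // text; an ImageIcon could be added too
        signButton.setToolTipText("Collects t partial signatures"); // illustrative text
        panel.add(signButton);
        return panel;
    }

    public static void main(String[] args) {
        JButton b = (JButton) buildPanel().getComponent(0);
        System.out.println(b.getText()); // prints "Sign message"
    }
}
```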

Swing components facilitate efficient graphical user interface (GUI) development. These components are a collection of lightweight visual components. Swing components contain a replacement for the heavyweight AWT components as well as complex user-interface components such as trees and tables. Swing is a set of classes that provides more powerful and flexible components than are possible with the AWT. In addition to the familiar components such as buttons, check boxes, and labels, Swing supplies several exciting additions, including tabbed panes, scroll panes, trees, and tables. Even familiar components such as buttons have more capabilities in Swing. For example, a button may have both an image and a text string associated with it. Also, the image can be changed as the state of the button changes. Unlike AWT components, Swing components are not implemented by platform-specific code; instead, they are written entirely in Java and, therefore, are platform-independent. The term lightweight is used to describe such elements. The number of classes and interfaces in the Swing packages is substantial. The Swing architecture is shown in the figure below:

(Figure: Swing architecture: Application Code built on the JFC layers Swing, AWT, Java 2D, Drag & Drop, and Accessibility)

THE SWING COMPONENT CLASSES:

Class : Description

AbstractButton : Abstract superclass for Swing buttons

ButtonGroup : Encapsulates a mutually exclusive set of buttons

ImageIcon : Encapsulates an icon

JApplet : The Swing version of Applet

JButton : The Swing push button class

JCheckBox : The Swing check box class

JComboBox : Encapsulates a combo box

JLabel : The Swing version of a label

JRadioButton : The Swing version of a radio button

JScrollPane : Encapsulates a scrollable window

JTabbedPane : Encapsulates a tabbed window

JTable : Encapsulates a table-based control

JTextField : The Swing version of a text field

JTree : Encapsulates a tree-based control

ADVANTAGES OF SWING:

Wide variety of Components


Pluggable Look and Feel

MVC Architecture

Keystroke Handling

Action Objects

Nested containers

Customized Dialogs

Compound Borders

Standard Dialog Classes

Structured Table and Tree Components

Powerful Text Manipulation

Generic Undo Capabilities

Accessibility Support


SOCKETS

Sockets In Java:

You use URLs and URLConnections to communicate over the network at a relatively high level, for the specific purpose of accessing resources on the Internet. Sometimes your programs require lower-level network communication, for example, when you want to write a client-server application.

In client-server applications, the server provides some service, such as processing database queries or sending out current stock prices. The client uses the service provided by the server to some end: displaying database query results to the user or making stock purchase recommendations to an investor. The communication that occurs between the client and the server must be reliable: no data can be dropped, and it must arrive on the client side in the same order in which it was sent by the server.

TCP provides a reliable, point-to-point communication channel, which client-server applications on the Internet use to communicate. The Socket and ServerSocket classes in java.net provide a system-independent communication channel using TCP.

A server application normally listens on a specific port, waiting for connection requests from a client. When a connection request arrives, the client and server establish a dedicated connection over which they can communicate. During the connection process, the client is assigned a local port number and binds a socket to it. The client talks to the server by writing to the socket and gets information from the server by reading from it. Similarly, the server gets a new local port number (it needs a new port number so that it can continue to listen for connection requests on the original port). The server also binds a socket to its local port and communicates with the client by reading from and writing to it.

The client and the server must agree on a protocol; that is, they must agree on the language of the information transferred back and forth through the socket.

Definition: A socket is one endpoint of a two-way communication link between two programs running on the network.

The java.net package in the Java development environment provides a class, Socket, that represents one end of a two-way connection between your Java program and another program on the network. The Socket class implements the client side of the two-way link. If you are writing server software, you will also be interested in the ServerSocket class, which implements the server side of the two-way link.

ServerSocket:

java.lang.Object
  +-- java.net.ServerSocket

public class ServerSocket extends Object


This class implements server sockets. A server socket waits for requests to come in over the network. It performs some operation based on that request, and then possibly returns a result to the requester. The actual work of the server socket is performed by an instance of the SocketImpl class. An application can, by changing the socket factory that creates the socket implementation, configure itself to create sockets appropriate to the local firewall.

Socket:

java.lang.Object
  +-- java.net.Socket

public class Socket extends Object

This class implements client sockets (also called just “sockets”). A socket is an endpoint for

communication between two machines.

The actual work of the socket is performed by an instance of the SocketImpl class. An

application, by changing the socket factory that creates the socket implementation, can

configure itself to create sockets appropriate to the local firewall.
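As a concrete sketch of the client/server flow described above, the following self-contained example binds a ServerSocket to an OS-assigned port, connects a client Socket to it, and echoes one line back. The class name EchoDemo and the one-line echo protocol are invented for illustration; they are not part of the report's system.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoDemo {
    // Accepts one connection in a background thread, echoes a single line
    // back, and returns what the client read. This mirrors the
    // listen/accept/read/write flow described in the text.
    public static String echo(String line) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // port 0 = any free port
            Thread worker = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo one line back to the client
                } catch (Exception ignored) {
                }
            });
            worker.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(line);    // client writes to its socket...
                return in.readLine(); // ...and reads the server's reply
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echo("hello")); // prints: hello
    }
}
```

Passing port 0 lets the operating system pick any free port, which keeps the sketch runnable without assuming a particular "well known" port.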

Database:

A database management system (DBMS) is computer software designed to manage databases (large sets of structured data) and to run operations on the data as requested by numerous users. Typical examples of DBMSs include Oracle,

DB2, Microsoft Access, Microsoft SQL Server, Firebird, PostgreSQL, MySQL,

SQLite, FileMaker and Sybase Adaptive Server Enterprise. DBMSs are typically used

by Database administrators in the creation of Database systems. Typical examples of

DBMS use include accounting, human resources and customer support systems.

Originally found only in large companies with the computer hardware needed to

support large data sets, DBMSs have more recently emerged as a fairly standard part

of any company back office.

Description


A DBMS is a complex set of software programs that controls the organization,

storage, management, and retrieval of data in a database. A DBMS includes:

A modeling language to define the schema of each database hosted in the DBMS,

according to the DBMS data model.

The four most common types of organizations are the hierarchical,

network, relational and object models. Inverted lists and other methods are

also used. A given database management system may provide one or more

of the four models. The optimal structure depends on the natural

organization of the application's data, and on the application's

requirements (which include transaction rate (speed), reliability,

maintainability, scalability, and cost).

The dominant model in use today is the ad hoc one embedded in SQL,

despite the objections of purists who believe this model is a corruption of

the relational model, since it violates several of its fundamental principles

for the sake of practicality and performance. Many DBMSs also support

the Open Database Connectivity API that supports a standard way for

programmers to access the DBMS.

Data structures (fields, records, files and objects) optimized to deal with very

large amounts of data stored on a permanent data storage device (which

implies relatively slow access compared to volatile main memory).

A database query language and report writer to allow users to interactively

interrogate the database, analyze its data and update it according to the user's

privileges on data.

It also controls the security of the database.

Data security prevents unauthorized users from viewing or updating the

database. Using passwords, users are allowed access to the entire database

or subsets of it called subschemas. For example, an employee database can

contain all the data about an individual employee, but one group of users

may be authorized to view only payroll data, while others are allowed

access to only work history and medical data.

If the DBMS provides a way to interactively enter and update the database,

as well as interrogate it, this capability allows for managing personal


databases. However, it may not leave an audit trail of actions or provide

the kinds of controls necessary in a multi-user organization. These controls

are only available when a set of application programs are customized for

each data entry and updating function.

A transaction mechanism, that ideally would guarantee the ACID properties, in

order to ensure data integrity, despite concurrent user accesses (concurrency

control), and faults (fault tolerance).

It also maintains the integrity of the data in the database.

The DBMS can maintain the integrity of the database by not allowing

more than one user to update the same record at the same time. The DBMS

can help prevent duplicate records via unique index constraints; for

example, no two customers with the same customer numbers (key fields)

can be entered into the database. See ACID properties for more

information (Redundancy avoidance).

The DBMS accepts requests for data from the application program and instructs the

operating system to transfer the appropriate data.

When a DBMS is used, information systems can be changed much more easily as the

organization's information requirements change. New categories of data can be added

to the database without disruption to the existing system.

Organizations may use one kind of DBMS for daily transaction processing and then

move the detail onto another computer that uses another DBMS better suited for

random inquiries and analysis. Overall systems design decisions are performed by

data administrators and systems analysts. Detailed database design is performed by

database administrators.

Database servers are specially designed computers that hold the actual databases and

run only the DBMS and related software. Database servers are usually multiprocessor

computers, with RAID disk arrays used for stable storage. Connected to one or more

servers via a high-speed channel, hardware database accelerators are also used in

large volume transaction processing environments.


DBMSs are found at the heart of most database applications. Sometimes DBMSs are

built around a private multitasking kernel with built-in networking support although

nowadays these functions are left to the operating system.


SQL

Structured Query Language (SQL) is the language used to manipulate relational

databases. SQL is tied very closely with the relational model.

In the relational model, data is stored in structures called relations or tables.

SQL statements are issued for the purpose of:

Data definition: Defining tables and structures in the database (DDL used to create,

alter and drop schema objects such as tables and indexes).

Data manipulation: Used to manipulate the data within those schema objects (DML

Inserting, Updating, Deleting the data, and Querying the Database).

A schema is a collection of database objects that can include tables, views, indexes and sequences.

The SQL statements that can be issued against an Oracle database schema include:

ALTER - Change an existing table, view or index definition (DDL)

AUDIT - Track the changes made to a table (DDL)

COMMENT - Add a comment to a table or column in a table (DDL)

COMMIT - Make all recent changes permanent (DML - transactional)

CREATE - Create new database objects such as tables or views (DDL)

DELETE - Delete rows from a database table (DML)

DROP - Drop a database object such as a table, view or index (DDL)

GRANT - Allow another user to access database objects such as tables or

views (DDL)

INSERT - Insert new data into a database table (DML)

NOAUDIT - Turn off the auditing function (DDL)

REVOKE - Disallow a user access to database objects such as tables and

views (DDL)

ROLLBACK - Undo any recent changes to the database (DML - transactional)


SELECT - Retrieve data from a database table (DML)

TRUNCATE - Delete all rows from a database table (cannot be rolled back) (DDL)

UPDATE - Change the values of some data items in a database table (DML)


Eclipse IDE

Eclipse is an open-source software framework written primarily in Java. In its default

form it is an Integrated Development Environment (IDE) for Java developers,

consisting of the Java Development Tools (JDT) and the Eclipse Compiler for Java

(ECJ). Users can extend its capabilities by installing plug-ins written for the Eclipse

software framework, such as development toolkits for other programming languages,

and can write and contribute their own plug-in modules. Language packs are available

for over a dozen languages.

Architecture

The basis for Eclipse is the Rich Client Platform (RCP). The following components

constitute the rich client platform:

OSGi - a standard bundling framework

Core platform - boots Eclipse, runs plug-ins

Standard Widget Toolkit (SWT) - a portable widget toolkit

JFace - viewer classes to bring model-view-controller programming to SWT, file buffers, text handling, text editors

Eclipse Workbench - views, editors, perspectives, wizards

Eclipse's widgets are implemented by a widget toolkit for Java called SWT, unlike

most Java applications, which use the Java standard Abstract Window Toolkit (AWT)

or Swing. Eclipse's user interface also leverages an intermediate GUI layer called

JFace, which simplifies the construction of applications based on SWT.

Eclipse employs plug-ins in order to provide all of its functionality on top of (and

including) the rich client platform, in contrast to some other applications where

functionality is typically hard coded. This plug-in mechanism is a lightweight

software componentry framework. In addition to allowing Eclipse to be extended

using other programming languages such as C and Python, the plug-in framework

allows Eclipse to work with typesetting languages like LaTeX, networking

applications such as telnet, and database management systems. The plug-in

architecture supports writing any desired extension to the environment, such as for

configuration management. Java and CVS support is provided in the Eclipse SDK.


The key to the seamless integration of tools with Eclipse is the plug-in. With the

exception of a small run-time kernel, everything in Eclipse is a plug-in. This means

that a plug-in you develop integrates with Eclipse in exactly the same way as other

plug-ins; in this respect, all features are created equal.

The Eclipse SDK includes the Eclipse Java Development Tools, offering an IDE with

a built-in incremental Java compiler and a full model of the Java source files. This

allows for advanced refactoring techniques and code analysis. The IDE also makes

use of a workspace, in this case a set of metadata over a flat filespace allowing

external file modifications as long as the corresponding workspace "resource" is

refreshed afterwards. The Visual Editor project allows interfaces to be created

interactively, hence allowing Eclipse to be used as a RAD tool.

The following is a list of notable projects and plug-ins for the Eclipse IDE.

These projects are maintained by the Eclipse community and hosted by the Eclipse

Foundation.

Core projects

Rich Client Platform (Platform) is the core framework that all other Eclipse projects

are built on.

Java Development Tools (JDT) provides support for core Java SE. This includes a

standalone fast incremental compiler.

Tools projects

C/C++ Development Tools (CDT) adds support for C/C++ syntax highlighting, code

formatting, debugger integration and project structures. Unlike the JDT project, the

CDT project does not add a compiler and relies on an external tool chain.

Graphical Editing Framework (GEF) allows developers to build standalone graphical

tools. Example uses include circuit diagram design tools, activity diagram editors and

WYSIWYG document editors.

Modelling projects


Eclipse Modeling Framework (EMF) is a modeling framework and code generation

facility for building tools and other applications based on a structured data model,

from a model specification described in XMI.

Graphical Modeling Framework (GMF) is a generative component and runtime

infrastructure for developing graphical editors based on EMF and GEF.

Other projects

Test and Performance Tools Platform (TPTP) which provides a platform that allows

software developers to build test and performance tools, such as debuggers, profilers

and benchmarking applications.

Business Intelligence and Reporting Tools Project (BIRT), an Eclipse-based open

source reporting system for web applications, especially those based on Java.


Networking

Introduction

This article is about a client/server multi-threaded socket class. The thread is optional, since the developer is still responsible for deciding whether the application needs it. There are other socket classes here and in other places on the

Internet but none of them can provide feedback (event detection) to your

application like this one does. It provides you with the following events

detection: connection established, connection dropped, connection failed

and data reception (including 0 byte packet).

Description

This article presents a new socket class which supports both TCP

and UDP communication. But it provides some advantages compared to

other classes that you may find here or on some other Socket

Programming articles. First of all, this class doesn't have any limitation

like the need to provide a window handle to be used. This limitation is

bad if all you want is a simple console application. So this library

doesn't have such a limitation. It also provides threading support

automatically for you, which handles the socket connection and

disconnection to a peer. It also features some options not yet found in

any socket classes that I have seen so far. It supports both client and

server sockets. A server socket can be referred to as a socket that can

accept many connections. A client socket is a socket that is connected to a server socket. You may still use this class to communicate

between two applications without establishing a connection. In the latter

case, you will want to create two UDP server sockets (one for each

application). This class also helps reduce coding need to create chat-like

applications and IPC (Inter-Process Communication) between two or

more applications (processes). Reliable communication between two

peers is also supported with TCP/IP with error handling. You may want

to use the smart addressing operation to control the destination of the


data being transmitted (UDP only). TCP operation of this class deals

only with communication between two peers.

Analysis of Network

TCP/IP stack

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is

a connectionless protocol.

IP datagrams

The IP layer provides a connectionless and unreliable delivery system. It

considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer

supplies a checksum that includes its own header. The header includes the

source and destination addresses. The IP layer handles routing through an


Internet. It is also responsible for breaking up large datagrams into smaller

ones for transmission and reassembling them at the other end.

UDP

UDP is also connectionless and unreliable. What it adds to IP is a

checksum for the contents of the datagram and port numbers. These are used

to give a client/server model - see later.
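A minimal sketch of this connectionless, port-addressed model using java.net.DatagramSocket: one socket sends a single datagram to another over the loopback interface, with no connection established first. The class name UdpLoopback is invented for the example.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpLoopback {
    // Sends one UDP datagram to a second socket on this machine and
    // returns the text the receiver extracted from it.
    public static String roundTrip(String text) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0); // OS-assigned port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] out = text.getBytes(StandardCharsets.UTF_8);
            // The destination is identified purely by address + port number.
            sender.send(new DatagramPacket(out, out.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[1024];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.receive(in); // blocks until the datagram arrives
            return new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("ping")); // prints: ping
    }
}
```

Unlike the TCP case, no accept/connect handshake occurs; the datagram simply carries the destination port, which is exactly what the checksum-plus-port-numbers description above implies.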

TCP

TCP supplies logic to give a reliable connection-oriented protocol above

IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an

address scheme for machines so that they can be located. The address is a 32

bit integer which gives the IP address. This encodes a network ID and more

addressing. The network ID falls into various classes according to the size of

the network address.

Network address

Class A uses 8 bits for the network address with 24 bits left over for

other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit

network addressing and class D uses all 32.

Subnet address

Internally, the UNIX network is divided into sub networks. Building 11

is currently on one sub network and uses 10-bit addressing, allowing 1024

different hosts.

Host address

8 bits are finally used for host addresses within our subnet. This places a

limit of 256 machines that can be on the subnet.


Total address

The 32 bit address is usually written as 4 integers separated by dots.
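The addressing rules above can be sketched in a few lines of Java. The helper names are invented for the example; classOf uses the standard classful first-octet ranges (0xxx = A, 10xx = B, 110x = C, 1110 = D).

```java
public class IpClassDemo {
    // Packs a dotted-quad string into the single 32-bit integer the text describes.
    public static long toInt(String ip) {
        long v = 0;
        for (String octet : ip.split("\\.")) {
            v = (v << 8) | Integer.parseInt(octet); // shift in each 8-bit octet
        }
        return v;
    }

    // Classful addressing: the class is determined by the leading bits,
    // which is equivalent to checking the range of the first octet.
    public static char classOf(String ip) {
        int first = Integer.parseInt(ip.split("\\.")[0]);
        if (first < 128) return 'A'; // leading bit 0
        if (first < 192) return 'B'; // leading bits 10
        if (first < 224) return 'C'; // leading bits 110
        return 'D';                  // leading bits 1110 (multicast)
    }

    public static void main(String[] args) {
        System.out.println(toInt("10.0.0.1"));   // prints: 167772161
        System.out.println(classOf("10.0.0.1")); // prints: A
    }
}
```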

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit

number. To send a message to a server, you send it to the port for that

service of the host that it is running on. This is not location transparency!

Certain of these ports are "well known".

Sockets

A socket is a data structure maintained by the system to handle network

connections. A socket is created using the call socket. It returns an integer

that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>

int socket(int family, int type, int protocol);

Here "family" will be AF_INET for IP communications, protocol will be

zero, and type will depend on whether TCP or UDP is used. Two processes

wishing to communicate over a network create a socket each. These are

similar to two ends of a pipe - but the actual pipe does not yet exist.


JFree Chart

JFreeChart is a free 100% Java chart library that makes it easy for developers

to display professional quality charts in their applications. JFreeChart's extensive

feature set includes:

a consistent and well-documented API, supporting a wide range of chart types;

a flexible design that is easy to extend, and targets both server-side and client-

side applications;

support for many output types, including Swing components, image files

(including PNG and JPEG), and vector graphics file formats (including PDF,

EPS and SVG);

JFreeChart is "open source" or, more specifically, free software. It is

distributed under the terms of the GNU Lesser General Public Licence

(LGPL), which permits use in proprietary applications.

1. Map Visualizations

Charts showing values that relate to geographical areas. Some examples

include: (a) population density in each state of the United States, (b) income per

capita for each country in Europe, (c) life expectancy in each country of the world.

The tasks in this project include:

sourcing freely redistributable vector outlines for the countries of the world,

states/provinces in particular countries (USA in particular, but also other

areas);

creating an appropriate dataset interface (plus default implementation), a

renderer, and integrating this with the existing XYPlot class in JFreeChart;

Testing, documenting, testing some more, documenting some more.

2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts ---

to display a separate control that shows a small version of ALL the time series data,

with a sliding "view" rectangle that allows you to select the subset of the time series

data to display in the main chart.


3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible

dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies,

thermometers, bars, and lines/time series) that can be delivered easily via both Java

Web Start and an applet.

4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of

the properties that can be set for charts. Extend (or reimplement) this mechanism to

provide greater end-user control over the appearance of the charts.


SYSTEM DESIGN


Design Overview

Design involves the identification of classes, their relationships, and their collaboration. In Objectory, classes are divided into entity classes, interface classes and control classes. The Computer Aided Software Engineering (CASE) tools that are available commercially do not provide any assistance in this transition. Even research CASE tools that take advantage of meta-modeling are helpful only after the construction of the class diagram is complete. The Fusion method used object-oriented approaches such as the Object Modeling Technique (OMT) and Class-Responsibility-Collaborator (CRC) cards, while Objectory used the term agents to represent some of the hardware and software systems. In the Fusion method there was no requirements phase, wherein a user supplies the initial requirements document. Any software project is worked out by both the analyst and the designer. The analyst creates the use case diagram and the designer creates the class diagram, but the designer can do this only after the analyst has created the use case diagram. Once the design is over, it must be decided which software is suitable for the application.

Data Flow Diagram / Use Case Diagram / Flow Diagram

The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.


Data Flow Diagram

Signature Generation

The user invokes the ElGamal algorithm process, which generates the key pair (private and public key). The private key is provided as input to a hash function (SHA) to obtain a hash value, and from that hash value the signature is generated.


Signature Verification

The user runs the ElGamal algorithm process, which rehashes the signature to obtain the hash value. From the hash value the sender's private key is recovered, and this private key is compared with the received key.
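The generation and verification flows above can be illustrated with a textbook ElGamal signature over java.math.BigInteger. This is a simplified sketch, not the report's threshold-multisignature scheme: it uses demo-sized 256-bit parameters, SHA-256 for hashing, base g = 2, and invented class and method names.

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class ElGamalSignatureDemo {
    static final SecureRandom RND = new SecureRandom();

    // Textbook ElGamal signature: sign a message, then verify it.
    // Demo-sized parameters; real systems use >= 2048-bit primes.
    public static boolean signAndVerify(byte[] message) throws Exception {
        BigInteger p = BigInteger.probablePrime(256, RND);
        BigInteger g = BigInteger.valueOf(2);
        BigInteger pm1 = p.subtract(BigInteger.ONE);

        BigInteger x = new BigInteger(250, RND).add(BigInteger.ONE); // private key
        BigInteger y = g.modPow(x, p);                               // public key

        // Hash the message with SHA-256, reduced mod p-1.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(message);
        BigInteger h = new BigInteger(1, digest).mod(pm1);

        // Sign: pick k with gcd(k, p-1) = 1, then
        // r = g^k mod p, s = k^{-1} (h - x*r) mod (p-1).
        BigInteger k;
        do { k = new BigInteger(250, RND).add(BigInteger.ONE); }
        while (!k.gcd(pm1).equals(BigInteger.ONE));
        BigInteger r = g.modPow(k, p);
        BigInteger s = k.modInverse(pm1)
                .multiply(h.subtract(x.multiply(r))).mod(pm1);

        // Verify: g^h == y^r * r^s (mod p).
        return g.modPow(h, p)
                .equals(y.modPow(r, p).multiply(r.modPow(s, p)).mod(p));
    }

    public static void main(String[] args) throws Exception {
        byte[] msg = "hello".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        System.out.println(signAndVerify(msg) ? "valid" : "invalid"); // prints: valid
    }
}
```

The verification equation holds because h ≡ x·r + k·s (mod p-1) by construction, so g^h = (g^x)^r · (g^k)^s = y^r · r^s (mod p).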


Encryption and Decryption

Encryption: the user's message is encrypted using the private key, and the encrypted message is sent to the receiver.

Decryption: the receiver decrypts the message using the public key and views the original message.
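For comparison, a sketch of textbook ElGamal encryption with BigInteger. Note that in textbook ElGamal the sender encrypts with the recipient's public key and the recipient decrypts with the private key (the flow above states the reverse, which describes signing rather than encryption); parameter sizes and names here are illustrative.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class ElGamalEncryptionDemo {
    static final SecureRandom RND = new SecureRandom();

    // Encrypts m under the public key y = g^x mod p, decrypts with x,
    // and returns the recovered plaintext. Demo-sized 256-bit prime.
    public static BigInteger roundTrip(BigInteger m) {
        BigInteger p = BigInteger.probablePrime(256, RND);
        BigInteger g = BigInteger.valueOf(2);
        BigInteger x = new BigInteger(250, RND).add(BigInteger.ONE); // private key
        BigInteger y = g.modPow(x, p);                               // public key

        // Encrypt: pick an ephemeral k; ciphertext is (c1, c2) = (g^k, m * y^k) mod p.
        BigInteger k = new BigInteger(250, RND).add(BigInteger.ONE);
        BigInteger c1 = g.modPow(k, p);
        BigInteger c2 = m.mod(p).multiply(y.modPow(k, p)).mod(p);

        // Decrypt: m = c2 * (c1^x)^{-1} mod p, since c1^x = y^k.
        return c2.multiply(c1.modPow(x, p).modInverse(p)).mod(p);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(BigInteger.valueOf(123456789))); // prints: 123456789
    }
}
```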


Message Transmission

Unicasting: transfer the encrypted message with our signature to a single receiver (client).

Multicasting: transfer the encrypted message with our signature to several receivers (client-1, client-2, ..., client-N).


Architecture Diagram:

A Fully Distributed Proactively Secure Threshold-Multisignature Scheme

The architecture consists of the following modules: signature generation, signature verification, cryptography (encryption and decryption), and message transmission (unicasting and multicasting).


UML Diagram

Unified Modeling Language:

The Unified Modeling Language allows the software engineer to express an analysis model using a modeling notation that is governed by a set of syntactic, semantic and pragmatic rules. A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows.

User Model View

i. This view represents the system from the user's perspective.

ii. The analysis representation describes a usage scenario from the end-user's perspective.

Structural model view

i. In this model the data and functionality are viewed from inside the system.

ii. This model view models the static structures.

Behavioral Model View

It represents the dynamic or behavioral aspects of the system, depicting the interactions or collaborations between the various structural elements described in the user model and structural model views.

Implementation Model View

In this view the structural and behavioral aspects of the system are represented as they are to be built.

Environmental Model View

In this the structural and behavioral aspects of the

environment in which the system is to be implemented

are represented.


UML is specifically constructed through two different domains; they are:

UML Analysis modeling, this focuses on the user model and

structural model views of the system.

UML design modeling, which focuses on the behavioral

modeling, implementation modeling and environmental model

views.

Use case Diagrams represent the functionality of the system from a

user’s point of view. Use cases are used during requirements

elicitation and analysis to represent the functionality of the system.

Use cases focus on the behavior of the system from an external point of view.

Actors are external entities that interact with the system. Examples of actors include users, like an administrator or a bank customer, or another system, like a central database.


Use Case Diagrams:

The user participates in the following use cases: signature generation, signature verification, encryption, decryption and message transmission, where message transmission includes (<<include>>) unicasting and multicasting.


In the signature generation use case, the user runs the ElGamal algorithm, which generates the key pairs; providing the input then includes (<<include>>) generating the signature.

In the verification use case, ElGamal processing rehashes the signature, recovers the sender's private key, and compares the private keys before the user accepts the message.


Activity Diagram:

Signature generation: the user starts the ElGamal algorithm process, generates the private keys, provides the private key as input, and generates the signature.


Signature verification: the ElGamal process rehashes the signature, gets the sender's private key, and compares that private key with the received key.


Activity Diagram for Encryption

The user's message is encrypted with the private key, sent to the receiver (destination), and an acknowledgement is returned.


Activity Diagram for Decryption

The encrypted message is decrypted using the public key, and the original message is viewed.


Sequence Diagram:

Main sequence diagram: the user interacts with the signature generation, verification, encryption, decryption and message transmission objects in order: 1: generate signature, 2: verification, 3: encryption, 4: decryption, 5: message transmission.


Sequence diagram for generation: the user applies the ElGamal algorithm (1), generates the key pairs (2), provides the input (3), and generates the signature (4).


Sequence diagram for receiving: ElGamal processing runs (1), the sender's private key is sent (2), the private keys are compared (3), and the user receives the result (4).


SYSTEM TESTING

Software Testing is the process used to help identify the correctness,

completeness, security, and quality of developed computer software. Testing is a

process of technical investigation, performed on behalf of stakeholders, that is

intended to reveal quality-related information about the product with respect to the

context in which it is intended to operate. This includes, but is not limited to, the

process of executing a program or application with the intent of finding errors.

Quality is not an absolute; it is value to some person. With that in mind, testing can

never completely establish the correctness of arbitrary computer software; testing

furnishes a criticism or comparison that compares the state and behavior of the

product against a specification. An important point is that software testing should be

distinguished from the separate discipline of Software Quality Assurance (SQA),

which encompasses all business process areas, not just testing.

There are many approaches to software testing, but effective testing of

complex products is essentially a process of investigation, not merely a matter of

creating and following routine procedure. One definition of testing is "the process of

questioning a product in order to evaluate it", where the "questions" are operations the

tester attempts to execute with the product, and the product answers with its behavior

in reaction to the probing of the tester. Although most of the

intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product: putting the

product through its paces. Some of the common quality attributes include capability,

reliability, efficiency, portability, maintainability, compatibility and usability. A good

test is sometimes described as one which reveals an error; however, more recent

thinking suggests that a good test is one which reveals information of interest to

someone who matters within the project community.


Introduction

In general, software engineers distinguish software faults from software failures. In

case of a failure, the software does not do what the user expects. A fault is a

programming error that may or may not actually manifest as a failure. A fault can also be described as an error in the semantics of a computer program. A

fault will become a failure if the exact computation conditions are met, one of them

being that the faulty portion of computer software executes on the CPU. A fault can

also turn into a failure when the software is ported to a different hardware platform or

a different compiler, or when the software gets extended. Software testing is the

technical investigation of the product under test to provide stakeholders with quality

related information.

Software testing may be viewed as a sub-field of Software Quality Assurance but

typically exists independently (and there may be no SQA areas in some companies).

In SQA, software process specialists and auditors take a broader view on software and

its development. They examine and change the software engineering process itself to

reduce the number of faults that end up in the code, or to deliver faster.

Regardless of the methods used or the level of formality involved, the desired result of

testing is a level of confidence in the software so that the organization is confident

that the software has an acceptable defect rate. What constitutes an acceptable defect

rate depends on the nature of the software. An arcade video game designed to

simulate flying an airplane would presumably have a much higher tolerance for

defects than software used to control an actual airliner.

A problem with software testing is that the number of defects in a software product

can be very large, and the number of configurations of the product larger still. Bugs

that occur infrequently are difficult to find in testing. A rule of thumb is that a system

that is expected to function without faults for a certain length of time must have


already been tested for at least that length of time. This has severe consequences for projects that aim to write long-lived, reliable software.

A common practice of software testing is that it is performed by an independent group

of testers after the functionality is developed but before it is shipped to the customer.

This practice often results in the testing phase being used as project buffer to

compensate for project delays. Another practice is to start software testing at the same

moment the project starts and it is a continuous process until the project finishes.

Another common practice is for test suites to be developed during technical support

escalation procedures. Such tests are then maintained in regression testing suites to

ensure that future updates to the software don't repeat any of the known mistakes.

It is commonly believed that the earlier a defect is found the cheaper it is to fix it.

The following table shows the relative cost of fixing a defect, by the phase in which it was introduced versus the phase in which it was detected:

                      Time Detected
Time Introduced    Requirements   Architecture   Construction   System Test   Post-Release
Requirements            1              3             5-10            10          10-100
Architecture            -              1              10             15          25-100
Construction            -              -               1             10          10-25

In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process unit tests are written first, by the programmers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously


updated as new failure conditions and corner cases are discovered, and they are

integrated with any regression tests that are developed.

Unit tests are maintained along with the rest of the software source code and generally

integrated into the build process (with inherently interactive tests being relegated to a

partially manual build acceptance process).
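As a minimal illustration of such a unit test (a hypothetical example, not taken from the project's code), the checks live next to the unit under test and run on every build; `StringUtilsTest` and `reverse` are illustrative names:

```java
// Hypothetical unit test in the test-first style: the assertions below
// are (notionally) written before the implementation, and fail until
// the implementation satisfies them.
public class StringUtilsTest {
    // The unit under test.
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    public static void main(String[] args) {
        // Each check throws if the implementation regresses.
        check(reverse("abc").equals("cba"), "reverse of abc");
        check(reverse("").equals(""), "reverse of empty string");
        check(reverse("x").equals("x"), "single character unchanged");
        System.out.println("all unit tests passed");   // prints all unit tests passed
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("failed: " + name);
    }
}
```

In practice such checks would sit in a separate test source tree and be run by the build, exactly as the paragraph above describes.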

The software, tools, samples of data input and output, and configurations are all

referred to collectively as a test harness.

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in his 1979 book "The Art of Software Testing". Although his attention was on breakage testing, it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. In 1988, Drs. Dave Gelperin and William C. Hetzel classified the phases and goals of software testing as follows: until 1956 was the debugging-oriented period, when testing was often associated with debugging and there was no clear difference between the two. From 1957 to 1978 came the demonstration-oriented period, in which debugging and testing were now distinguished; in this period the aim was to show that the software satisfies its requirements. The time between 1979 and 1982 is known as the destruction-oriented period, where the goal was to find errors. 1983 to 1987 is classified as the evaluation-oriented period: the intention here was to provide product evaluation and quality measurement throughout the software life cycle. From 1988 onward it was seen as the prevention-oriented period, in which tests were to demonstrate that software satisfies its specification, to detect faults, and to prevent faults. Dr. Gelperin chaired the committee for IEEE 829-1988 (the Test Documentation Standard), and Dr. Hetzel wrote the book "The Complete Guide to Software Testing". Both works were pivotal to today's testing culture and remain a consistent source of reference. Dr. Gelperin and Jerry E. Durant also went on to develop High Impact Inspection Technology, which builds upon traditional inspections but utilizes a test-driven additive.


White-box and black-box testing


White box and black box testing are terms used to describe the point of view a test engineer takes when designing test cases: black box testing takes an external view of the test object, while white box testing takes an internal view. Software testing is partly intuitive, but largely systematic. Good testing involves much more than just running the program a few times to see whether it works. Thorough analysis of the program under test, backed by a broad knowledge of testing techniques and tools, is a prerequisite to systematic testing. Software testing is the process of executing software in a controlled manner, in order to answer the question "Does this software behave as specified?" Software testing is used in association with verification and validation. Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification, which also uses techniques such as reviews, inspections, and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.

Validation: Are we doing the right job?

Verification: Are we doing the job right?

In order to achieve consistency in the testing style, it is imperative to have and follow a set of testing principles. This enhances the efficiency of testing among SQA team members and thus contributes to increased productivity. The purpose of this document is to provide an overview of testing, plus the techniques.

At SDEI, three levels of software testing are done at various SDLC phases:


Unit Testing: in which each unit (basic component) of the software is tested to verify

that the detailed design for the unit has been correctly implemented

Integration testing: in which progressively larger groups of tested software

components corresponding to elements of the architectural design are integrated and

tested until the software works as a whole.

System testing: in which the software is integrated into the overall product and tested to show that all requirements are met.

A further level of testing is also done, in accordance with requirements:

Acceptance testing: upon which the acceptance of the complete software is based; the clients often do this.

Regression testing: the repetition of earlier successful tests to ensure that changes made to the software have not introduced new bugs or side effects.

In recent years the term grey box testing has come into common usage. The typical grey box tester is permitted to set up or manipulate the testing environment, such as seeding a database, and can view the state of the product after their actions, for example by performing an SQL query on the database to be certain of the values of columns. The term is applied almost exclusively to client-server testers or others who use a database as a repository of information, but it can also apply to a tester who has to manipulate XML files (a DTD or an actual XML file) or configuration files directly. It can also be used of testers who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results. For example, testing a data warehouse implementation involves loading the target database with information, then verifying the correctness of data population and the loading of data into the correct tables.


Test levels

Unit testing tests the minimal software components, sub-components, or modules; it is performed by the programmers.

Integration testing exposes defects in the interfaces and interaction between integrated

components (modules).

Functional testing tests the product against its functional requirements.

System testing tests an integrated system to verify/validate that it meets its

requirements.

Acceptance testing can be conducted by the client. It allows the end user, customer, or client to decide whether or not to accept the product. Acceptance testing may be performed after testing and before the implementation phase.

Alpha testing is simulated or actual operational testing by potential users/customers or

an independent test team at the developers' site. Alpha testing is often employed for

off-the-shelf software as a form of internal acceptance testing, before the software

goes to beta testing.

Beta testing comes after alpha testing. Versions of the software, known as beta

versions, are released to a limited audience outside of the company. The software is

released to groups of people so that further testing can ensure the product has few

faults or bugs. Sometimes, beta versions are made available to the general public to gather feedback from the widest possible number of future users.


It should be noted that although both alpha and beta are referred to as testing, they are in fact forms of use immersion: the rigors that are applied are often unsystematic, and many of the basic tenets of the testing process are not used. The alpha and beta periods provide insight into environmental and utilization conditions that can impact the software.

After modifying software, either for a change in functionality or to fix defects, a

regression test re-runs previously passing tests on the modified software to ensure that

the modifications haven't unintentionally caused a regression of previous

functionality. Regression testing can be performed at any or all of the above test

levels. These regression tests are often automated.

Test cases, suites, scripts and scenarios

A test case is a software testing document, which consists of event, action, input,

output, expected result and actual result. Clinically defined (IEEE 829-1998) a test

case is an input and an expected result. This can be as pragmatic as 'for condition x

your derived result is y', whereas other test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps

(but often steps are contained in a separate test procedure that can be exercised against

multiple test cases, as a matter of economy) but with one expected result or expected

outcome. The optional fields are a test case ID, test step or order of execution number,

related requirement(s), depth, test category, author, and check boxes for whether the

test is automatable and has been automated. Larger test cases may also contain

prerequisite states or steps, and descriptions. A test case should also contain a place

for the actual result. These steps can be stored in a word processor document,

spreadsheet, database or other common repository. In a database system, you may

also be able to see past test results and who generated the results and the system

configuration used to generate those results. These past results would usually be

stored in a separate table.
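A test case of the kind described above can be sketched as a small record; the class and field names here are illustrative, not from the project:

```java
// Sketch of a test-case record in the IEEE 829 spirit discussed above:
// an input plus an expected result, with a slot for the actual result.
public class TestCase {
    final String id;        // optional test case ID
    final String input;     // the input / precondition ("condition x")
    final String expected;  // the expected result ("derived result y")
    String actual;          // filled in when the test is executed

    TestCase(String id, String input, String expected) {
        this.id = id;
        this.input = input;
        this.expected = expected;
    }

    // a test passes when the recorded actual result matches the expected one
    boolean passed() {
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        TestCase tc = new TestCase("TC-001", "condition x", "y");
        tc.actual = "y";    // record the observed result after execution
        System.out.println(tc.id + (tc.passed() ? " PASS" : " FAIL"));   // prints TC-001 PASS
    }
}
```

In a database-backed repository, each such record (plus the configuration used) would be one row, with past results kept in a separate table as noted above.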

The term test script is the combination of a test case, test procedure and test data.

Initially the term was derived from the byproduct of work created by automated


regression test tools. Today, test scripts can be manual, automated or a combination of

both.

The most common term for a collection of test cases is a test suite. The test suite often

also contains more detailed instructions or goals for each collection of test cases. It

definitely contains a section where the tester identifies the system configuration used

during testing. A group of test cases may also contain prerequisite states or steps, and

descriptions of the following tests.

Collections of test cases are sometimes incorrectly termed a test plan. They might

correctly be called a test specification. If sequence is specified, it can be called a test

script, scenario or procedure.

A sample testing cycle

Although testing varies between organizations, there is a cycle to testing:

Requirements Analysis: Testing should begin in the requirements phase of the

software development life cycle.

During the design phase, testers work with developers in determining what aspects of a design are testable and under what parameters those tests work.

Test Planning: Test Strategy, Test Plan(s), Test Bed creation.

Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in

testing software.

Test Execution: Testers execute the software based on the plans and tests and report

any errors found to the development team.

Test Reporting: Once testing is completed, testers generate metrics and make final

reports on their test effort and whether or not the software tested is ready for release.

Retesting the Defects

Not all errors or defects reported must be fixed by a software development team.

Some may be caused by errors in configuring the test software to match the


development or production environment. Some defects can be handled by a

workaround in the production environment. Others might be deferred to future

releases of the software, or the deficiency might be accepted by the business user.

There are yet other defects that may be rejected by the development team (of course,

with due reason) if they deem it inappropriate to be called a defect.


IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is turned into a working system. Thus it can be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective.

The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of changeover methods.

Implementation is the process of converting a new system design into

operation. It is the phase that focuses on user training, site preparation and file

conversion for installing a candidate system. The important factor that should be

considered here is that the conversion should not disrupt the functioning of the

organization.

The proposed threshold-multisignature scheme is based on a multiparty

extension of the ElGamal type signature variant: GES. The proposed threshold-

multisignature scheme can equally use any other secure and efficient signature variant

of the ElGamal type signature scheme.

The main reason for using the defined GES is to minimize the computational

cost of generating and verifying the individual signatures and group-oriented

signature in a multiparty setting without compromising security.

For the proposed threshold-multisignature scheme, it is enough to show that the scheme fulfills all of the fundamental properties of generic threshold-multisignature schemes and resists the attacks to which other similar schemes are subject.


Module Description

1. Signature

Signature Generation

In this module we generate signatures for the group members. The signature is generated with the participant's long-term private key, and the participant broadcasts the message and the signature to all protocol participants. The encryption of the secret shadow for a protocol participant is performed using an appropriate publicly verifiable encryption scheme. Using the ElGamal-type signature variant GES, each participant generates a signature with its long-term private key and broadcasts it to all protocol participants.

Any subset β of t or more members can represent the group and sign an arbitrary message m. To sign a message m, choose a random integer k such that 0 < k < P-1 (where k has not been used before and is relatively prime to P-1), and compute r = g^k mod P and s = k^-1 (m - xr) mod (P-1), where k^-1 is the multiplicative inverse of k mod (P-1). The message signature is then the pair (r, s).
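The signing equations above can be sketched directly. This is textbook ElGamal signing, not the report's GES variant (which uses a different signing equation but the same r = g^k structure), and the parameters P = 467, g = 2 and the values in main are toy numbers chosen for illustration:

```java
import java.math.BigInteger;

// Textbook ElGamal signing: r = g^k mod P, s = k^-1 (m - x r) mod (P-1).
// P, G and all values below are toy demo parameters, not from the report.
public class ElGamalSign {
    static final BigInteger P  = BigInteger.valueOf(467);   // small prime, demo only
    static final BigInteger G  = BigInteger.valueOf(2);     // generator mod P
    static final BigInteger P1 = P.subtract(BigInteger.ONE);

    // m: message (already hashed to an integer), x: long-term private key,
    // k: fresh random nonce with gcd(k, P-1) = 1
    static BigInteger[] sign(BigInteger m, BigInteger x, BigInteger k) {
        BigInteger r = G.modPow(k, P);                 // r = g^k mod P
        BigInteger s = k.modInverse(P1)                // k^-1 mod (P-1)
                       .multiply(m.subtract(x.multiply(r)))
                       .mod(P1);                       // s = k^-1 (m - x r) mod (P-1)
        return new BigInteger[]{r, s};                 // the signature pair (r, s)
    }

    public static void main(String[] args) {
        BigInteger x = BigInteger.valueOf(127);        // private key
        BigInteger[] sig = sign(BigInteger.valueOf(100), x, BigInteger.valueOf(213));
        System.out.println("r = " + sig[0] + ", s = " + sig[1]);   // prints r = 29, s = 51
    }
}
```

A production implementation would use a large prime P, a cryptographic hash of the message as m, and a secure random k; reusing k leaks the private key.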

This module has two sub-modules:

i) Individual Signature Generation

ii) Multiple Signature Generation
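The "t or more members" property behind multiple-signature generation is conventionally built on Shamir (t, n) secret sharing, where the group secret is the constant term of a random degree-(t-1) polynomial. The sketch below shows only this standard building block with toy values; the report's DKG protocol distributes such shares without a trusted dealer:

```java
import java.math.BigInteger;

// Shamir (t, n) sharing sketch: member i holds the share f(i) of a random
// degree-(t-1) polynomial f with f(0) = secret; any t shares recover f(0)
// by Lagrange interpolation at x = 0. Toy modulus, illustrative only.
public class Shamir {
    static final BigInteger Q = BigInteger.valueOf(7919);  // toy prime field

    // evaluate f(x) = coef[0] + coef[1]*x + ... mod Q (coef[0] is the secret)
    static BigInteger eval(BigInteger[] coef, long x) {
        BigInteger acc = BigInteger.ZERO, pow = BigInteger.ONE, bx = BigInteger.valueOf(x);
        for (BigInteger c : coef) {
            acc = acc.add(c.multiply(pow)).mod(Q);
            pow = pow.multiply(bx).mod(Q);
        }
        return acc;
    }

    // reconstruct f(0) from any t distinct shares (xs[i], ys[i])
    static BigInteger reconstruct(long[] xs, BigInteger[] ys) {
        BigInteger secret = BigInteger.ZERO;
        for (int i = 0; i < xs.length; i++) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (int j = 0; j < xs.length; j++) {
                if (j == i) continue;
                num = num.multiply(BigInteger.valueOf(-xs[j])).mod(Q);
                den = den.multiply(BigInteger.valueOf(xs[i] - xs[j])).mod(Q);
            }
            // Lagrange basis value at x = 0 is num/den
            secret = secret.add(ys[i].multiply(num).multiply(den.modInverse(Q))).mod(Q);
        }
        return secret;
    }

    public static void main(String[] args) {
        // t = 3: secret 1234 hidden in f(x) = 1234 + 166x + 94x^2 mod Q
        BigInteger[] coef = { BigInteger.valueOf(1234), BigInteger.valueOf(166), BigInteger.valueOf(94) };
        long[] xs = {2, 4, 5};   // any 3 of the issued shares suffice
        BigInteger[] ys = { eval(coef, 2), eval(coef, 4), eval(coef, 5) };
        System.out.println("recovered: " + reconstruct(xs, ys));   // prints recovered: 1234
    }
}
```

In the actual threshold scheme the shares are never pooled like this; each member applies its share to the message locally, so the group secret is never reconstructed in one place.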

Signature Verification

In this module we verify the group members' signatures against the generated signatures. On receiving all of the signatures, the verifier performs the functionality of a clerk and uses the public key set to authenticate the individual signature of each participant Pi; if the verification equation fails to hold, the individual signature of Pi on message m is invalid. Participants are disqualified if their individual signatures are found to be invalid. The remaining honest participants form the set and repeat the individual signature generation. The protocol aborts if the set contains fewer than t members.


The recipient can use the public key y to compute y^r * r^s mod P and check that it equals g^m mod P. This is how the recipient confirms that the message was sent by the claimed sender: the sender, as the private-key holder, signs the message, and the recipient uses the sender's public information to verify that the sender did indeed send (and sign) that message.
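This check can be sketched with the same toy parameters as before (p = 467, g = 2, private key 127; all values illustrative, and this is textbook ElGamal rather than the report's GES variant):

```java
import java.math.BigInteger;

// Verifies a textbook ElGamal signature (r, s) on message m against
// public key y by checking y^r * r^s ≡ g^m (mod p), as described above.
public class ElGamalVerify {
    static boolean verify(BigInteger p, BigInteger g, BigInteger y,
                          BigInteger m, BigInteger r, BigInteger s) {
        BigInteger lhs = y.modPow(r, p).multiply(r.modPow(s, p)).mod(p);
        return lhs.equals(g.modPow(m, p));
    }

    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(467), g = BigInteger.valueOf(2);
        BigInteger y = g.modPow(BigInteger.valueOf(127), p);   // y = g^x for x = 127
        BigInteger m = BigInteger.valueOf(100);
        // (29, 51) is the signature produced on m = 100 with x = 127, k = 213
        BigInteger r = BigInteger.valueOf(29), s = BigInteger.valueOf(51);
        System.out.println(verify(p, g, y, m, r, s));                     // prints true
        System.out.println(verify(p, g, y, m.add(BigInteger.ONE), r, s)); // prints false
    }
}
```

Note how tampering with even one unit of the message makes the congruence fail, which is exactly what lets the clerk disqualify invalid individual signatures.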

2. Cryptography

2.1 Encryption

In this project encryption is used mainly for security: the message is converted into an unreadable format so that an intermediate party cannot understand it. The user's message is encrypted with the private key, the public key is appended to the encrypted message, and the whole is then transmitted to the corresponding receiver.

2.2 Decryption

It is the process of retrieving the original message from the encrypted one. After decryption the message can be read using the public key, which is deliberately attached to the encrypted message. The message is then shown to the user.
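For comparison, the textbook ElGamal encryption/decryption round trip looks as follows. Note that in this standard arrangement the recipient's public key encrypts and the private key decrypts, which differs from the key handling described above, so this is an illustrative sketch with toy parameters only:

```java
import java.math.BigInteger;

// Textbook ElGamal encryption/decryption with toy parameters.
// Here the recipient's PUBLIC key y encrypts and the PRIVATE key x
// decrypts; illustrative sketch only, not the report's exact design.
public class ElGamalCipher {
    static final BigInteger P = BigInteger.valueOf(467);  // toy prime
    static final BigInteger G = BigInteger.valueOf(2);    // generator

    // ciphertext (c1, c2) = (g^k mod p, m * y^k mod p) for fresh nonce k
    static BigInteger[] encrypt(BigInteger m, BigInteger y, BigInteger k) {
        return new BigInteger[]{ G.modPow(k, P),
                                 m.multiply(y.modPow(k, P)).mod(P) };
    }

    // recover m = c2 * (c1^x)^-1 mod p using the private key x
    static BigInteger decrypt(BigInteger[] c, BigInteger x) {
        return c[1].multiply(c[0].modPow(x, P).modInverse(P)).mod(P);
    }

    public static void main(String[] args) {
        BigInteger x = BigInteger.valueOf(127);       // recipient's private key
        BigInteger y = G.modPow(x, P);                // recipient's public key
        BigInteger m = BigInteger.valueOf(321);       // message, 0 < m < P
        BigInteger[] c = encrypt(m, y, BigInteger.valueOf(213));
        System.out.println(decrypt(c, x).equals(m) ? "roundtrip ok" : "broken");   // prints roundtrip ok
    }
}
```

The decryption works because c1^x = g^(kx) = y^k, so multiplying c2 by its modular inverse cancels the masking factor.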

3. Message Transmission

In this module we transmit the encrypted message to the selected group members. The encrypted message is then decrypted at the other end by the group member, with proper verification of the signature. In most of these applications, users typically receive identical information from a single sender or from multiple senders. This module has two sub-modules:

i) Unicasting and

ii) Multicasting

The unicasting sub-module transfers a message from one sender to one recipient; the multicasting sub-module transfers a message from one sender to many recipients.


CONCLUSION

The main aim of this paper was to introduce a secure threshold-multisignature scheme. To reach this objective, the secure and optimally efficient

ElGamal type signature variant, GES, was extended to a multiparty setting to yield a

threshold-multisignature scheme, which from the authors’ point of view is the first to

provide a guaranteed traceability property. The proposed threshold-multisignature

scheme was shown to satisfy all of the specified security requirements and to fulfill

the stronger break-resistant property. The threshold-multisignature scheme

thus remains secure, even if the threshold cryptosystem has been broken, i.e., the

group secret or individual secret shares are known or controlled by an adversary. It

was shown that the proposed threshold-multisignature scheme eliminates the latest

attacks on similar threshold signature schemes with traceability. The efficiency

analysis showed that the proposed threshold-multisignature scheme outperforms other

existing schemes and is optimal in terms of exponentiations with respect to threshold

signature verification and near optimal for individual signature verification, while

providing break resistance.

Use of the DKRU mechanism makes the proposed fully distributed threshold-

multisignature scheme proactively secure, allows for dynamic group membership, and

gives the group members the capability of adjusting the availability/security trade-off by redistributing the existing access structure to a new access structure.


BIBLIOGRAPHY


References Made From:

1. Professional Java Network Programming

2. Java Complete Reference

3. R. Gennaro, S. Jarecki, H. Krawczyk, and T. Rabin, “Secure Distributed Key

Generation for Discrete-Log Based Cryptosystems,” Proc. Advances in

Cryptology EUROCRYPT ’99, May 1999.

4. A. Boldyreva, “Threshold Signatures, Multisignatures and Blind Signatures

Based on the Gap-Diffie-Hellman-Group Signature Scheme,” Proc. Public Key

Cryptography PKC ’03, 2003.

5. Z.-C. Li, J.-M. Zhang, J. Luo, W. Song, and Y.-Q. Dai, "Group-Oriented (t, n) Threshold Digital Signature Schemes with Traceable Signers," Proc. Second Int'l Symp. Topics in Electronic Commerce (ISEC '01), Apr. 2001.

Sites Referred:

http://java.sun.com

http://www.sourcefordgde.com

http://www.networkcomputing.com/


SAMPLE SCREENS
