SOFTWARE TRAINING REPORT
EMAX INSTITUTE OF ENGINEERING & TECHNOLOGY, Ambala
Department of CSE & IT
Guidelines for Summer Training Report
1. The file shall be computer typed (English: British; font: Times Roman; size: 12 point) and printed on A4 size paper.
2. The file shall be spiral bound only. The name of the candidate, degree (specifying the specialization), year of submission, name of the University including college name, project name shall be printed in black.
3. The file shall be typed with 1.15 or 1.5 line spacing, with margins of 3.5 cm on the left, 2.5 cm on the top, and 1.25 cm on the right and at the bottom of the front page. On the back side, the margins shall be 3.5 cm on the right, 2.5 cm on the top, and 1.25 cm on the left and at the bottom. The page number must be mentioned on each page (refer to the Contents page). The photocopy of the Training certificate must be attached inside the file.
4. The diagrams should be printed on a light/white background, and tabular matter should be clearly arranged. The decimal point may be indicated by a full stop (.). The caption for a Figure must be given at the BOTTOM of the figure, and the caption for a Table must be given at the TOP of the table.
5. The conclusion must not exceed one page.
6. Students working in a group shall submit separate files. Two files shall be made by each student (one Department copy and one Student copy).
7. The file must consist of the following chapters:
Chapter 1 - Introduction
Chapter 2 - Company profile
Chapter 3 - Present work (it can span two to three sub-chapters depending on the type and volume of the work)
Chapter 4 - Result and Discussion
Chapter 5 - Conclusions and future scope
References
Appendix or Annexure-I, II, III (if any)
The student's report should be submitted to and signed by the HOD/Coordinator by
SOFTWARE TESTING ON ALPRUS
TRAINING FILE
SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENT FOR THE AWARD
OF THE DEGREE OF
BACHELOR OF TECHNOLOGY (B.TECH)
(Computer Engineering/Information Technology)
SUBMITTED BY
(VINAY PANDEY)
(4513180)
EMAX-INSTITUTE OF ENGINEERING & TECHNOLOGY
MARCH-MAY 2015
CONTENTS
Training Certificate Xerox
Declaration i
Abstract ii
Acknowledgement iii
List of Figures iv
List of Tables v
Abbreviations v-vii
KURUKSHETRA UNIVERSITY, KURUKSHETRA
HARYANA, INDIA
EMAX INSTITUTE OF ENGINEERING & TECHNOLOGY
BADHAULI, Ambala
DECLARATION
I hereby certify that the work which is being presented in the file entitled “Software Testing”
by “VINAY PANDEY” in partial fulfillment of the requirement for the award of the degree of B. Tech.
(CSE/IT), submitted to the Department of Computer Science & Engineering / Information
Technology at EMAX INSTITUTE OF ENGINEERING & TECHNOLOGY, BADHAULI,
under KURUKSHETRA UNIVERSITY, KURUKSHETRA, is an authentic record of my own
work carried out during the period from 2nd July to 16th August under the supervision of
BTES, CHANDIGARH. The matter presented in this file has not been submitted to any other
University/Institute for the award of a B. Tech. degree.
Signature of the Student
(VINAY PANDEY)
ABSTRACT
The project ALPRUS Software Testing is a solution for better software output. Software
testing was performed here on the software named ALPRUS. Various software test
cases were executed on the software using different test criteria and testing types. The test
output was categorized into the fields Module, Test Description, Test Data, Priority, Test
Environment, Preconditions, Prerequisite, Steps, Step Description, Expected Results, Testing Type
and Actual Results; the status of each case was then checked and maintained. Software testing helps the developer
find bugs, if any, and discover whether any requirement demanded by the client is not fulfilled. Various
testing types (Acceptance Testing, Unit Testing, Black Box Testing, White Box Testing) were
performed during the training period. A bug report was also created to specify the details and make
the report more precise, which helped developers improve the development process, make
the software more valuable, and increase its quality.
ACKNOWLEDGEMENT
The success and final outcome of this project required a lot of guidance and assistance from
many people and I am extremely fortunate to have got this all along the completion of my project
work. Whatever I have done is only due to such guidance and assistance and I would not forget
to thank them.
I respect and thank Miss SAMPADA for giving me the opportunity to do the project work at
BTES, CHANDIGARH and for providing all the support and guidance which enabled me to complete the
project on time. I am extremely grateful to her for providing such support and guidance
even though she had a busy schedule managing the company's affairs. I owe my profound gratitude to our
project guide Mr. Sharad Chauhan, HOD, Department of Computer Science, who took keen
interest in our project work and guided us all along, till the completion of our project work, by
providing all the necessary information for developing a good system.
I am thankful and fortunate to have received constant encouragement, support and guidance from
all the teaching staff of the Department of Computer Science, which helped us in successfully
completing our project work. I would also like to extend our sincere regards to all the
non-teaching staff of the Department of Computer Science for their timely support.
(VINAY PANDEY)
CONTENTS
CHAPTER 1: INTRODUCTION PAGE NO.
1.1 Software Testing 11
1.2 Software Testing Needs 12
1.3 Testing Methods 13
1.3.1 Static vs. Dynamic Testing 13
1.3.2 White Box Testing 14
1.3.3 Black Box Testing 15
CHAPTER 2: TESTING TYPES 16-19
2.1 Compatibility Testing
2.2 Smoke Testing
2.3 Regression Testing
2.4 Acceptance Testing
2.5 Alpha Testing
2.6 Beta Testing
2.7 Performance Testing
CHAPTER 3: TYPES OF PERFORMANCE TESTING 20-21
3.1 Load Testing
3.2 Stress Testing
3.3 Endurance Testing
3.4 Spike Testing
3.5 Volume Testing
3.6 Scalability Testing
3.7 Usability Testing
CHAPTER 4: CONCLUSIONS 24
APPENDIX B Snapshots 25-30
SOFTWARE TESTING
Software testing is the process of evaluating a software item to detect differences between given
input and expected output, and to assess the features of a software item. Testing assesses the
quality of the product. Software testing is a process that should be done during the development
process; in other words, software testing is a verification and validation process. It involves the
execution of a software component or system component to evaluate one or more properties of
interest. In general, these properties indicate the extent to which the component or system under
test meets the requirements that guided its design and development. As the number of possible
tests for even simple software components is practically infinite, all software testing uses some
strategy to select tests that are feasible for the available time and resources, with the intent of
finding software bugs (errors or other defects). The job of testing is an iterative process: when
one bug is fixed, it can illuminate other, deeper bugs, or can even create new ones. Although
testing can determine the correctness of software under the assumption of some specific
hypotheses, testing cannot identify all the defects within software. Instead, it furnishes a
criticism or comparison that compares the state and behavior of the product against oracles
(principles or mechanisms by which someone might recognize a problem). These oracles may
include (but are not limited to) specifications, contracts, comparable products, past versions of
the same product, inferences about intended or expected purpose, user or customer expectations,
and relevant standards.
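The comparison against an oracle described above can be sketched in a few lines. The sketch below is illustrative only: the `add` component and the `run_test` helper are hypothetical stand-ins, not part of the ALPRUS project.

```python
# Illustrative sketch: a specification-based "oracle" check that compares
# a component's actual output against the expected output.
def add(a, b):
    return a + b  # hypothetical component under test

def run_test(component, test_input, expected):
    """Return a pass/fail verdict by comparing actual output to the oracle."""
    actual = component(*test_input)
    return {"input": test_input, "expected": expected, "actual": actual,
            "status": "PASS" if actual == expected else "FAIL"}

result = run_test(add, (2, 3), 5)
```

A failing case simply produces `"status": "FAIL"`, giving the tester a recorded difference between given input and expected output.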
NEEDS OF SOFTWARE TESTING
Software Testing is necessary because we all make mistakes. Some of those mistakes are
unimportant, but some of them are expensive or dangerous. We need to check everything
andanything we produce because things can always go wrong – humans make mistakes all the
time. Since we assume that our work may have mistakes, hence we all need to check our own
work. However some mistakes come from bad assumptions and blind spots, so we might make
the same mistakes when we check our own work as we made when we did it. So we may not
notice the flaws in what we have done. Ideally, we should get someone else to check our work
because another person is more likely to spot the flaws. There are several reasons which clearly
tells us as why Software Testing is important and what are the major things that we should
consider while testing of any product or application.
Software testing is required to point out the defects and errors that were made during the development phases.
It is very important to ensure the quality of the product. A quality product delivered to the customers helps in gaining their confidence.
Testing is necessary to deliver a high-quality product or software application to the customer, one which requires lower maintenance cost and hence gives more accurate, consistent and reliable results.
Testing is required for effective performance of a software application or product.
It is important to ensure that the application does not result in any failures, because failures can be very expensive in the future or in the later stages of development.
It is required to stay in business.
TESTING METHODS
Static vs. dynamic testing
There are many approaches available in software testing. Reviews, walkthroughs,
and inspections are referred to as static testing, whereas actually executing programmed code with
a given set of test cases is referred to as dynamic testing. Static testing is often implicit, as with
proofreading, or when programming tools/text editors check source code structure or
compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic
testing takes place when the program itself is run. Dynamic testing may begin before the
program is 100% complete in order to test particular sections of code, applied to
discrete functions or modules. Typical techniques for this are either using stubs/drivers or
execution from a debugger environment. Static testing involves verification, whereas dynamic
testing involves validation; together they help improve software quality. Mutation testing can be
used to ensure the test cases will detect errors which are introduced by mutating the source code.
Under static testing, code is not executed; rather, the code, requirement documents, and design
documents are manually checked to find errors. Hence the name "static". The main objective of
this testing is to improve the quality of software products by finding errors in the early stages of
the development cycle. This testing is also called non-execution technique or verification testing.
Static testing involves manual or automated reviews of the documents. This review is done
during the initial phase of testing to catch defects early in the STLC. It examines work
documents and provides review comments. Under dynamic testing, code is executed. It checks
the functional behavior of the software system, memory/CPU usage and overall performance of
the system; hence the name "dynamic". The main objective of this testing is to confirm that the
software product works in conformance with the business requirements. This testing is also
called execution technique or validation testing. Dynamic testing executes the software and
validates the output against the expected output.
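As an illustration of dynamic testing (the code is actually executed and its output validated against the expected output), here is a small hedged sketch; `discount_price` is a hypothetical stand-in, not a function from the ALPRUS software:

```python
# Dynamic-testing sketch: the code under test is executed with chosen
# inputs, covering both a normal case and an invalid-input case.
def discount_price(price, percent):
    """Apply a percentage discount; reject invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

def run_dynamic_tests():
    """Execute the code and validate actual output against expected output."""
    results = []
    results.append(discount_price(200, 25) == 150)  # normal case
    try:                                            # invalid-input case
        discount_price(200, 150)
        results.append(False)                       # should not reach here
    except ValueError:
        results.append(True)
    return all(results)
```

A static check of the same function, by contrast, would only read the source (or run a linter over it) without ever calling it.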
White-box testing
White-box testing is a method of testing the application at the level of the source code. Test
cases are derived through the use of design techniques such as control flow testing,
data flow testing, branch testing, path testing, statement coverage and decision coverage, as well
as modified condition/decision coverage. White-box testing is the use of these techniques as
guidelines to create an error-free environment by examining any fragile code. These
techniques are the building blocks of white-box testing, whose essence is the careful
testing of the application at the source code level to prevent any hidden errors later on. These
different techniques exercise every visible path of the source code to minimize errors and create
an error-free environment. The whole point of white-box testing is the ability to know which line
of the code is being executed and being able to identify what the correct output should be.
Also known as clear box testing, glass box testing, transparent box testing and structural testing,
white-box testing tests internal structures or workings of a program, as opposed to the
functionality exposed to the end-user. In white-box testing an internal perspective of the system,
as well as programming skills, are used to design test cases. The tester chooses inputs to exercise
paths through the code and determine the appropriate outputs. This is analogous to testing nodes
in a circuit, e.g. in-circuit testing (ICT).
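A minimal white-box sketch, assuming a hypothetical `classify_triangle` function: the test inputs are chosen from the code's internal branch structure so that every branch executes at least once (branch coverage).

```python
# White-box sketch: one test input per branch of the function below,
# giving full branch coverage of its internal structure.
def classify_triangle(a, b, c):
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Each case is chosen by reading the source, not the specification.
branch_cases = [((0, 1, 1), "invalid"),
                ((2, 2, 2), "equilateral"),
                ((2, 2, 3), "isosceles"),
                ((3, 4, 5), "scalene")]
for args, expected in branch_cases:
    assert classify_triangle(*args) == expected
```

Knowing the source lets the tester confirm that every `return` statement is actually reached, which a purely specification-driven test set might miss.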
Black-box testing
Black-box testing treats the software as a "black box", examining functionality without any
knowledge of internal implementation. The testers are only aware of what the software is
supposed to do, not how it does it. Black-box testing methods include equivalence partitioning,
boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz
testing, model-based testing, use case testing, exploratory testing and specification-based
testing. Tests are done from a user's point of view and will help in exposing discrepancies in
the specifications. The tester need not know programming languages or how the software has
been implemented. Tests can be conducted by a body independent of the developers,
allowing for an objective perspective and the avoidance of developer bias. Test cases can be
designed as soon as the specifications are complete. Black-box testing also has some
disadvantages: only a small number of possible inputs can be tested, and many program paths
will be left untested. Without clear specifications, which is the situation in many projects, test
cases will be difficult to design. Tests can be redundant if the software designer/developer has
already run a test case. Ever wondered why a soothsayer closes the eyes when foretelling events?
Such is almost the case in black-box testing.
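A short black-box sketch under an assumed specification ("valid ages are 18 to 60 inclusive"): the test data comes from boundary value analysis of the specification alone, without inspecting the implementation. Both the rule and the function are hypothetical.

```python
# Black-box sketch: the tester sees only the specification, not the code.
def is_valid_age(age):
    # In practice this implementation is hidden from the black-box tester.
    return 18 <= age <= 60

# Boundary value analysis: values at, just below, and just above each
# boundary of the specified valid range 18..60.
boundary_cases = {17: False, 18: True, 19: True,
                  59: True, 60: True, 61: False}
for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
```

Equivalence partitioning would reduce this further: one representative from each partition (below range, in range, above range) plus the boundaries above.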
TESTING TYPES
Compatibility Testing
A common cause of software failure (real or perceived) is a lack of compatibility with
other application software, operating systems (or operating system versions, old or new), or
target environments that differ greatly from the original (such as a terminal or GUI application
intended to be run on the desktop now being required to become a web application, which must
render in a web browser). For example, in the case of a lack of backward compatibility, this can
occur because the programmers develop and test software only on the latest version of the target
environment, which not all users may be running.
Smoke Testing
Smoke testing is a testing technique inspired by hardware testing, which checks for
smoke from the hardware components once the hardware's power is switched on. Similarly, in
the software testing context, smoke testing refers to testing the basic functionality of the build.
Smoke tests can be broadly categorized as functional tests or unit tests. Functional tests exercise
the complete program with various inputs. Unit tests exercise individual functions, subroutines,
or object methods. Functional tests may comprise a scripted series of program inputs, possibly
even with an automated mechanism for controlling mouse movements. Unit tests can be
implemented either as separate functions within the code itself, or else as a driver layer that links
to the code without altering the code being tested.
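A minimal smoke-test sketch: a quick scripted pass over a build's basic functions that fails fast if anything fundamental is broken. The checked functions are stand-ins, not the actual ALPRUS code.

```python
# Smoke-test sketch: verify only the build's most basic functionality.
def app_starts():
    return True                        # stand-in for "the build launches"

def login(user, password):
    return bool(user and password)     # stand-in for the login module

def smoke_test():
    """Return PASS only if every basic check succeeds; otherwise list failures."""
    checks = [("startup", app_starts()),
              ("login", login("tester", "secret"))]
    failed = [name for name, ok in checks if not ok]
    return ("PASS", []) if not failed else ("FAIL", failed)
```

If the smoke test fails, deeper functional or regression testing of that build is not worth starting.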
Regression Testing
Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, such as degraded or lost features, including
old bugs that have come back. Such regressions occur whenever software functionality that was
previously working correctly stops working as intended. Typically, regressions occur as
an unintended consequence of program changes, when the newly developed part of the software
collides with the previously existing code. Common methods of regression testing include
re-running previous sets of test cases and checking whether previously fixed faults have
re-emerged. The depth of testing depends on the phase in the release process and the risk of the
added features. They can either be complete, for changes added late in the release or deemed to
be risky, or be very shallow, consisting of positive tests on each feature.
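The idea of re-running a previously failing case can be sketched as follows; the `total_with_tax` function and its old double-taxation bug are hypothetical:

```python
# Regression sketch: a previously fixed fault is pinned by a test so it
# cannot silently re-emerge after later code changes.
def total_with_tax(amount, rate=0.18):
    # Old (fixed) bug: tax was once applied twice for amounts over 1000.
    return round(amount * (1 + rate), 2)

def regression_suite():
    """Re-run the old failing case alongside ordinary cases."""
    assert total_with_tax(100) == 118.0
    assert total_with_tax(1500) == 1770.0   # the case that once regressed
    return "all regression cases pass"
```

Keeping the once-failing input in the suite is what distinguishes a regression test from an ordinary functional test.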
Acceptance Testing
In engineering and its various sub-disciplines, acceptance testing is a test conducted to determine
if the requirements of a specification or contract are met. It may involve chemical tests, physical
tests, or performance tests. In systems engineering it may involve black-box testing performed on
a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of
chemical products) prior to its delivery. In software testing, the ISTQB defines acceptance
testing as formal testing with respect to user needs, requirements, and business processes,
conducted to determine whether or not a system satisfies the acceptance criteria and to enable
the user, customers or other authorized entity to determine whether or not to accept the
system. Acceptance testing is also known as user acceptance testing (UAT), end-user
testing, operational acceptance testing (OAT) or field (acceptance) testing. A smoke test may be
used as an acceptance test prior to introducing a build of software to the main testing process.
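A hedged sketch of checking a release against written acceptance criteria; the criteria listed are illustrative examples, not the ALPRUS acceptance criteria:

```python
# Acceptance-test sketch: formal pass/fail against acceptance criteria.
acceptance_criteria = {
    "user can log in": True,
    "report exports to PDF": True,
    "page loads under 3 seconds": True,
}

def accept_release(criteria):
    """The customer accepts the system only if every criterion is met."""
    unmet = [name for name, met in criteria.items() if not met]
    return ("ACCEPT", []) if not unmet else ("REJECT", unmet)
```

Unlike unit or white-box tests, each entry here maps directly to a user need or contract clause, so the verdict is meaningful to the customer, not only to the developer.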
Alpha Testing
Alpha testing is simulated or actual operational testing by potential users/customers or an
independent test team at the developers’ site. Alpha testing is often employed for off-the-shelf
software as a form of internal acceptance testing, before the software goes to beta testing.
Alpha tests are conducted in the software developer's offices or on designated systems so
that the team can monitor the tests and list any errors or bugs that may be present. Thus, some of
the most complex defects are uncovered during the alpha testing stage. Furthermore, the project
manager handling the testing process will need to talk to the software developer about the
possibilities of integrating the results procured from the alpha testing process into future
software design plans so that all potential future problems can be avoided.
Beta Testing
In software development, a beta test is the second phase of software testing in which a sampling
of the intended audience tries the product out. (Beta is the second letter of the Greek alphabet.)
Originally, the term alpha test meant the first phase of testing in a software development process.
The first phase includes unit testing, component testing, and system testing. Beta testing can be
considered "pre-release testing." Beta test versions of software are now distributed to a wide
audience on the Web partly to give the program a "real-world" test and partly to provide a
preview of the next release.
Performance Testing
Performance testing is performed to ascertain how the components of a
system are performing in a given situation. Resource usage, scalability and reliability of
the product are also validated under this testing. This testing is a subset of performance
engineering, which is focused on addressing performance issues in the design and architecture of
software product. Performance testing is done to provide stakeholders with information about
their application regarding speed, stability and scalability. More importantly, performance testing
uncovers what needs to be improved before the product goes to market. Without performance
testing, software is likely to suffer from issues such as: running slow while several users use it
simultaneously, inconsistencies across different operating systems and poor usability.
Performance testing will determine whether or not their software meets speed, scalability and
stability requirements under expected workloads. Applications sent to market with poor
performance metrics due to nonexistent or poor performance testing are likely to gain a bad
reputation and fail to meet expected sales goals. Also, mission-critical applications like space
launch programs or life-saving medical equipment should be performance tested to ensure that
they run for a long period of time without deviations.
Types of performance testing
Load testing - checks the application's ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live.
Stress testing - involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of an application.
Endurance testing - is done to make sure the software can handle the expected load over a long period of time.
Spike testing - tests the software's reaction to sudden large spikes in the load generated by users.
Volume testing - populates a large amount of data in the database and monitors the overall software system's behavior. The objective is to check the software application's performance under varying database volumes.
Scalability testing - determines the software application's effectiveness in "scaling up" to support an increase in user load. It helps plan capacity addition to your software system.
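The load-testing idea above can be sketched as a toy script: several concurrent "users" issue requests and response times are recorded. Here `time.sleep` stands in for a real request to the application under test, which a genuine load test would replace with actual traffic.

```python
# Load-test sketch: simulate concurrent users and record response times.
import threading
import time

def simulated_request(latencies, lock):
    start = time.perf_counter()
    time.sleep(0.01)                   # stand-in for server processing time
    elapsed = time.perf_counter() - start
    with lock:                         # latencies list is shared across threads
        latencies.append(elapsed)

def load_test(users=20):
    latencies, lock = [], threading.Lock()
    threads = [threading.Thread(target=simulated_request, args=(latencies, lock))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {"users": users,
            "max_latency": max(latencies),
            "avg_latency": sum(latencies) / len(latencies)}
```

Raising `users` until `max_latency` degrades sharply is, in miniature, how the breaking point sought by stress testing is found.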
Usability Testing
Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system. This is in contrast with usability inspection methods, where experts use different methods to evaluate a user interface without involving users. Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing are foods, consumer products, web sites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human-computer interaction studies attempt to formulate universal principles.

When conducting user testing, the researcher reads a participant one task at a time, such as "Find out how to contact technical support," and allows the participant to complete the task without any guidance. To prevent bias, the researcher follows the same "script" when explaining the task to each participant. The researcher may also ask the participant to talk aloud while working on a task, to better understand the participant's mental model for the task and their decision-making in real time. When the participant has completed a task, the researcher sets up the starting point for the next task and continues the test. Ideally, task order is counterbalanced from participant to participant.
TESTING ARTIFACTS
Test Plans
A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by or with significant input from engineers. Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include a strategy for one or more of the following:
Design Verification or Compliance test - to be performed during the development or approval stages of the product, typically on a small sample of units.
Manufacturing or Production test - to be performed during preparation or assembly of the product in an ongoing manner for purposes of performance verification and quality control.
Acceptance or Commissioning test - to be performed at the time of delivery or installation of the product.
Service and Repair test - to be performed as required over the service life of the product.
A complex system may have a high-level test plan to address the overall requirements, and supporting test plans to address the design details of subsystems and components. Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be described in the test plan: Test Coverage, Test Methods, and Test Responsibilities. These are also used in a formal test strategy.
Test Case
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and expected results in more detail. A test case can occasionally be a series of steps (though often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
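The fields listed above can be sketched as a small record type; the field names mirror the report's test-case columns, and the sample values are hypothetical:

```python
# Sketch of a test-case record with the fields described in the text.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    module: str
    description: str
    preconditions: str
    steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""
    status: str = "NOT RUN"

    def record(self, actual):
        """Fill in the actual result and derive the pass/fail status."""
        self.actual_result = actual
        self.status = "PASS" if actual == self.expected_result else "FAIL"
        return self.status

tc = TestCase("TC-01", "Login", "Valid login shows dashboard",
              "User account exists",
              ["open page", "enter credentials", "submit"],
              {"user": "tester", "password": "secret"},
              "Dashboard displayed")
```

In practice such records live in a spreadsheet or database, as the text notes; a dataclass simply makes the structure explicit.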
CONCLUSION
It is hereby concluded that the software testing of ALPRUS will make the application more
reliable and bug-free and meet the demands of the client. The software testing was performed on
all criteria depending on the level of testing required, and ALPRUS is expected to have fewer
bugs than before. Thus software testing not only brings bug-free software; it also focuses on the
client's requirements and the quality of the software. Software testing is opted for by companies
that focus solely on quality, as it eliminates the threat of software failure. The tools used while
performing the testing were Chrome, Safari, Mozilla Firefox and Internet Explorer for testing
the online forum, and the tools used for writing the bug report and test cases were
Microsoft Word 2008, Microsoft Excel, Microsoft Outlook and Paint.
SCREENSHOTS
TESTCASES
Login Page test case 2
TEST CASE 3
Calendar