About Acceptance Testing

Uploaded by beghinbose, posted on 29-May-2018

  • 8/9/2019 About Acceptance Testing


In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system. The main types of software testing are:

1. Component
2. Interface
3. System
4. Acceptance
5. Release

Acceptance testing checks the system against the requirements. It is similar to system testing in that the whole system is checked, but the important difference is the change in focus: system testing checks that the system that was specified has been delivered; acceptance testing checks that the system delivers what was requested.

The customer, not the developer, should always perform acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. The forms of the tests may follow those used in system testing, but at all times they are informed by the business needs.

Acceptance testing comprises the test procedures that lead to formal 'acceptance' of new or changed systems. User Acceptance Testing (UAT) is a critical phase of any systems project and requires significant participation by the end users. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which acceptance will be achieved. The final part of the UAT can also include a parallel run to prove the new system against the current system.

    Factors influencing Acceptance Testing

The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide realistic and adequate exposure of the system to all reasonably expected events. The testing can be based upon the User Requirements Specification, to which the system should conform.

As in any system, though, problems will arise, and it is important to have determined in advance the expected and required responses from the various parties concerned, including users, the project team, vendors and possibly consultants or contractors.

In order to agree what such responses should be, the end users and the project team need to develop and agree a range of 'severity levels'. These levels will range from (say) 1 to 6 and will represent the relative severity, in terms of business/commercial impact, of a problem with the system found during testing. Here is an example which has been used successfully; '1' is the most severe and '6' has the least impact:


1. Show stopper: it is impossible to continue with the testing because of the severity of this error/bug.

2. Critical problem: testing can continue, but we cannot go into production (live) with this problem.

3. Major problem: testing can continue, but this feature will cause severe disruption to business processes in live operation.

4. Medium problem: testing can continue and the system is likely to go live with only minimal departure from agreed business processes.

5. Minor problem: both testing and live operations may progress. This problem should be corrected, but little or no change to business processes is envisaged.

6. Cosmetic problem: e.g. colors, fonts, pitch size. However, if such features are key to the business requirements, they warrant a higher severity level.

The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problem at severity level 1 receives a priority response and that all testing will cease until such level 1 problems are resolved.
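The agreed severity scale and the "all testing ceases on a level 1 problem" response can be sketched in code. The level names follow the example scale above; the enum representation and the function name are illustrative choices, not part of the original text:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Example severity scale: 1 is most severe, 6 has the least impact."""
    SHOW_STOPPER = 1   # testing cannot continue
    CRITICAL = 2       # testing continues, but no go-live
    MAJOR = 3          # severe disruption to business processes in live operation
    MEDIUM = 4         # go-live with minimal departure from agreed processes
    MINOR = 5          # should be corrected; little or no process change
    COSMETIC = 6       # e.g. colors, fonts, pitch size

def must_halt_testing(open_problems):
    """Agreed response: all testing ceases while any level 1 problem is open."""
    return any(p == Severity.SHOW_STOPPER for p in open_problems)
```

Encoding the scale this way makes the agreed responses executable rather than purely documentary, so a test manager's tooling can enforce them mechanically.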

Even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem to its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorization of problems, it is strongly advised that a range of examples is agreed in advance to ensure that there are no fundamental areas of disagreement, or, if there are, that these are known in advance and your organization is forewarned.

Finally, it is crucial to agree the criteria for acceptance. Because no system is entirely fault free, the maximum number of acceptable outstanding problems in any particular category must be agreed between the end user and the vendor. Again, prior consideration of this is advisable.
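Such acceptance criteria can be expressed as agreed maximum counts of outstanding problems per severity level. This sketch checks whether the criteria are met; the threshold values are illustrative assumptions, not figures from the original:

```python
def acceptance_met(outstanding, thresholds):
    """Check outstanding problem counts against the agreed maximum number of
    acceptable 'outstandings' per severity category.
    Both arguments map severity level (int) -> count."""
    return all(outstanding.get(level, 0) <= limit
               for level, limit in thresholds.items())

# Hypothetical agreed criteria: no level 1 or 2 problems outstanding,
# up to 3 level-3 problems, and progressively more for lower severities.
agreed = {1: 0, 2: 0, 3: 3, 4: 10, 5: 20, 6: 50}
```

For example, `acceptance_met({3: 2, 4: 5}, agreed)` passes, while a single open level 1 problem fails the criteria.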

In some cases, users may agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analyzed, as they may, perhaps unintentionally, seek additional functionality, which could be classified as scope creep. In any event, any and all fixes from the software developers must be subjected to rigorous system testing and, where appropriate, regression testing.

Conclusion

Hence the goal of acceptance testing should be to verify the overall quality, correct operation, scalability, completeness, usability, portability, and robustness of the functional components supplied by the software system.

    Brief about Integration Testing

One of the most significant aspects of a software development project is the integration strategy. Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. In general, the larger the project, the more important the integration strategy.

Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white-box testing criterion would be very difficult because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has traditionally been limited to black-box techniques.

    Large systems may require many integration phases, beginning with assembling modules

    into low-level subsystems, then assembling subsystems into larger subsystems, and

    finally assembling the highest level subsystems into the complete system.

To be most effective, an integration testing technique should fit well with the overall integration strategy. In a multi-phase integration, testing at each phase helps detect errors early and keeps the system under control. Performing only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really just a variant of the high-risk "big bang" approach. However, performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. The key is to leverage the overall integration structure to allow rigorous testing at each phase while minimizing duplication of effort.

It is important to understand the relationship between module testing and integration testing. In one view, modules are rigorously tested in isolation using stubs and drivers before any integration is attempted; integration testing then concentrates entirely on module interactions, assuming that the details within each module are correct. At the other extreme, module and integration testing can be combined, verifying the details of each module's implementation in an integration context. Many projects compromise, combining module testing with the lowest level of subsystem integration testing and then performing pure integration testing at higher levels. Each of these views of integration testing may be appropriate for any given project, so an integration testing method should be flexible enough to accommodate them all.

(Figure: Combining module testing with bottom-up integration.)


    Generalization of module testing criteria

Module testing criteria can often be generalized in several possible ways to support integration testing. As discussed in the previous subsection, the most obvious generalization is to satisfy the module testing criterion in an integration context, in effect using the entire program as a test driver environment for each module. However, this trivial kind of generalization does not take advantage of the differences between module and integration testing. Applying it to each phase of a multi-phase integration strategy, for example, leads to an excessive amount of redundant testing.

More useful generalizations adapt the module testing criterion to focus on interactions between modules rather than attempting to test all of the details of each module's implementation in an integration context. The statement coverage module testing criterion, in which each statement is required to be exercised during module testing, can be generalized to require each module call statement to be exercised during integration testing. Although the specifics of the generalization of structured testing are more detailed, the approach is the same. Since structured testing at the module level requires that all the decision logic in a module's control flow graph be tested independently, the appropriate generalization to the integration level requires that just the decision logic involved with calls to other modules be tested independently.
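The generalized criterion, that every module call statement be exercised during integration testing, can be sketched as a simple coverage tracker. The "module:line" call-site labels and the class interface are illustrative assumptions:

```python
# A minimal sketch of call-site coverage: record which inter-module call
# statements were exercised during integration testing, then report the
# uncovered ones. Call sites are identified by hypothetical "module:line" labels.

class CallCoverage:
    def __init__(self, call_sites):
        self.call_sites = set(call_sites)   # all call statements in the build
        self.exercised = set()              # call statements seen at run time

    def record(self, site):
        """Invoked (e.g. by instrumentation) each time a call site executes."""
        if site in self.call_sites:
            self.exercised.add(site)

    def uncovered(self):
        """Call statements the integration tests have not yet exercised."""
        return sorted(self.call_sites - self.exercised)

cov = CallCoverage({"billing:12", "billing:40", "report:7"})
cov.record("billing:12")
cov.record("report:7")
# cov.uncovered() -> ["billing:40"]
```

In a real tool the `record` calls would come from instrumentation inserted at each call statement; the sketch only shows the bookkeeping that the generalized criterion requires.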

    Module design complexity


Rather than testing all decision outcomes within a module independently, structured testing at the integration level focuses on the decision outcomes that are involved with module calls. The design reduction technique helps identify those decision outcomes, so that it is possible to exercise them independently during integration testing. The idea behind design reduction is to start with a module control flow graph, remove all control structures that are not involved with module calls, and then use the resultant "reduced" flow graph to drive integration testing. The figure below shows a systematic set of rules for performing design reduction. Although not strictly a reduction rule, the call rule states that function call ("black dot") nodes cannot be reduced. The remaining rules work together to eliminate the parts of the flow graph that are not involved with module calls.

The sequential rule eliminates sequences of non-call ("white dot") nodes. Since application of this rule removes one node and one edge from the flow graph, it leaves the cyclomatic complexity unchanged. However, it does simplify the graph so that the other rules can be applied.

The repetitive rule eliminates top-test loops that are not involved with module calls. The conditional rule eliminates conditional statements that do not contain calls in their bodies. The looping rule eliminates bottom-test loops that are not involved with module calls. It is important to preserve the module's connectivity when using the looping rule, since for poorly structured code it may be hard to distinguish the "top" of the loop from the "bottom". For the rule to apply, there must be a path from the module entry to the top of the loop and a path from the bottom of the loop to the module exit. Since the repetitive, conditional, and looping rules each remove one edge from the flow graph, they each reduce cyclomatic complexity by one.

Rules 1 through 4 are intended to be applied iteratively until none of them can be applied, at which point the design reduction is complete. By this process, even very complex logic can be eliminated as long as it does not involve any module calls.
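As a minimal sketch of one of these rules, the sequential rule can be implemented over an edge-set representation of the control flow graph. The graph encoding and node names are assumptions, and the repetitive, conditional and looping rules are omitted for brevity:

```python
# Simplified sketch of the sequential reduction rule: collapse chains of
# non-call ("white dot") nodes so that only call nodes and the entry/exit
# remain on straight-line paths.

def reduce_sequences(edges, call_nodes, entry, exit_):
    """edges: set of (src, dst) pairs. Repeatedly remove a non-call node that
    has exactly one predecessor and one successor, splicing its edges
    together. Each application removes one node and one edge, so cyclomatic
    complexity (edges - nodes + 2) is unchanged, as stated above."""
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        nodes = {n for e in edges for n in e}
        for n in nodes:
            if n in call_nodes or n in (entry, exit_):
                continue
            preds = [s for s, d in edges if d == n]
            succs = [d for s, d in edges if s == n]
            if len(preds) == 1 and len(succs) == 1 and preds[0] != n:
                edges.discard((preds[0], n))
                edges.discard((n, succs[0]))
                edges.add((preds[0], succs[0]))
                changed = True
                break
    return edges
```

Applied to a chain entry -> a -> call1 -> b -> exit with `call1` the only call node, the rule splices out `a` and `b`, leaving entry -> call1 -> exit.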


    Brief about GUI Testing

    What is GUI Testing?

GUI is the abbreviation for Graphical User Interface. It is absolutely essential that any application be user-friendly. The end user should be comfortable while using all the components on screen, and the components should also perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI testing can refer to just ensuring that the look and feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved.

The following is a set of guidelines to ensure effective GUI testing; it can also be used as a checklist while testing a product or application.

    Windows Compliance Testing

Start the application by double clicking on its icon. The loading message should show the application name, version number, and a bigger pictorial representation of the icon. No login should be necessary. The main window of the application should have the same caption as the caption of the icon in Program Manager. Closing the application should result in an "Are you sure?" message box. Try to start the application twice as it is loading. On each window, if the application is busy, the hourglass should be displayed; if there is no hourglass, then some "enquiry in progress" message should be displayed. All screens should have a Help button, and the F1 key should work the same.

If the window has a minimize button, click it. The window should shrink to an icon at the bottom of the screen, and this icon should correspond to the original icon under Program Manager. Double click the icon to return the window to its original size. The window caption for every application should have the name of the application and the window name, especially in error messages. These should be checked for spelling, English and clarity, especially at the top of the screen. Check that the title of the window makes sense. If the screen has a control menu, use all un-grayed options.

Check all text on the window for spelling, tense and grammar. Use TAB to move focus around the window and SHIFT+TAB to move focus backwards. Tab order should be left to right, and top to bottom within a group box on the screen. All controls should get focus, indicated by a dotted box or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field. The text in the micro-help line should change; check it for spelling, clarity, non-updateable fields, etc. If a field is disabled (grayed), it should not get focus, and it should not be possible to select it with either the mouse or TAB. Try this for every grayed control.

Never-updateable fields should be displayed with black text on a gray background with a black label. All text should be left justified, followed by a colon tight to it. In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status. List boxes always have a white background with black text, whether they are disabled or not; all others are gray.

In general, double clicking should not be essential, and everything should be doable using both the mouse and the keyboard. All tab buttons should have a distinct access letter.

    Text Boxes

Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar; if it doesn't, the text in the box should be gray or non-updateable (see above). Enter text into the box. Try to overflow the text by typing too many characters; input should be stopped. Check the field width with capital Ws. Enter invalid characters: letters in amount fields, and strange characters like +, -, * etc. in all fields. SHIFT and the arrow keys should select characters, and selection should also be possible with the mouse. Double click should select all text in the box.
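Two of these text-box checks can be sketched as automated validations: overflow beyond the field width is stopped, and letters or stray symbols are rejected in amount fields. The field width and the amount-field rule are illustrative assumptions:

```python
import re

def accept_keystrokes(text, max_len):
    """Simulate an entry field that stops input at its width:
    characters beyond max_len are simply not accepted."""
    return text[:max_len]

def valid_amount(text):
    """Amount fields: digits with an optional sign and decimal part only;
    letters and strange characters like + - * on their own are rejected."""
    return re.fullmatch(r"[+-]?\d+(\.\d+)?", text) is not None
```

For example, typing forty capital Ws into a 30-character field leaves 30 characters, and `valid_amount("12a")` is rejected while `valid_amount("-12.50")` passes.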

Option (Radio) Buttons

Left and right arrows should move the 'ON' selection, and so should up and down. Selection should also be possible by clicking with the mouse.

    Check Boxes

    Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE

    should do the same.

    Command Buttons

If a command button leads to another screen, and the user can enter or change details on that other screen, then the text on the button should be followed by three dots. All buttons except OK and Cancel should have letter access, indicated by an underlined letter in the button text; pressing ALT+letter should activate the button. Make sure there is no duplication.

Click each button once with the mouse; this should activate it. Tab to each button and press SPACE; this should activate it. Tab to each button and press RETURN; this should activate it. These checks are VERY IMPORTANT and should be done for EVERY command button. Also tab to another type of control (not a command button). One button on the screen should be the default, indicated by a thick black border, and pressing Return in any non-command-button control should activate it. If there is a Cancel button on the screen, pressing Esc should activate it (by convention the escape key operates Cancel). If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers, where Yes results in the completion of the action.

    Drop Down List Boxes

Pressing the arrow should give a list of options; this list may be scrollable. You should not be able to type text in the box. Pressing a letter should bring you to the first item in the list starting with that letter. Pressing Ctrl+F4 should open/drop down the list box. Spacing should be compatible with the existing Windows spacing (Word etc.). Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box. Dropping down with an item selected should display the list with the selected item on top. Make sure only one space appears and that there is no blank line at the bottom.

    Combo Boxes

Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

    List Boxes

Should allow a single selection to be chosen by clicking with the mouse or using the up and down arrow keys. Pressing a letter should take you to the first item in the list starting with that letter. If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act the same way as selecting an item in the list box and then clicking the command button. Force the scroll bar to appear and make sure all the data can be seen in the box.

    Screen Validation Checklist

    Aesthetic Conditions:

    Is the general screen background the correct color?

    Are the field prompts the correct color?

    Are the field backgrounds the correct color?

    In read-only mode, are the field prompts the correct color?

    In read-only mode, are the field backgrounds the correct color?

    Are all the screen prompts specified in the correct screen font?

    Is the text in all fields specified in the correct screen font?

    Are all the field prompts aligned perfectly on the screen?

    Are all the field edits boxes aligned perfectly on the screen?

Are all group boxes aligned correctly on the screen?

Should the screen be resizable?

    Should the screen be allowed to minimize?

    Are all the field prompts spelt correctly?

Are all characters or alphanumeric fields left justified? This is the default unless otherwise specified.

    Are all numeric fields right justified? This is the default unless otherwise

    specified.

    Is all the micro-help text spelt correctly on this screen?

Is all the error message text spelt correctly on this screen?

Is all user input captured in UPPER case or lower case consistently?

Where the database requires a value (other than null), this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.

    Assure that all windows have a consistent look and feel.

    Assure that all dialog boxes have a consistent look and feel.

    Validation Conditions:

    Does a failure of validation on every field cause a sensible user error message?

Is the user required to fix entries that have failed validation tests?

Have any fields got multiple validation rules, and if so, are all rules being applied?

If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly with an error message?

Is validation consistently applied at screen level unless specifically required at field level?

For all numeric fields, check whether negative numbers can and should be able to be entered.


For all numeric fields, check the minimum and maximum values and also some allowable mid-range values.

For all character/alphanumeric fields, check that a character limit is specified and that this limit is exactly correct for the specified database size.

Do all mandatory fields require user input?

If any of the database columns don't allow null values, then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional, check whether null values are allowed in this field.)
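The rule that every NOT NULL database column must map to a mandatory screen field can be sketched as a simple cross-check. The column and field names here are hypothetical:

```python
# Sketch of the mandatory-field rule: every database column that does not
# allow nulls must correspond to a mandatory screen field.

def missing_mandatory(columns, mandatory_fields):
    """columns: dict mapping column name -> nullable (bool).
    Returns the NOT NULL columns whose screen field is not mandatory."""
    return sorted(c for c, nullable in columns.items()
                  if not nullable and c not in mandatory_fields)

cols = {"customer_name": False, "phone": True, "account_no": False}
# missing_mandatory(cols, {"customer_name"}) -> ["account_no"]
```

A check like this can be run against a screen definition during test planning to flag fields that should be mandatory but are not.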

    Navigation Conditions:

    Can the screen be accessed correctly from the menu?

    Can the screen be accessed correctly from the toolbar?

Can the screen be accessed correctly by double clicking on a list control on the previous screen?

Can all screens accessible via buttons on this screen be accessed correctly?

Can all screens accessible by double clicking on a list control be accessed correctly?

Is the screen modal, i.e. is the user prevented from accessing other functions when this screen is active, and is this correct?

Can a number of instances of this screen be opened at the same time, and is this correct?


    Usability Conditions:

    Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the

    default unless otherwise specified.

    Is all date entry required in the correct format?

    Do the Shortcut keys work correctly?

Have the menu options that apply to your screen got fast keys associated, and should they have?

    Does the Tab Order specified on the screen go in sequence from Top Left to

    bottom right? This is the default unless otherwise specified.

    Are all read-only fields avoided in the TAB sequence?

    Are all disabled fields avoided in the TAB sequence?

Can the cursor be placed in the micro-help text box by clicking on the text box with the mouse?

Can the cursor be placed in read-only fields by clicking in the field with the mouse?

    Is the cursor positioned in the first input field or control when the screen is

    opened?

    Is there a default button specified on the screen? Does the default button work

    correctly?


    When an error message occurs does the focus return to the field in error when the

    user cancels it?

When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?

Do all the field edit boxes indicate the number of characters they will hold by their length? E.g. a 30-character field should be visibly longer than shorter fields.

Have all pushbuttons on the screen been given appropriate shortcut keys?

    Data Integrity Conditions:

    Is the data saved when the window is closed by double clicking on the close box?

    Check the maximum field lengths to ensure that there are no truncated characters?

Where the database requires a value (other than null), this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.

    Check maximum and minimum field values for numeric fields?

If numeric fields accept negative values, can these be stored correctly on the database, and does it make sense for the field to accept negative numbers?

    If a set of radio buttons represents a fixed set of values such as A, B and C then

    what happens if a blank value is retrieved from the database? (In some situations

rows can be created on the database by other functions, which are not screen based, and thus the required initial values can be incorrect.)

If a particular set of data is saved to the database, check that each value gets saved fully to the database, i.e. beware of truncation (of strings) and rounding of numeric values.

Modes (Editable / Read-only) Conditions:

    Are the screen and field colors adjusted correctly for read-only mode?

    Should a read-only mode be provided for this screen?

    Are all fields and controls disabled in read-only mode?

    Can the screen be accessed from the previous screen/menu/toolbar in read-only

    mode?

    Can all screens available from this screen be accessed in read-only mode?

    Check that no validation is performed in read-only mode.

    General Conditions:

    Assure the existence of the "Help" menu.

    Assure that the proper commands and options are in each menu.

    Assure that all buttons on all tool bars have corresponding key commands.

Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.


    In drop down list boxes, ensure that the names are not abbreviations / cut short

    In drop down list boxes, assure that the list and each entry in the list can be

    accessed via appropriate key / hot key combinations.

    Ensure that duplicate hot keys do not exist on each screen

Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".

    Assure that the cancel button functions the same as the escape key.

Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.

Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present (i.e. make sure they don't act on the screen behind the current screen).

When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.

Assure that OK and Cancel buttons are grouped separately from other command buttons.

    Assure that command button names are not abbreviations.

Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.

    Assure that command buttons are all of similar size and shape, and same font &

    font size.

    Assure that each command button can be accessed via a hot key combination.

    Assure that command buttons in the same window/dialog box do not have

    duplicate hot keys.

Assure that each window/dialog box has a clearly marked default (command button or other object) which is invoked when the Enter key is pressed, and NOT the Cancel or Close button.

Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.

    Assure that all option buttons (and radio buttons) names are not abbreviations.

    General Conditions (contd.):

Assure that option button names are not technical labels, but rather are names meaningful to system users.

If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.

    Assure that option box names are not abbreviations.

Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("group boxes").

Assure that the Tab key sequence which traverses the screens does so in a logical way.

Assure consistency of mouse actions across windows.

Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).

Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).


    Assure that the screen/window does not have a cluttered appearance

    Ctrl + F6 opens next tab within tabbed window

    Shift + Ctrl + F6 opens previous tab within tabbed window

    Tabbing will open next tab within tabbed window if on last field of current tab

Tabbing will go onto the 'Continue' button if on the last field of the last tab within the tabbed window

Tabbing will go onto the next editable field in the window

    Banner style & size & display exact same as existing windows

If there are 8 or fewer options in a list box, display all options on open of the list box; there should be no need to scroll

Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it)

    Pressing continue while on the first tab of a tabbed window (assuming all fields

    filled correctly) will not open all the tabs.

    On open of tab focus will be on first editable field

    All fonts to be the same

Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating a "changes will be lost" message if necessary

    Micro help text for every enabled field & button

    Ensure all fields are disabled in read-only mode

    Progress messages on load of tabbed screens

Return operates the 'Continue' button

If the retrieve on load of a tabbed window fails, the window should not open

    Specific Field Tests

    Date Field Checks

Assure that leap years are validated correctly & do not cause errors/miscalculations.

    Assure that month code 00 and 13 are validated correctly & do not cause

    errors/miscalculations.

    Assure that 00 and 13 are reported as errors.

    Assure that day values 00 and 32 are validated correctly & do not cause

    errors/miscalculations.

    Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/

    miscalculations.

    Assure that Feb. 30 is reported as an error.

Assure that century change is validated correctly & does not cause errors/miscalculations.

Assure that out-of-cycle dates are validated correctly & do not cause errors/miscalculations.
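Most of the date checks above can be sketched against Python's standard datetime module, whose `date` constructor rejects month 00/13, day 00/32 and Feb. 30, and handles leap years and century changes; the function name is illustrative:

```python
from datetime import date

def valid_dmy(day, month, year):
    """Return True if day/month/year is a real calendar date.
    date() raises ValueError for month 00/13, day 00/32, Feb. 30,
    and Feb. 29 in non-leap years."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False
```

For example, 29/2/2020 is accepted (leap year) while 29/2/2019, 30/2/2020, month 13 and day 00 or 32 are all rejected.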

    Numeric Fields

    Assure that lowest and highest values are handled correctly.

    Assure that invalid values are logged and reported.

Assure that valid values are handled by the correct procedure.

Assure that numeric fields with a blank in position 1 are processed or reported as an error.

Assure that fields with a blank in the last position are processed or reported as an error.

    Assure that both + and - values are correctly processed.

    Assure that division by zero does not occur.

    Include value zero in all calculations.

    Include at least one in-range value. Include maximum and minimum range values.

    Include out of range values above the maximum and below the minimum.

    Assure that upper and lower values in ranges are handled correctly.
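    A minimal sketch of the numeric-field checks above, in Python. The function names (`in_range`, `safe_ratio`) and the 0-100 range are invented for illustration; the point is the selection of test values: zero, boundaries, one-past-each-boundary, negatives, and the division-by-zero guard.

```python
def in_range(value: int, low: int, high: int) -> bool:
    """True when value lies within the inclusive [low, high] range."""
    return low <= value <= high

def safe_ratio(numerator: float, denominator: float) -> float:
    """Divide, guarding against the division-by-zero case called out above."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator

# Boundary values: lowest, highest, and just outside each end
assert in_range(0, 0, 100) and in_range(100, 0, 100)
assert not in_range(-1, 0, 100) and not in_range(101, 0, 100)

# Include value zero and both + and - values in calculations
assert safe_ratio(0, 4) == 0.0
assert safe_ratio(-10, 4) == -2.5
```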

    Alpha Field Checks

    Use blank and non-blank data.

    Include lowest and highest values.

    Include invalid characters & symbols.

    Include valid characters.

    Include data items with the first position blank.

    Include data items with the last position blank.

    Validation Testing - Standard Actions

    Examples of Standard Actions - Substitute your specific commands

    Add

    View

    Change

    Delete

    Continue - (i.e. continue saving changes or additions)

    Add

    View

    Change

    Delete

    Cancel - (i.e. abandon changes or additions)

    Fill each field - Valid data

    Fill each field - Invalid data

    Different Check Box / Radio Box combinations

    Scroll Lists / Drop Down List Boxes

    Help

    Fill Lists and Scroll

    Tab

    Tab Sequence

    Shift Tab

    Shortcut Keys / Hot Keys

    Note: The following keys are used in some Windows applications, and are included as a guide.

    Key | No Modifier | Shift | CTRL | ALT

    F1 | Help | Enter Help Mode | N/A | N/A

    F2, F3, F5, F6, F7, F9, F11, F12 | N/A | N/A | N/A | N/A

    F4 | N/A | N/A | Close Document / Child Window | Close Application

    F8 | Toggle extend mode, if supported | Toggle Add mode, if supported | N/A | N/A

    F10 | Toggle menu bar activation | N/A | N/A | N/A

    TAB | Move to next active/editable field | Move to previous active/editable field | Move to next open Document or Child window (adding SHIFT reverses the order of movement) | Switch to previously used application (holding down the ALT key displays all open applications)

    Alt | Puts focus on first menu command (e.g. 'File') | N/A | N/A | N/A

    Control Shortcut Keys

    Key | Function

    CTRL + Z | Undo

    CTRL + X | Cut

    CTRL + C | Copy

    CTRL + V | Paste

    CTRL + N | New

    CTRL + O | Open

    CTRL + P | Print

    CTRL + S | Save

    CTRL + B | Bold

    CTRL + I | Italic

    CTRL + U | Underline

    Brief about Regression Testing

    What is regression testing?

    Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes.

    Regression testing is a normal part of the program development process. Test department coders develop test scenarios and exercises that will test new units of code after they have been written. Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. The reason they might not work is that changing or adding new code to a program can easily introduce errors into code that is not intended to be changed.

    The selective retesting of a software system that has been modified to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. Also referred to as verification testing.

    Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors.

    It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
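    The idea of keeping old test cases and re-running them against each new version can be sketched in a few lines of Python. Everything here is invented for illustration: `format_name` stands for any function whose behaviour was extended in a new version.

```python
# A regression suite is the accumulated set of old test cases,
# re-run unchanged against every new version of the code.

def format_name(first: str, last: str) -> str:
    """Version 2: now strips whitespace -- the new change under test."""
    return f"{last.strip()}, {first.strip()}"

# Old test cases, written against version 1, kept and re-run unchanged.
# If the version-2 change had broken existing behaviour, these would fail.
assert format_name("Ada", "Lovelace") == "Lovelace, Ada"
assert format_name("Alan", "Turing") == "Turing, Alan"

# New test case added to cover the version-2 change itself:
assert format_name(" Ada ", " Lovelace ") == "Lovelace, Ada"
```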

    Test Execution

    Test Execution is the heart of the testing process. Each time your application changes,

    you will want to execute the relevant parts of your test plan in order to locate defects and

    assess quality.

    Create Test Cycles

    During this stage you decide the subset of tests from your test database you want to

    execute.

    Usually you do not run all the tests at once. At different stages of the quality assurance

    process, you need to execute different tests in order to address specific goals. A related

    group of tests is called a test cycle, and can include both manual and automated tests.

    Example: You can create a cycle containing basic tests that run on each build of the

    application throughout development. You can run the cycle each time a new build is

    ready, to determine the application's stability before beginning more rigorous testing.

    Example: You can create another set of tests for a particular module in your application. This test cycle includes tests that check that module in depth. To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process. Also consider issues such as the current state of the application and whether new functions have been added or modified.

    Following are examples of some general categories of test cycles to consider:

    sanity cycle checks the entire system at a basic level (breadth, rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.

    normal cycle tests the system a little more in depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.

    advanced cycle tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth), and also test advanced options in the application (depth).

    regression cycle tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the application that was modified.
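    The cycle categories above can be sketched as named groups of test callables. All test names, and the three-cycle split, are invented for illustration; a real project would use its test framework's own grouping mechanism (tags, markers, suites).

```python
# Placeholder tests standing in for real checks:
def test_login_opens():        return True   # basic, positive check
def test_login_bad_password(): return True   # negative check
def test_report_totals():      return True   # in-depth module check

# Cycles are just named subsets of the test database:
CYCLES = {
    "sanity":   [test_login_opens],                           # breadth, mostly positive
    "normal":   [test_login_opens, test_login_bad_password],  # adds negative checks
    "advanced": [test_login_opens, test_login_bad_password,
                 test_report_totals],                         # breadth and depth
}

def run_cycle(name: str) -> bool:
    """Run every test in the named cycle; True only if all passed."""
    return all(test() for test in CYCLES[name])

assert run_cycle("sanity") and run_cycle("advanced")
```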

    Run Test Cycles (Automated & Manual Tests)

    Once you have created cycles that cover your testing objectives, you begin executing the tests in the cycle. You perform manual tests using the test steps. Testing Tools executes automated tests for you. A test cycle is complete only when all tests, automated and manual, have been run.

    With Manual Test Execution you follow the instructions in the test steps of each test. You use the application, enter input, compare the application output with the expected output, and log the results. For each test step you assign either pass or fail status.

    During Automated Test Execution you create a batch of tests and launch the entire batch at once. Testing Tools runs the tests one at a time. It then imports results, providing outcome summaries for each test.

    Analyze Test Results

    After every test run, analyze and validate the test results. Identify all the failed steps in the tests and determine whether a bug has been detected or whether the expected result needs to be updated.

    Change Request

    Initiating a Change Request: A user or developer wants to suggest a modification that would improve an existing application, notices a problem with an application, or wants to recommend an enhancement. Any major or minor request is considered a problem with an application and will be entered as a change request.

    Type of Change Request

    1. Bug: the application works incorrectly or provides incorrect information (for example, a letter is allowed to be entered in a number field).

    2. Change: a modification of the existing application (for example, sorting the files alphabetically by the second field rather than numerically by the first field makes them easier to find).

    3. Enhancement: new functionality or an item added to the application (for example, a new report, a new field, or a new button).

    Priority for the request

    1. Low: the application works, but this would make the function easier or more user friendly.

    2. High: the application works, but this is necessary to perform a job.

    3. Critical: the application does not work, job functions are impaired and there is no workaround. This also applies to any Section 508 infraction.

    Bug Tracking

    1. Locating and repairing software bugs is an essential part of software development.

    2. Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process.

    3. Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates.

    Bug Tracking involves two main stages: Reporting and Tracking.

    Report Bugs

    Once you execute the manual and automated tests in a cycle, you report the bugs (or defects) that you detected. The bugs are stored in a database so that you can manage them and analyze the status of your application.

    When you report a bug, you record all the information necessary to reproduce and fix it. You also make sure that the QA and development personnel involved in fixing the bug are notified.

    Track and Analyze Bugs

    The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and

    closed.

    1. First you report New bugs to the database, and provide all necessary information to reproduce, fix, and follow up the bug.

    2. The Quality Assurance manager or Project manager periodically reviews all New bugs and decides which should be fixed. These bugs are given the status Open and are assigned to a member of the development team.

    3. Software developers fix the Open bugs and assign them the status Fixed.

    4. QA personnel test a new build of the application. If a bug does not reoccur, it is Closed. If a bug is detected again, it is reopened.
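    The lifecycle above can be expressed as a small state machine. The status names follow the text; the transition table itself is an illustrative sketch, not a prescribed workflow.

```python
# Allowed status transitions for a bug, per the lifecycle above:
ALLOWED = {
    "New":   {"Open"},            # reviewed by the QA/project manager
    "Open":  {"Fixed"},           # assigned to and fixed by a developer
    "Fixed": {"Closed", "Open"},  # verified in a new build, or reopened
}

def transition(status: str, new_status: str) -> str:
    """Move a bug to new_status, rejecting moves the lifecycle forbids."""
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"cannot move a {status} bug to {new_status}")
    return new_status

# The normal path: New -> Open -> Fixed -> Closed
status = "New"
for step in ("Open", "Fixed", "Closed"):
    status = transition(status, step)
assert status == "Closed"
```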

    Communication is an essential part of bug tracking; all members of the development and quality assurance team must be well informed in order to ensure that bug information is up to date and that the most important problems are addressed.

    The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.

    Brief About System Testing

    For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits, including reductions in risk and development costs and improved time to market for new systems.

    Systems with software components and software-intensive systems are more and more complex every day. Industry sectors such as telecom, automotive, railway, and aeronautical and space are good examples. It is often agreed that testing is essential to manufacture reliable products. However, the validation process does not often receive the required attention. Moreover, the validation process is close to other activities such as conformance, acceptance and qualification testing.

    The difference between function testing and system testing is that now the focus is on the whole application and its environment. Therefore the program has to be given completely. This does not mean that single functions of the whole program are tested again, because this would be too redundant. The main goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, this again includes the question, "Did we build the right product?" and not just, "Did we build the product right?"

    However, system testing does not only deal with this more economical problem; it also contains some aspects that are oriented on the word "system". This means that those tests should be done in the environment for which the program was designed, such as a multi-user network. Even security guidelines have to be included. Once again, it is beyond doubt that this test cannot be done completely, and nevertheless, while this is one of the most incomplete test methods, it is one of the most important.

    A number of time-domain software reliability models attempt to predict the growth of a system's reliability during the system test phase of the development life cycle. One line of work examines the results of applying several types of Poisson-process models to the development of a large system for which system test was performed in two parallel tracks, using different strategies for test data selection.

    We will test that the functionality of your systems meets your specifications, integrating with whichever type of development methodology you are applying. We test for errors that users are likely to make as they interact with the application, as well as your application's ability to trap errors gracefully. These techniques can be applied flexibly, whether testing a financial system, e-commerce, an online casino or games.

    System Testing is more than just functional testing, however, and can, when appropriate, also encompass many other types of testing, such as:

    1. security

    2. load/stress

    3. performance

    4. browser compatibility

    5. localization

    Need for System Testing

    Effective software testing, as a part of software engineering, has been proven over the last three decades to deliver real business benefits, including:

    Reduction of costs | Reduce rework and support overheads

    Increased productivity | More effort spent on developing new functionality and less on "bug fixing" as quality increases

    Reduced commercial risks | If it goes wrong, what is the potential impact on your commercial goals? Knowledge is power, so why take a leap of faith while your competition steps forward with confidence?

    These benefits are achieved as a result of some fundamental principles of testing; for example, increased independence naturally increases objectivity.

    Your test strategy must take into consideration the risks to your organization, commercial and technical. You will have a personal interest in its success, in which case it is only human for your objectivity to be compromised.

    System Testing Techniques

    1. Goal is to evaluate the system as a whole, not its parts

    2. Techniques can be structural or functional

    3. Techniques can be used in any stage that tests the system as a whole (acceptance, installation, etc.)

    4. Techniques are not mutually exclusive

    5. Structural techniques

    6. Stress testing - test larger-than-normal capacity in terms of transactions, data, users, speed, etc.

    7. Execution testing - test performance in terms of speed, precision, etc.

    8. Recovery testing - test how the system recovers from a disaster, how it handles corrupted data, etc.

    9. Operations testing - test how the system fits in with existing operations and procedures in the user organization

    10. Compliance testing - test adherence to standards

    11. Security testing - test security requirements

    12. Functional techniques

    13. Requirements testing - fundamental form of testing - makes sure the system does what it's required to do

    14. Regression testing - make sure unchanged functionality remains unchanged

    15. Error-handling testing - test required error-handling functions (usually user error)

    16. Manual-support testing - test that the system can be used properly - includes user documentation

    17. Intersystem handling testing - test that the system is compatible with other systems in the environment

    18. Control testing - test required control mechanisms

    19. Parallel testing - feed same input into two versions of the system to make sure they produce the same output

    Functional techniques

    1. Input domain testing - pick test cases representative of the range of allowable input, including high, low, and average values

    2. Equivalence partitioning - partition the range of allowable input so that the program is expected to behave similarly for all inputs in a given partition, then pick a test case from each partition

    3. Boundary value - choose test cases with input values at the boundary (both inside and outside) of the allowable range

    4. Syntax checking - choose test cases that violate the format rules for input

    5. Special values - design test cases that use input values that represent special situations

    6. Output domain testing - pick test cases that will produce output at the extremes of the output domain

    7. Structural techniques

    8. Statement testing - ensure the set of test cases exercises every statement at least once

    9. Branch testing - each branch of an if/then statement is exercised

    10. Conditional testing - each truth statement is exercised both true and false

    11. Expression testing - every part of every expression is exercised

    12. Path testing - every path is exercised (impossible in practice)

    13. Error-based techniques

    The basic idea is that if you know something about the nature of the defects in the code, you can estimate whether or not you've found all of them.

    1. Fault seeding - put a certain number of known faults into the code, then test until they are all found

    2. Mutation testing - create mutants of the program by making single changes, then run test cases until all mutants have been killed

    3. Historical test data - an organization keeps records of the average numbers of defects in the products it produces, then tests a new product until the number of defects found approaches the expected number
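    Two of the functional techniques above, equivalence partitioning and boundary value, can be sketched together in Python. The function `accepts_age` and its 18-65 range are invented for the example.

```python
def accepts_age(age: int) -> bool:
    """An input field that accepts ages 18 through 65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition
assert not accepts_age(10)   # invalid-low partition
assert accepts_age(40)       # valid partition
assert not accepts_age(70)   # invalid-high partition

# Boundary value: just inside and just outside each edge of the range
assert accepts_age(18) and accepts_age(65)
assert not accepts_age(17) and not accepts_age(66)
```

    Seven test cases cover all three partitions and all four boundary neighbours, which is far fewer than testing every possible age.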

    Conclusion:

    Hence the System Test phase should begin once modules are integrated enough to perform tests in a whole-system environment. System testing can occur in parallel with integration testing, especially with the top-down method.

    Brief about Unit Testing

    Unit testing. Isn't that some annoying requirement that we're going to ignore? Many developers get very nervous when you mention unit tests. Usually this is a vision of a grand table with every single method listed, along with the expected results and pass/fail date. It's important, but not relevant in most programming projects.

    The unit test will motivate the code that you write. In a sense, it is a little design document that says, "What will this bit of code do?" Or, in the language of object-oriented programming, "What will these clusters of objects do?"

    The crucial issue in constructing a unit test is scope. If the scope is too narrow, then the tests will be trivial and the objects might pass the tests, but there will be no design of their interactions. Certainly, interactions of objects are the crux of any object-oriented design.

    Likewise, if the scope is too broad, then there is a high chance that not every component of the new code will get tested. The programmer is then reduced to testing-by-poking-around, which is not an effective test strategy.

    Need of Unit Testing

    How do you know that a method doesn't need a unit test? First, can it be tested by inspection? If the code is simple enough that the developer can just look at it and verify its correctness, then it is simple enough to not require a unit test. The developer should know when this is the case.

    Unit tests will most likely be defined at the method level, so the art is to define the unit test on the methods that cannot be checked by inspection. Usually this is the case when the method involves a cluster of objects. Unit tests that isolate clusters of objects for testing are doubly useful, because they test for failures, and they also identify those segments of code that are related. People who revisit the code will use the unit tests to discover which objects are related, or which objects form a cluster. Hence: unit tests isolate clusters of objects for future developers.

    Another good litmus test is to look at the code and see if it throws an error or catches an error. If error handling is performed in a method, then that method can break. Generally, any method that can break is a good candidate for having a unit test, because it may break at some time, and then the unit test will be there to help you fix it.

    The danger of not implementing a unit test on every method is that the coverage may be incomplete. Just because we don't test every method explicitly doesn't mean that methods can get away with not being tested. The programmer should know that their unit testing is complete when the unit tests cover at the very least the functional requirements of all the code. The careful programmer will know that their unit testing is complete when they have verified that their unit tests cover every cluster of objects that form their application.
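    The litmus test above, that a method which raises or catches errors deserves a unit test, can be sketched with the standard library's unittest module. The function `parse_quantity` is an invented example of a method that can break.

```python
import unittest

def parse_quantity(text: str) -> int:
    """Parse a positive integer quantity, raising ValueError on bad input."""
    value = int(text)          # raises ValueError for non-numeric text
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

class ParseQuantityTest(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_quantity("3"), 3)

    def test_error_paths(self):
        # The method performs error handling, so test that it breaks
        # in exactly the ways it is supposed to.
        with self.assertRaises(ValueError):
            parse_quantity("abc")
        with self.assertRaises(ValueError):
            parse_quantity("0")

# Run the tests programmatically (exit=False keeps the script running)
unittest.main(argv=["quantity_tests"], exit=False)
```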

    Life Cycle Approach to Testing

    Testing will occur throughout the project lifecycle, i.e., from Requirements till User Acceptance Testing. The main objectives of unit testing are as follows:

    1. To execute a program with the intent of finding an error;

    2. To uncover an as-yet undiscovered error; and

    3. To prepare a test case with a high probability of finding an as-yet undiscovered error.

    Levels of Unit Testing

    1. UNIT

    2. 100% code coverage

    3. INTEGRATION

    4. SYSTEM

    5. ACCEPTANCE

    6. MAINTENANCE AND REGRESSION

    Concepts in Unit Testing:

    1. The most 'micro' scale of testing.

    2. To test particular functions or code modules.

    3. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code.

    4. Not always easily done unless the application has a well-designed architecture with tight code.

    Unit Testing Flow

    Types of Errors detected

    The following types of errors may be caught:

    1. Errors in Data Structures

    2. Performance Errors

    3. Logic Errors

    4. Validity of alternate and exception flows

    5. Errors identified at analysis/design stages

    Unit Testing Black Box Approach

    1. Field Level Check

    2. Field Level Validation

    3. User Interface Check

    4. Functional Level Check

    Unit Testing White Box Approach

    1. STATEMENT COVERAGE

    2. DECISION COVERAGE

    3. CONDITION COVERAGE

    4. MULTIPLE CONDITION COVERAGE (nested conditions)

    5. CONDITION/DECISION COVERAGE

    6. PATH COVERAGE

    Unit Testing FIELD LEVEL CHECKS

    1. Null / Not Null Checks

    2. Uniqueness Checks

    3. Length Checks

    4. Date Field Checks

    5. Numeric Checks

    6. Negative Checks
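    The first three field-level checks in the list can be sketched as tiny validator functions. The function names and the length limit are invented for illustration; date and numeric checks are covered by the earlier examples in this document's date and numeric sections.

```python
def check_not_null(value) -> bool:
    """Null / Not Null check: reject None and the empty string."""
    return value is not None and value != ""

def check_unique(value, existing: set) -> bool:
    """Uniqueness check against values already stored."""
    return value not in existing

def check_length(value: str, max_len: int) -> bool:
    """Length check against a field's maximum width."""
    return len(value) <= max_len

assert not check_not_null("")               # empty input rejected
assert not check_unique("A100", {"A100"})   # duplicate key rejected
assert check_length("A100", max_len=10)     # within the field width
```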

    Unit Testing Field Level Validations

    1. Test all Validations for an Input field

    2. Date Range Checks (From Date/To Dates)

    3. Date Check Validation with System date

    Unit Testing User Interface Checks

    1. Readability of the Controls

    2. Tool Tips Validation

    3. Ease of Use of Interface Across

    4. Tab related Checks

    5. User Interface Dialog

    6. GUI compliance checks

    Unit Testing - Functionality Checks

    1. Screen Functionalities

    2. Field Dependencies

    3. Auto Generation

    4. Algorithms and Computations

    5. Normal and Abnormal terminations

    6. Specific Business Rules if any...

    Unit Testing - OTHER MEASURES

    1. FUNCTION COVERAGE

    2. LOOP COVERAGE

    3. RACE COVERAGE

    Execution of Unit Tests

    1. Design a test case for every statement to be executed.

    2. Select the unique set of test cases.

    3. This measure reports whether each executable statement is encountered.

    4. Also known as: line coverage, segment coverage and basic block coverage.

    5. Basic block coverage is the same as statement coverage except the unit of code measured is each sequence of non-branching statements.

    Advantage of Unit Testing

    1. Can be applied directly to object code and does not require processing source code.

    2. Performance profilers commonly implement this measure.

    Disadvantage of Unit Testing

    1. Insensitive to some control structures (number of iterations)

    2. Does not report whether loops reach their termination condition

    3. Statement coverage is completely insensitive to the logical operators (|| and &&).

    Method for Decision Coverage

    Design a test case for the pass/failure of every decision point, and select a unique set of test cases.

    1. This measure reports whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false.

    2. The entire Boolean expression is considered one true-or-false predicate regardless of whether it contains logical-and or logical-or operators.

    3. Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt handlers.

    4. Also known as: branch coverage, all-edges coverage, basis path coverage, decision-decision-path testing.

    5. "Basis path" testing selects paths that achieve decision coverage.

    ADVANTAGE:

    Simplicity without the problems of statement coverage.

    DISADVANTAGE:

    This measure ignores branches within Boolean expressions which occur due to short-circuit operators.

    Method for Condition Coverage:

    1. Test whether every condition (sub-expression) in a decision evaluates to both true and false, and select a unique set of test cases.

    2. Condition coverage reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if they occur.

    3. Condition coverage measures the sub-expressions independently of each other.

    4. Multiple condition coverage reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present.

    5. The test cases required for full multiple condition coverage of a condition are given by the logical operator truth table for the condition.

    DISADVANTAGES:

    1. Tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions.

    2. The number of test cases required could vary substantially among conditions that have similar complexity.

    3. Condition/Decision Coverage is a hybrid measure composed of the union of condition coverage and decision coverage.

    4. It has the advantage of simplicity but without the shortcomings of its component measures.

    5. Path coverage reports whether each of the possible paths in each function has been followed.

    6. A path is a unique sequence of branches from the function entry to the exit.

    7. Also known as predicate coverage. Predicate coverage views paths as possible combinations of logical conditions.

    8. Path coverage has the advantage of requiring very thorough testing.
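    The difference between decision and condition coverage described above can be illustrated with a single `a and b` predicate. The function and its parameters are invented for the example; note that short-circuit evaluation means the extra condition-coverage case is needed to see the second sub-expression go false.

```python
def grant_access(logged_in: bool, has_role: bool) -> bool:
    if logged_in and has_role:   # one decision, two conditions
        return True
    return False

# Decision coverage: the whole expression seen both True and False
decision_cases = [(True, True), (False, True)]

# Condition coverage adds a case where has_role itself is False
condition_cases = decision_cases + [(True, False)]

results = [grant_access(a, b) for a, b in condition_cases]
assert results == [True, False, False]
```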

    Function Coverage:

    1. This measure reports whether you invoked each function or procedure.

    2. It is useful during preliminary testing to assure at least some coverage in all areas of the software.

    3. Broad, shallow testing finds gross deficiencies in a test suite quickly.

    Loop Coverage:

    This measure reports whether you executed each loop body zero times, exactly once, twice, and more than twice (consecutively). For do-while loops, loop coverage reports whether you executed the body exactly once and more than once.

    The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once, information not reported by other measures.

    Race Coverage:

    This measure reports whether multiple threads execute the same code at the same time.

    Helps detect failure to synchronize access to resources.

    Useful for testing multi-threaded programs such as an operating system.

    Conclusion

    1. Testing, irrespective of the phase, should encompass the following:

    2. The cost of failure associated with defective products getting shipped and used by the customer is enormous.

    3. To find out whether the integrated product works as per the customer requirements.

    4. To evaluate the product with an independent perspective.

    5. To identify as many defects as possible before the customer finds them.

    6. To reduce the risk of releasing the product.