
Performance Evaluation & Measurement of E-Business/ERP Systems

Packaged applications are being widely adopted at mid- and large-size companies to automate both internal and business-to-business processes.

E-Business/ERP implementations are usually part of a broader enterprise computing effort involving the integration of Web, server and host-based applications.

In many cases, E-Business/ERP systems constitute the engine upon which new business processes are deployed.

These business transactions are essential elements in the drive to deploy new products and services and to support new customer relationship management efforts.

With so much resting on these systems, it's imperative that they operate at peak performance. IT departments need to adopt measurements for tracking the performance of their E-Business/ERP applications during both the implementation and production phases of operation.

Companies that deploy these complex, multi-application business systems must continually adjust them to meet performance and capacity demands as the system scales.

The rate at which these applications are deployed by organizations, the pace at which data grows and the size of the user population all contribute to the type of system and network infrastructure required.

These factors also influence the application architecture adopted to support large-scale E-Business/ERP deployments.

When developing an E-Business/ERP deployment plan, factors such as the operating systems, databases, Web servers, and middleware used must be taken into account. Any type of performance evaluation should include extensive stress, system integration, unit and parallel tests to examine every facet of the system and how it performs in relation to the underlying infrastructure.

Testing should be addressed at every stage of the E-Business/ERP project, from pilot deployment to full-blown rollout. Tests should also be performed after lifecycle changes and software upgrades are made to the system.


There are 9 basic steps used to evaluate and measure the performance of an E-Business/ERP system:

1) Definition of performance requirements, such as the minimal acceptable response time for all end-to-end tasks and the acceptable network latency of an interface
2) Creation of a test bed with all components in place
3) Configuration of the entire infrastructure based on vendor recommendations
4) Execution of unit tests for each application in the package to ensure all required functions work
5) Execution of an integration test to ensure compatibility and connectivity between all components
6) Launch of monitoring tools for the underlying systems, including operating systems, databases and middleware
7) Creation of a baseline reference of the response times for all key tasks when the system is not under stress
8) Creation of a baseline reference of response times for all key tasks under varying load conditions
9) If requirements are not met, make the necessary changes in the hardware, software and networks, and repeat the tests
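
As a rough sketch of how these nine steps hang together, the Python outline below treats steps 3 through 8 as a loop that repeats until the step 1 requirements are met. Every function name here is a hypothetical placeholder for real configuration, testing and monitoring work; only the control flow is taken from the list above.

```python
# Hypothetical skeleton of the 9-step process; each helper is a stub.

def define_requirements():
    # Step 1: maximum acceptable response time (seconds) per end-to-end task.
    return {"submit_requisition": 40.0, "add_asset": 10.0}

def build_test_bed():
    pass  # Step 2: stand up a production-like test bed with sample data and tools

def apply_vendor_baseline():
    pass  # Step 3: configure the stack from vendor recommendations and certifications

def run_unit_tests():
    pass  # Step 4: exercise each required function in isolation

def run_integration_test():
    pass  # Step 5: exercise all components together to expose connectivity issues

def start_monitoring():
    pass  # Step 6: launch OS, database and middleware monitors

def baseline(load):
    # Steps 7 and 8: measure response times per task under "idle", "typical" or "peak" load.
    return {"submit_requisition": 52.0, "add_asset": 7.0}  # placeholder numbers

def requirements_met(requirements, results):
    # Step 9 decision: every measured task must be within its requirement.
    return all(results.get(task, float("inf")) <= limit
               for task, limit in requirements.items())

requirements = define_requirements()
build_test_bed()
for attempt in range(5):          # bounded here only so the sketch terminates
    apply_vendor_baseline()
    run_unit_tests()
    run_integration_test()
    start_monitoring()
    results = baseline("peak")
    if requirements_met(requirements, results):
        break
    # Step 9: tune hardware, software or network settings, then repeat steps 3-8.
```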

Without successfully completing tests aimed at the functionality offered by each software module in the E-Business/ERP suite in isolation (unit) and in conjunction with the others (integration), a full-blown stress test can't be orchestrated.

All individual components should be working in isolation and in relation to one another for a successful stress test to be accomplished, since both the functionality and the integration between these components are exercised heavily during a stress test.

When embarking on a detailed test plan, it's important to look at the system architecture and how it influences the E-Business/ERP deployment and system performance.

The following is a list of the architectural components associated with a large-scale, web-enabled E-Business/ERP implementation:

- Back-end network infrastructure, such as a fiber-based storage area network, a clustered storage area network or a high-speed backbone
- Scalable servers, such as symmetrical multiprocessing and NUMA systems
- Back-end software components, such as databases and middleware
- Back-end connection
- Client hardware
- Client software
- Client-side ISP connection

A typical connection flows from the client machine running the OS and browser to the back end via the virtual private network provided by the ISP, with the Web server connecting to the application server and finally to the database server.

Performance issues and bottlenecks can stem from any of these connections, or from a component and the interface with which it interacts.

The enormity and complexity of E-Business/ERP implementation environments can give rise to a variety of bottlenecks stemming from network connections, database locks, memory usage, router latency, and firewall problems.

A subset of such problems could include:

- Too many software modules installed and cached
- Outdated and insufficient hardware
- Outdated and incompatible software
- Slow modem access
- Lack of virtual memory or disk space
- Router latency resulting from too many hops
- Clogged and insufficient network infrastructure
- Improper operating system configuration
- Insufficient database locks
- Inefficient use of application partitioning
- Inefficient use of the logical memory manager
- Improper RAID configuration
- Slow storage subsystems
- Components that are not multi-threaded and ones that do not scale across multiple CPUs
- Inefficient use of addressable memory space
- Lack of efficient distribution of data between disks or storage subsystems
- Poor SQL syntax
- No load balancing
- Component interaction latencies
- Interface latencies

It's possible to add several thousand factors like the ones above that contribute to poor performance.

However, to identify the critical ones that contribute significantly to slow performance, a detailed analysis of the 9 steps associated with performance measurement is the best place to start.

Step 1: Define performance requirements

Instead of defining performance requirements in terms of the number of transactions per minute, which is relevant when measuring database performance in isolation, or network latency, which is relevant for a network resource planning study, performance requirements should define the end-to-end requirements that are most relevant to system users.

For example, the response time of a human resources or financial system should be measured from start to finish, evaluating tasks such as:

- Selecting a product and placing it in the shopping cart
- Ordering a list of products selected
- Posting a job opening
- Submitting a leave application
- Ordering goods
- Adding a new supplier
- Submitting a requisition
- Adding an asset
- Submitting a purchase order

Users of the system are only concerned about the time it takes for them to accomplish specific tasks using the E-Business/ERP system. They're not concerned about the internals of the system and how many million operations per second a server can handle.

By measuring the performance of tasks that exercise all components of the system, an administrator can baseline the application's performance, identify bottlenecks, make changes, and baseline the effects once again.

When defining performance requirements, it's important to consider user input and manage expectations. Critical tasks should be identified. If the response time for generating a specific report that actually analyzes several gigabytes of data is unrealistically set below one second by the user, expectations have to be managed.

It's also important to define other performance parameters along with the requirements, such as a typical workload or peak workload. A peak workload could include 1,000 WAN users and 200 local users, including 20 power users accessing the financial module.

The peak period could include 20 batch processes, 10 application domains, 30 queries, and 30 print jobs running on the system.

Measuring the response time of each identified task and sub-task with no workload, a typical workload and a peak load is critical to measuring and predicting application performance.
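
One lightweight way to capture the output of step 1 is to record the requirements and workload profiles as plain data, so that every later baseline is compared against the same targets. The task names, time limits and typical-workload figures below are illustrative assumptions (the peak profile echoes the example above), not values prescribed by any vendor.

```python
# Hypothetical step 1 artifact: end-to-end response-time limits (seconds) for
# user-visible tasks, plus the workload profiles they must hold under.

REQUIREMENTS = {
    "select_product_and_add_to_cart": 5.0,
    "order_selected_products": 10.0,
    "post_job_opening": 15.0,
    "submit_requisition": 40.0,
    "add_asset": 10.0,
    "submit_purchase_order": 20.0,
}

WORKLOADS = {
    "idle": {"wan_users": 0, "local_users": 0, "batch_processes": 0},
    # Assumed "typical" figures, for illustration only.
    "typical": {"wan_users": 400, "local_users": 80, "batch_processes": 5},
    # Peak profile from the example: 1,000 WAN users and 200 local users
    # (20 of them power users), 20 batch processes, 10 application domains,
    # 30 queries and 30 print jobs.
    "peak": {"wan_users": 1000, "local_users": 200, "power_users": 20,
             "batch_processes": 20, "app_domains": 10,
             "queries": 30, "print_jobs": 30},
}

def check(task, measured_seconds, workload):
    """Report whether a measured response time meets its requirement."""
    limit = REQUIREMENTS[task]
    status = "OK" if measured_seconds <= limit else "FAIL"
    print(f"{task} under {workload} load: {measured_seconds:.1f}s "
          f"(limit {limit:.1f}s) -> {status}")

check("submit_requisition", 52.0, "peak")   # hypothetical measurement: over the 40s limit
```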

Step 2: Set up a test bed with all the components in place

It's extremely important to establish a test bed that includes all components and is configured as close to the production environment as possible.

All E-Business/ERP application vendors provide certification information about the platforms supported.

The back-end hardware used in a test bed should be based on the hardware recommendations for a production environment. If a gigabit switch is used on the back end to speed up data transfers between the application server, the database and the interfaced system in the production environment, a similar one should be set up as part of the test bed.

To simulate wide area network and Internet traffic, client machines need to be configured outside the router that connects the back-end network to the WAN or the Internet. If access to the application is expected from client machines with 64 MB of memory, a 300 MHz Intel processor and a 56 Kbps modem, this combination of client-side hardware should be set up as part of the test bed.

Also as part of the test bed setup, sample data, test monitoring tools and test automation tools all need to be in place prior to exercising any configuration and tuning efforts. These include a protocol analyzer, a test automation tool, and operating system and monitoring tools.
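
One way to keep the test bed honest is to describe both environments as data and diff them before any testing starts. The component names and specifications below are hypothetical stand-ins that mirror the client profile mentioned above; substitute the platforms your vendor actually certifies.

```python
# Hypothetical inventory check: the test bed should mirror the production
# environment (hardware, network gear, client profile) as closely as possible.

PRODUCTION = {
    "db_server": {"cpus": 8, "ram_gb": 16, "os": "Solaris 2.6"},
    "app_server": {"cpus": 4, "ram_gb": 8, "os": "Solaris 2.6"},
    "backend_switch": "gigabit",
    "client": {"ram_mb": 64, "cpu_mhz": 300, "link": "56 Kbps modem"},
}

TEST_BED = {
    "db_server": {"cpus": 8, "ram_gb": 16, "os": "Solaris 2.6"},
    "app_server": {"cpus": 4, "ram_gb": 8, "os": "Solaris 2.6"},
    "backend_switch": "100 Mbps",          # deliberate mismatch for the example
    "client": {"ram_mb": 64, "cpu_mhz": 300, "link": "56 Kbps modem"},
}

def report_mismatches(production, test_bed):
    """Print every component whose test-bed spec differs from production."""
    for component, spec in production.items():
        if test_bed.get(component) != spec:
            print(f"MISMATCH {component}: production={spec!r} "
                  f"test bed={test_bed.get(component)!r}")

report_mismatches(PRODUCTION, TEST_BED)
# -> flags the back-end switch, which would skew data-transfer measurements
```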

Step 3: Use the vendor's recommendations as a baseline

Configure and tune the entire infrastructure based on vendor recommendations and certifications.

Before beginning any testing, it's important to configure and tune the entire infrastructure, but only on given parameters and variables. It's impossible to get recommendations for every parameter of a platform from specific vendors, and even if it were possible, this could introduce unnecessary complexities given the thousands of variables that can be tuned at each layer.

Certain key recommendations could include: total memory configuration for the database, total allowable locks, network packet size, number of I/O daemons, number of application domains, buffered I/O size, customized cache configurations, recommended OS patches, recommended client connectivity tool versions and patches, and network interface card throughput.

Any test attempted without accomplishing this third step is basically a wasted effort. Baselining the response time without the default product configurations could lead to misleading test results. The configuration and tuning guidelines given by the respective vendors can only act as a starting point.

Further changes to specific hardware and software configurations that can improve performance will depend on additional findings from the remaining steps of this process. Keep in mind that steps 3 through 8 are an iterative process that must be repeated until the performance requirements are met.

Step 4: Conduct unit tests to ensure all required functions work

Based on the performance requirements defined in step 1, all identified functions, such as adding an asset and generating an accounts payable report, ought to be tested first in isolation to ensure that these tasks can be accomplished end-to-end using the system.

This is a precursor to the remaining tests that need to be accomplished. Unit tests are meant to ensure that the product works the way it's supposed to. Automated testing tools can be used to accomplish these tasks.

Although the unit tests are conducted in isolation from one another, it's still expected that all the components that are part of the system are in place and function appropriately, but are not running at that point in time.
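
Automated testing tools differ widely, so as a minimal illustration, a unit test for one task can be expressed with Python's standard unittest module. The submit_requisition function is a hypothetical wrapper around whatever interface the E-Business/ERP suite actually exposes.

```python
import unittest

def submit_requisition(item, quantity):
    # Hypothetical client wrapper: in a real test this would drive the ERP
    # application (via its API, a test automation tool, or a script) and
    # return the confirmation produced by the system.
    return {"status": "accepted", "item": item, "quantity": quantity}

class SubmitRequisitionUnitTest(unittest.TestCase):
    """Step 4: exercise a single task end-to-end, in isolation."""

    def test_requisition_is_accepted(self):
        result = submit_requisition("desktop PC", 3)
        self.assertEqual(result["status"], "accepted")
        self.assertEqual(result["quantity"], 3)

if __name__ == "__main__":
    unittest.main()
```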

Step 5: Conduct an integration test to ensure compatibility and connectivity between all components

This step is critical since it's the first occasion when the entire system is put to the test.

All functions, sub-functions and their interactions are tested concurrently. To accomplish this task, the computing environment and all its components must be active. The main purpose of this test is to ensure that all components can talk to each other.

Any compatibility and connectivity issues that exist will be revealed. Before studying and baselining the performance behavior of each of the components in the system, it's important to fix these compatibility and connectivity issues.

An example of a problem could be a connectivity timeout by an application server for the 100,001st user when the five application servers are configured to provide access to 20,000 users each.

Another example is a bad configuration pointing to different versions of connectivity libraries.

By completing this step successfully, you ensure that the system in its entirety works, even if it is not yet at the acceptable level of performance.
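
A quick connectivity sweep before any baselining can confirm that every tier is at least reachable on its expected port. The sketch below uses Python's standard socket module; the host names and port numbers are hypothetical placeholders for your own web, application and database servers.

```python
import socket

# Hypothetical component endpoints for the integration test; replace with the
# actual hosts and ports of the web, application and database servers.
COMPONENTS = {
    "web_server": ("web.example.internal", 80),
    "app_server": ("app.example.internal", 9000),
    "db_server": ("db.example.internal", 1521),
}

def can_connect(host, port, timeout_seconds=5.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:
        return False

for name, (host, port) in COMPONENTS.items():
    status = "reachable" if can_connect(host, port) else "NOT reachable"
    print(f"{name} ({host}:{port}): {status}")
```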

Step 6: Kick off all monitoring

At this phase, we kick off all the monitoring tools that were put in place during step 2.

Baseline and track the resource consumption of these monitoring tools, since they can skew the test measurements.

Examples of the monitoring tools that can be used, based on the sample platform identified in step 2, include:

- Symon for Solaris 2.6 to measure resource usage of the database, application server and OLAP server: memory consumption, CPU usage and I/O activity
- Sybase Monitor Server for ASE 11.5.1 to measure database usage, lock contention and cache hit ratio
- Performance Monitor for Windows NT 4.0 to measure resource consumption on the file server and secondary application server
- A LAN analysis tool to identify network bandwidth utilization

When starting these performance measurement tools, collect output both in the form of real-time visualization, to study instant degradations or boosts in response time/performance, and as a file for the purpose of analyzing the data later.

Step 7: Baseline response time for all key tasks under no stress

Once all the monitoring tools and the entire infrastructure are up and running, we get a baseline of the response time of all identified critical tasks without simulating the typical or the peak workload.

This gives us an idea of how individual tasks perform and how quickly they respond when the back end is basically idle. The response time for each task is computed by submitting the task and measuring the start time to end time with a tool like a stopwatch or a timing script. The numbers generated through this step can be used for comparative analysis purposes.
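
The stopwatch can be replaced with a small timing script. The sketch below measures each identified task against an idle back end using time.perf_counter; the task functions are hypothetical stand-ins for whatever actually drives the application.

```python
import time

def submit_requisition():
    # Hypothetical task driver; in practice this would exercise the ERP system.
    time.sleep(0.2)

def add_asset():
    time.sleep(0.1)

TASKS = {"submit_requisition": submit_requisition, "add_asset": add_asset}

baseline = {}
for name, task in TASKS.items():
    start = time.perf_counter()               # start time
    task()                                    # submit the task
    elapsed = time.perf_counter() - start     # end time minus start time
    baseline[name] = elapsed
    print(f"{name}: {elapsed:.2f}s with the back end idle")

# `baseline` can now be kept for comparison against the loaded runs in step 8.
```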

Step 8: Baseline response time for all key tasks under different load conditions

The two key load conditions under which the response times are measured once again are the typical workload and the peak load. Assume that a task like submitting a requisition is expected to be completed in less than 40 seconds.

When running these tests several times, the measured response times for this task might turn out as follows:

Workload             Response time
Idle                 22 seconds
Typical workload     32 seconds
Peak load            52 seconds

Now we know clearly that the required level of performance cannot be achieved under peak load with the given infrastructure and its configuration.

The table above illustrates the statistics generated for one such task.

A typical test for a large-scale deployment of a web-enabled E-Business/ERP system will include hundreds of such tasks and may even range into the thousands.

Proper analysis of the data generated by the different monitoring tools will reveal the bottleneck, or the cause of the slow response, when submitting a requisition. By alleviating such a bottleneck, the response time can be improved slowly but steadily.

By running steps 3 to 8 repeatedly, bottlenecks associated with every task can be identified and fixed.

In cases where the performance requirements are met for specific tasks after running through the process the first time, it's important to continue baselining after every change to ensure no performance degradation associated with a specific task has taken place.
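
At a small scale, varying load conditions can be approximated by running many virtual users concurrently and re-measuring the same tasks. The thread-based sketch below is only illustrative (a dedicated load-testing tool is the realistic choice for hundreds or thousands of users); submit_requisition is the same kind of hypothetical task driver used earlier.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def submit_requisition():
    # Hypothetical task driver standing in for the real ERP transaction.
    time.sleep(0.2)

def timed_run():
    """Run the task once and return its response time in seconds."""
    start = time.perf_counter()
    submit_requisition()
    return time.perf_counter() - start

def measure_under_load(concurrent_users, iterations_per_user):
    """Measure response times while `concurrent_users` virtual users run the task."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(timed_run)
                   for _ in range(concurrent_users * iterations_per_user)]
        times = [f.result() for f in futures]
    return times

for label, users in [("typical", 20), ("peak", 100)]:   # illustrative user counts
    times = measure_under_load(users, iterations_per_user=2)
    print(f"{label} load ({users} virtual users): "
          f"mean {statistics.mean(times):.2f}s, max {max(times):.2f}s")
```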

Step 9: If requirements are not met, make the necessary changes in the form of tuning

Tweaking, tuning, and adding or removing the relevant resources after running this process several times is an optimal way of meeting the performance requirements for all tasks.

In one situation, the bottleneck could be identified as a process, such as a credit reporting process. When looking at the process flow, it will become apparent that this task can't handle more than a limited number of concurrent requests. Given a situation like this, the configuration of this process could be modified to spawn more instances of the same process to accommodate more requests.

In another situation, the firewall and its authentication/authorization process could be identified as a bottleneck. A load-balanced cluster of firewall servers could resolve this issue.

A particular server's CPU utilization could cause a bottleneck, and something as simple as adding CPUs could be the answer to the problem. Similarly, if the bottleneck is caused by I/O contention on a specific device, implementing striped mirrors for that device could be the answer.

By adopting this methodology, all relevant macro-level and micro-level issues and bottlenecks can be identified.

This enables the overall throughput of the system to be optimized.

An added benefit of this process is that it verifies functionality requirements and addresses compatibility requirements.

Finally, by embracing this process, you can tell whether the applications will work under realistic workloads and prove whether service-level agreements will be met.

These tests will also help an IT department understand how the system will affect the existing IT infrastructure. They should help gauge capacity planning efforts during the system's lifecycle as well.
