
    Alternative Measures of System Effectiveness: Associations and Implications

Author(s): Ananth Srinivasan
Source: MIS Quarterly, Vol. 9, No. 3 (September 1985), pp. 243-253
Published by: Management Information Systems Research Center, University of Minnesota
Stable URL: http://www.jstor.org/stable/248951


By: Ananth Srinivasan
Assistant Professor
Operations and Systems Management
Indiana University
Bloomington, Indiana

Abstract

This article reports results from a study that examined the implementation of computerized modeling systems in 29 organizations. The focus is on the use of various MIS effectiveness measures that are reported in MIS research. Specifically, we examine the relationship between user perceived effectiveness measures (user satisfaction) and behavioral measures of system effectiveness (system use). While much of the existing MIS research implies that the two types of measures are positively associated with each other, the results from this study indicate otherwise. By using a perceived effectiveness instrument that is strongly grounded in a widely accepted theoretical model, the results provide important insights into the nature of this relationship. The importance of interpreting perceived and behavioral measures of system effectiveness is demonstrated by examining the effect of system sophistication on MIS effectiveness.

Keywords: MIS effectiveness measurement, management of information systems

ACM Categories: K.4.3, K.6.0, K.6.4

Introduction

Measurement of the effectiveness of a management information system (MIS) is an issue that has generated much debate and consequent research interest over the years. The most recent among a series of indicators of such interest is the initiation of roundtable discussions at the annual Conference on Information Systems, where one of the proposed topics for 1985 is "Measuring and Improving Information System Effectiveness/Productivity." The spectrum of approaches that have been suggested to deal with this complex issue presents a bewildering array to a researcher whose intention is to include MIS effectiveness as a dependent variable in a study, or to a practicing manager who wants to get a clear indication of the quality of the MIS being used. Approaches that have been advocated include MIS usage estimation [4], user satisfaction [2], incremental performance in decision making effectiveness [16], cost-benefit analysis [15], information economics [21], utility analysis [17], the analytic hierarchy approach [22], and information attribute examination [6].

While acknowledging the importance of economic analyses of MIS value, researchers responded to the shifting emphasis from efficiency to user effectiveness by focusing either on MIS usage or user perceived effectiveness. Much of the MIS literature of late uses one or the other as the dependent variable of interest. Briefly, the MIS usage approach uses behavioral indicators as surrogates for MIS effectiveness. Examples of such indicators are the number of reports generated, the number of changes made to a file, connect time, etc. The perceived effectiveness approach uses measures of effectiveness as perceived by users of the system. Examples of such measures include user satisfaction, perceived system quality, etc. The literature is replete with arguments both for and against the use of these two approaches. Providing a typical argument for the system usage approach, Ein-Dor and Segev [4] state:

    "[Various] criteria [for success that arementioned in the literature] are clearlymutually dependent; profitability is cor-related with performance, application tomajor problems, and actual use. We claimthat a manager will use a system inten-


Ginzberg [8] argued against the system usage approach by stating that the link between system usage and the quality of decision making was a weak one. If one views the system as a service (instead of a product) that is designed to enable managers to perform more effectively, the extent of use measure would be a very misleading indicator of success. Based on these assertions, and his approach to the issue in ensuing research [7], Ginzberg advocated the user perceived effectiveness approach.

Citing situations where system usage may and may not be an appropriate measure of MIS effectiveness, Ives, Olson, and Baroudi [13] suggest that the use of both approaches (system usage and user perceived effectiveness) may be warranted in many situations.

It is apparent that both system usage and user perceived effectiveness play key roles in determining the effectiveness of an MIS. It would seem, then, that the relationship between the two would be of interest to researchers and practitioners alike in an attempt to examine correlates between effectiveness measurement approaches. From a researcher's perspective, it is important to understand relationships between several competing surrogates of (purportedly) the same phenomenon. From a practitioner's perspective, it is important to understand what exactly is being measured when a system effectiveness study is initiated in an organization.

Previous examinations of this relationship (see Zmud [29] for a review of this literature), however, suffer from the main criticisms offered by Ives, et al. [13] in their use of perceived effectiveness measures. The relationship between MIS usage and user satisfaction that has been reported in the past reflects an inadequate treatment of the perceived measures. Hence, what we do know about this relationship is, at best, superficial.

It is not the purpose of this article to examine previous research pertaining to MIS effectiveness per se. Excellent reviews may be found in Ives, et al. [13], and Zmud [29]. However, we will examine that research that has specifically concerned itself with this relationship.

The extensive system implementation studies reported by Lucas [18, 19] have examined the relationship between system use and some measures of user satisfaction with the system. Generally positive associations are reported. In his study of information systems in ten food processing firms, Schewe [24] reported a lack of significant association between certain user attitudes (now considered important components of user satisfaction with an MIS) and use of the system. Maish [20] reported positive associations between usage and some attitudes pertaining to user satisfaction in his study of information systems in federal agencies. Swanson [27] reported a similar association in the case of an MIS used by a manufacturer of complex electronic equipment. Robey [23] reported a positive association between system usage and user perceived worth of a system in his study of an industrial sales force and their use of an MIS. The results presented by Ginzberg [7] suggest some positive (albeit weak) association between the two outcome measures.

In all of the studies reported above, satisfaction with the system is either captured in the broader context of obtaining user attitudes about the system or through the use of a set of items thought to be relevant to user satisfaction. What was lacking in these attempts was a comprehensive understanding of what constituted user satisfaction with a system. This resulted in using a single index to represent satisfaction [23, 27], or a series of single item measures pertaining to satisfaction [19, 24], each treated independently.

Unless we have appropriate satisfaction measures (there really has never been a problem with obtaining usage measures), we will not adequately understand this relationship. Such an understanding will provide better guidance to researchers in drawing behavioral interpretations of user perceived satisfaction data. Further, understanding this relationship is an important step in bridging a gap for practitioners between research results and their observations of user behavior in system environments.

The objective of this article, then, is to examine behavioral and user perceived effectiveness empirically in the context of a particular class of information systems in order to investigate the critical relationships between the two measures.


The importance of doing this is demonstrated by the application of both measures as dependent variables in examining the suitability of the characteristics of information systems in a number of organizations. The research reported in this article was conducted as part of a larger study on the implementation of computerized planning models in large organizations.

Methodology

The initial step in the research was to choose a measurement of user perceived effectiveness of the system. While a number of approaches to examining user perceived effectiveness have been proposed, the approach reported by Jenkins and Ricketts [14] is superior. It is one of the few (if not the only) approaches that develops an instrument to measure user satisfaction that is well grounded in a widely accepted theoretical model. Although Ives, et al. [13] point out shortcomings of this approach, the procedure adopted by Jenkins and Ricketts in developing and testing a satisfaction instrument provides a firm basis for researchers interested in this issue. The Jenkins and Ricketts framework (described below) is adapted to the unique needs of our situation to arrive at user perceived measures of system effectiveness.

In order to uncover the underlying factors that constituted overall satisfaction with the system, Jenkins and Ricketts hypothesized that Simon's [25] paradigm for the problem-solving process (intelligence, design, and choice phases) was an appropriate perspective on the manner in which users evaluate their experiences with the system. Using a factor analytic approach to empirically test their claim, they postulated that there are five key underlying dimensions that make up overall user satisfaction: report content, report form, assistance in problem solving, input procedures, and system stability. Table 1 shows the correspondence between each of the five dimensions and the problem-solving paradigm.

Jenkins and Ricketts outlined the nature of the issues to be addressed under each of the five dimensions as follows:

Report Content:

Accuracy of report contents
Relevance of report contents
Adequacy of report contents
Understandability of report contents

Report Form:
Quality of format
Timeliness of report
Mode of presentation
Sequencing of information

Problem Solving:
Usefulness for identifying and defining problems
Usefulness for selecting among alternatives
Power of the modeling language employed
Flexibility of the modeling language involved

Input Procedures:
Ease of understanding input procedures
Comprehensiveness of documentation
Interfacing languages
Editor characteristics

Systems Stability:
Response time
Error proneness
Reliability of the system
Accessibility/availability of the system

Table 1. Dimensions of Perceived Effectiveness and the Problem Solving Paradigm

Problem Solving Paradigm    Perceived Effectiveness Dimensions
Intelligence                Input Procedures
Design                      Systems Stability
                            Problem Solving
Choice                      Report Contents
                            Report Form

Adapted from Jenkins, A.M. and Ricketts, J.A. "Development of an Instrument to Measure User Satisfaction with Management Information Systems." Unpublished paper, Indiana University, Bloomington, Indiana, 1979.


This framework is used here for the measurement of user perceived effectiveness of the system.
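To make the adapted instrument concrete, here is a minimal sketch in Python. The paper itself prescribes no code; the abbreviated item labels and the use of the mean as the aggregation rule for an "index" are illustrative assumptions (Table 3 below specifies only that each dimension is an index of four or five Likert items).

# The five perceived-effectiveness dimensions (after Jenkins and Ricketts),
# each a set of 5-point Likert items; labels paraphrase the issues above.
DIMENSIONS = {
    "OCONT": ["accuracy", "relevance", "adequacy", "understandability"],
    "OFORM": ["format quality", "timeliness", "presentation mode", "sequencing"],
    "PSOLV": ["problem identification", "alternative selection",
              "language power", "language flexibility"],
    "INPUT": ["procedure clarity", "documentation",
              "interfacing languages", "editor characteristics"],
    "STABL": ["response time", "error proneness", "reliability",
              "accessibility", "availability"],  # five items, per Table 3
}

def dimension_index(responses):
    """Score one dimension as the mean of its item responses
    (1 = strongly disagree ... 5 = strongly agree)."""
    return sum(responses) / len(responses)

# One respondent's hypothetical OCONT item scores.
print(dimension_index([4, 5, 3, 4]))  # -> 4.0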

Data Collection

The intention of this research was to examine situations in the field where model-based computerized systems were in use. These included systems that utilized a predefined model of a particular problem and also those that provided an environment for model building. The idea was to study those systems that catered to corporate-planning related activities (those devoted to addressing strategic concerns [1]) rather than production-oriented transaction processing systems. Furthermore, the focus of the study was on large firms where there existed either a corporate planning staff which used computer-based modeling systems, or a formal systems department which provided technical support for the use of such systems.

A pilot study was conducted in two large organizations where it was known that the planning staff had been using computer-based planning models in the past. Those people responsible for the functioning of the system were interviewed to obtain: (1) a general understanding of the environment in which the system operated, (2) input for appropriate behavioral measures of system use, and (3) content validation of the preliminary version of the perceived effectiveness instrument.

From this phase of the study a few tentative observations were made. In large organizations, modeling systems were the responsibility of a planning department, a systems department, or an ad hoc group created specifically for that purpose. In some cases, the end user of the system rarely interacted with the system directly; instead, a technical intermediary performed tasks as defined by the user.

Appropriate behavioral measures used to monitor apparent system effectiveness were identified. They included: number of formal reports generated, connect time with the system, and number of interaction sessions. It was also noted that the issues identified for the measurement of perceived effectiveness were, in fact, relevant. A review of the issues with these people resulted in minor changes being made to the instrument while retaining the basic dimensionality referred to earlier.

The next phase of the study involved administering the revised instrument to a larger sample for the purpose of data analysis. Firms with corporate planning staffs were identified using the directory of the North American Society for Corporate Planning. The person listed in the directory was contacted, the project was explained by the researcher, and the cooperation of the firm was solicited. A total of 37 firms agreed to participate in the study.

Table 2. Site Characteristics

User Access Mode:
  Direct access ............................... 15
  Indirect access (through an intermediary) ... 14

Type of System Used:
  Developed in-house .......................... 16
  Purchased from an outside firm .............. 11
  Leased from an outside firm .................  2

Application Type:*
  Financial planning .......................... 16
  Corporate planning .......................... 13
  Marketing planning ..........................  9
  Production planning .........................  3

User Job Titles:
  Manager, Corporate Planning ................. 11
  Director, Strategic Planning ................  8
  Manager, Planning Systems ...................  3
  Assistant VP, Planning ......................  2
  Planning Analyst ............................  2
  Senior Project Manager ......................  1
  Division Planner ............................  1
  Director, MIS ...............................  1

* Many sites reported the use of the system for multiple applications; hence the total in this category adds up to a number greater than the number of sites involved in the study.


These firms were selected after it was determined that they were involved in developing and using modeling systems on a regular basis, and that they were willing to participate in a study of this nature. The instruments were then mailed to the contact person. While the effectiveness instrument was aimed at the end user, other measures relevant to the study were obtained from a technical support person affiliated with the system. Usable responses were obtained from 29 firms, representing a response rate of 78% (29 of 37). Table 2 shows some characteristics of the sites involved in the study.

Data Analysis, Results and Interpretation

Table 3 provides a list and explanations of the measures used in the data analysis. In order to examine the relationships of interest, an associative analysis was performed between the two types of variables involved in effectiveness measurement. Table 4 shows the results of the analysis.

Examination of the significant correlation coefficients in Table 4 reveals some interesting phenomena. The correlation between TPSESS and OFORM is significant and inverse. This indicates that, as far as benefits from the system are concerned, users who spend longer time periods at each session with the system tend to see the system as not contributing favorably to their operations.

It also appears that users spending a large amount of time at an interaction session are more inclined to look for assistance in problem solving tasks. While they may find the system to be quite useful in providing such assistance (evidenced by the positive correlation between TPSESS and PSOLV), such extended interaction sessions may not necessarily be directed toward the most pressing tasks at hand (retrieving information in a specific output form) and may come at the expense of equally, if not more, important activities.

The primary difference between users with long interaction sessions and those with shorter ones may be that the former tend to use the system for focusing on an unstructured problem for definition and consequent solution generation.

Those with relatively shorter interaction sessions tend to use the system for assistance in answering specific questions that do not extensively test the output form capabilities of the system.

Substantive issues are brought into prominence in determining whether a user is a heavy or a light user in comparison to others in the firm. The ability of the system to help the user structure a problem and seek out viable alternative solutions, coupled with the accuracy and understandability of the outputs it generates, appears to be a strong motivator for system use. This is evidenced by the positive and significant correlations between USETYPE and OCONT, and between USETYPE and PSOLV. System use to aid in decision making is a function of its ability to assist in problem structuring and search for solutions, as much as its ability to provide accurate information.

An especially interesting observation from this table is the fact that the frequency with which the system is used is not significantly correlated with any of the perceived dimensions of effectiveness. This is in direct contradiction to findings mentioned earlier in this article in the context of MIS use. It also lends credence to Ginzberg's [8] position that actual use of the system may not always be an indicator of system worth.

The absence of pervasive association between actual use and perceived system worth is not surprising in the context of this research. This is because the very philosophy behind modeling systems is that they are not used on a production basis, but in fact are used sporadically as the situation dictates. The results caution us that when we use a measure of system effectiveness, especially in the context of systems that support decision making that is more strategic in nature, we may be wise to assess the role of behavioral measures separately from perceived measures of system effectiveness.

Research Implications

The previous section demonstrated the importance of examining behavioral and perceived measures of system effectiveness separately.


Table 3. Measures Used in the Study

Perceived Measures

Output Contents (OCONT)
  Explanation: Quality of the contents of system output. ("Outputs provided by the system are relevant to the decisions I make.")
  Type: Index of four items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.81).

Output Form (OFORM)
  Explanation: Quality of the form in which the output is received. ("Outputs contain information in the sequence that I find to be useful.")
  Type: Index of four items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.67).

Problem-Solving Capabilities (PSOLV)
  Explanation: Quality of the system as an aid in problem solving. ("The system helps me select from among many alternative solutions.")
  Type: Index of four items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.83).

Input Procedures (INPUT)
  Explanation: Procedures for data input. ("It is difficult to understand the input procedures for using the system.")
  Type: Index of four items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.79).

System Stability (STABL)
  Explanation: Operational stability of the system. ("System has been up and running whenever I have needed to use it.")
  Type: Index of five items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.61).

Behavioral Measures

Frequency of Use (USEFREQ)
  Explanation: Frequency of use of the system
  Type: Number of accesses per month

Time per Session (TPSESS)
  Explanation: Average connect time per access
  Type: Minutes of connect time

Number of Reports (NREPS)
  Explanation: Average number of formal reports/documents generated using the system output
  Type: Average number of reports per month

User Type (USETYPE)
  Explanation: Type of user, relative to other users in the firm
  Type: Ordinal measure (light, average, heavy)
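Table 3 summarizes the internal consistency of each index with Cronbach's alpha. As a minimal sketch of how such an alpha is computed (the data below are hypothetical, since the study's raw responses are not published):

# Cronbach's alpha for an index of Likert items.
def cronbach_alpha(items):
    """items: one list per item, each holding that item's score for every
    respondent. alpha = k/(k-1) * (1 - sum(item variances) / var(totals));
    the result is the same whichever variance convention (population or
    sample) is used, as long as it is used consistently."""
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Four hypothetical OCONT items answered by five respondents on a 5-point scale.
ocont_items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
    [4, 5, 3, 4, 2],
]
print(round(cronbach_alpha(ocont_items), 2))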


Table 4. Associations Between Perceived and Behavioral Measures of Effectiveness

                                  Perceived Measures
                      OCONT    OFORM    PSOLV     INPUT    STABL
Behavioral  USEFREQ   0.040    -.173    0.218     0.086    0.198
Measures    TPSESS    -.013    -.402*   0.382*    -.311    -.283
            NREPS     0.221    0.194    -.237     0.052    -.021
            USETYPE   0.395*   -.053    0.620***  0.014    0.045

Kendall's tau. *p < .05, **p < .01, ***p < .001.
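The coefficients in Table 4 are Kendall rank correlations between paired observations of a behavioral and a perceived measure. A minimal sketch for one such pair, using hypothetical data (the study's raw observations are not published):

# Kendall's tau between a behavioral measure and a perceived index.
from scipy.stats import kendalltau

tpsess = [12, 45, 30, 8, 60, 25, 15, 50, 40, 10]            # minutes per session
oform = [4.2, 2.8, 3.1, 4.5, 2.5, 3.6, 4.0, 2.9, 3.0, 4.4]  # OFORM index scores

tau, p_value = kendalltau(tpsess, oform)
print(f"tau = {tau:.3f}, p = {p_value:.3f}")  # this toy data yields a negative tau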


Technical sophistication was assessed by asking each respondent about a set of specific features that such systems typically possess. The specific features that were selected represented the twenty most widely publicized features of such systems, based on an examination of consultant reports and sales publications of many firms marketing these products.

For each of the features, the respondent was asked the following three questions:

1. Does your system possess this feature?
2. If it does, is it used?
3. Is the feature needed?

Table 6 shows how the different combinations of availability, use and needs are scored. Two sophistication scores are computed based on the needs-availability and the use-availability interactions. The scoring scheme is designed to measure the extent of fit that exists between the needs of the situation and the technical capabilities of the system. Srinivasan and Kaiser [26] reported the use of the needs-availability interaction in determining user information requirements in a laboratory experiment. The results revealed that one needs to consider both the importance and availability of information during the requirements analysis stage of system design.

Table 6. Scoring for Technical Sophistication

TSOPH1
                                 NEED
                      No    Some Extent    Large Extent
Availability  Yes      1         2              3
              No       3         1              1
              Don't
              Know     3         1              1

TSOPH2
                                 USE
                      No    Some Extent    Large Extent
Availability  Yes      1         2              3

Consider the technical sophistication 1 score (TSOPH1). TSOPH1 is an indication of the degree of fit between the existing and non-existing features, and the extent to which each feature is perceived as being needed for decision-making tasks. A good fit between needs and availability is achieved when the system has a feature and it is needed to a large extent. Similarly, a good fit is also achieved when a system does not have a feature and it is not needed. These two good fit situations are given a score of 3. A score of 1 is given to a situation where the quality of the fit is the poorest. These include situations where the feature does not exist and it is even marginally needed. Lastly, an intermediate score of 2 is given to the situation where the feature exists and it is needed to some extent.

The technical sophistication 2 score (TSOPH2) is a measure of sophistication that involves the extent to which an existent feature is actually used in decision-making tasks. If a feature exists and it is used to a large extent, a score of 3 is assigned to indicate a good fit, and if it is not used, a score of 1 is assigned to indicate a poor fit. An intermediate score of 2 is assigned to the situation where the feature exists and it is used to a moderate extent.

At the most rudimentary level, we can compare the absolute number of features a particular system has (a frequency count) and then see if the number of features has a bearing on the effectiveness of the system. Table 7 shows the association between system effectiveness and technical sophistication measured in terms of the absolute number of features possessed by the system.

This measure of technical sophistication is positively related to USEFREQ and negatively related to USETYPE and OCONT. An increase in the number of features results in people becoming relatively light users, but they also tend to access the system more on an absolute basis. In the absence of TPSESS being involved in any significant associations, it appears that users tend to access the system more frequently out of curiosity about the features. However, a large number of features leads to a perception that other users may actually be using the system to do useful work. The argument is for-
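The Table 6 scheme reduces to a lookup from (availability, extent of need or use) pairs to fit scores. A minimal sketch follows; the function names and the summing of per-feature scores into system-level TSOPH1 and TSOPH2 totals are illustrative assumptions rather than the paper's stated procedure.

# Fit scores from Table 6, keyed by (availability, extent of need or use).
TSOPH1_SCORES = {  # needs-availability fit
    ("yes", "no"): 1, ("yes", "some"): 2, ("yes", "large"): 3,
    ("no", "no"): 3, ("no", "some"): 1, ("no", "large"): 1,
    ("dont know", "no"): 3, ("dont know", "some"): 1, ("dont know", "large"): 1,
}
TSOPH2_SCORES = {  # use-availability fit, defined only for existing features
    ("yes", "no"): 1, ("yes", "some"): 2, ("yes", "large"): 3,
}

def tsoph1(responses):
    """responses: one (availability, need) pair per surveyed feature."""
    return sum(TSOPH1_SCORES[r] for r in responses)

def tsoph2(responses):
    """responses: one (availability, use) pair per existing feature."""
    return sum(TSOPH2_SCORES[r] for r in responses)

# A hypothetical three-feature system: one well-fitting feature, one
# marginally needed feature, and one needed-but-missing feature.
print(tsoph1([("yes", "large"), ("yes", "some"), ("no", "large")]))  # 3 + 2 + 1 = 6
print(tsoph2([("yes", "large"), ("yes", "no")]))                     # 3 + 1 = 4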


Table 7. Association Between Technical Sophistication (Number of Features) and System Effectiveness

Effectiveness    Technical
Measure          Sophistication
USEFREQ             0.367*
TPSESS              -.017
NREPS               0.080
USETYPE             -.443**
OCONT               -.533*
OFORM               0.063
PSOLV               -.040
INPUT               0.120
STABL               0.108

Kendall's tau. *p < .05, **p < .01, ***p < .001.


Concluding Remarks

This article attempts to make a case for a more thoughtful consideration of MIS effectiveness measures in research. Specifically, by examining the relationships between behavioral and perceived measures of MIS effectiveness in a modeling application context, we have emphasized the fact that the two are not always positively associated with each other, as is suggested by much of the MIS empirical literature. By understanding the relationships between the various specific dimensions of perceived effectiveness and commonly accepted behavioral measures, researchers and practitioners will be able to interpret data pertaining to such measures more accurately.

The second part of the article applied the above notion in the examination of the fit between the technical sophistication of a system and its effectiveness. Instead of measuring technical sophistication of a system in terms of system features alone, we suggest a scoring scheme based on user needs and the existence of certain commonly encountered features. The significant associations between sophistication measures thus obtained, and the various measures of system effectiveness, were both positive and negative. These results again point out the need to isolate the two types of measures.

The results shown in this article have strong implications for both researchers and practitioners. Researchers have to be extremely cautious about using surrogate measures of system effectiveness. While in certain classes of systems strong positive association may exist between the two types of measures, in other classes of systems this relationship may be nonexistent. Researchers will have to clearly specify what the exact nature of the dependent variables is. System use and system effectiveness may be indicating two entirely different phenomena.

The implications for practitioners are also quite strong. They have to realize that a lack of strong behavioral indications of system use may not be a negative outcome. In fact, as these results have shown, there may very well exist an underlying flurry of problem solving activities.

Typical system auditing approaches must take this result into account while making system resource allocation decisions.

References

[1] Anthony, R.N. Planning and Control Systems: A Framework for Analysis, Harvard University, Boston, Massachusetts, 1965.

[2] Bailey, J.E. and Pearson, S.W. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science, Volume 29, Number 5, May 1983, pp. 530-545.

[3] Culnan, M.J. "Chauffeured Versus End User Access to Commercial Databases: The Effects of Task and Individual Differences," MIS Quarterly, Volume 7, Number 1, March 1983, pp. 55-67.

[4] Ein-Dor, P. and Segev, E. "Organizational Context and the Success of Management Information Systems," Management Science, Volume 24, Number 10, June 1978, pp. 1064-1077.

[5] EPS, Inc. "Selecting and Evaluating a Decision Support System," promotional brochure, 1981.

[6] Epstein, B.J. and King, W.R. "An Experimental Study of the Value of Information," OMEGA, Volume 10, September 1982, pp. 249-258.

[7] Ginzberg, M.J. "Early Diagnosis of MIS Implementation Failure: Promising Results and Unanswered Questions," Management Science, Volume 27, Number 4, April 1981, pp. 459-478.

[8] Ginzberg, M.J. "Finding an Adequate Measure of OR/MS Effectiveness," Interfaces, Volume 8, Number 4, August 1978, pp. 59-62.

[9] Hamilton, S. and Chervany, N.L. "Evaluating Information System Effectiveness, Part I: Comparing Evaluation Approaches," MIS Quarterly, Volume 5, Number 3, September 1981, pp. 55-69.

[10] Hamilton, S. and Chervany, N.L. "Evaluating Information System Effectiveness, Part II: Comparing Evaluation Viewpoints," MIS Quarterly, Volume 5, Number 4, December 1981, pp. 79-86.

[11] Guthrie, A. "Attitudes of the User Managers Towards MISs," Management Informatics, Volume 3, Number 5, October 1974, pp. 221-232.


[12] Ives, B. and Olson, M.H. "User Involvement and MIS Success: A Review of Research," Management Science, Volume 30, Number 5, May 1984, pp. 586-603.

[13] Ives, B., Olson, M. and Baroudi, J. "The Measurement of User Information Satisfaction," Communications of the ACM, Volume 26, Number 10, October 1983, pp. 785-793.

[14] Jenkins, A.M. and Ricketts, J.A. "Development of an Instrument to Measure User Satisfaction with Management Information Systems," unpublished discussion paper, Indiana University, Bloomington, Indiana, 1979.

[15] King, J.L. and Schrems, E.L. "Cost-Benefit Analysis in IS Development and Operation," Computing Surveys, Volume 10, Number 1, March 1978, pp. 19-34.

[16] King, W.R. and Rodriguez, J.I. "Evaluating Management Information Systems," MIS Quarterly, Volume 2, Number 3, September 1978, pp. 43-51.

[17] Kleijnen, J.P.C. Computers and Profits: Quantifying Financial Benefits of Information, Addison-Wesley, Reading, Massachusetts, 1980.

[18] Lucas, H.C. The Implementation of Computer Based Models, National Association of Accountants, New York, New York, 1976.

[19] Lucas, H.C. "Performance and the Use of a Management Information System," Management Science, Volume 21, Number 8, April 1975, pp. 908-919.

[20] Maish, A.M. "A User's Behavior Toward His MIS," MIS Quarterly, Volume 3, Number 1, March 1979, pp. 39-52.

[21] Marschak, J. and Radner, R. Economic Theory of Teams, Yale University Press, New Haven, Connecticut, 1972.

[22] Nigam, R. and Hong, S. "Analytical Hierarchy Process Applied to the Evaluation of Financial Modeling Software," Proceedings of the DSS-81 Conference, Atlanta, Georgia, 1981.

[23] Robey, D. "User Attitudes and MIS Use," Academy of Management Journal, Volume 22, Number 3, September 1979, pp. 527-538.

[24] Schewe, C.D. "The Management Information Systems User: An Exploratory Behavioral Analysis," Academy of Management Journal, Volume 19, Number 3, September 1976, pp. 577-589.

[25] Simon, H.A. The New Science of Management Decision, Harper and Brothers, New York, New York, 1960.

[26] Srinivasan, A. and Kaiser, K.M. "The Role of Information Accessibility in Information Requirements Analysis," Systems, Objectives, Solutions, Volume 4, Number 4, November 1984, pp. 201-210.

[27] Swanson, E.B. "Management Information Systems: Appreciation and Involvement," Management Science, Volume 21, Number 2, October 1974, pp. 178-188.

[28] Welsch, G.M. "Successful Implementation of Decision Support Systems: Pre-Installation Factors, Service Characteristics, and the Role of the Information Transfer Specialist," unpublished Ph.D. dissertation, Northwestern University, Evanston, Illinois, 1980.

[29] Zmud, R.W. "Individual Differences and MIS Success: A Review of the Empirical Literature," Management Science, Volume 25, Number 10, October 1979, pp. 966-979.

About the Author

Ananth Srinivasan is Assistant Professor of Operations and Systems Management in the School of Business, Indiana University. He received his Ph.D. in MIS from the University of Pittsburgh in 1983. His research interests are in the areas of MIS performance, data modeling, and the management of MIS. Dr. Srinivasan has published articles in Academy of Management Journal, Systems, Objectives, Solutions, and Applications of Management Science. He is a member of SIM, ACM, AIDS, and TIMS.
