ABACUS: Agent-Based Cognitive Computational Processes for
Organizational Learning – Towards Business Process Intelligence
J. Leon Zhao+ and Huimin Zhao*
+ University of Arizona, Tucson, AZ 85721
* University of Wisconsin–Milwaukee, WI 53201
Abstract
Advanced organizational intelligence in complex business applications requires integration of
machine intelligence and human intelligence. However, little research on this issue has been
reported in the literature thus far. As the first step, we envision the concept of "cognitive
processes" that consist of sequences of tasks that encompass the elements of human and machine
learning, knowledge, and reasoning in a cognitive system. In this paper, we first identify the gap
between the current state of cognitive computing and the requirements for an agile learning
organization. Then, we propose a modeling method for Agent-Based Cognitive Computational
Process (ABACUS) and demonstrate how the ABACUS method can be used in a business
organization.
1 Introduction
Since the early 1980s, researchers in artificial intelligence have been making strides in developing practical
techniques that help humans make decisions based on factual data and empirically developed rules. The
well-known techniques in this area include expert systems (Hayes-Roth 1985, Hayes-Roth and Jacobstein
1994), data mining (Lavrac et al. 2004, Berry and Linoff 2004), and intelligent agents (Jennings and
Sycara 1998, Luck et al. 2004). The key research issues in this area include effective fusion of human
knowledge and data mining, incremental reinforcement learning and knowledge revision, multi-agent
collaboration, human-agent interaction, responsibility delegation, and trust.
Business process management is a relatively new discipline that originated in the early attempts at
automating office work in the 1960s. The formation of the Workflow Management Coalition (WfMC) in
1993 signified the importance of business process management, which has evolved into an important area
of research in recent years (Stohr and Zhao 2001, Zhao and Cheng 2004). The key research issues
include human and process interface, process modeling and optimization, process mining, business
process integration, and process services.
We believe that the state of research in artificial intelligence and that in business process management
have come to a junction where both research communities address some overlapping issues, leading to a
new research area called business process intelligence. Business process intelligence, though not well
defined yet, calls for the application of artificial intelligence techniques in the management of critical
business processes (www.sap.com, Melchert et al. 2004). Some early attempts at business process
intelligence can be found in the following examples.
• The U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) uses a
system called FinCEN AI System (FAIS) in order to establish and implement policies to prevent and
detect money laundering (Senator and Goldberg 2002).
• The National Association of Securities Dealers Regulation, Inc. uses a system called Advanced
Detection System (ADS) to detect potential instances of violations of the rules of participation in the
Nasdaq and related stock markets subject to NASD Regulation’s oversight and jurisdiction (Senator
and Goldberg 2002).
• Ithena, an e-business software vendor, has a product called e-Customer Intelligence, which analyzes
transactional, web click stream, and other customer data, creating alerts and automating actions based
on defined business rules (Mena 2001).
In these cases, collaboration and coordination activities involving both human and machine agents are
needed for successful conduct of business operations under changing environments. Guided by human
expertise, heuristics, and business goals, machine agents can constantly watch for critical business
opportunities and risks, alert humans, and even directly take delegated actions. Learning agents can
discover changes in business patterns and models by mining a variety of information sources and report
interesting findings to human decision makers for review and approval, leading to a systematic evolution
of the knowledge base of the organization. These intelligent agents may be supervised by human
supervisors or authorized to take actions on behalf of humans.
Based on our preliminary exploration, we find that business process intelligence has a great potential to
make a significant impact on business performance, but little academic research has been done in this
area. In particular, there is not yet a clear definition of business process intelligence, and there is no
agreement on the research issues in this area. We believe that it should be beneficial to define business
process intelligence, to lay some groundwork in this area, and to explore how to apply artificial
intelligence techniques to business process intelligence.
In this paper, we attempt to explore the research issues in business process intelligence by investigating
how to integrate human agents with machine agents via cognitive processes to support real-time
organizational learning. The goal of business process intelligence should be to provide an organization
with real-time intelligence and enable it to make decisions and take actions in a well-coordinated effort. Using the
metaphor of “sense-and-respond” (Lin et al. 2002), techniques from data mining, organizational memory,
intelligent agents and multi-agents can be used to sense the business environment for exceptional events,
and techniques from workflow management, business process automation, and electronic business can be
used to coordinate the response effort, which may involve tens or even hundreds of human agents.
Next, we review the literature in the areas of research relevant to cognitive computational processes
mentioned above. Section 3 contains a high-level architecture for the proposed ABACUS system and a
brief discussion on a number of related research issues. Section 4 gives the rationale of computation-
based organizational learning and discusses how this type of learning might be done. Section 5 outlines a
business case in the area of retailing, and Section 6 summarizes the paper.
2 Literature Review
Figure 1. Elements of a Rule-Based System (adapted from Hayes-Roth 1985).
The essence of rule-based expert systems is to use the rules and facts generated by humans to solve real
world problems as illustrated in Figure 1. This requires a logic-based inference process starting with a set
of inputs and a relevant set of rules and facts and ending with a set of outputs. However, the rule-based
system does not have a learning mechanism, and as a result, the system fails when the rules and facts are
out of date. Although many rule-based expert systems have been reported to solve well-defined
business problems, such as those at Campbell Soup, many have failed miserably due to the lack of an
automatic learning mechanism. For instance, KPMG (now BearingPoint)'s 8,000 auditing expert
system, developed between the late 1980s and early 1990s, never came to fruition because the constant
change of regulations and economic conditions made the rules and facts very unstable.
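The rule-and-fact inference cycle described above can be sketched as a minimal forward-chaining engine. The rules below are invented for illustration (loosely echoing the money-laundering example later in this paper) and do not come from any deployed system.

```python
# A minimal forward-chaining sketch of rule-based inference (Figure 1):
# facts and if-then rules are supplied by humans; the engine fires rules
# whose antecedents are satisfied until no new facts can be derived.

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (antecedents, consequent)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)   # fire the rule
                changed = True
    return facts

# Illustrative (hypothetical) rules, not from any real system:
rules = [
    ({"large_cash_deposit", "no_business_rationale"}, "suspicious_activity"),
    ({"suspicious_activity", "repeat_occurrence"}, "file_report"),
]
derived = forward_chain(
    {"large_cash_deposit", "no_business_rationale", "repeat_occurrence"}, rules)
# Both 'suspicious_activity' and 'file_report' are derived.
```

Note that the engine itself never changes its rules, which is exactly the missing learning mechanism the paragraph above points to.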
Figure 2. Sense and Respond Enterprise Agent Infrastructure (adapted from Lin et al. 2002).
Industrial research has led to the development of a “sense-and-respond” layer in a corporate information
infrastructure, which bridges the supply chain planning layer with the supply chain execution layer (Lin et
al. 2002). A sense-and-respond (SAR) system (Figure 2) uses data mining, planning, and optimization
technologies to detect and use the most appropriate management policies in a given business context. A
SAR system enables automatic sensing of complex internal and external business environmental changes, and
responds quickly with the best available policies in order to achieve the business objectives. The goal is to
build better and more tightly integrated business processes, more flexible capability networks, and a
business structure that is more responsive to internal and external environmental challenges.
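The sense-and-respond loop can be illustrated with a toy sketch, assuming a simple policy representation of (condition, action, priority) triples; the replenishment policies below are hypothetical.

```python
# A schematic sense-and-respond loop (after Lin et al. 2002): sense the
# environment for events, find the management policies applicable to each
# event, and respond with the most appropriate (highest-priority) one.

def sense_and_respond(events, policies):
    """policies: list of (condition_fn, action_name, priority)."""
    responses = []
    for event in events:                          # sense
        applicable = [(prio, action) for cond, action, prio in policies
                      if cond(event)]             # detect applicable policies
        if applicable:
            responses.append(max(applicable)[1])  # respond with best policy
    return responses

# Hypothetical supply-chain policies for illustration:
policies = [
    (lambda e: e["inventory"] < e["reorder_point"], "expedite_replenishment", 2),
    (lambda e: e["inventory"] < e["reorder_point"] * 2, "schedule_replenishment", 1),
]
out = sense_and_respond([{"inventory": 5, "reorder_point": 10}], policies)
# → ["expedite_replenishment"]
```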
The recent trends in business process integration and management include: (1) system design and
implementation should be fast and efficient, (2) a reusable and robust framework should be used to reduce
cost, and (3) the technology should be lightweight so that revision, development, and deployment of
business solutions can be done quickly and easily. The industry has adopted a model-driven approach
(Figure 3) that includes a set of model-driven business integration and management methods,
frameworks, supporting tools, and a runtime environment (Zhu et al. 2004). A business process
integration and management project based on the model-driven approach was completed successfully on
schedule and within budget, with up to 30% efficiency improvement compared with similar projects.
Figure 3. Model-driven Business Process Integration and Management (adapted from Zhu et al. 2004).
Agents represent both a design metaphor and a source of technologies (Luck et al. 2004). The agent-based
view of design provides a powerful way of structuring complex systems around autonomous,
communicative elements known as agents. More sophisticated agents may be able to reason based on
substantial knowledge and rules, learn from experiences and adapt to changing environments, physically
move in a network to explore further opportunities, and communicate, negotiate, and collaborate with
each other in multi-agent systems. Agent technologies have been applied in a wide range of business
domains (Jennings et al. 1998, Luck et al. 2004). When applied to business process management, agent
technologies allow a business process to be modeled and constructed as a group of autonomous,
collaborative agents representing various roles or departments in the organization (Jennings et al. 1996).
Agent technologies have also been applied in managing heterogeneous workflows (Huhns and Singh,
1998) and inter-organizational workflows (Merz et al. 1997).
Recently, agent technologies have increasingly been merged with other emerging technologies, such as web
services, semantic web and ontologies (Edgington et al. 2004, Kim 2002), and grid computing. Based
upon a set of XML-based open standards such as SOAP, WSDL, UDDI, BPEL4WS, and WS-CDL, web
services support a service-oriented view of autonomous software components (which can be agents)
collaborating to achieve valuable business objectives. Web services technology provides a platform-
neutral, language-neutral distributed computing model and greatly enhances interoperability of agents. An
ontology provides a shared conceptualization of an application domain and allows collaborative agents to
bridge semantic heterogeneity gaps and meaningfully exchange information and knowledge. Semantic
annotation languages developed in the semantic web community such as DAML (DARPA Agent Markup
Language) can be used to represent ontologies and mark up the metadata of agents in a machine-readable
and understandable manner. The Grid provides an ideal high-performance computing infrastructure for
multi-agent systems, within and across enterprises.
The essence of data mining, also known as KDD (Knowledge Discovery in Databases), is to discover
interesting, valuable business patterns and models by exploring and exploiting the data. Some typical data
mining problems include: (1) Predictive modeling (often characterized as supervised learning), including
classification and regression. The target (dependent) variable that needs to be predicted based on a set of
explanatory variables is categorical in classification and continuous in regression. (2) Cluster analysis
(often characterized as unsupervised learning), where the objective is to segment a population (e.g.,
potential customers) into several sub-populations such that the elements in the same sub-population have
similar characteristics. (3) Association rule mining. The typical example is market basket analysis, where
the task is to determine what products frequently go together in a shopping cart at a supermarket.
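The market basket task in item (3) can be sketched in a few lines: count co-occurring product pairs across baskets and keep those meeting a minimum support. This is a bare-bones illustration; practical systems use algorithms such as Apriori or FP-growth.

```python
# Minimal market basket analysis: the support of a product pair is the
# fraction of baskets in which both products appear together.
from itertools import combinations
from collections import Counter

def frequent_pairs(baskets, min_support):
    counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    n = len(baskets)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

baskets = [["bread", "milk"], ["bread", "milk", "eggs"], ["milk", "eggs"]]
result = frequent_pairs(baskets, min_support=0.5)
# ('bread', 'milk') and ('eggs', 'milk') each have support 2/3.
```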
Some current trends in data mining include: (1) Unifying human expertise and machine learning. Human
experts can often provide important domain knowledge, insights, and heuristics to guide the machine
learning process and evaluate learning results. Interactive learning environments that involve human
experts in the learning process are needed to make the best use of machine learning and human reasoning,
experience, and expertise. (2) Active, incremental mining. In many exceptional event (e.g., credit card
fraud) detection applications, corrective actions must be taken as soon as possible, thus demanding the
merging of data mining with agent technology. Such active mining may require incremental multi-agent
learning (through theory and knowledge revision) from various data sources and case bases maintained
within and outside the enterprise.
Modern organizations typically operate upon a variety of distributed, autonomous, and heterogeneous
data sources. Effective integration of these data sources, virtually or physically, is a prerequisite for
decision support applications such as OLAP and data mining. The current trend of data integration is
moving towards active, hybrid models, driven by the increasingly dynamic, open business environments.
Operational databases under control of the organization can be physically consolidated into data
warehouses to support efficient OLAP and data mining applications (Sen and Sinha 2005). There is a
growing need for active data warehouses, marketed as “@ctive data warehouses” by NCR Teradata
(www.teradata.com), to support real-time tactical decision making and event based actions, besides
strategic decision making supported by traditional static data warehouses. Such active data warehouses
constantly monitor changes in the underlying operational databases and take actions according to ECA
(Event-Condition-Action) rules and triggers. Wrappers and mediators (Wiederhold 1992), implemented as
mobile agents, can be deployed to dynamically search for additional valuable information sources over
the Internet.
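The ECA-rule behavior of an active data warehouse can be sketched as follows; the rule contents and the event schema are our own illustrative assumptions, not from any particular product.

```python
# Event-Condition-Action (ECA) rules, as used by active data warehouses:
# on each monitored event, rules matching the event type have their
# condition evaluated, and the action fires when the condition holds.

def process_event(event, eca_rules, actions_log):
    for event_type, condition, action in eca_rules:
        if event["type"] == event_type and condition(event):   # Event + Condition
            actions_log.append(action(event))                  # Action

# A hypothetical rule: alert a manager on unusually large sales.
rules = [
    ("sale",
     lambda e: e["amount"] > 10000,
     lambda e: f"alert_manager:{e['amount']}"),
]
log = []
process_event({"type": "sale", "amount": 25000}, rules, log)
# log now holds ["alert_manager:25000"]
```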
3 A Framework of Agent-Based Cognitive Computational
Processes (ABACUS)
Based on the literature review of recent advances in various aspects of cognitive computing, it is clear
that we have come to a stage where real-time analysis of organizational behavior is possible, and therefore
organizational instinct should be supported by agent-based cognitive computational processes:
• Specifically, active data warehouses store the information from different sources such as POS
transactions, database queries, emails, memos, and other documents produced on a daily basis in the
company.
• Data mining tools enable the discovery of various knowledge patterns, which might indicate specific
business behaviors or alarming events.
• Business process integration and management enables real time interactions between various human
and machine agents under a given process model.
• Multi-agent research has indicated the possibility of integrating agents into agencies and agent
communities to coordinate multiple agents toward accomplishing a common goal.
While specific cognitive processes may vary under different business environments, the general process
should contain such possible steps as exception identification, first responses, operational alerts,
management escalation, and knowledge update as shown in Figure 4. Note that both human agents and
intelligent agents can be involved with each of the steps, and human agents and intelligent agents may
interact directly for supervision and information. Each of the five general steps may involve a sub-
process, and the transition from one step to the next may be conditional as indicated with the question
marks. For instance, after an exception is identified, a first response may be needed, and after a first
response, operational workers might be alerted.
[Figure 4 depicts the five general steps (exception identification, first responses, operational alerts, management escalations, and knowledge update) connected by conditional transitions, with both intelligent agents and human agents involved at each step.]
Figure 4. A Generic Model of Cognitive Processes.
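The conditional transitions among the five general steps can be sketched as a simple chain in which each step decides whether the next one fires; the step logic here is a placeholder for whatever agents and conditions a concrete process would supply.

```python
# The five general steps of a cognitive process (Figure 4), modeled as a
# chain with conditional transitions (the question marks in the figure).

STEPS = ["exception_identification", "first_responses",
         "operational_alerts", "management_escalations", "knowledge_update"]

def run_cognitive_process(proceed):
    """proceed: dict mapping a step to whether the *next* step is needed."""
    executed = [STEPS[0]]                  # the process starts with an exception
    for step, nxt in zip(STEPS, STEPS[1:]):
        if not proceed.get(step, False):   # conditional transition ("?")
            break
        executed.append(nxt)
    return executed

# An exception that warrants a first response and operational alerts,
# but no management escalation:
trace = run_cognitive_process({"exception_identification": True,
                               "first_responses": True})
# → ["exception_identification", "first_responses", "operational_alerts"]
```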
We propose the following architecture for agent-based cognitive computational processes to support
organizational learning.
[Figure 5 depicts sensory agents 1 through n monitoring an active data warehouse fed by transactional databases, emails, memos, customer complaints, and other documents. The ABACUS engine, supported by the ABACUS database (organizational charts, cognitive process models, and exception profiles), coordinates the sensory agents with human agents 1 through n.]
Figure 5. The ABACUS Architecture.
As illustrated in Figure 5, the ABACUS system consists of human agents, sensory (or machine) agents,
the ABACUS engine, the ABACUS database, and the active data warehouse. The human agents are
people in the organization who take various roles in terms of organizational learning as defined by the
organizational charts in the ABACUS database. The sensory agents are software (or hardware) agents,
which are programmed using machine intelligence techniques to survey the active data warehouse for
exceptional events as defined by the exception profiles in the ABACUS database. The ABACUS engine
is the coordinating software that monitors and controls the cognitive processes according to the cognitive
process models found in the ABACUS database. The active data warehouse contains the various forms of
information and knowledge extracted from the company’s information systems, which are omitted from
the architecture for simplicity.
For the ABACUS system to function, there must be a specific organizational learning environment
defined by the organizational goals and the business processes. In other words, we advocate
goal-oriented learning, in which specific potential problems are known a priori from previous experiences.
We also assume that these potential problems are identifiable by examining the data, information, and
knowledge stored in the active data warehouse using known data mining, statistical, and heuristic
computational techniques. The objective of the ABACUS system is to integrate the known pattern
detection techniques, the business know-how, and the business process management methodology in
order to improve the sense-and-respond capability of real-world organizations.
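As a sketch of the surveillance just described, a sensory agent might match warehouse records against exception profiles stored in the ABACUS database. The profile fields used here (name, metric, threshold) are our own assumption, since the paper does not fix a profile schema.

```python
# A sensory agent surveying the active data warehouse: each exception
# profile names a metric and a threshold, and records breaching a profile
# become exceptions to be routed through a cognitive process.

def survey(records, exception_profiles):
    exceptions = []
    for rec in records:
        for profile in exception_profiles:
            value = rec.get(profile["metric"])
            if value is not None and value > profile["threshold"]:
                exceptions.append({"profile": profile["name"], "record": rec})
    return exceptions

# A hypothetical logistics profile:
profiles = [{"name": "stockout_risk",
             "metric": "days_of_supply_deficit",
             "threshold": 3}]
found = survey([{"sku": "A1", "days_of_supply_deficit": 5}], profiles)
# one exception matching the 'stockout_risk' profile
```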
Examples of cognitive process models are numerous depending on specific business situations. For
instance, under logistics management, the following cognitive processes are useful:
• Error reporting and feedback process: During the planning, execution, monitoring, and focused
planning phases throughout the logistics pipeline, inaccurate data might be used that hinder the
overall logistics effectiveness. To enable effective learning as an organization, an error reporting and
feedback process needs to be in place as a regular workflow so that any agent can trigger an error
report that will be routed through the command and control structure. Depending on the seriousness
of the error, different routing processes may occur. In the event of inaction from certain operators of
the workflow, the routing will be elevated to the next level of command and control structure.
Furthermore, a knowledge acquisition process will also be triggered to review the incident for
possible creation of new business rules.
• Collaborative knowledge discovery process for logistics: As in any unique domain, learning must
occur with respect to specific datasets. In this project, we will integrate advanced knowledge
discovery and data mining techniques with organizational learning techniques that enable supervised
learning among humans and machines. The key innovation thrust of collaborative knowledge
discovery process is the focus on rule-based learning from existing knowledge stored in the
knowledge base and the human minds.
• Critical knowledge reminding process: While research has been done on case-based learning and
planning, we believe that new techniques are needed that will provide knowledge to critical operators,
both humans and machines, proactively. After all, what good is knowledge if the critical
operators are not aware of it and cannot apply it in the field? To achieve this, a critical knowledge
distribution process needs to be implemented, incorporating various techniques such as case-based
reasoning (Madhusudan, Zhao, and Marshall 2004) and organizational knowledge distribution
(Sarnikar, Zhao, and Kumar 2004).
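The escalation behavior of the error reporting and feedback process above can be sketched as routing a report up the command-and-control hierarchy until some handler acts; the handler names and the fallback to knowledge acquisition are illustrative.

```python
# Escalation routing for an error report: if the current handler does not
# act, the report is elevated to the next level of the command-and-control
# structure; at the top, a knowledge acquisition review is triggered.

def route_report(report, hierarchy, acted):
    """hierarchy: handlers from lowest to highest level;
    acted(handler, report) -> True if the handler resolves it."""
    for handler in hierarchy:
        if acted(handler, report):
            return handler           # resolved at this level
    return "knowledge_acquisition"   # top reached; review for new rules

# Hypothetical hierarchy; the supervisor happens to resolve this report.
hierarchy = ["operator", "supervisor", "logistics_manager"]
resolved_by = route_report({"error": "bad_shipment_data"}, hierarchy,
                           acted=lambda h, r: h == "supervisor")
# → "supervisor"
```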
Note that these processes are considered cognitive processes because they are not the normal business
processes needed to deliver services and products, but ones that are designed to improve organizational
learning. They are also different from the cognitive processes that support reasoning and knowledge
management found inside an intelligent agent.
Our goal is to develop a test bed for such a system and apply it to a realistic environment. To accomplish
this goal, we must resolve the following theoretical and technological issues:
• What forms of data, information, and knowledge can be subject to surveillance by sensory agents in
real time?
• What existing intelligent agent and data mining techniques could be used in developing sensory
agents?
• What cognitive process models can we build in order to coordinate both human and machine agents
effectively to support real-time organizational learning?
• Do existing database and data warehouse technologies contain the necessary functionality and
efficiency needed to support real-time cognitive processes?
• Can a conventional workflow engine be used to support the necessary functions for the ABACUS
engine?
4 The Rationale of Cognitive Organizational Learning
In order to shed some light on how the ABACUS system and cognitive processes can be used to support
organizational learning, we show how ABACUS might be applied to a case of organizational learning.
Before doing so, we first explain the concept of cybernetic management.
Wiener (1948) proposed the idea of cybernetics as the science of feedback and control in biological and
mechanical systems, Ashby (1956) applied this idea to social systems, and Beer (1959) further extended
the idea into the management arena by defining management as the science and profession of control.
Jackson (1991) identifies three underpinning principles of cybernetic management, including 1) the ‘black
box’ technique, which relies upon input manipulation and output classification as it is not possible for
managers to understand all the possible interactions within the systems under their control; 2) the negative
feedback technique, characterized by closed-loop structures that operate by continuously monitoring the
output of a system and tuning it towards the desired end by modifying the input accordingly; 3) the
variety engineering technique, in which managers increase their variety by creating a managerial
hierarchy, delegating responsibilities to experts and consultants, and using information systems.
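The negative feedback technique can be illustrated with a toy closed-loop sketch, essentially a proportional controller that nudges the input toward the desired output; the plant model is invented for illustration.

```python
# Negative feedback in cybernetic management: continuously monitor the
# system output and adjust the input in proportion to the gap between the
# output and the desired end.

def closed_loop(target, system, steps=20, gain=0.5):
    control_input, output = 0.0, 0.0
    for _ in range(steps):
        output = system(control_input)             # monitor the output
        control_input += gain * (target - output)  # negative feedback
    return output

# A toy system whose output is half its input; the loop steers it toward
# the target of 10 without ever "opening the black box".
final = closed_loop(target=10.0, system=lambda u: 0.5 * u)
# final converges close to 10.0
```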
Beer (1981) constructed the Viable System Model (VSM), which was built for managers from the
cybernetic management principles. According to Beer, a viable system is one that is able to respond to
any kind of environmental change, no matter whether it could have been foreseen or not. Beer’s
organization is made up of interrelated elements at five levels or systems, that is,
1) Implementation (directly concerned with the task of the organization),
2) Coordination (activities and processes),
3) Control (day to day running of the organization),
4) Development (intelligence gathering and reporting), and
5) Policy and Strategy (policy formulation and strategy alignment)
The VSM can be regarded as the theoretical foundation for the ABACUS system. VSM requires an
organization to respond to changes in the organization and the cybernetic management principles suggest
monitoring the system output and making necessary adjustments to the organizational elements. The case
of a small manufacturing company (APCO) discussed next illustrates how the VSM and cybernetic
management principles can be applied in a real world organization (Warren 2003).
The small manufacturing company (APCO) was undergoing a full appraisal of its management and
information systems to support company growth based on improved automation techniques. The case
illustrates the complexity of supporting growth in small firms through systems integration, argues for
'solutions' grounded in organizational learning, and shows the value of the VSM for supporting an
organizational learning phase as an essential step toward the adoption of ERP/MRP.
Figure 6. A Design of Company Information Flow (adapted from Warren 2003).
While the original case was used to illustrate how the VSM can be used to improve the planning and
control of information system design and development, namely an ERP system, we use this case to
illustrate how cognitive computational processes can be used to support real-time organizational learning.
The company information flow shown in Figure 6 represents a partial organizational environment, and
Figure 7 contains the managerial concerns, some of which can lead to exceptions detectable by examining
the information contained in the active data warehouse. Note that the notions of levels 1, 2, 3, 4, and 5
correspond to the five VSM systems (or levels) aforementioned.
Figure 7. System Issues and Managerial Concerns (adapted from Warren 2003).
Under the ABACUS system, cognitive processes can be developed to support the company information
flow shown in Figure 6 and to support the planning, application, and renewal of knowledge needed to
manage organizational learning in the various levels (also known as systems under the notion of VSM)
shown in Figure 7. For instance, knowledge at the development level must be passed to the strategy level
in order to maintain the information flow between people who develop information technology and those
who set up business strategies.
5 The Case of Retailing
Consider a retailer conducting e-business. The key to success is personalization, i.e., making the right
offer at the right time to the right person through the right channel. Without personalization, a company is
doomed to fail in such a competitive industry, as revealed by the recent burst of the .COM bubble.
To support personalization, a variety of data sources need to be explored and analyzed. These include
transactional data in Web packets and server log files, visitor information collected via cookies, web bugs,
and online forms, operational databases, and offline demographics about customers and neighborhoods.
These data sources need to be continuously consolidated into active data warehouses to facilitate real-time
decision-making and event driven actions.
Actions must be taken promptly via appropriate channels (e.g., dynamic web pages, banner ads, targeted
e-mails, and direct phone calls). When a good prospect visits the web site, targeted recommendations
and offers must be instantly given, or otherwise the visitor may go away in seconds. When a potential
credit card fraud is detected, corrective actions (e.g., notifying the card holder, rejecting the transaction,
etc.) must be taken as soon as possible to prevent loss. When a valuable customer is about to jump ship
and turn to a competitor (i.e., churning), complementary retention offers must be made just in time to
retain the customer.
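The event-to-channel pairings above can be sketched as a simple dispatch table; the event and channel names follow the examples in the text, but the mapping itself is illustrative.

```python
# Dispatching prompt actions through appropriate channels: each detected
# customer event maps to an action and a delivery channel.

ACTIONS = {
    "good_prospect_visit": ("targeted_offer", "dynamic_web_page"),
    "potential_card_fraud": ("reject_and_notify", "direct_phone_call"),
    "churn_risk": ("retention_offer", "targeted_email"),
}

def respond(event_type):
    action, channel = ACTIONS.get(event_type, ("log_only", "none"))
    return f"{action} via {channel}"

print(respond("churn_risk"))   # retention_offer via targeted_email
```

In a full ABACUS deployment, machine agents would populate and invoke such a table, while human agents review and revise the mappings as business patterns change.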
Given the huge amount of data that needs to be analyzed and the stringent requirements on reaction time,
artificial intelligence technologies undoubtedly need to be utilized to make recommendations, create
alerts, and even automate some actions. Many data mining techniques can be used to extract useful
business models and patterns from the active data warehouses and other related information sources.
Cluster analysis and case-based reasoning techniques can be used to segment and profile customers.
Classification techniques can be used to predict which customers are likely to respond to a particular
offer, which customers are about to churn, which marketing channels may be most effective in a
particular region, and which credit card transactions may be fraudulent. Survival analysis techniques can be
used to predict how long a customer will stay. Regression techniques can be used to predict customer
lifetime values. Association rule mining techniques can be used to discover groups of products frequently
sold together or purchased by the same customer over time, enabling cross-selling, up-selling, and making
targeted consumer product recommendations. Web usage mining such as sequence analysis based on click
stream data can unveil interesting access patterns useful for improving web site design.
Once new knowledge is derived about customers and retailing via data mining, the knowledge must be
put into use at the various levels of cybernetic management for the retailing company. That is, the
machine agents of data mining should be integrated with human agents who manage the retailing
company by passing the new knowledge about customers to company managers in order to update their
knowledge and/or their business management practices. Feedback loops may be necessary in order for
human agents and machine agents to work together towards better customer relationships.
The e-business environment is extremely dynamic. Real-time sense-and-respond is almost mandatory.
Business models and patterns also change frequently. An effective marketing channel today—be it banner
ads or e-mail advertising—may soon be abused by the industry and overwhelm potential customers. In
such a dynamic, open environment, close collaboration among human and machine agents organized
within complex business processes is critical for achieving desired profitability and surviving
environmental changes. Human background knowledge and heuristics can be used to guide sensory and
learning agents and evaluate learning results. Machine agents can be apprenticed to human teachers,
observing how humans handle business cases (e.g., potential credit card frauds, churns, etc.), learning
from such experiences, and eventually being delegated direct actions when they can perform these tasks
with sufficient accuracy. When significant changes in business models and patterns (e.g., the emergence
or disappearance of highly profitable customer segments or neighborhoods, the boom or deterioration of
particular marketing channels, the entry and exit of competitors, etc.) are detected by machine agents,
strategic decisions need to be made by human decision makers, enabling the evolution of the
organizational knowledge base.
6 Concluding Remarks
Given the recent advances in information technology including active data warehousing, data mining,
intelligent agents, and business process integration, the time has come to develop the next generation of
computational organizational learning, which we refer to as cognitive computational processes for
organizational learning.
In this paper, we outlined our preliminary results on cognitive computational processes for organizational
learning, covering the architecture, the theoretical rationale, and a business case in this research domain.
However, there are many research issues that remain to be investigated such as cognitive process
modeling, technology feasibility analysis, system building and verification, and application deployment
and validation in real world settings.
We are currently working towards a more in-depth study of the problems and issues that have surfaced in this
paper. The full paper will contain a more thorough case analysis and a more detailed system design.
Furthermore, we will also attempt to develop a proof-of-concept prototype to demonstrate the feasibility
of the proposed system.
References
Ashby, W. R. An Introduction to Cybernetics. Methuen: London, 1956.
Beer, S. Cybernetics and Management. EUP: Oxford, 1959.
Beer, S. The Brain of the Firm. Wiley: Chichester, 1981.
Berry, Michael J. A. and Gordon S. Linoff. Data Mining Techniques for Marketing, Sales, and Customer Relationship Management (2nd Edition). Wiley, 2004.
Edgington, Theresa, Beomjin Choi, Katherine Henson, T.S. Raghu, and Ajay Vinze. Adopting ontology to facilitate knowledge sharing, Communications of the ACM, 2004, 47(11): 85–90.
Hayes-Roth, Frederick, and Neil Jacobstein. The state of knowledge-based systems, Communications of the ACM, 1994, 37(3): 27–39.
Hayes-Roth, Frederick. Rule-based systems, Communications of the ACM, September 1985, 28(9):921–932.
Huhns, M. N. and M. P. Singh. Managing heterogeneous transaction workflows with cooperating agents, in N. R. Jennings and M. Wooldridge (eds.), Agent Technology: Foundations, Applications and Markets. Springer-Verlag: Berlin, Germany, 1998, pp. 219–239.
Jackson, M. C. Systems Methodology for the Management Sciences, Plenum: New York, 1991.
Jennings, N. R., P. Faratin, M. J. Johnson, T. J. Norman, P. O’Brien, and M. E. Wiegand. Agent-based business process management, International Journal of Cooperative Information Systems, 1996, 5(2–3):105–130.
Jennings, Nicholas R., Katia Sycara, and Michael Wooldridge. A roadmap of agent research and development, Autonomous Agents and Multi-Agent Systems, 1998, 1(1): 7–38.
Kim, Henry. Predicting how ontologies for the semantic web will evolve, Communications of the ACM, 2002, 45(2): 48–54.
Lavrac, Nada, Hiroshi Motoda, Tom Fawcett, Rob Holte, Pat Langley, and Pieter Adriaans. Introduction: Lessons learned from data mining applications and collaborative problem solving, Machine Learning, 2004, 57(1–2): 13–34.
Lin, Grace, Steve Buckley, Heng Cao, Nathan Caswell, Markus Ettl, Shubir Kapoor, Lisa Koenig, Kaan Katircioglu, Anil Nigam, Bala Ramachandran, and Ko-Yang Wang. Value chain optimization - the sense-and-respond enterprise, OR/MS Today, April 2002.
Luck, Michael, Peter McBurney, and Chris Preist. A manifesto for agent technology: Towards next generation computing, Autonomous Agents and Multi-Agent Systems, 2004, 9(3): 203–252.
Madhusudan, Therani, J. Leon Zhao, and Byron Marshall. A case-based reasoning framework for workflow model management, Data and Knowledge Engineering: Special Issue on Business Process Management, 2004, 50(1): 87–115.
Melchert, Florian, Robert Winter, and Mario Klesse. Aligning process automation and business intelligence to support corporate performance management, in Proceedings of the 2004 Americas Conference on Information Systems (AMCIS), New York City, 2004.
Merz, M., B. Lieberman, and W. Lamersdorf. Using mobile agents to support inter-organizational workflow management, Applied Artificial Intelligence, 1997, 11(6): 551–572.
Mena, Jesus. WebMining for Profit: E-Business Optimization, Digital Press, 2001.
Sarnikar, Surendra, J. Leon Zhao, and Akhil Kumar. Organizational knowledge distribution: An experimental evaluation, in Proceedings of the AIS Americas Conference on Information Systems, New York, August 5–8, 2004.
Sen, Arun and Atish P. Sinha. A comparison of data warehousing methodologies, Communications of the ACM, forthcoming in March 2005.
Senator, Ted E. and Henry G. Goldberg. Break detection systems, in Willi Klösgen and Jan M. Zytkow (eds.), Handbook of Data Mining and Knowledge Discovery, Oxford University Press, New York, NY, 2002.
Stohr, Edward A. and J. Leon Zhao, Workflow automation: overview and research issues, Information Systems Frontiers: Special Issue on Workflow Automation, September 2001, 3(3): 281–296.
Warren, Lorraine. The crisis of control: A progressive approach towards information systems integration for small companies, Information Systems and e-Business Management, 2003, 1(4): 353–371.
Wiener, N. Cybernetics, Wiley: New York, 1948.
Wiederhold, G. Mediators in the architecture of future information systems, Computer, 1992, 25(3): 38–49.
Zhao, J. Leon and Hsing Kenneth Cheng. (Editorial) Web services and process management: A union of convenience or a new area of research? Decision Support Systems (forthcoming; accepted 2004).
Zhu, J., Z. Tian, T. Li, W. Sun, S. Ye, W. Ding, C. C. Wang, G. Wu, L. Weng, S. Huang, B. Liu, and D. Chou. Model-driven business process integration and management: A case study with the Bank SinoPac regional service platform, IBM Journal of Research and Development, September/November 2004, 48(5/6): 649–669.