
CHAPTER 16
H. Bruce Bongiorni

SIMULATION-BASED DESIGN

12.1 NOMENCLATURE

-- to be developed --

12.2 WHAT IS SIMULATION-BASED DESIGN?

Simulation-based Design (SBD) is the program name given to a DARPA project to develop an integrated design environment (see Reference 1). A primary goal of this project is to make it practical to create "virtual prototypes" and test them in "synthetic environments". By doing so, the designer can make design decisions and tradeoffs while getting instantaneous feedback on the consequences of those choices.

12.2.1 Virtual Prototypes: the Smart Product Model

The first notion of a virtual prototype was the development of digital mockups in lieu of physical mockups. These were and are visualizations of the product geometry based on features, dimensions, and spatial relations taken from the CAD representation. These representations are typically sparse in that they carry only a subset of the information contained in or generated by a CAD system. This is a result of limitations in the speed of rendering the images and constraints in handling the large amounts of data required.

Another type of virtual prototype is one in which the "behaviors" of the product are represented. An example of a behavior is the structural response of a physical object to a given load. As there are many behaviors considered by a designer, there are as many different models in use. For a ship, the structural response of the hull is represented by a finite element model. But there can also be a computational fluid dynamics model, a radar signature model, a seakeeping model, a model of the cargo handling systems, a fluid system model, an electrical load model, and so on (see Reference 2).

The virtual prototype is then defined as:

The logical representation of the digital models and data which describe the behaviors of the product in response to environmental inputs.

There is a lot of discussion surrounding the product model, the 3D product model, the product information model, and the "smart" product model. The 3D product model is essentially the three-dimensional geometry of the product, whereas the product information model usually refers to the set of non-geometric data related to the product. People use "product model" to refer to either the 3D product model, the product information model, or both. For the purpose of our discussion, the product model refers to the superset of data composed by the union of the 3D product model and the product information model.

During the DARPA SBD project the notion of the "smart" product model emerged. This was the product model expanded to include product behaviors. So for the purpose of this discussion, the "smart" product model is a virtual prototype.
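To make the distinction concrete, here is a minimal sketch, in Python, of how a "smart" product model might be organized: geometry, non-geometric product information, and a registry of named behavior models that can be evaluated against environmental inputs. The class and method names, and the toy structural behavior, are invented for illustration and are not part of any SBD software described here.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

# Hypothetical sketch: a "smart" product model bundles geometry (the 3D
# product model), non-geometric data (the product information model), and
# behavior models that respond to environmental inputs.

@dataclass
class SmartProductModel:
    geometry: Any                       # e.g., a handle to a solid model
    information: Dict[str, Any] = field(default_factory=dict)
    behaviors: Dict[str, Callable[[Any], Any]] = field(default_factory=dict)

    def add_behavior(self, name: str, model: Callable[[Any], Any]) -> None:
        """Register a behavior model, e.g. a structural or seakeeping model."""
        self.behaviors[name] = model

    def respond(self, behavior: str, environment_input: Any) -> Any:
        """Evaluate one behavior of the virtual prototype for a given input."""
        return self.behaviors[behavior](environment_input)

# Example: a toy "structural response" behavior (deflection = load / stiffness).
ship = SmartProductModel(geometry="hull_solid_model")
ship.information["material"] = "steel"
ship.add_behavior("structural", lambda load_kN: load_kN / 1000.0)  # purely illustrative
print(ship.respond("structural", 250.0))  # -> 0.25
```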

12.2.2 Synthetic Environments: Simulations

Models are the sets of instructions, constraints, relationships, and data that describe the way that a product will respond to the inputs from the environment. But a model is a static entity, in that there is no time element in the model.

A simulation, on the other hand, is the instantiation of the model in the time domain. That is, when a set of inputs is given to the model, the model responds to those inputs. That set of inputs represents a single instant in time; multiple input sets are discrete steps in the time domain.

Each version of the product model has a corresponding set of inputs. A synthetic environment is then the superset of all of the input sets for the virtual prototype (or smart product model). The synthetic environment can be defined as:

The logical representation of the input sets and data which elicit the behaviors of the product over time.
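The difference between a static model and a simulation that instantiates it in the time domain can be sketched in a few lines. The spring "behavior" and the explicit list of (time, input) pairs below are illustrative assumptions, not taken from the chapter.

```python
# A model is static: given an input, it returns a response (no notion of time).
def spring_model(displacement_m: float, stiffness_N_per_m: float = 1000.0) -> float:
    """Restoring force of a linear spring for one instantaneous input."""
    return -stiffness_N_per_m * displacement_m

# A simulation instantiates the model in the time domain: a sequence of
# input sets, one per discrete time step, drawn from the synthetic environment.
def simulate(inputs_over_time, model):
    responses = []
    for t, displacement in inputs_over_time:      # each entry is one instant in time
        responses.append((t, model(displacement)))
    return responses

# The synthetic environment here is just the list of (time, input) pairs.
synthetic_environment = [(0.0, 0.00), (0.1, 0.01), (0.2, 0.02), (0.3, 0.015)]
print(simulate(synthetic_environment, spring_model))
```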


Summary

These are key concepts used in the following discussions, and they seem deceptively simple. As I have said previously, the virtual prototype is the set of all models that describe the behaviors of the product, and the synthetic environment is the set of all inputs for simulation.

So for simulation-based design all we have to do is build models for all of the behaviors we want to use as a basis for the design, and test those models simultaneously by running all of the simulations at the same time. Right?

12.3 WHY THE INTEREST IN SIMULATION-BASED DESIGN (SBD)?

12.3.1 Introduction

The race is to the swift

The new economic realities are shifting the focus from low-cost, low-quality commodity products to best-value custom products. This makes the ability to conceive, design, and produce a new, quality product quickly an important business survival strategy.

We are starting this class by looking at the business drivers for SBD-related technologies. The interest in integrated design technologies is driven primarily by the need to create new products. There are two fundamental business strategy scenarios: being late to a market with a new product, or being first to the market with a new product.

Scenario 1: We need one of those too!

This is the situation where a business's competitor is investing in the development of a new product and may be the first to have it in the market. The first one there gains market share, economies of scale, etc. So typically management will have some competitor intelligence and start up its own product development program.

Being late to develop a product is not always a bad thing. In some cases, the business that blazes the trail spends a large part of its time and resources going down dead ends. There are also many examples of cases where the innovator ultimately did not capitalize on its ability to create new products, such as Xerox and the personal computer.

So imagine an example where Company A is investing $5,000,000 of current-year money over 5 years to create a new product. Assume that the market for that product is expected to be about $20,000,000 per year. Other assumptions are that the cost of capital is 10% and that you can expect to have a 50% share of the market when your product launches.

Now imagine Company B has a 1-year lag behind Company A and will spend the same effort to develop a competing product, but must do so on the same deadline as its competitor. That is, Company B must develop its version of the product in 4 years.

But it is not good enough just to have your product in the market. Company B wants to have a better product in order to gain market share from its competitor. Being late to start development actually helps this situation and can be a powerful strategy. By being late, or by delaying the decision to develop a product, Company B can avoid some of the costs from dead ends and/or take advantage of new technologies or changes in the market forecast. So in this example, we'll assume that Company B actually gets 51% of the market because its product better meets the needs of the market.


The cash flows and the cost comparison are shown in the table below:

Scenario 1: late start, 1% better market share for Company B

Company A: Market share 49%; Cost of capital 10%; Expense $(5,000,000); Revenue $39,522,634; Profit $34,522,634
Company B: Market share 51%; Cost of capital 10%; Expense $(5,000,000); Revenue $41,135,803; Profit $36,135,803
Difference between B and A: $1,613,169

Scenario 2: We need to be first!

The second generalized case is where a company wants to be the first in a marketplace with a new product. In this situation, the sooner the business can develop the product, the sooner the benefits of the revenue will accrue. Looking at the same conditions as in the first scenario, the results for the second scenario are summarized below.

We assume a couple of things in Scenario 2. First, we assume that Company B spends the same amount of resources on product development over a shorter period of time than its competitor. Another assumption is that there is no difference in the quality of the products and that the market is equally shared. Consequently, the benefit to Company B is advancing the cash flow.

The results are shown in the table below:

Scenario 2: early finish, no change in market share for Company B

Company A: Market share 50%; Cost of capital 10%; Expense $(5,000,000); Revenue $40,329,219; Profit $35,329,219
Company B: Market share 50%; Cost of capital 10%; Expense $(5,000,000); Revenue $44,362,141; Profit $39,362,141
Difference between B and A: $4,032,922
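The revenue figures in both tables are consistent with a spreadsheet that discounts eleven years of post-launch revenue at the 10% cost of capital, with revenue beginning the year after launch; that horizon is an assumption adopted here to reproduce the numbers, since the original spreadsheet is not shown. The sketch below recomputes both scenarios under that assumption.

```python
# Hedged sketch: reproduces the scenario tables assuming revenue is received
# for 11 years starting the year after product launch, discounted at 10%.
# The 11-year horizon is an assumption; the chapter's spreadsheet is not shown.

MARKET = 20_000_000      # total market per year
RATE = 0.10              # cost of capital
EXPENSE = 5_000_000      # development spending (treated as a present-value lump)
YEARS_OF_REVENUE = 11    # assumed revenue horizon after launch

def revenue_pv(share: float, launch_year: int) -> float:
    """Present value of the company's share of the market, received annually
    for YEARS_OF_REVENUE years beginning the year after launch."""
    return sum(share * MARKET / (1 + RATE) ** t
               for t in range(launch_year + 1, launch_year + 1 + YEARS_OF_REVENUE))

def profit(share: float, launch_year: int) -> float:
    return revenue_pv(share, launch_year) - EXPENSE

# Scenario 1: both launch after year 5; B is late to start but gains 1% share.
a1, b1 = profit(0.49, 5), profit(0.51, 5)
print(f"Scenario 1 difference: {b1 - a1:,.0f}")   # ~1,613,169

# Scenario 2: equal 50% shares; B finishes a year earlier (launch after year 4).
a2, b2 = profit(0.50, 5), profit(0.50, 4)
print(f"Scenario 2 difference: {b2 - a2:,.0f}")   # ~4,032,922
```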

Conclusions from the examples

If you compare the examples above, what should be clear is that in neither case does Company B reduce its engineering cost compared to Company A. The benefit accrues to Company B from its ability to gain market share and revenue, not from reducing the product development expenditure. That's not to say that a business should not be concerned with product development costs. What it does say is that, all things being equal, a business gains the most benefit from its ability to develop products that allow it to gain market share and, consequently, revenue as quickly as possible.

Businesses that think this way are excited about technologies like SBD. SBD technologies can be expensive, and SBD processes can cost more than traditional design processes. But because so many more alternatives can be considered and tested in a given period of time, SBD changes the product development process so that higher quality products can be developed in a shorter time.

The case studies below are examples of the application of SBD processes and technologies.

Case Study: Chrysler

In the auto industry, the key to survival, let alone dominance, is the effectiveness of an auto maker's design process. In the 1970s and 80s, production processes and quality were improved. In the 90s, auto makers have been rethinking their design processes. One of the best examples of this is Chrysler.

In 1988, Chrysler recognized that it needed to replace its K-car line with a new model. Chrysler management looked at their competition, in particular Honda and Toyota. At that time one study conducted by the Harvard Business School estimated that the average Japanese auto company spent 1.7 million engineering hours in 4 years to launch a new model. In contrast, American and European auto makers spent 3 million engineering hours and 5 years to accomplish the same project.

Chrysler has had more than one near-death experience, and in 1988 it was facing another possible threat to its viability. In response, Chrysler management committed $1.6 billion to developing a new product line. They also committed to revamping the way they did design. By doing so, Chrysler launched its new line in 3.5 years. This line of cars is now selling very well, and Chrysler is not only surviving but thriving.

What did Chrysler do? Among the things that they changed were:

Platform Teams - Chrysler organized the design process around the product. It formed cross-functional groups of engineers who functioned as an autonomous business unit. The teams not only included the designers, but also people from materials and manufacturing.

Digital mockups - Chrysler used the digital geometry from its CAD systems to review and evaluate styling decisions and manufacturing processes.

Centralized CAD database - This shared information allowed everyone working on the design, including the major suppliers, to have the same reference information.

Variation simulation - Chrysler engineers simulated the stack-up of tolerances in order to determine the fit of body panels. This simulation also allowed tolerances to be established that account for spring-back during manufacturing. (A minimal sketch of this kind of stack-up simulation follows this list.)

Structure modeling and simulation - Cab-forward design made structural analysis critical to the development of Chrysler's new car designs. In addition, simulation of performance reduced the need for prototype vehicles and physical testing.
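The chapter does not describe Chrysler's variation simulation tools, so the following is only a generic sketch of the underlying idea: a Monte Carlo stack-up of part tolerances to estimate how often an assembly gap falls outside its specification. All dimensions and tolerances are invented for illustration.

```python
import random

# Generic Monte Carlo tolerance stack-up sketch (illustrative values only).
# Three stamped parts sit in a 150.0 mm opening; the gap is what remains.

NOMINALS = [50.0, 60.0, 39.5]        # mm, nominal part widths (hypothetical)
SIGMAS   = [0.10, 0.12, 0.08]        # mm, standard deviation of each width
OPENING  = 150.0                     # mm
GAP_SPEC = (0.2, 0.8)                # mm, acceptable gap range

def simulate_gap() -> float:
    widths = [random.gauss(n, s) for n, s in zip(NOMINALS, SIGMAS)]
    return OPENING - sum(widths)     # stack-up: gap left after all parts are placed

def out_of_spec_rate(trials: int = 100_000) -> float:
    lo, hi = GAP_SPEC
    bad = sum(1 for _ in range(trials) if not lo <= simulate_gap() <= hi)
    return bad / trials

print(f"Estimated fraction of assemblies out of spec: {out_of_spec_rate():.3%}")
```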

Case Study: Boeing

The Boeing 777 is considered a watershed in the use of simulation to reduce construction costs and concept-to-delivery time. This ability allowed Boeing to delay committing to the design of its new aircraft until its competitors had already done so. By shortening the design time, Boeing was able to deliver a product that better met its customers' needs and reach the marketplace at about the same time as Airbus Industrie.

In 1986, Airbus and McDonnell Douglas were beginning to develop new planes to meet the market for medium range, wide-body airliners. Boeing was caught with nothing in their product line to match the planes their competitors were developing. The McDonnell Douglas MD-11 and its variation, the MD-12, were scheduled for delivery in 1990. The Airbus A330 and its A340 variation were expected to be delivered in 1993.

Boeing's product development cycle was at least 6 years; McDonnell Douglas's was about 4 years for the MD-11; and Airbus was on a 7-year cycle. At the time Boeing finally committed to developing the 777, it was 5 years behind its competition.

Boeing did a number of things to ensure that the 777 would gain market share over its competitors. Among them were:

Early involvement of the customers - Before Boeing committed to design features, the first thing they did was talk to their customers. They spent over a year meeting with the 8 major airlines and discussing the things that they needed. What Boeing learned became the features to be incorporated into the 777 and the constraints for the design.

Digital mockups - Boeing typically built 3 sets of full-scale mockups of a new design. The first checked the basic geometry and arrangements; the second incorporated the changes from the first mockup along with the electrical wiring and piping systems; the third incorporated the discoveries of the second. Instead of the physical mockups, Boeing decided to use 3-dimensional digital models to coordinate the design of the aircraft systems.

Collaborative design - Boeing integrated the designers and the builders into a design-build team that forced communication and negotiation between disciplines and organizations that, prior to the 777, had never had direct contact.

Case Study: DD-21, 21st Century Destroyer

The marine industry has started to adopt some of the technologies and practices that are becoming common place in commercial industries (Session Readings 6). Much of the interest by the Department of Defense has been due to cutbacks in appropriations, the increasing cost of new acquisitions, and the complexity of new systems.

The DD-21 program is an effort to design and build the Navy's next generation surface combatant vessel. The contract for initial design has recently been let to two consortia: the first is Bath Iron Works and Lockheed Martin, the second is Ingalls Shipbuilding and Raytheon. As an integral part of the design process, the Navy has required extensive use of modeling and simulation in the design and evaluation of alternatives.

The requirements include (Session Reading 9):

Product Model - NAVSEA currently uses the Integrated Ship Design Program (ISDP) software for product model definition provided by the NAVSEA CAD2 contract. The long term goal for SC 21 simulation-based acquisition (SBA) will be the inclusion of physics-based behavioral objects being developed by DARPA. The incorporation of behaviors into the product model results in a "smart" product model.

Physics-based Analysis Programs - Some of the SC 21 Office's analysis needs (15-20%) can be met by the adaptation of commercial-off-the-shelf (COTS) analysis programs developed for general engineering use (e.g., structural finite element, pipe network, and power distribution systems analysis). The majority of needs are unique to ship design or warship design (e.g., seakeeping, survivability). For these areas the SC 21 Office will depend upon software developed by NAVSEA, the Navy or other defense activities.

Behavioral Models - Behavior models capture extensive analytical calculations as parametric equations, as in ship maneuvering coefficients and missile flight characteristics.

Visualization - This capability allows "virtual mockups" to be toured and spatial relationships to be visualized to support the functions of design review and evaluation by managers, production staff, and fleet operators.

Simulations - Simulations are the combination of visualization with realistic behaviors. The SC 21 Office will rely heavily on other Navy and defense activities, industry, and academia for identification and integration of required simulation models.

Product Models

We talk about the 3d product model as if it were something new and improved over two-dimensional drawings. Truth be known, 2d drawings of a 3d object were themselves once considered a major technological advance. In fact, the British Admiralty required that the shipbuilder submit a scale model of the proposed ship showing its arrangements and structural details. This resulted in some beautifully crafted models that are now on display in the National Maritime Museum in London.

Ultimately, this practice was displaced by the use of orthographic drawings. So it is a bit ironic that we are now in an age where we can deliver 3d representations of a ship design that can replace drawings.

The Logical Product Model

We hear about the product model and immediately associate the term with a digital three dimensional graphic representation of a product's geometry. We also think of the data for that model as if it were in one place somewhere in the computer, maybe on a hard drive or a disk. This is the "logical" product model. Put another way, the logical product model is the way we think about the model as if it were one integrated set of data.

The Physical Product Model

The "physical" product model may be something entirely different from the logical model. Product model data may not be in one computer or storage device, but may be on more than one computer, and in various physical locations. This is made possible by the

Page 8: Chapter 16 SBDED0105

changes in networking and computer technologies. Distributed computing methods allow for the distribution of information around a network and for that information to be accessed by a user as if it were a single integrated database.

Introduction (Reference 1)

CAD has changed significantly over time as computer hardware and software have become more powerful and less expensive. The diagram below shows roughly the evolution from CAD as a drafting tool to CAD as a product modeling tool.

The first applications were in computer-aided engineering (CAE), such as finite element analysis (FEA) and numerical simulations of processes. A later application was the guidance of numerically controlled (NC) machines such as burners. The difficulty of developing models and checking results led to preprocessor applications that aid in entering information into CAE applications. Similarly, the same approach used to program NC machines was applied to generating 2d drawings. Finally, these applications have merged into integrated 3d geometry modeling applications.

3d CAD systems are what we think of when we talk about the product model, but they are the result of a long evolution tied to other uses of the information entered in digital form during design. Where the original application of CAD was replacing pencil on paper with better drawings, it has evolved into a tool for creating mathematical representations of three-dimensional geometry. (Reference 2)

3d Geometric Models (References 2 and 3)

The first model we think of as representing the product is the 3d geometric model. There are three basic approaches to representing the geometry of an object. These are described below.

Wire Modeling

The graphic below shows how a wire frame geometric model works. Wire frame modeling is the extension into three dimensions of the same line-and-point definitions that are used in two dimensions. That is, points are defined according to a 3d coordinate system, and lines are then defined by their end points. Wire frame representations can lead to confusion because, visually, it is difficult to distinguish the front of the object from the back.
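A wire frame model reduces to exactly this kind of data: a table of 3d points and a table of edges that reference those points by index. The unit cube below is only an illustration of that structure.

```python
# Wire frame model: points in a 3d coordinate system, lines defined by endpoints.
# A unit cube as an illustration.

vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top face corners
]

# Each edge is a pair of indices into the vertex list.
edges = [
    (0, 1), (1, 2), (2, 3), (3, 0),   # bottom face
    (4, 5), (5, 6), (6, 7), (7, 4),   # top face
    (0, 4), (1, 5), (2, 6), (3, 7),   # vertical edges
]

def edge_length(i: int, j: int) -> float:
    return sum((a - b) ** 2 for a, b in zip(vertices[i], vertices[j])) ** 0.5

# Note the ambiguity mentioned above: nothing here says which face is the
# "front" -- the model carries no surface or solidity information at all.
print(sum(edge_length(i, j) for i, j in edges))   # total edge length: 12.0
```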

Surface Modeling

A surface model is more complete than a wire frame model and can be used to represent an object accurately and realistically. It also allows for the use of hidden-line algorithms to make visualization easier.

There are three basic methods for surface modeling:

Extruded lines and curves - This method takes a line or a curve and sweeps it according to a given path.

Polygon faces - This method builds complex surfaces out of simple triangular or quadrilateral surfaces. Elemental surfaces are defined by a series of points listed in a clockwise or counterclockwise sequence, and the elemental polygons are joined by specifying common points with other elemental polygons.

Non-uniform rational B-splines (NURBS) - NURBS is a numerical method for representing a surface using blended equations to interpolate values between points that bound a surface (Session Reading 1).
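To make the NURBS idea less abstract, here is a small sketch that evaluates a point on a rational B-spline curve using the Cox-de Boor basis functions (a curve rather than a surface, to keep it short; a surface blends the same basis functions in two parameters). The control points, weights, and knot vector are invented for illustration.

```python
# Minimal NURBS curve evaluation (Cox-de Boor recursion), for illustration only.
# A NURBS surface applies the same blending in two parametric directions.

def basis(i: int, p: int, u: float, knots: list) -> float:
    """B-spline basis function N_{i,p}(u); 0/0 terms are taken as 0."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, control_points, weights, knots, degree):
    """Weighted (rational) blend of the control points at parameter u."""
    blends = [basis(i, degree, u, knots) * w for i, w in enumerate(weights)]
    denom = sum(blends)
    return tuple(sum(b * pt[k] for b, pt in zip(blends, control_points)) / denom
                 for k in range(len(control_points[0])))

# Quadratic example: three control points, clamped knot vector, unit weights
# (all weights 1 reduces the rational curve to an ordinary B-spline).
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
wts = [1.0, 1.0, 1.0]
knots = [0, 0, 0, 1, 1, 1]
print(nurbs_point(0.5, ctrl, wts, knots, degree=2))   # midpoint of the curve: (1.0, 1.0)
```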

Solid Modeling

Solid modeling provides the most complete representation of a physical object and often includes information about the material properties in addition to the geometry. Essentially, a solid model is a 3d representation such as a wireframe or surface model, but with the notion of an inside or an outside. This is usually defined by specifying a vector normal to the surfaces defining the volume of the object.

Solid modeling requires a great deal of computation for manipulation and visualization. A number of different methods are used, the most common being:

primitive instancing - This method is used to describe specialized situations where the objects being defined have only a few topological configurations. For example, an "I" beam can be represented as a combination of cube variations.

cell decomposition - This procedure starts with a complete solid object and decomposes the space it occupies into small cells. The accuracy of this method depends on the cell sizes, the smaller the cells, the more accurate the representation.

sweep representations - This method constructs a solid by sweeping an area through a path. A straight-line path generates an extruded solid object, and a circular path generates a solid of revolution.

constructive solid geometry (CSG) - In the CSG method, objects are built up starting with simple primitives (such as a box, wedge, cylinder, cone, or sphere) and then combining them using Boolean operations (union, subtraction, intersection). For example, combining a sphere primitive with a cylinder using a subtraction operation yields a sphere with a cylindrical hole. An object is described by the tree of operations and primitives.

boundary representation (B-rep) - A solid can be described by defining the boundaries of the object. For example, an object can be drawn as a wire frame in which the edge segments represent the joining of surfaces and the corners represent edges joined at vertices.

The methods described above are not mutually exclusive, and most major commercial modeling applications use combinations of methods. For example, AutoCAD Advanced Modeling Extension (AME) uses CSG, B-rep, and sweep representations.
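One way to see how the CSG tree works is through point-membership classification: each primitive answers whether a point is inside it, and the Boolean nodes combine those answers. The sketch below builds the sphere-with-a-cylindrical-hole example mentioned above; the class names are invented for illustration.

```python
import math

# CSG as point-membership classification: primitives answer "is this point
# inside me?", and Boolean nodes (union, subtraction, intersection) combine answers.

class Sphere:
    def __init__(self, cx, cy, cz, r):
        self.c, self.r = (cx, cy, cz), r
    def contains(self, p):
        return math.dist(p, self.c) <= self.r

class InfiniteCylinderZ:
    """Cylinder of radius r about a vertical axis through (cx, cy)."""
    def __init__(self, cx, cy, r):
        self.cx, self.cy, self.r = cx, cy, r
    def contains(self, p):
        return math.hypot(p[0] - self.cx, p[1] - self.cy) <= self.r

class Difference:                     # the Boolean "subtraction" node of the CSG tree
    def __init__(self, a, b):
        self.a, self.b = a, b
    def contains(self, p):
        return self.a.contains(p) and not self.b.contains(p)

# The example from the text: a sphere with a cylindrical hole drilled through it.
solid = Difference(Sphere(0, 0, 0, 1.0), InfiniteCylinderZ(0, 0, 0.3))

print(solid.contains((0.5, 0.5, 0.0)))   # True: inside the sphere, outside the hole
print(solid.contains((0.0, 0.0, 0.0)))   # False: removed by the cylinder
```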

Solid modeling is becoming more common in ship design applications of CAD (Session Reading 2). This is because solid modeling is a more complete representation of the physical features and properties of physical objects.

Engineering Models

So far we have only looked at the basic geometric models for a product. But the definition of the product model includes not just the geometry but other data and information about the product. During the design and engineering process, there are a number of models built based in some way on the underlying geometry of the product. These engineering models represent the behaviors or responses of the product to its functional environment.

Most digital engineering models are keyed to, depend on, or are derived from the geometry model of the product. During the design cycle, a number of different models may be developed (e.g., finite element, computational fluid dynamics, radar cross section, etc.) in support of the particular behavior being analyzed and the software application being used. One of the problems in developing the product model is the integration of these different physical models into a single logical product model.

DT_NURBS (Reference 4)

DT_NURBS is an approach to integrating the different engineering models and results into a common form that makes it possible to share engineering information. DT_NURBS is a library of FORTRAN and C++ routines that map the model and results surfaces onto the CAD geometry model. Underlying the DT_NURBS code are algorithms that essentially extend NURBS surfaces from three dimensions into n dimensions, where the additional dimensions include physical attributes and behaviors. The basic theory underlying DT_NURBS is described in general in Session Reading 3, and detailed in the DT_NURBS Theory Manual.

More than geometry

About 20 years ago, people in the aerospace, automobile, and shipbuilding industries (Session Reading 4) began thinking about Computer Integrated Manufacturing (CIM). Their view of CIM was that there was a central place where all of the "islands of automation" in the factory were storing and using the same data about the products that were being made. The role of graphics or CAD data was seen as a subset of the overall information about the product.

In fact, the graphic representation of the product geometry is a subset of the overall information, and is actually the mechanism by which information is queried and related. For example, if you look at a drawing, there are a number of "call outs" that reference different tables of information such as the materials list or a specification document. In this way, the geometric data is the key to accessing all of the information about the product (Session Reading 5). Further, as the technologies become more sophisticated, knowledge about the product can actually be generated based on rules of association. From this emerges the "smart" product model (Session Reading 6).

Product Model Data Exchange (Reference 4)

Up to this point, I've talked about the methods for developing product geometry and a little bit about other product information such as engineering behavior, analysis results, and manufacturing information (work instructions, process plans, NC instructions, etc.). At this point we need to think about the sequence in which this information is actually accumulated into a digital form.

During conceptual design, information may be in the form of 2d sketches, spreadsheet calculations, and tabular data. As the project moves to the contractual design phase, engineering models may be added with analysis results, possibly 3d models of the product or components of the product. As the project moves to detailed design, more information is added. These may be detailed engineering models and analysis, manufacturing instructions, and so on.

The level of detail of the information describing the product changes from being very general and ambiguous to being quite specific and extensive. More importantly, much of this information may be captured or generated in a digital form but transferred to a downstream user via a paper document. Each of these transactions has a cost associated with the time and quality of the transaction. Think about the process of developing a drawing: much of that process is actually transcribing or interpreting information from other sources into a form appropriate for the current user (Session Reading 7).

The promise of information technologies is that this entire set of information describing the product can be captured as it is developed and transferred to the next user of that information. This is a transactional view of the way that information is used and transferred. The issues surrounding this view are those of converting product information between different applications and operating systems. One answer to this problem is the Standard for the Exchange of Product Model Data, commonly abbreviated as STEP (Session Reading 8).

Another view of the opportunities that emerging information technologies can provide is the idea that the information is not transferred but shared. The product information is not transferred but used in situ where it was created, and with the attendant responsibilities residing there as well. We will address this view later in the course.

The Network

The basic infrastructure for communication between computers is the network. This has become a pervasive technology which is intended to be transparent to those who use it. Speeds and capacity continue to increase, from 10 Mbps Ethernet to 100 Mbps Ethernet to Gigabit Ethernet and ATM, for both LAN and WAN applications.

The Extended Computing Environment

An individual working alone can accomplish some things, but a group of people working together can accomplish exponentially more. The network can extend the distribution of work, as well as information, beyond an individual's computer to those of a department, a company, or outside of the enterprise.

Networks

Sun Microsystems has adopted the phrase "The network is the computer" as its vision for the future. If we think about it, this is a very natural extension of how we work, or at least a natural evolution of our working society.

Much of design is the process of transcribing information from various sources into a format that is usable by others downstream. The information is acquired via a network of relationships and through transmission of messages (phone, fax, face-to-face meetings, memos, letters, transmittals, etc.). So if you think of the computer as a communication device, then the network is the computer.

Introduction

For the most part, naval architects and mechanical engineers learn very little about networks and network technologies. So the first part of this session will be definitions and explanations. I have lifted most of this material from Reference 1, putting it into a reasonable sequence.

Toward the end of this lecture, I introduce the reason "the network is the computer". This is the idea of a mixed set of platforms, operating systems, and network operating systems acting as a larger business environment. The vision is to have transparent access to information and resources both inside the enterprise and across enterprise boundaries. (Session Reading 1)

Inside the enterprise: the Local Area Network (LAN)

To start with, a LAN is a communications network that serves users within a confined geographical area. It is made up of servers, workstations, a network operating system and a communications link.

Servers are computers that hold programs and data that are shared by users connected to the network. The clients are the users' personal computers or workstations. These perform stand-alone processing and access the network servers as required. Servers can also provide access to other devices such as a printer or storage (such as a redundant array of inexpensive disks, a RAID). Increasingly, "thin client" network computers (NCs, think of WebTV) and stripped-down PCs (PCs with limited hard disk space or RAM) are also being used, where all applications and storage are on a server somewhere on the network.

Small LANs can allow certain workstations to function as a server, allowing users access to data on another user's machine. These peer-to-peer networks are often simpler to install and manage, but dedicated servers provide better performance and can handle higher transaction volumes. Multiple servers are used in large networks.

The controlling software in a LAN is the network operating system (NetWare, UNIX, Windows NT, etc.) that resides in the server. A component of this software resides in each client and allows the application to read and write data from the server as if it were on the local machine.

The message transfer is managed by a transport protocol such as TCP/IP and IPX. The physical transmission of data is performed by the access method (Ethernet, Token Ring, etc.) which is implemented in the network adapters that are plugged into the machines. The actual communications path is the cable (twisted pair, coax, optical fiber) that interconnects each network adapter.

Clients and Servers in a LAN

This illustration shows one server for each type of service on a LAN. In practice, several functions can be combined in one machine and, for large volumes, multiple machines can be used to balance the traffic for the same service. For example, a large Internet Web site is often composed of several computer systems (servers).

Illustration of a local area network (LAN) (from the Technology Encyclopedia)

The Software in a Network Client

This illustration shows the various software components that reside in a user's client workstation in a network. Note that there are different layers of software: the network operating system, which handles the communication with the network; the operating system, which handles the processor's resources and platform services; and the application programs, which provide the user interface and do the work.

Illustration of the software in a network client platform (from the Technology Encyclopedia)

The Software in a Network Server

The graphic below shows the typical software components that reside on a server machine on the network. Not shown are applications for managing the network or shared applications resident on the server.

Illustration of software on a network server platform (from the Technology Encyclopedia)

Outside the enterprise, the Wide Area Network

A wide area network (WAN) is a communications network that covers a wide geographic area, such as a state or country. A LAN (local area network) is contained within a building or complex, and a MAN (metropolitan area network) generally covers a city or suburb.

Network Protocols

Network protocols are the communications protocols used by the network. These protocols are defined as layers to allow for modular changes as technology changes. The International Organization for Standardization (ISO) has defined a standard (referred to as the OSI model) for worldwide communications that provides a framework for implementing protocols in seven layers. Control is passed from one layer to the next, starting at the application layer in one station, proceeding to the bottom layer, over the channel to the next station, and back up the hierarchy.

At one time, most vendors agreed to support OSI in one form or another, but OSI was too loosely defined and proprietary standards were too entrenched. Except for the OSI-compliant X.400 and X.500 e-mail and directory standards, which are widely used, what was once thought to become the universal communications standard now serves as the teaching model for all other protocols.

Most of the functionality in the OSI model exists in all communications systems, although two or three OSI layers may be incorporated into one. Below is an illustration of the OSI layers:

OSI Communication Layers (from the Technology Encyclopedia)

Application - Layer 7

This top layer defines the language and syntax that programs use to communicate with other programs. The application layer represents the purpose of communicating in the first place. For example, a program in a client workstation uses commands to request data from a program in the server. Common functions at this layer are opening, closing, reading and writing files, transferring files and e-mail messages, executing remote jobs and obtaining directory information about network resources.

Presentation - Layer 6

When data is transmitted between different types of computer systems, the presentation layer negotiates and manages the way data is represented and encoded. For example, it provides a common denominator between ASCII and EBCDIC machines as well as between different floating point and binary formats. Sun's XDR and OSI's ASN.1 are two protocols used for this purpose. This layer is also used for encryption and decryption.

Session - Layer 5

This layer provides coordination of the communications in an orderly manner. It determines one-way or two-way communications and manages the dialogue between both parties; for example, making sure that the previous request has been fulfilled before the next one is sent. It also marks significant parts of the transmitted data with checkpoints to allow for fast recovery in the event of a connection failure.

In practice, this layer is often not used or services within this layer are sometimes incorporated into the transport layer.

Transport - Layer 4

The transport layer is responsible for overall end to end validity and integrity of the transmission. The lower data link layer (layer 2) is only responsible for delivering packets from one node to another. Thus, if a packet gets lost in a router somewhere in the enterprise intranet, the transport layer will detect that. It ensures that if a 12MB file is sent, the full 12MB is received.

"OSI transport services" include layers 1 through 4, collectively responsible for delivering a complete message or file from sending to receiving station without error.

Network - Layer 3

The network layer establishes the route between the sending and receiving stations. The node-to-node function of the data link layer (layer 2) is extended across the entire internetwork (network of networks), because a routable protocol contains a network address in addition to a station address.

This layer is the switching function of the dial-up telephone system, as well as the functions performed by routable protocols such as IP, IPX, SNA and AppleTalk. If all stations are contained within a single network segment, then the routing capability in this layer is not required.

Data Link - Layer 2

The data link is responsible for node to node validity and integrity of the transmission. The transmitted bits are divided into frames; for example, an Ethernet, Token Ring or FDDI frame in local area networks (LANs). Layers 1 and 2 are required for every type of communications.

Physical - Layer 1

The physical layer is responsible for passing bits onto and receiving them from the connecting medium. This layer has no understanding of the meaning of the bits, but deals with the electrical and mechanical characteristics of the signals and signaling methods.

For example, it comprises the RTS and CTS signals in an RS-232 environment, as well as TDM and FDM techniques for multiplexing data on a line.

Ethernet (Session Reading 2 and 3) (Reference 2)

Ethernet is a LAN technology developed by Xerox, Digital and Intel (IEEE 802.3). It is the most widely used LAN access method. Token Ring is next. Ethernet is normally a shared media LAN. All stations on the segment share the total bandwidth, which is either 10 Mbps (Ethernet), 100 Mbps (Fast Ethernet) or 1000 Mbps (Gigabit Ethernet). With switched Ethernet, each sender and receiver pair have the full bandwidth.

Ethernet breaks up all data being transmitted into variable length frames from 72 to 1526 bytes in length, each containing the addresses of the source and destination stations as well as error correction data. It uses the CSMA/CD technology to broadcast each frame onto the physical medium (wire, fiber, etc.). All stations attached to the Ethernet are "listening," and the station with the matching destination address accepts the frame and checks for errors. Ethernet is a data link protocol (MAC layer protocol) and functions at layers 1 and 2 of the OSI model.

Asynchronous Transfer Mode (ATM) (Session Reading 4) (Reference 2)

ATM is a network technology for both LANs and WANs that supports realtime voice and video as well as data. The topology uses switches that establish a circuit from input to output port and maintain that connection for the duration of the transmission. This connection-oriented technique is similar to the analog telephone system. ATM is scalable and supports transmission speeds of 25, 100, 155, 622 and 2488 Mbps.

ATM works by chopping all traffic into 53-byte packets, or cells. This fixed-length unit allows very fast switches to be built, because the processing associated with variable-length packets is eliminated (finding the end of the frame). The small ATM packet also ensures that voice and video can be inserted into the stream often enough for realtime transmission.

The ability to specify a quality of service is one of ATM's most important features, allowing voice and video to be transmitted smoothly. Constant Bit Rate (CBR) guarantees bandwidth for realtime voice and video. Variable Bit Rate (VBR) is used for compressed video and LAN traffic. Available Bit Rate (ABR) adjusts bandwidth for bursty LAN traffic. Unspecified Bit Rate (UBR) provides no guarantee.

Network applications use protocols, such as TCP/IP, IPX, AppleTalk and DECnet, and there are tens of millions of Ethernet, Token Ring and FDDI client stations in existence. ATM has to coexist with these legacy protocols and networks. (Session Reading 5)

LAN Emulation (LANE), defined by the ATM Forum, interconnects legacy LANs by encapsulating packets into LANE packets and then converting them into ATM cells. It supports existing protocols without changes to Ethernet and Token Ring clients, but uses traditional routers for internetworking between LAN segments. LAN Emulation does not provide ATM quality of service. There are techniques (such as MPOA or CIF) that do provide ATM quality of service.

When ATM came on the scene, it was thought to be the beginning of a new era in networking, because it was both a LAN and WAN technology that could start at the desktop and go straight through to the remote office. This scenario is not evolving due to the high costs of conversion. In addition, huge numbers of Ethernets and Token Rings are already in place, and higher-speed versions of these technologies provide a simpler migration path. ATM's use as a LAN technology has been limited to demanding applications only. However, ATM is indeed establishing itself as an important backbone technology in large organizations, common carriers and Internet providers.

The graphic below provides a comparison of the different networking alternatives.

Summary of network bandwidths (from the Technology Encyclopedia)

TCP/IP

TCP/IP (Transmission Control Protocol/Internet Protocol) is a communications protocol developed under contract from the U.S. Department of Defense to internetwork dissimilar systems. It is a de facto UNIX standard that is the protocol of the Internet and is widely supported on all platforms.

The TCP part of TCP/IP provides transport functions, which ensures that the total amount of bytes sent is received correctly at the other end. UDP is an alternate transport that does not guarantee delivery. It is widely used for real-time voice and video transmissions where erroneous packets are not retransmitted.

The IP part of TCP/IP provides the routing mechanism. TCP/IP is a routable protocol, which means that the messages transmitted contain the address of a destination network as well as a destination station. This allows TCP/IP messages to be sent to multiple networks within an organization or around the world, hence its use in the worldwide Internet (see Internet address).

TCP/IP uses a sliding window transmission method which maximizes speed and also adjusts to slower circuits and delays in the route. TCP/IP packets use a logical address of the destination station rather than a physical address. This logical IP address, which also includes the network address, is dynamically mapped to a physical station address (Ethernet, Token Ring, ATM, etc.) at runtime.

TCP/IP includes a file transfer capability called FTP, or File Transfer Protocol. This function allows files to be downloaded and uploaded between TCP/IP sites. SMTP, or Simple Mail Transfer Protocol, is TCP/IP's own messaging system for electronic mail, and the Telnet protocol provides terminal emulation. This allows a personal computer or workstation to emulate a variety of terminals connected to mainframes and midrange computers.
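As a concrete illustration of the transport layer at work, the short sketch below uses Python's standard socket library to run a TCP echo exchange over the loopback interface; TCP delivers the bytes intact and in order, which is the "total amount of bytes sent is received correctly" property described above. The port number is arbitrary.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary unprivileged port (an assumption)

# Set up the listening socket first so the client cannot connect too early.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

def echo_once():
    # Accept one connection and echo back whatever arrives.
    conn, _addr = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

worker = threading.Thread(target=echo_once)
worker.start()

# The "client" side: TCP (layer 4) guarantees the bytes arrive intact and in
# order; IP (layer 3) routes them -- here only across the loopback interface.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))          # b'hello over TCP/IP'

worker.join()
server.close()
```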

The combination of TCP/IP, NFS and NIS comprise the primary networking components of the UNIX operating system. The following chart compares the TCP/IP layers with the OSI model.

TCP/IP map to the OSI model (from the Technology Encyclopedia)

IP Address

An Internet Protocol (IP) address is the logical address of a computer attached to a TCP/IP network. Every client and server station must have a unique IP address. Client workstations have either a permanent address or one that is dynamically assigned to them for each dial-up session. IP addresses are written as four sets of numbers separated by periods; for example, 204.171.64.2.

The TCP/IP packet uses 32 bits to contain the IP address, which is made up of a network and host address (netid and hostid). The more bits used for network address, the fewer remain for hosts. Certain high-order bits identify class types and some numbers are reserved.

The following table shows how the bits are divided. The Class Number is the decimal value of the high-order eight bits, which identifies the class type.

Class   Class Number   Maximum Networks   Maximum Hosts   Bits in Net ID   Bits in Host ID
A       1-127          127                16,777,214      7                24
B       128-191        16,383             65,534          14               16
C       192-223        2,097,151          254             21               8

Class C addresses have been expanded using the CIDR addressing scheme, which uses a variable network ID instead of the fixed numbers shown above. Network addresses are supplied to organizations by the InterNIC Registration Service.
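The classful split in the table can be reproduced with a few lines of bit arithmetic. The sketch below classifies a dotted-decimal address and separates the network and host portions at the octet boundaries implied by the table (the class-identifying prefix bits are counted as part of the network portion here); CIDR, mentioned above, replaces these fixed splits.

```python
# Classful IPv4 address split (pre-CIDR), following the table above.

def classify(address: str):
    octets = [int(o) for o in address.split(".")]
    value = sum(o << (8 * (3 - i)) for i, o in enumerate(octets))  # 32-bit integer
    first = octets[0]
    if first < 128:
        cls, net_bits = "A", 8      # leading bit 0
    elif first < 192:
        cls, net_bits = "B", 16     # leading bits 10
    elif first < 224:
        cls, net_bits = "C", 24     # leading bits 110
    else:
        return address, "D/E (multicast or reserved)", None, None
    host_bits = 32 - net_bits
    netid = value >> host_bits                   # network portion, class bits included
    hostid = value & ((1 << host_bits) - 1)      # host portion
    return address, cls, netid, hostid

print(classify("204.171.64.2"))   # class C: netid covers 204.171.64, hostid is 2
```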

LAN/WAN Hardware

A router is a device that routes data packets from one local area network (LAN) or wide area network (WAN) to another. Routers see the network as network addresses and all the possible paths between them. They read the network address in each transmitted frame and make a decision on how to send it based on the most expedient route (traffic load, line costs, speed, bad lines, etc.). Routers work at the network layer (OSI layer 3), whereas bridges and switches work at the data link layer (layer 2).

As well as performing actual routing and path determination, routers are also used for such functions as segmenting LANs to balance traffic, filtering traffic for security purposes, and controlling broadcast storms. Multiprotocol routers support several protocols such as IPX, TCP/IP and DECnet.

Routers often serve as an intranet backbone, interconnecting all networks in the enterprise. This architecture strings several routers together via a LAN topology such as FDDI. Another approach uses a router with a high-speed backplane known as a collapsed backbone. The collapsed backbone router, which connects more subnetworks in one device, makes network management simpler. The substitution of a fast backplane instead of an external LAN topology improves performance.

Routers can only route a message that is transmitted by a routable protocol such as IPX and IP. Messages in non-routable protocols, such as NetBIOS and LAT, cannot be routed, but they can be transferred from LAN to LAN via a bridge. Because routers have to inspect the network address in the protocol, they do more processing and add more overhead than a bridge or switch, which both work at the data link (MAC) layer.

Most routers are specialized computers that are optimized for communications; however, router functions can also be implemented by adding routing software to a file server. NetWare, for example, includes routing software. The NetWare operating system can route from one subnetwork to another if each one is connected to its own network adapter (NIC) in the server. The major router vendors are Cisco Systems and Bay Networks.

Although widely deployed and an essential component in the worldwide Internet and many enterprises, routers are complex and costly devices that add considerable overhead to the transmission of data. Routers work with connectionless networks, in which each frame is inspected and then forwarded.

Routing will always be necessary; however, the traditional router is giving way to devices that perform routing functions but do not inspect each frame in as much detail. The first frame is analyzed at the network layer (layer 3), and a destination path is determined. The remaining frames of the message are forwarded at the data link layer (layer 2), which is considerably faster.

Illustration of topologies for OSI layers (from the Technology Encyclopedia)

Distributed Computing Architecture (Reference 3)

So, you ask, what's the point of the definitions and explanations above? Well, the role of the network has become critical to integrating and extending computing processes, and thereby work processes and business operations. The network allows not just sharing of programs and information, but also the sharing and distribution of tasks to places and platforms that are available, and/or better suited to the tasks.

The Distributed Computing Environment (DCE)

Networking relies on common methods for communicating and operating. If all the players on a network use the same proprietary communication protocols, then there is no problem of interoperability. The reality is that this is only practical within a single enterprise, and it becomes less so as technology becomes better and less expensive.

The solution to the interoperability problem is the adoption of standards for communication. This is simple to state but more difficult to implement. Installed base and market share play the major role in determining what becomes a standard. As an example, TCP/IP has overwhelmed the OSI network model by virtue of the popularity of the Internet.

Because of the heterogeneous nature of most computing environments, a class of software called middleware has emerged. This is software that mediates communication between clients and servers, serving as an interpreter as well as providing other services to software on the network.

One such technology is the Open Software Foundation's DCE software. OSF's DCE is an integrated set of operating-system- and network-independent services that support the development, use, and maintenance of distributed applications. OSF's DCE enables a manageable, transparent, and interoperable network of multivendor, multiplatform systems.(Session Reading 6)

The network database

A network database is a database application that runs in a network. It is a database management system (DBMS) that was designed using a client/server architecture.

Most database applications are being redesigned as network database applications. This allows them to be faster and more reliable. The speed comes from the fact that queries to the database can be shared among more than one computer working in parallel.

Distributed processing

A digression ...

Supercomputers are designed according to two architectures: vector processing and parallel processing. In vector processing machines (such as Cray machines), speed was achieved by optimizing the hardware to support vector operations. In 1983, the Cray X-MP used four processors to subdivide computing operations and work on computations in parallel, hence parallel computing. Since then, most supercomputers have been multiprocessor machines, largely because of their price and performance. Although there are a number of different architectures for multiprocessors, they all use some kind of communication layer to share resources and/or exchange messages. (Additional Reading 8)

Back to the topic ...

The reason we are making this detour into computer architecture is that the same kind of parallel processing can be achieved via a network. The speeds of communication buses in supercomputers are very high, much higher than those of current network protocols. However, network speeds and capacity are increasing.

It is unlikely that a network of computers will outperform a dedicated supercomputer. There are added layers of communication overhead in a network that are not in a supercomputer. Plus, a supercomputer does not have to deal with the problem of heterogeneous hardware, software, and operating systems that is inherent on a network. But this does not mean that there are not significant benefits to distributing computational tasks to other platforms on the network.

For example, Parallel Virtual Machine (PVM) is a software package that permits a heterogeneous collection of Unix computers hooked together by a network to be used as a single large parallel computer. Large computational problems can thus be solved more cost-effectively by using the aggregate power and memory of many computers. The software is very portable and has been compiled to run on everything from laptops to Crays. (Session Reading 7)

PVM enables users to exploit their existing computer hardware to solve much larger problems at minimal additional cost. Hundreds of sites around the world are using PVM to solve important scientific, industrial, and medical problems in addition to PVM's use as an educational tool to teach parallel programming.
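PVM itself is a C and Fortran library, so rather than guess at its API, the sketch below uses Python's standard multiprocessing module to make the same point: a large computation split into chunks, farmed out to several workers, and the partial results combined. On a network of machines, PVM or a similar message-passing layer plays the role that the local process pool plays here.

```python
from multiprocessing import Pool

# Illustration of the PVM idea using local worker processes instead of
# networked machines: split a big job, farm the pieces out, combine results.

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))    # a stand-in for real analysis work

def parallel_sum_of_squares(n: int, workers: int = 4) -> int:
    step = n // workers
    chunks = [(k * step, (k + 1) * step if k < workers - 1 else n)
              for k in range(workers)]
    with Pool(processes=workers) as pool:
        return sum(pool.map(partial_sum, chunks))   # combine the partial results

if __name__ == "__main__":
    assert parallel_sum_of_squares(10_000) == sum(i * i for i in range(10_000))
    print("distributed result matches the serial result")
```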

Objects

In the previous session, I introduced network technologies into the discussion. The reason for doing this is that the notion of the computing environment has been changed by these tools. The computer is no longer an isolated data processing machine, but is integrated into the way that we communicate and do our work.

The Extended Computing Environment

The extended computing environment does not just mean using the network to move data around. It is a way of delegating work and sharing in-process information. Moving data around among dissimilar computing platforms requires communication standards. Sharing processing cycles among dissimilar platforms requires not only communication standards, but common software architectures.

Objects

The model which has emerged for sharing information and processing power over a network is the object-oriented programming approach. Objects are the fundamental units in this architecture and are aggregated into higher-level components such as business objects or software components. The definition and use of these objects, business objects, and components forms the basis for an SBD infrastructure.

Introduction

These days calling something "object-oriented" is the same as calling something "new and improved". In many cases this phrase has only buzz-word appeal, with no real meaning other than to mark membership in a technology elite, or a reason to charge the consumer more money.

The world is, in fact, "object-oriented". That is, we deal with things that have properties such as color, smell, sound, shape, weight, and other attributes. These objects also do certain things such as a dog barks, a car moves, a bird flies, and, hopefully, ships float.

So what's the big deal about being "object-oriented"?

Being Object-oriented (Session Readings 1 and 2) (Reference 1)

Objects in the real world have state (properties) and behaviors (things that the object does). Objects in software are bundles of code that contain data and procedures that act on that data. These procedures are known as methods and are used to generate behaviors. The data are the properties of the object, the values of which define the object's state.

Traditional software architecture emphasizes procedures. That is, first you do A, then you do B, then you do C. In object-oriented software architecture, we define object A, which has the properties 1, 2, and 3. If asked by another object, object A will respond in a certain way. Software objects are connected by their relationships to each other and through the messages that are exchanged between objects.

Encapsulation

In the object world (physical and software) the data about the particular object is contained by that object. This containment is called encapsulation. The data about an object and its behavior is contained in the body of the object.

Implementation hiding

An object's data and behavior are kept internal to the object. It need only expose the information it wishes to share with the outside world. The object does this through some kind of interface.


Modularity

Because implementations are hidden within the object, the object can be treated as a black box. Each object is in effect independent of other objects. This means that a change to the internals of the object can be made without affecting its relationships to other objects.

Messages

As with physical objects, software objects exchange information about themselves in response to messages. For example, somebody (object A) approaches you, object B, at a party and asks you your name. The message is the request for your name, and you respond (or not). If you choose to respond, you are granting permission to object A to access your data. The message could also contain data about object A that object B may need; for example, the message could be "my name is object A, what's yours?" Object B would use that information to determine its response.

Based on the information passed in the message, you may choose not to answer (a behavior). The message could also elicit a behavior from object B, such as punching object A's lights out. This gets into an issue we'll address later as to what is appropriate behavior in certain circumstances.

The Pump and Motor example

As an illustration of the relationships between objects, I'll introduce an example of a pump and motor.


In the diagram above, I have two objects: a pump and a motor. The way we use a pump and a motor is: we flip a switch, which turns the motor on; the motor turns and rotates the pump; and as the pump rotates it creates the pressure differential that moves fluid.

Creating objects: classes

We've talked about objects and how objects work. The next question is "how are objects created?"

A class is the template for an object in the same way that a blueprint is a template for creating a physical product. In our pump example, all pumps have certain properties and behaviors that characterize a pump. All pumps have an inlet pressure, a geometry, and when the shaft turns, the pump generates a change in pressure. These are the characteristics that define a pump class.

An object is an instance of a class. In the pump example, a specific pump may be 24 inches long by 18 inches wide by 18 inches high and have an inlet pressure of 5 psi. When 100 rpm are applied to the shaft, the outlet pressure is 25 psi. So the pump class is the set of parameters and functions that describe a general pump, and when parameters are entered into the class, an object is created.

Inheritance

When I create a pump class, I am defining all of the characteristics for a pump. There are, however, many different types of pumps (positive displacement, kinetic, centrifugal, etc.). So I may have a number of different variations as shown in the figure below.


Inheritance is the process of creating a new class by starting with properties and behaviors of an existing similar class, then extending those by adding more specific data and methods. The class that does the inheriting is called the child class (or subclass), and the class that provides the information is called the parent class (or superclass).

In the pump example above, the parent class is "pump", which has some very basic properties such as inlet pressure, inlet location, outlet location, and footprint, as well as behaviors such as: for a given input of power there is a change in pressure and an outlet pressure is provided. The child classes are the different variations of pumps (positive displacement or kinetic pump). They inherit the same basic characteristics of a pump, but extend them with additional, more specific information, such as a different type of method for generating the pressure differential. The child classes can themselves be parent classes to even more specific child classes.

The advantage of inheritance is that as software is developed, previously defined objects can be used as a starting point without having to rewrite them.

Java (Session Reading 3, 4, and 5) (Reference 2)

Java is a programming language developed by Sun Microsystems which was specifically designed to operate across platforms via a network. Based to a large extent on C++, Java depends on the installation of "virtual machine" (VM) software on computers that essentially operate as Java-based computers.


Java code is developed like any other software, in that you start with a text file and submit that file to a Java compiler. The output of the Java compiler is a file of "byte code", which contains the instructions for the Java virtual machine. The virtual machine executes those instructions by accessing the resources of the supporting operating system and processor.

When you encounter a Java applet embedded in an HTML page, the compiled byte code file is downloaded to your platform's virtual machine. In the case of an applet, the virtual machine may be part of the web browser.

Java was also designed with security in mind. Most people don't want strange software wandering around their computer. To prevent this, a Java applet cannot read from or write to your hard disk, and has no access to other system resources. In Java version 1.1, some of the strict security restrictions are loosened to allow "trusted" applets to access your system resources using digital signatures to provide verification that the applet is from a "friendly" source.

Java applications, however, are another story. A Java application has the same capabilities as any other programming language, plus the added advantages of being "write-once-run-anywhere", and of being designed for distributed computing.

Java is inherently object-oriented and provides an excellent opportunity to introduce both object-oriented programming methods, and distributed computing methods to this discussion. There are competing software technologies, such as Python or ActiveX, which have similar capabilities. But Java has been the tool of choice for a number of the SBD applications.

Declaring Classes

In Java, the first place to start coding is in the definition of a class. The syntax for declaring a Java class is:

class Identifier {
    ClassBody
}

The Identifier is the name of the new class. The curly braces, {}, surround the body of the class (ClassBody). Our pump and motor example would be:

class Motor {
    // state is "on" or "off"
    // method for creating rpm
}

class Pump {
    // inlet pressure
    // method for creating the pressure differential
}

In Java, "//" denotes a comment, and in the above code example, I've put in placeholders for properties and methods. The first thing to do is turn the motor on, or change the motor's state, then send the rpm to the pump.


class Motor {
    // state is "on" (true) or "off" (false)
    boolean state = true;

    // method for creating rpm: check to see if the motor is "on";
    // if it is, calculate the rpm from the motor, otherwise the rpm is 0.0
    float rpm() {
        if (state) {
            return 100.0f;   // placeholder for the rpm calculation
        }
        return 0.0f;
    }
}

The second thing to do here, which will make this more complete, is to add the code to declare the inlet pressure as a floating-point value, and to define a method for the calculation of the outlet pressure for the pump. This method, outletPressure, takes the rpm value obtained from the Motor object and uses it in the calculation of the returned value.

class Pump {
    // set the inlet pressure
    float inletPressure = 5.0f;

    // method for creating the pressure differential
    float outletPressure(float rpm) {
        // calculate the outlet pressure using the rpm from the motor
        // (placeholder relationship: a 5 psi inlet at 100 rpm gives 25 psi)
        return inletPressure + 0.2f * rpm;
    }
}
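
To show how these two objects might be wired together, here is a minimal usage sketch. It is not part of the original example; the class name PumpAndMotorDemo, the wiring, and the printed message are assumptions added for illustration.

class PumpAndMotorDemo {
    public static void main(String[] args) {
        Motor motor = new Motor();   // the motor starts in the "on" state
        Pump pump = new Pump();      // the pump with its default inlet pressure

        // ask the motor for its rpm, then send that value to the pump
        float rpm = motor.rpm();
        float outlet = pump.outletPressure(rpm);

        System.out.println("Outlet pressure: " + outlet + " psi");
    }
}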

Deriving Classes

If we go back to the hierarchy of pumps, we can derive a new class from the general class of pumps. In Java, this would look like:

class KineticPump extends Pump {
    // add some differentiating characteristic such as "efficiency"
}

Overriding Methods

A derived class (a subclass) will inherit properties and methods from the parent. Sometimes, the methods in the subclass will need to be different. In Java, there is the ability to "override" the inherited methods with another, more appropriate method within the subclass.

class KineticPump extends Pump {
    // add a new method for calculating the outlet pressure
    float outletPressure(float rpm) {
        // calculate the outlet pressure using another function of the motor's rpm
        return inletPressure + 0.15f * rpm;   // placeholder for the alternative relationship
    }
}

In the pump example, the kinetic pump object would use a different method for calculating the outlet pressure, even though it would inherit most of the other properties and behaviors.
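
A short usage sketch (an assumption, not part of the original text) shows why overriding matters: the overridden method is selected at run time even when the object is handled through the parent Pump type.

// the reference is of type Pump, but the object is a KineticPump,
// so KineticPump's version of outletPressure is the one invoked
Pump somePump = new KineticPump();
float outlet = somePump.outletPressure(100.0f);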

Overloading Methods

Another object-oriented programming technique is called "method overloading". This allows the programmer to specify different parameters to send to methods in an object. To overload a method, the programmer declares another version of the method using the same name but with different parameters.

In the pump example, an overloaded method would look something like this:

class Pump {
    // set the inlet pressure
    float inletPressure = 5.0f;

    // Java tells overloaded methods apart by their parameter types and counts,
    // so each version of outletPressure below has a different parameter list.

    // calculate the outlet pressure using the rpm from the motor
    float outletPressure(float rpm) {
        return inletPressure + 0.2f * rpm;                    // placeholder calculation
    }

    // calculate the outlet pressure using the torque and shaft speed from the motor
    float outletPressure(float torque, float shaftSpeed) {
        return inletPressure + 0.002f * torque * shaftSpeed;  // placeholder calculation
    }

    // calculate the outlet pressure using the horsepower from the motor
    // (declared as a double so the signature differs from the rpm version)
    float outletPressure(double horsepower) {
        return inletPressure + 4.0f * (float) horsepower;     // placeholder calculation
    }
}

In the example above, depending on the parameters passed to the outletPressure method, the outlet pressure will be calculated in different ways. This allows the programmer to make objects extremely flexible and able to handle a range of different relationships with other objects.
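
A brief calling sketch (an assumption, with placeholder values) makes the dispatch visible: the compiler chooses the overload whose parameter list matches the call.

Pump pump = new Pump();
float fromRpm    = pump.outletPressure(100.0f);          // the rpm overload
float fromTorque = pump.outletPressure(52.5f, 100.0f);   // the torque and shaft speed overload
float fromHp     = pump.outletPressure(1.0);             // the horsepower overload (a double literal)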

Object Construction

Most of the design work in developing an object-oriented application involves defining classes and their relationships. Creating an object means creating an instance of a class. Doing this requires a "constructor" method within the class, which initializes the variables to create an object.

Below is an example in Java for the pump class:

class Pump {
    // the inlet pressure for this pump instance
    float inletPressure;

    // default constructor: set the inlet pressure to 5.0 psi
    public Pump() {
        inletPressure = 5.0f;
    }

    // alternate constructor: pass in the actual inlet pressure
    public Pump(float actualInletPressure) {
        inletPressure = actualInletPressure;
    }

    // calculate the outlet pressure using the rpm from the motor
    float outletPressure(float rpm) {
        return inletPressure + 0.2f * rpm;                    // placeholder calculation
    }

    // calculate the outlet pressure using the torque and shaft speed from the motor
    float outletPressure(float torque, float shaftSpeed) {
        return inletPressure + 0.002f * torque * shaftSpeed;  // placeholder calculation
    }

    // calculate the outlet pressure using the horsepower from the motor
    float outletPressure(double horsepower) {
        return inletPressure + 4.0f * (float) horsepower;     // placeholder calculation
    }
}

In this example, I have used method overloading to create two constructor methods. The first simply initializes the pump object so the inlet pressure is 5.0 psi. Alternatively, I could create an instance of the pump where I pass an actual inlet pressure as a parameter.

I've also used what are called access modifiers (the "public" declaration which prefaces each constructor) to explicitly allow any other object to access the pump constructor methods. Access modifiers allow the programmer to define the interfaces to the data and operations in an object.
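
As an aside, here is a hedged sketch (the class name GuardedPump and the getter are assumptions, not from the text) of how access modifiers can also hide an object's internal data while exposing a public interface, which is how encapsulation and implementation hiding are enforced in practice.

class GuardedPump {
    private float inletPressure = 5.0f;   // hidden: not directly visible to other objects

    public float getInletPressure() {     // public: the exposed interface to the data
        return inletPressure;
    }
}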

To actually invoke the pump class and create an instance of a pump, in Java I would use the new operator. This would look like:

// This creates a pump object, "aPump", which will have the default inlet pressure of 5.0 psi
Pump aPump = new Pump();

// This creates a pump object, "anotherPump", which will have an inlet pressure of 10.0 psi
Pump anotherPump = new Pump(10.0f);

Components (Reference 3)

Basic objects are elemental, and we can assemble these fundamental objects into larger accumulations of objects. Continuing with the pump example: when I specify a pump, I may actually be referring to an assembly which includes the pump body, the impeller, the bearings, a gasket, the shaft, the coupling, a motor, a controller, and a foundation for the whole thing. Each one of those parts is an object in its own right. I can define these assemblies as objects themselves, or as components which are collections of objects, and use them to build more complex software systems.

JavaBeans (Reading Session 6)

A Java Bean is a reusable software component that can be visually manipulated in builder tools. To understand the precise meaning of this definition of a Bean, clarification is required for the following terms:


Software component - Software components are designed to apply the power and benefit of reusable, interchangeable parts from other industries to the field of software construction. Other industries have long profited from reusable components. Reusable electronic components are found on circuit boards. A typical part in your car can be replaced by a component made from one of many different competing manufacturers. Lucrative industries are built around parts construction and supply in most competitive fields. The idea is that standard interfaces allow for interchangeable, reusable components.

Builder tool - The primary purpose of beans is to enable the visual construction of applications. You've probably used or seen applications like Visual Basic, Visual Age, or Delphi. These tools are referred to as visual application builders, or builder tools for short. Typically such tools are GUI applications, although they need not be. There is usually a palette of components available from which a program designer can drag items and place them on a form or client window.

Visual manipulation - Application builders let you do all of this, but in addition they let you visually hook up components and select the events to be fired, and the handlers for those events, through mouse drags or menu selections. Very little code needs to be written by hand to get the initial component interaction working properly -- at least in comparison to a GUI builder or a window builder.

It's logical to wonder: "What is the difference between a Java Bean and an instance of a normal Java class?"

What differentiates Beans from typical Java classes is introspection. Tools that recognize predefined patterns in method signatures and class definitions can "look inside" a Bean to determine its properties and behavior. A Bean's state can be manipulated at the time it is being assembled as a part within a larger application. The application assembly phase is referred to as design time, in contrast to run time. In order for this scheme to work, method signatures within Beans must follow certain patterns so that introspection tools can recognize how Beans can be manipulated, both at design time and at run time.

In effect, Beans publish their attributes and behaviors through special method signature patterns that are recognized by beans-aware application construction tools. However, you need not have one of these construction tools in order to build or test your beans. The pattern signatures are designed to be easily recognized by human readers as well as builder tools. One of the first things you'll learn when building beans is how to recognize and construct methods that adhere to these patterns.
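
As an illustration, here is a hedged sketch (the class name PumpBean is hypothetical) of the getXxx/setXxx signature pattern that a beans-aware tool recognizes, through introspection, as a bean property:

import java.io.Serializable;

public class PumpBean implements Serializable {
    private float inletPressure = 5.0f;

    // the get/set pair below is recognized as an "inletPressure" property
    public float getInletPressure() {
        return inletPressure;
    }

    public void setInletPressure(float inletPressure) {
        this.inletPressure = inletPressure;
    }
}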

Not all useful software modules should be Beans. Beans are best suited to software components intended to be visually manipulated within builder tools. Some functionality, however, is still best provided through a programmatic (textual) interface rather than a visual manipulation interface. For example, an SQL or JDBC API would probably be better suited to packaging through a class library rather than a Bean.

Business Objects (Reading Session 7)

An example of this component view of software is the idea of business objects. The Object Management Group (OMG) is a non-profit organization that has been established by the major software developers (IBM, SUN, Apple, Hewlett-Packard, ORACLE, among others) to promote the use of object-oriented software technology. One of the areas they have focused on is the common definition of business objects.

Business objects are representations of the properties and behavior of real-world things or concepts that are meaningful to a business. Such things as customers, products, orders, employees, trades, financial instruments, shipping containers, and vehicles are all examples of real-world things that can be represented as business objects. They provide a way of managing complexity by giving a higher-level perspective, and packaging the essential characteristics of business concepts more completely.

Business objects act as participants in business processes by performing the required tasks or steps that make up business activities. These business objects can then be used to design and implement systems which exhibit a resemblance to the business that they support. This is possible because object technology allows the development of objects in software that mirror their counterparts in the real world.

Business objects are components composed of elemental objects which provide for the presentation of an interface, the business process rules, and underlying business entity objects.

The presentation object is an elemental object that establishes how it interacts with other business objects. The process object contains the rules and constraints that determine how the business object will operate. The entity object(s) contain the underlying data for the business object.
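
To make this composition concrete, here is a hedged Java sketch (all class names and the rule shown are hypothetical, not taken from the text) of a business object assembled from presentation, process, and entity objects:

// the entity object: the underlying data for the business object
class PumpOrderEntity {
    String partNumber;
    int quantity;
}

// the process object: the rules and constraints that determine how the business object operates
class PumpOrderProcess {
    boolean isValid(PumpOrderEntity order) {
        return order.quantity > 0;   // placeholder business rule
    }
}

// the presentation object: establishes how the business object presents itself to others
class PumpOrderPresentation {
    String summarize(PumpOrderEntity order) {
        return order.quantity + " x pump " + order.partNumber;
    }
}

// the business object itself: a component composed of the three elemental objects
class PumpOrder {
    PumpOrderEntity entity = new PumpOrderEntity();
    PumpOrderProcess process = new PumpOrderProcess();
    PumpOrderPresentation presentation = new PumpOrderPresentation();
}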

Each object in the business model is used to create an executable representation of that object in your computer system. This executable object will contain and encapsulate the information and rules associated with that object and its relationships to other objects.

Some business objects may be implemented on top of existing applications as "wrappers", providing an interface to legacy applications. This works so long as the wrappers can "speak" a common protocol; beyond that, consistency in the implementation environment (i.e., homogeneous operating systems, platforms, etc.) is not required.

An application, in terms of business objects, becomes a set of cooperative business objects combined to facilitate business processes. The concept of the monolithic application becomes outmoded: instead, an information system is composed of semi-autonomous but cooperative business objects which can be more easily adapted and changed. This type of component assembly and reuse has been recognized as a better way to build information systems.

Objects in distributed computing

Object-oriented software technology allows for reuse of code and the ability to develop complex, robust applications in less time than traditional software development architectures. But it also allows software to reflect the way that real-world entities operate, and co-operate.


Returning to the pump and motor example: because Java is network-aware and platform independent, the pump object could be running on a platform in Ann Arbor, and the motor object could be running on a platform at ABS in Houston. If these two objects were combined as a business object, then a shipyard in Asia could include the pump package as part of their software application.

Dealing with Distributed Objects

I'm sure about now you're wondering if we'll ever get to the point where we talk about simulation-based design again. The problem with any course in information technology these days is that people have a range of experience and knowledge. Some folks are specialists and others are generalists. The problem is to get everybody on the same page, especially when we are talking about the range of technology that is covered by SBD.

Objects Everywhere

Key to simulation-based design is the distributed nature of the resources that SBD pulls together. There may be an analysis program in use by a design agent that provides information to ABS as to whether the structure meets the approval requirements. In turn, those approval requirements are needed by the design agent, the ship owner, and the shipbuilder. For systems design, there are the piping designer, the manufacturer, valve vendors, equipment suppliers, and more players in the design, construction, and operation processes. Each of these players has their own design systems, analysis tools, data structures, databases, and business practices.

We have what is called the object web. It is almost the same as the World Wide Web that we know, except that the different players are sharing design applications, product model information, and analysis tools through the structure provided by business practices. What makes this different from previous environments is that information is shared, not exchanged.

Objects provide the structure for the software systems to be developed. The object model makes the process comprehensible, and provides the mechanisms for coordination.

That is, if we are all on the same page and speaking the same language.

Standards for Distributed Objects

What made the World Wide Web possible was the adoption of common communications standards. In our discussion of networking, it should have been apparent that what made the network possible outside of the enterprise was the establishment and adoption of communication protocols. Merely establishing communication protocols wasn't enough, though. Remember the OSI model? TCP/IP overran the ISO OSI model and has become the internetworking standard. It has done so by virtue of its ubiquity, not by international agreement.

An SBD environment, like so many other business environments, cannot function without some common way of allowing objects over a network to discover each other, communicate, and cooperate. In the new economics, he who holds market share is king, and sets the standards. Thus begin the religious wars of CORBA, DCOM, and Java.


Introduction (Readings 1 and 2)

We have introduced a lot of underlying technology into this course which ultimately builds up to the development of a simulation-based design environment. As we talk about the infrastructure for distributed computing, a key concern is making different platforms, operating systems, network operating systems, and applications work together.

When we think of the current computing environment, we think of it in a transaction sense. That is, information is exchanged as a file sent and received. Processing of the information in that file takes place in a serial fashion.

Distributed objects on the other hand allow for concurrent processes to take place. It is a very egalitarian environment: processes can take place where the best resources are available and called by those who need the information.

As I mentioned in the session home page, there are holy wars going on as to who will control the next incarnation of the Internet and this emerging world wide object web. This war is between Microsoft and everybody else. In case you think this is just Microsoft bashing, consider two things:

Microsoft NT is becoming a force in the enterprise network operating system market; Intel has the dominant share of the chip sets that are in most of the computers these days, significantly threatening the high end workstation market.

Given these two market facts, the rest of the computer industry is in the position where these two companies can dictate the standards that the rest will use. So where are IBM, SUN, HP, SGI, Apple, and the others? Surely we are not going to be forced to select our hardware, software, and networking options from just a pair of powerful vendors? What happens to our legacy systems? Are we going to replace all of them with new systems in order to comply with the emerging de facto standard?

Common Object Request Broker Architecture (CORBA) (Reading 3, Additional Reading 1, and Reference 1)

In 1989, the Object Management Group (OMG) was established to promote the theory and practice of object technology for the development of distributed computing systems. OMG's membership is currently over 800 software vendors, software developers and end users (oddly enough, including Microsoft as a contributing member). The goal is to provide a common architectural framework for object oriented applications based on widely available interface specifications.

OMG realizes its goals by creating standards for interoperability and portability of distributed object-oriented applications, not by producing software or implementation guidelines. Specifications are put together using ideas of OMG members, who respond to Requests For Information (RFI) and Requests For Proposals (RFP). Members submit proposals for the specifications, including working prototypes of the proposed approaches. The proposed specifications are then evaluated and voted on by OMG members. The winning proposal is adopted as the standard.

In 1991, the OMG released CORBA version 1.1, which defined the Interface Definition Language (IDL) and the Application Programming Interfaces (API) that enable client/server object interaction within a specific implementation of an Object Request Broker (ORB). CORBA 2.0, adopted in December of 1994, defines true interoperability by specifying how ORBs from different vendors can interoperate.

An ORB is the middleware that establishes client/server relationships between distributed objects. Through an ORB, a client application or object can invoke a method on a server object, whether it is on the same machine or across a network. The ORB intercepts the call and finds an object that can implement the request. It then passes the parameters to the discovered object, invokes its method, and returns the results. The client does not need to know where the object is located on the network, its programming language, its operating system, or any other system aspects that are not part of the object's interface. By providing common protocols, the ORB allows for interoperability between applications on different machines in heterogeneous distributed environments, seamlessly interconnecting multiple object systems.

Application developers use their own design or a recognized standard to define the protocol to be used between the devices. Protocol definition depends on the implementation language, network transport, and a dozen other factors. ORBs simplify this process through a single implementation-language-independent specification, the IDL. ORBs provide flexibility by letting programmers choose the most appropriate operating system, execution environment, and even programming language to use for each component of a system under construction. More importantly, they allow the integration of existing components by providing a means of modeling a legacy component using the same IDL used for creating new objects. The developer then writes "wrapper" code that translates between the ORB and the interfaces to the legacy application.

There are a number of different ORB products available allowing software vendors to provide products which meet specific needs of their operational environments. Because of this and the fact that there are systems that are not CORBA-compliant, OMG has formulated the ORB interoperability architecture.

The General Inter-ORB Protocol (GIOP) has been specifically defined to provide the mechanisms for ORB-to-ORB interaction over any transport protocol that meets a basic set of criteria. Versions of GIOP implemented using different transport protocols will not necessarily be directly compatible, but their interaction will be made more efficient.

OMG has also specified how GIOP is to be implemented using the TCP/IP transport, and has thus defined the Internet Inter-ORB Protocol (IIOP). To illustrate the relationship between GIOP and IIOP, OMG points out that it is the same as that between IDL and a concrete mapping of it, for example the C++ mapping. IIOP is designed to provide "out of the box" interoperability with other compatible ORBs (TCP/IP being the most popular vendor-independent transport layer). Further, IIOP can also be used as an intermediate layer between half-bridges, and in addition to its interoperability functions, vendors can use it for internal ORB messaging (although this is not required, and is only a side-effect of its definition). The specification also makes provision for a set of Environment-Specific Inter-ORB Protocols (ESIOPs). These protocols should be used for "out of the box" interoperability wherever implementations using their transport are popular.


Interface Definition Language: an example

The OMG Object Model defines common object semantics for specifying the externally visible characteristics of objects in a standard and implementation-independent way. In this model clients request services from objects (which will also be called servers) through a well-defined interface. This interface is specified in OMG IDL (Interface Definition Language). A client accesses an object by issuing a request to the object. The request is an event, and it carries information including an operation, the object reference of the service provider, and actual parameters (if any). The object reference is an object name that defines an object reliably.

In this example, I'll define CORBA interfaces for our Pump and Motor.

module Example {

    /* Class definition of MyPump which inherits from the general class of Pump
       (the base interfaces Pump and Motor are assumed to be declared elsewhere) */
    interface MyPump : Pump
    {
        attribute float inletPressure;
        float outletPressure(in float rpm);
    };

    /* Class definition of MyMotor which inherits from the general class of Motor */
    interface MyMotor : Motor
    {
        attribute float horsepower;
        float rpm(in boolean status);
    };

}; /* End Example */

In the example IDL code above, I first created a namespace to group the set of class descriptions (or interfaces). The module is the main identifier of the set of class interfaces. Each interface defines a set of methods (or operations) that a client object can invoke in the server object. An interface can have an attribute, which is a value automatically assigned to or retrieved from the server object. With the float statements, I have defined services that a client object can request of the server object and the type of data that is returned. In both MyPump and MyMotor, I set the parameters to be passed in to the server object.

Once the IDL structure has been created, it is submitted to an IDL compiler. The IDL compiler maps the IDL to a particular language (Java, C++) and returns the classes necessary to access the referenced classes via the ORB. The IDL compiler also generates data about the interfaces, which is stored in an interface repository that is part of the ORB.
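
As a hedged sketch of what this looks like from the client side under the standard IDL-to-Java mapping: the stub and helper classes (Example.MyPump, Example.MyPumpHelper) would be generated by the IDL compiler from the module above, and the stringified object reference passed on the command line is a hypothetical placeholder.

import org.omg.CORBA.ORB;

public class PumpClient {
    public static void main(String[] args) {
        // initialize the ORB
        ORB orb = ORB.init(args, null);

        // obtain an object reference, here from a stringified IOR passed as an argument
        org.omg.CORBA.Object obj = orb.string_to_object(args[0]);

        // narrow the generic reference to the MyPump interface defined in the IDL
        Example.MyPump pump = Example.MyPumpHelper.narrow(obj);

        // invoke the remote operation through the ORB as if it were a local call
        float outlet = pump.outletPressure(100.0f);
        System.out.println("Outlet pressure: " + outlet);
    }
}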

CORBA also specifies the Internet Inter-ORB Protocol (IIOP), which is the common communication mechanism for accessing objects over the Internet. The IIOP and IDL function very much like the more familiar HTTP and CGI, but with significant performance advantages.

Microsoft's Component Object Model (COM) (Reading 4, Additional Reading 2, and Reference 2)

COM's first incarnation assumed COM objects and their clients were running on the same machine (although they could still be in the same process or in different processes). From the beginning, however, COM's designers intended to add the capability for clients to create and access objects on other machines. Although COM first made its way into the world in 1993, Distributed COM (DCOM) didn't appear until the release of Windows NT 4.0 in mid-1996.

DCOM is Microsoft's alternative to CORBA. It is a distributed version of OLE (Object Linking and Embedding) 2.0's Component Object Model. Like OLE's COM, distributed COM specifies interfaces between component objects within a single application or between applications, providing local/remote transparency between components across networks.

Distributed COM builds on the DCE RPC (Distributed Computing Environment remote procedure call).

Like CORBA, distributed COM separates interfaces from implementations and requires that all interfaces be declared using an IDL (interface definition language). However, Microsoft's IDL, based on DCE, is not CORBA-compliant.


A COM interface is not a class in the object-oriented sense as it is in CORBA. COM interfaces do not have state and cannot be instantiated to create a unique object. Rather, a COM interface is a group of related functions, and COM clients use a pointer to access the functions in an interface. To handle named objects in the object-oriented sense, distributed COM uses the OLE moniker concept to allow instantiation of multiple objects. A client uses a moniker to reconnect to the same object instance with the same state (not just another interface pointer of the same class) at a later time. Monikers provide a combination of services, including naming, persistence, relationships, query, and object location.

Distributed COM provides many of the same capabilities as CORBA, including alternatives to CORBA's persistence, transaction services, common facilities, interface repository, and relationships. However, the important distinction between distributed COM and CORBA is the operating system dependence of distributed COM.

DCOM really doesn't change how a client application creates and interacts with a COM object: a client uses the same code to access local and remote objects. However, a client can choose to use extras provided by DCOM. For example, DCOM includes a distributed security mechanism, providing authentication and data encryption. Another extra: to locate COM objects on other machines, DCOM can use directory services such as the Domain Name System (DNS). Many of these features are expected to be available in Windows NT 5.0.

It is unlikely that Microsoft will abandon distributed COM for CORBA, and vice versa. There are a number of bridge applications that allow for communication between CORBA and distributed COM. Interoperability between OLE's non-distributed COM and CORBA is relatively easy with implementations available from IBM, Iona, Candle, and Digital.

Bridging distributed COM and CORBA is harder because of the dissimilar object models; consequently, components won't collaborate as effectively across the network as they can within each camp.

Java RMI (Additional Reading 3 and Reference 3)

Java's Remote Method Invocation (RMI) is another choice for supporting distributed objects. Unlike CORBA and DCOM, which allow communication between objects written in various languages, RMI is focused on communication between objects implemented in Java. This limitation adds some constraints, but it also makes RMI very simple to use and more efficient. Sun Microsystems, RMI's developer, had the luxury of designing the protocol specifically to match Java's features.
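
To connect this to the running example, here is a hedged sketch (the interface name RemotePump is an assumption) of how the pump's behavior might be declared as an RMI remote interface; a server class would implement it and clients would invoke it across the network as if it were local.

import java.rmi.Remote;
import java.rmi.RemoteException;

// any method that can be invoked remotely must appear in a Remote interface
// and must declare RemoteException
public interface RemotePump extends Remote {
    float outletPressure(float rpm) throws RemoteException;
}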

Java Enterprise Beans (Reading 5, Additional Readings 4 and 5, and Reference 3)

In the previous lecture, I introduced Java Beans. In that discussion, you may have concluded that they are just a neat toy for hooking together buttons in applets. However, the next wave of Java Beans technology, called Enterprise Java Beans, extends their benefits to server systems.


An Enterprise Java Bean is an encapsulation of a piece of business logic. It can be executed in an environment that supports transaction-processing constructs. In fact, current transaction-processing environments, such as IBM's CICS, will support Enterprise Java Beans in the future.

The basic structure of an Enterprise Java Bean is essentially the same as that of any other Bean. The Enterprise Bean comes in a Java archive (JAR) but contains additional information that defines its transaction-scope rules. The basic model for the Enterprise Bean is one of client and server, where the client application, built with conventional Beans providing the presentation objects, communicates with the Enterprise Beans executing in the server via remote method invocation (RMI), the CORBA Internet Inter-ORB Protocol (IIOP), or the forthcoming RMI over IIOP.

People in the computer industry have been talking about the advantages of distributed objects and software components for a long time. Selecting a distributed object model has many implications, not the least of which is platform dependence. If you choose DCOM for a multitiered distributed-object solution, you have to consider introducing single-vendor proprietary systems in many places. You need to be able to answer such questions as:

How many of my existing systems support Microsoft technologies? How much will it cost to replace these systems with Microsoft-capable systems?

Will I be choosing the appropriate hardware and OS platform for my solution, or is the choice of component model giving control to a single supplier? Do I want this kind of platform lock-in?

Heterogeneous networks of systems are a reality in today's business environment, yet the selection of the wrong component model for the client in an n-tiered solution could force a corporation to spend millions of dollars changing its backend systems.

CORBA and other true open-architecture protocols are not just a component model for the client; they are the key to integrated, n-tiered solutions using the diverse array of platforms that make up today's business and engineering systems. What is required for an SBD environment is this kind of interoperability and platform independence -- independence from the development platform, independence from the execution platform, and independence from development tools.

Distributed Collaboration and Simulation

To this point in the course, we have covered the motivation for simulation-based design and much of the plumbing. The last element we need to discuss is the "simulation" part of SBD. The product model is a representation of the geometry and behavior of the product being designed. It is, in itself, not a simulation in the sense that we are using the concept.

The definition of simulation that we are embracing here is one that looks at the behavior of a model in the time domain. That is, at each instant in time there is a change to the input submitted to the parts of the model that characterize its behavior. The model then changes its relationship to the environment, which may in turn change the environment.


Using Simulation in Design

In design, we make a lot of guesses and assumptions about what the product we are designing will need to be able to do. For a ship, we know that we need to give some thought to the sea states in which the ship will operate, the operating procedures, and the response of our design to extreme environments (ice, hurricanes, grounding, collision, etc.). For class societies, empirical data is collected and used to educate those guesses and make the necessary simplifying assumptions. For the Navy, where there are more extreme possible environments, prototypes and mock-ups are built to better educate the designers as to the possible needs of the customer.

All of these are methods to mitigate uncertainty or, in one of the more frightening four-letter words: risk. There are all kinds of risk: technology risk, business risk, operating risk, environmental risk, and so on. Experience has always been a way of mitigating these risks. This has led to a very lucrative business in consultants and experts. The problem to be solved is: how do we get experience without actually suffering the pain of the experience (such as crashing a $100M commercial airliner with 300 passengers)? So as engineers, we do experiments to gain experience and reduce risk.

I recently read that a computer selling for $3000 today would have cost over $700,000 in 1972. A byte at the dawn of the computer industry cost over $100. Computer simulations were, in many cases, more expensive than building full scale mechanical simulators. Simulators were only built if it was less expensive (monetarily and socially) than actual tests of products. But as a result of the lower cost and greater power of computing hardware, digital simulation is more practical and cost effective.

Distributed Simulation

So now we can do simulations of a ship's structure in a seaway. We can look at its response to collisions, power changes, steering, and so on. But who among us is an expert in all of those areas? We tend to specialize in certain engineering disciplines and, as a result, we tend to be very familiar with models and simulations in our own domains. As a result, a design project tends to involve a number of specialists, their specialized models, and specialized simulations.

I emphasized these last two elements, because we (designers and engineers) have the necessary tools to allow these models and simulations to work together. Not only can they work together, they can work in different locations, on different machines, by different specialists.

Introduction

Engineers have used simulation as an integral part of the design process for years. We have done tank testing of ships or wind tunnel testing of aircraft. These practices have involved the creation of a model, placing it in an environment (which is itself a model), and then passing the two models through time (i.e. setting the models into motion).

Simulation-based design is actually a poor descriptor of the concept that it represents. It originally referred to the application of visualization methods for reviewing the results of a design. In this sense the simulation was a static, visual one. Over time, technology changed and what was seen as being possible changed. So the concept of SBD has expanded into a more general notion of a unified, shared design environment in which the tools and knowledge are shared across disciplines. Probably a better description of what SBD encompasses is something like "distributed collaborative design, modeling, and simulation process," or "combined access to design, modeling, and simulation applications," or some long-winded but more comprehensive set of terms.

Simulation (Reference 1)

When we think of simulation we think of a digital computer iterating through discrete time steps using the input of a previous event as the starting point for the next one. This is just one example of a simulation, actually what is called a discrete event simulation. There are other types of simulation methods such as Monte Carlo, continuous, or analog. An example of one analog simulation was a fluid-based mechanism for studying economic systems. This simulation consisted of a series of pipes, reservoirs, floats, and valves which, when turned on, could be used to study the relationships between different economic forces and conditions.

There are basically three types of computer-based simulations:

Monte Carlo simulation - a method by which an inherently non-probabilistic problem is solved by a stochastic process; the explicit representation of time is not required.

Continuous simulation - the variables within the simulation are continuous functions, e.g. a system of differential equations

Discrete event simulation - value changes to program variables occur at precise points in simulation time (i.e. the variables are ``piecewise linear'')

A combined simulation refers to a simulation which uses both discrete event and continuous components. A hybrid simulation refers to the use of an analytical submodel within a discrete event model. Finally, gaming can have discrete event, continuous, and/or Monte Carlo modeling components.

Simulation in design is used for the purpose of evaluating the parameters of the product in relation to the goals of its use. For example, in designing a building we would use a combination of static and dynamic simulations of say, the elevator system, the air handling system, the use of space, and so on. We use simulations to test ideas, make comparisons, determine costs and procedures, and predict performances.

Some simulations behave deterministically and some are probabilistic. Environments are in most cases best simulated using some kind of probabilistic, stochastic process (waves, wind, weather). The choice of which method to use depends on a number of factors, such as the cost associated with building an actual product and testing it, the cost of building a simulation, or the difficulty of creating an abstract model. For example, the engineers at Chernobyl who wanted to test a new procedure on a nuclear reactor should probably have done it in a simulation rather than running the experiment on the real thing. There is, of course, the other extreme, where a simple measurement on the real system would be better than trying to create a simulation. Another consideration is whether a real system can be built at all, or whether it already exists.


Some of the advantages of simulation are:

Realism - simulations can be used to capture the actual characteristics of a system being modeled. A large number of complex systems can be studied and tested using simulation-based experiments.

Nonexistent systems - systems to be tested don't need to actually exist.

Time compression - time can be compressed in a simulation so that the life cycle behavior of a product can be analyzed in a matter of seconds, minutes, or hours (depending on the complexity of the simulation and the power of the platform).

Deferred specification of objectives - some approaches to optimizing a design (such as linear programming) require the definition of an objective (or an objective function). These methods also must be simplified to fewer criteria than exist in reality. Simulation methods allow for the exploration of relationships and influences which can actually be used to develop objectives for optimization.

Experimental control - In a simulation, different variables can be set in order to establish system behaviors and parameter sensitivity.

Training - Simulations are generally easier to set up than other types of analysis methods.

Inexpensive insurance - Simulation studies have run about 2% of the capital outlay for building the real system. In this sense, the use of simulation provides inexpensive insurance against the final product being under designed, over designed, or failing to meet the needs of the customer.

On the other hand, some of the disadvantages of simulation are:

Failure to produce exact results - Simulations do not provide exact results. Almost by definition, simulations are built around a simplified understanding of the system being modeled.

Lack of generality of results - Simulations are developed for specific conditions or scenarios. The results can only be used in the context of those specific constraints.

Failure to optimize - Simulations are not an optimization technique. They do not generate solutions but evaluate them.

Long lead times - Simulations can require a significant amount of time and effort to develop, verify, and validate their performance.

Costs of simulation - Developing and maintaining a capability to perform simulations can be expensive and time consuming.

Misuse of simulation - There are a number of nuances and fundamental assumptions required to perform a simulation. The practitioner needs to understand the simulation tools, how to build and test the simulation, how to design the experiments, and how to perform an analysis. A simulation that is poorly performed or analyzed can lead to catastrophic results from erroneous conclusions.


Parallel and Distributed Simulation (Session Reading 1, Reference 1)

For the most part, steps in a discrete event simulation are executed serially. A value for time is established and used as the seed for subsequent calculations. As the simulation progresses, the time value is incremented and model calculations are done again. This is done until a reasonable time history is available covering the events to be studied.
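
A minimal sketch (an assumption, not from the text) of this serial time-stepping pattern is shown below: each step uses the state produced by the previous step as its input, and the loop runs until the time history of interest has been covered.

class TimeSteppedSimulation {
    public static void main(String[] args) {
        double time = 0.0;
        double dt = 0.1;        // the time increment
        double state = 5.0;     // e.g. an outlet pressure, in psi

        while (time < 10.0) {
            state = step(state, dt);   // the previous state seeds the next calculation
            time += dt;
        }
        System.out.println("Final state: " + state);
    }

    // placeholder model calculation for one time step
    static double step(double previousState, double dt) {
        return previousState + dt * (25.0 - previousState);   // relax toward 25 psi
    }
}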

As a simulation becomes more complex, the time to perform these calculations becomes greater. Enter parallel processing: by using this computing method, different parts of the simulation can be assigned to different processors. The processor results are synchronized by the time value, and there is sharing of information between the processors as the calculations take place. But for the most part, each of the processors operates independently. It also stands to reason that the more processors available, the bigger the simulation that can be run and the less time it will take.

So we can now think about what the impact on a simulation would be if we could distribute parts of the simulation across a network. The advantage of a multiprocessor computer is that it is a homogeneous environment: the processors are the same, and there is a common bus and a common operating system. When we move into the network environment, we start removing these advantages and introducing more and more technical complexity. For example, on a network there would be different operating systems, different network operating systems, different processor architectures and speeds, and so on. Add to this the fact that the applications which calculate the behavior of a model and the attributes of the environment are written differently and run differently, on different hardware, at different speeds, and you have tremendous obstacles to overcome.

High Level Architecture (HLA) (Session Reading 2 and 3, Reference 2)

The Defense Modeling and Simulation Office (DMSO) has started to take on a more significant role in DoD efforts to streamline acquisition of major weapons systems. The military has recognized that in tightening fiscal times and with the high cost of sophisticated weapon systems, simulations can be used to test and evaluate system components and promises. The table below shows the evolution of DoD's development of the tools to allow the use of simulations for military system development, evaluation, and training.


DMSO has established a common high-level simulation architecture (HLA) to make it practical for all types of models and simulations to interoperate among themselves and other systems. HLA also makes it possible for the reuse of modeling and simulation components, including legacy applications.

Note that HLA is an architecture and, as such, defines the major functional components, design rules, and interfaces for a computer-based simulation system. HLA is NOT the software required to implement it. Rather, it specifies, at a conceptual level, how the components hook together and work as a whole. HLA is a standard and does not mandate a specific software implementation.

Some of the features of HLA are that it:

applies to multiple time management schemes

separates data from architecture, allowing data to evolve as required by applications

selectively passes data among simulations

is built around providing shared services to the participating simulation applications

The basic premises under which HLA was developed build on the problems posed by the networking of simulations. Among these are:

no single, monolithic simulation can satisfy the needs of all users

all uses of simulations and useful ways of combining them cannot be anticipated in advance

future technological capabilities and a variety of operating configurations must be accommodated


As a result of these premises, DMSO needed a way to construct simulation "federations". The use of the term federation is based on the assignment of responsibility for a simulation application and its attendant data. That is, the ownership of, and therefore the responsibility for, the operation of the application, the quality of the data, and the maintenance of the software and data rest with the participant. In the simulation, other members are allowed access to interfaces to the components, not to the data or code of the component itself (this should sound vaguely object-oriented).

This approach resulted in the two fundamental design principles for HLA:

federations of simulations should be constructed using modular components with well-defined functionality and interfaces, and

specific simulation functionality should be separated from a general purpose runtime infrastructure

Consequently, HLA calls for a federation of simulations and specifies:

ten rules which define relationships among federation components

an Object Model Template which specifies the form in which simulation elements are described

a Runtime Interface Specification which describes the ways simulations interact during an operation

The DoD is very serious about incorporating simulation into its acquisition processes. Current DoD policy requires that simulations developed for particular DoD organizations must conform to the HLA. Definition and detailed implementation of specific simulation system architectures, however, remains the responsibility of the developing organization.

DoD has also established deadlines for industry conformance with HLA. As of the first day of FY99, no funds will be allocated toward developing or modifying non-HLA simulations within DoD. Further, as of the first day of FY01, non-HLA compliant simulations will begin to be retired.

OMG and CORBA (Reference 2)

As I have said in other parts of this course (and will continue to emphasize), market share determines the dominant standard and technology. Essentially, all standards are de facto standards.

We have all had experiences with military standards, and most of those experiences have probably been bad ones. Very few military standards end up passing into civilian commercial practices. As a result, DoD ends up spending a lot more money having things built to their standards rather than accepting commercial standards. Recognizing this, DoD has adopted a strategy to move standards developed under DoD auspices into the commercial sector. So we find this effort taking place in both the STEP community and under OMG. As an example, HLA has become an accepted IEEE standard (High Level Architecture, Object Model Template, IEEE P1516.2.) and is being proposed as a standard under OMG. This latter activity we'll explore in a little more detail.


OMG has smaller working groups which are set up to address the special interests of different industries and users. One of these special interest groups is the Business Object Task Force (which we have referenced before) and another is the Distributed Simulation Special Interest Group (SimSIG).

SimSIG describes their mission as:

to augment and extend current OMG standards to accommodate distributed simulation,

to encourage ORB and CORBAservice vendors to support features relevant to distributed simulation,

to promote OMG standards among builders of HLA infrastructure and HLA-compliant simulations and toolkits, and

to promote application of OMG standards to distributed simulation over the Internet.

SimSIG defines their goals as:

gather requirements from industry for use in applying OMG standards to distributed simulation,

work through other OMG organizations to seek changes or extensions to OMG standards to support distributed simulation,

adapt current industry and government standards for distributed simulation, such as the HLA, to influence OMG standards for distributed simulation,

involve ORB and CORBAservice vendors in supporting distributed simulation, and

promote use of the Object Management Architecture (OMA) by simulation and simulation infrastructure vendors.

The OMG standards process includes a "Request for Comments" (RFC) procedure which differs from the typical OMG process. The RFC process is designed for non-controversial technology (one that does not have a competing alternative).

The RFC process requires that:

an OMG member proposes technology in a form similar to a response to an OMG request for proposal (RFP),

an OMG Task Force (in this case the Manufacturing Domain Task Force or Manufacturing DTF) evaluates and recommends issuance of the proposal as a standard,

the OMG Architecture Board then reviews the proposal for coherence and conformance to the rest of the OMA, after which the Domain Technical Committee (DTC) issues the RFC,

the RFC is reviewed for 90 days for comment from within and beyond OMG,

if there are no significant negative comments, the DTC recommends adoption, and

the OMG Board of Directors then votes to adopt the proposal as a standard.

DMSO has submitted HLA to the OMG RFC process in order to standardize its interoperation facility, the HLA Runtime Infrastructure (RTI). The RTI specifies the interfaces exposed by the underlying simulation objects. DMSO has done this in order to gain standing and visibility for their standard through a widely recognized industry group, and to promote the use of HLA beyond the DoD. DMSO is also seeking to promote further development of standards for distributed and component-based modeling and simulation. In this way, they may be able to gain the benefits of the OMG's open architecture for distributed object computing.

This is an ongoing process and HLA has not yet been adopted as an OMG standard. To a great extent, the adoption of such a standard depends on commercial business requirements rather than those of the DoD. The extent to which businesses have a need for a distributed simulation environment is being investigated by SimSIG, which has published a Request for Information through which the group hopes to better understand the needs of the user and developer community.

There is the question as to whether there is a need under OMG for HLA to support distributed simulations. A number of users and developers consider the integration of simulation applications to be just a special case of deploying CORBA-based services. Their view is that HLA imposes on the development of distributed simulations another level of complexity that is not needed, adds to development costs, and impedes performance. This may be a controversy waiting to happen.

Distributed Collaboration (Session Reading 4, Reference 3)

I have talked in the preceding sections (and sessions of this course) about the underlying technologies for distributed simulations and the standards that are being developed to support the sharing of information and the processing of design models, environments, and behaviors. What I have not talked about is probably the most important technology needed to enable SBD, and that is managing the distributed design process.

In all of the efforts to integrate design tools into a single logical design environment, what has emerged is the need for a changed design process. Traditional design assumes that activities are undertaken in a sequential process, with tasks strung together like beads on a string. Unfortunately, that view of design inhibits the adoption of the different tools that are available.

A case study of this problem is found in the adoption of flexible manufacturing cells (FMCs) in the US as compared to the use of the same technology in Japan. In the 1980s, US and Japanese manufacturers began buying automated machining centers that integrated a number of machining and material handling operations into one big machine. The Japanese manufacturers were able to use the programmable capabilities of the FMCs to handle a variety of product families. In contrast, US manufacturers used the technology as a replacement for older, manual manufacturing processes. Where the Japanese were producing around 900 different products on their FMCs, US manufacturers were producing only 8 or 9 different products. US manufacturers couldn't meet the ROI expectations for the investment in the FMC technologies, whereas the Japanese were exceeding their ROI targets.

The moral of the story is: injecting a new technology into an old process will not exploit the benefits of the technology. It is only when the process changes that the technology can be fully exploited. In order for us to take advantage of an integrated logical design environment, we need to be thinking about an alternative view of design. Fortunately, one is emerging.

We are all familiar with concurrent engineering. This is the view of design in which design processes are allowed to overlap. Synchronizing these activities is done via structured communication between the different practitioners. In concurrent engineering, essentially all of the traditional design tasks are still performed, just overlapped in time.

An alternative to this approach is collaborative design. In collaborative design there is a more freeform, problem-solving approach to design. Think about the distributed simulation environment where all of the models, behaviors, and environments are operating at the same time. A change to the product model's behavior by one specialist propagates through all other parts of the simulation. Controlling this propagation requires constant communication between the players, mechanisms for making parameter tradeoffs, and measures to determine the product's performance against the owner's requirements.

One of my favorite examples of a collaborative design environment is the Lockheed Skunk Works. Founded by Kelly Johnson (a U of M mechanical engineer), this cloistered organization within a corporate giant was responsible for some of the greatest aircraft design accomplishments in aerospace history. Probably the best example of their work is the SR-71 Blackbird, a 21st century aircraft designed in the early 1960s without the benefit of personal computers, the Internet, supercomputers, e-mail, and the like.

Kelly Johnson had some basic rules for the operation of the Skunk Works which apply to an electronic version of a collaborative environment. Among them were:

have a strong program manager who is responsible for making decisions and has control of the project,

use a small number of good people,

provide flexibility in the design process to allow for making changes,

provide for integration with suppliers,

test the interim products,

have close cooperation with the customer, and

control access to the project.

Such environments can be created with the benefit of information technologies, and this course is an example of one of those environments. We think of video conferencing, on-line chat rooms, and shared whiteboards as such tools, but they provide synchronous communication. Applications such as e-mail, newsgroups, and shared work spaces provide asynchronous communication. This latter functionality is significant given the geographic distribution of a design team.

Set-based design

For the most part, we have been taught to begin the naval architecture design process with a notional ship as a starting point. From there, we vary different parameters and iterate through different analyses until we have another version of the ship.

If we think of the different parameters of a design as defining a space, then in this process we are essentially moving through the design space from one point to another. If we mapped the process to a surface, we would be moving from our starting design point to the next design point, hopefully moving downhill (or uphill) toward the optimum design.

Another way of working through design was pioneered (as a formal process) by Toyota. Toyota's approach to design begins with a design space which is larger than necessary (i.e., it considers as many design parameters as possible). The Toyota process then shrinks the design space incrementally to discover where the desired design is in the space.

The traditional design process, referred to as point-based design can be imagined as a pipeline. The designer starts at one end of the pipe, then moves through the pipe to the end. The Toyota approach, referred to as set-based design, can be viewed as a funnel. In the beginning of a set-based design process, ranges of parameters are established and narrowed as design decisions are made.

Set-based design offers several advantages over conventional approaches. For example, in point-based design the evaluation of a point design can be expensive and/or time-consuming, requiring a detailed finite-element analysis which may take days, or requiring construction of a mockup for testing. Furthermore, point-based design serializes design decisions, causing further delays. In set-based design, a shrinking design space assures convergence on a design among a distributed team working in parallel. If all of the players on the design team are shrinking their own subspace, the system as a whole must be converging. In distributed set-based design, the options discarded by one designer guide other designers. That is, the flow of design options across the organizational boundaries of the design expertise helps organize the community.
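
As a toy illustration of that narrowing, the sketch below (a minimal Python example of my own, not any particular SBD tool) has each discipline keep a feasible range for one shared parameter; the team's design space at any moment is the intersection of those ranges, and decisions can only shrink it. The parameter, disciplines, and numbers are invented.

    # Toy sketch of set-based narrowing: each discipline holds a feasible range
    # for a shared parameter, and the team's space is the intersection of ranges.
    # Parameter names and numbers are illustrative only.

    def intersect(range_a, range_b):
        """Intersection of two closed intervals, or None if they do not overlap."""
        lo, hi = max(range_a[0], range_b[0]), min(range_a[1], range_b[1])
        return (lo, hi) if lo <= hi else None

    # Initial feasible ranges for ship length (m) from three disciplines.
    team_ranges = {
        "hydrodynamics": (120.0, 180.0),
        "structures":    (110.0, 165.0),
        "cost":          (100.0, 150.0),
    }

    def shared_space(ranges):
        """Current design space: the intersection of every discipline's range."""
        space = (float("-inf"), float("inf"))
        for r in ranges.values():
            space = intersect(space, r)
            if space is None:
                raise ValueError("disciplines have eliminated all common options")
        return space

    print("initial shared range:", shared_space(team_ranges))      # (120.0, 150.0)

    # A decision by one discipline can only narrow its own range...
    team_ranges["structures"] = (125.0, 140.0)

    # ...so the shared space shrinks monotonically toward a converged design.
    print("after structures decision:", shared_space(team_ranges))  # (125.0, 140.0)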

The agent metaphor in design (Session Reading 5 and 6)

Current design software or tools offer no support for set-based design. Further, designers and management have little experience or organizational structures to support set-based design. Research is showing that the extension of a psychological model of behavior can provide some answers.

One of the more popular buzzwords is the term agent. If you talk to a software developer, he'll tell you that an agent is an autonomous piece of software that monitors the environment, waits for a change in the environment, and then does something in response which changes the environment. If you talk to a psychologist, an agent is an entity that participates in the world, responds to that environment, and, as a result, changes that environment. The point here is that an agent is not just a type of software.

If we take the psychological definition of an agent and specialize it to the design environment, we have the start of the use of agents to manage the design process. We can then constrain the environment to be one in which the design team will interact and communicate (e-mail, chats, etc.). The real trick is finding a mechanism that transcends the special focus of different disciplines and allows for a common language to measure movement through the design space. What is emerging is the use of a market economy as the mediation mechanism for design.

A market can then be created in which design characteristics can be traded. In this way, low prices would identify less important characteristics of the product and high prices would identify the more important characteristics. As designers buy and sell characteristics, the relative prices change, shrinking the design space.
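
A minimal sketch of how such market mediation might work (my own Python illustration, not a published SBD mechanism): each design characteristic carries a price, designers place buy or sell bids for the characteristics they care about, and prices move with net demand. The characteristic names, prices, and bids are invented.

    # Toy sketch of a design market: characteristics carry prices that move with
    # demand, so high prices flag the characteristics designers value most.
    # Characteristic names, starting prices, and bids are illustrative only.

    prices = {"top_speed": 10.0, "cargo_capacity": 10.0, "radar_signature": 10.0}

    def trade(prices, bids, step=0.1):
        """Adjust each price toward demand: net buying raises it, net selling lowers it."""
        new_prices = dict(prices)
        for characteristic, net_demand in bids.items():
            new_prices[characteristic] = max(0.0, prices[characteristic] * (1.0 + step * net_demand))
        return new_prices

    # One round of trading: the structures specialist "sells" speed to "buy" cargo
    # capacity, while the signature specialist defends radar signature.
    round_bids = {"top_speed": -2, "cargo_capacity": +3, "radar_signature": +1}
    prices = trade(prices, round_bids)
    print(prices)

    # Characteristics whose prices keep falling are candidates for relaxing,
    # which is one way the shared design space gets pruned.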

I've introduced a number of topics in this session which I hope show how the underlying technologies and processes are converging to create what has been (or is being) called SBD. Our final session will be a survey of applications of all of the technologies introduced in this course. The resulting set of technologies and processes is (inappropriately) called simulation-based design.

Simulation-based Design: Case Studies

What has been referred to as simulation-based design is the use of the technologies presented in this course to create an integrated, collaborative electronic design environment. No two implementations of these technologies are the same, because the needs and practices of the different industries and business environments are not the same. Hopefully, the case studies in this session will demonstrate this variation.

Implementing SBD

There is a tremendous emphasis on taking the information technologies we are developing and finding ways to exploit them to competitive advantage. We have gone beyond (or at least fully exploited) the notion of information technologies as mechanisms for streamlining transactional processes. Where we are going is toward the use of these technologies to support making decisions.

The tools are all available and maturing rapidly to accommodate almost anything we want to do. But we cannot use these technologies to best advantage in existing business processes. Their real advantage comes when we adopt new, nontraditional, and in some cases unconventional business models.

Introduction

As I've said before in this course, SBD as a concept encompasses a number of technologies and similar concepts. What constitutes simulation-based design depends on your perspective. The terms concurrent engineering, collaborative engineering, modeling and simulation, and integrated design are used to refer to similar situations, or even used interchangeably. It is important to recognize that the industrial world treats these terms and concepts as synonyms.

DARPA SBD Program (Reference 1)

The Defense Advanced Research Projects Agency (DARPA) started the SBD program five years ago. The SBD program has developed and tested a prototype digital knowledge environment for representing physical, mechanical, and operational characteristics of a complex system.

The deliverables from this program have been a set of tools which ease the integration of design, analysis, and evaluation software into a single environment. These tools have been based on emerging standards (CORBA), commercial technologies (Web browsers, Java, COTS design tools), and legacy tools. The program tested and demonstrated that distributed interactive simulation, physics-based modeling, and virtual environments can be integrated and applied to the design, acquisition, and life cycle support processes of systems.

The environment demonstrated by Lockheed and the other participants (General Dynamics Electric Boat and Newport News Shipbuilding among them) has changed the DoD view of the acquisition process for large, complex warfighting systems. The SBD program ended in August 1998, but has evolved into DoD's simulation-based acquisition (SBA) effort.

Lockheed SBD Project

After a proof-of-concept phase, Lockheed Martin was awarded the contract to develop and demonstrate the tools to provide enterprises with resources for a geographically distributed, standards-based synthetic environment framework for planning, developing, and optimizing products through improved information integration and virtual prototyping.

At various stages in the development of the tool set, Lockheed and its partners demonstrated integrated sets of design tools which:

allowed system and subsystem performance, cost, and schedule insights and tradeoffs

integrated domain solutions to product requirements

provided designers with continuous visibility into systems options

provided total life-cycle value assessment of design options

The Lockheed SBD team released Alpha versions of their SBD environment framework, the Core Processing System (CPS), and finally a Beta release in May 1998. Throughout the development of the tools, the team conducted user training and experimented with distributed object computing response-time performance, tool wrapping, and linking tools to product model data. The SBD environment was meant to be general purpose and allow for customization to meet domain-specific needs. The team ultimately released a version of the SBD environment that integrated a number of design tools for satellite design.

The Lockheed team developed CORBA servers which provided distributed functionality. For the SBD enterprise, they wrote two types of CORBA servers which provide interfaces through which the APIs of design tools are identified:

general-purpose SBD CORBA servers

domain specific CORBA servers

The SBD CORBA servers, developed as part of the SBD Core Processing System (CPS), provide basic functionality for communication, workflow management, and data storage for the SBD enterprise.

The User Domain Specific CORBA servers are tools that are written by an expert in the domain of choice. The CORBA server provides an interface to legacy, COTS, or SBD-specific applications. In any case, a programmer is responsible for creating an SBD-compliant IDL file to describe the API of the tool to be included. The SBD tools provided a library of IDLs from which a programmer may choose to create an interface for his tool. The programmer can also extend a selected IDL and/or populate the methods within a selected IDL with tool-specific instructions.
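
To give a feel for the wrapping idea, here is a minimal sketch of a generic wrapper interface that a domain expert might implement around a legacy analysis tool. It is written in plain Python rather than CORBA IDL and is not taken from the SBD CPS; the interface, the beam example, and all names are hypothetical.

    # Minimal sketch of the tool-wrapping idea: a common wrapper interface that a
    # domain expert implements around a legacy or COTS analysis tool.  Plain
    # Python stands in for a CORBA IDL interface; all names are hypothetical.

    from abc import ABC, abstractmethod

    class ToolWrapper(ABC):
        """Generic interface a design environment could call on any wrapped tool."""

        @abstractmethod
        def describe_inputs(self) -> dict:
            """Report what product-model data the tool needs."""

        @abstractmethod
        def run(self, inputs: dict) -> dict:
            """Execute the tool and return results keyed for the product model."""

    class BeamAnalysisWrapper(ToolWrapper):
        """Hypothetical wrapper around a command-line beam analysis code."""

        def describe_inputs(self):
            return {"length_m": "float", "load_kN": "float"}

        def run(self, inputs):
            # A real wrapper might launch the legacy executable here (for example
            # via subprocess.run) and parse its output files.  This sketch fakes
            # a result so it stays self-contained.
            deflection_mm = 0.1 * inputs["load_kN"] * inputs["length_m"] ** 3
            return {"max_deflection_mm": deflection_mm}

    if __name__ == "__main__":
        tool = BeamAnalysisWrapper()
        print(tool.describe_inputs())
        print(tool.run({"length_m": 2.0, "load_kN": 5.0}))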

Simulation-based Acquisition (Session Reading 1, Additional Reading 1, and Reference 2)

DoD Directive 5000.1, Part D.2.f states that "Models and Simulations (M&S) shall be used to reduce the time, resources, and risks of the acquisition process and to increase the quality of the systems being acquired." This is the signal to industry that the government is very serious about changing the way that it does business. The challenge is to develop models that facilitate acquisition and apply M&S tools.

There have been a number of SBA-related studies performed since 1994 that cite the benefits of SBD/SBA-like practices. The most recent document authored by the Joint Simulation Based Acquisition (JSBA) Task Force is "A Road Map for Simulation Based Acquisition" which was released in draft form in September 1998. In this report, the JSBA Task Force has developed a notional architecture for SBA, including operational, systems, and technical views.

The Task Force advocates the use of collaborative environments at multiple levels, with initial focus on product and mission areas. The notional architecture includes a top-level reference systems architecture to guide the flexible implementation of these collaborative environments to facilitate interoperability and reuse. The Task Force has proposed additional architectural concepts for product descriptions within this top-level framework, and for a joint DoD/Industry distributed resource repository, with access controls, to facilitate sharing of models, simulations, tools, data, and information, as appropriate, among the stakeholders in the acquisition process.

Rapid Design Exploration & Optimization (RaDEO) (Reference 3)

RaDEO is another DARPA program that is supporting research, development, and demonstration of enabling technologies, tools, and infrastructure for the next generation of design environments for complex electromechanical systems. The RaDEO program is focused on the information modeling and design tools needed to support rapid design of electromechanical systems. This program supports rapid design by developing technologies and tools to provide cognitive support to engineers in a design team.

RaDEO emphasizes the notion of a "tag team" design process in which each designer performs the functions in which they are most expert. They contribute design information, accessible through a project web, so that other designers can pick up wherever the others left off.

The RaDEO program's goal is to establish an infrastructure for an agile design environment. This design environment includes tools for comprehensive design and for collaboration across space and time. Some of the ongoing projects are:

Design Space Colonization, Stanford University

Automated Capture of Design Rationale, Stanford Research Institute

Design Information Retrieval Using Geometric Content, Virage, Inc.

Model-based Support of Distributed Collaborative Design, Stanford University

Flexible Environments for Conceptual Design, Rockwell Palo Alto

A Collaborative multidiscipline Optimization System, GE CR&D/Engineous Software, Inc.

Active Models in Support of Collaborative Design, Cornell University

CODES: Collaborative Design System for Integration of Information Webs with Design and Manufacturing Tools, Carnegie Mellon University

Integrated Product Definition Environment, Boeing Defense and Space Group

Multiphase Integrated Engineering Design (MIND), University of Utah

Responsible Agents for Prod/Proc Integ. Development, Industrial Technology Institute

Manufacturing Simulation Driver, Raytheon Electronic Systems and Deneb Robotics

Conceptual Process Design for Composite Materials, Michigan State University

Non-Marine case studies

U.S. Army Tank-Automotive Command (TACOM) (Reference 4)

The US Army Tank Command has established the TARDEC Virtual Prototyping Business Group (VPBG) within the parent organization. This group was officially formed as a business group in FY97 to bring together several M&S projects.

The VPBG is applying advanced computer modeling and simulation to reduce time and cost and to maximize quality during all phases of the vehicle system or product life cycle. They are using computer-generated representations of physical properties for engineering analysis and evaluation instead of physical prototypes and mockups. This method is enabling the Army to evaluate new vehicle systems or products earlier in the product development cycle without actually building a physical object. The process includes continuous customer participation in product development, resulting in a high degree of user and developer agreement prior to building hardware.

The virtual prototyping process as practiced by the Tank Command is grouped into two phases or environments:

the conceptual environment, and

the product environment.

This system was developed based on a military vehicle system life cycle; however, the principles and methods shown can be applied to any product. Any segment or portion of the methodology can be utilized individually depending on the need, encompassing the total life cycle, from initial requirement through development, production, support, and disposal.

NASA: Jet Propulsion Laboratory (JPL) Project Design Center (Session Reading 2, Reference 5)

Under Daniel Goldin, NASA has adopted the motto "cheaper, better, faster", resulting in such amazing successes as the Mars Pathfinder project. This project was delivered for $167M in about 30 months from concept to launch. Compare this to Viking, which took 10 years and $3B to land on Mars with less capability than Pathfinder.

As an example of this changed approach to project development, NASA established the JPL Project Design Center (PDC). The PDC is a facility which allows system-level design of missions, spacecraft, and mission operations systems in a concurrent/collaborative engineering team environment. It began operations in June of 1994 with the objectives of improving the quality and reducing the cost of JPL proposals and pre-projects.

The process established at the PDC differs from the traditional aerospace design process by allowing real-time decision making by a design team. It provides and assembles the supporting information needed to make design decisions using networked tools and databases. The design team at the PDC does the conceptual mission design and determines the associated costs in as few as two 3-hour sessions. PDC metrics show that mission designs are now produced in one tenth the time and at one half the cost of prior efforts.

One of the keys to the success of the PDC is that the team is a standing design team -- one team designs many missions. In FY96 the team (called Team X) completed 57 such studies. As a result, the team has the opportunity to learn to work together through a practiced collaborative process. Team X has used a software tool called CEM (Concurrent Engineering Methodology), which is a set of linked Excel spreadsheets. This allows data sharing between designers of spacecraft subsystems, streamlining mission design.

The CEM software is used during the conceptual part of the proposal development process. An information system architecture is being developed to support data sharing in later phases of project development. The challenge is that not only does the amount of shared information increase, but detailed design also occurs in diverse design tools on different platforms. This data sharing between detailed design tools is enabled by capturing design data in several non-redundant relational databases. These database applications perform data archiving, configuration management, and requirements checking, so that all design tools can access the latest version of the design.
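
As a rough illustration of that kind of shared, versioned design data with automatic requirements checking, here is a small Python sketch of my own; it is not the JPL system, and the parameters, limits, and values are invented.

    # Rough sketch of a shared design store: every change creates a new archived
    # version, and simple requirement checks run so design tools always see a
    # vetted latest design.  Parameter names, values, and limits are invented.

    class DesignStore:
        def __init__(self, requirements):
            self._versions = []            # archived history of the design
            self._requirements = requirements

        def commit(self, design: dict) -> int:
            """Check requirements, archive the design, and return its version number."""
            violations = [name for name, check in self._requirements.items()
                          if not check(design)]
            if violations:
                raise ValueError(f"design violates requirements: {violations}")
            self._versions.append(dict(design))
            return len(self._versions)     # version numbers start at 1

        def latest(self) -> dict:
            """What every design tool reads: the most recent vetted design."""
            return dict(self._versions[-1])

    requirements = {
        "mass_budget":  lambda d: d["dry_mass_kg"] <= 350.0,
        "power_margin": lambda d: d["power_gen_W"] >= 1.2 * d["power_load_W"],
    }

    store = DesignStore(requirements)
    store.commit({"dry_mass_kg": 300.0, "power_gen_W": 500.0, "power_load_W": 380.0})
    print(store.latest())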

NASA: Integrated Mission Design Center (Reference 6)

The JPL PDC is not an isolated example of the way NASA is embracing integrated collaborative design practices. In 1997, NASA established the Integrated Mission Design Center (IMDC) at the Goddard Space Flight Center. Discipline engineers are assigned to the IMDC, bringing their expertise and experience from their home organizations within Goddard. The IMDC then enables their collaboration by providing an engineering environment that facilitates real-time tradeoffs and seamless integration of engineering capabilities using Web-based information system links. These links integrate existing discipline tools resident on PC, Macintosh, and UNIX workstations into a cohesive mission design tool that shares information seamlessly during the design process. These tools include:

a mission archive which houses mission profiles and design data from mission phase to mission phase or from mission to mission,

a component catalog which stores information about commonly used components, and

a spacecraft catalog which includes costs, programmatic tools, and models.

Sandia National Laboratories (Session Reading 3, Reference 7)

Another example of SBD-type technologies is IMPRESARIO (Integrated Multiple Platform for REmote-sensing Simulation And Real-time Interactive Operation), software from the Decision Support Systems Department in Sandia's Information Systems Engineering Center. This software is based on CORBA-compliant distributed software marketed by Expersoft. IMPRESARIO provides an integrated modeling, simulation, and data visualization framework allowing coupling of independently created models of different physical processes into a unified system. It manages the interactions of these models, providing standard data sharing and input/output interfaces.

One demonstration of IMPRESARIO was the coupling of a solid dynamics code (ALEGRA), written in C++ and used to perform large-deformation structural response calculations, with a thermal analysis code (COYOTE), written in FORTRAN and used to perform conduction and radiation heat transfer calculations. In a demonstration thermomechanical problem, ALEGRA calculated mesh coordinate displacements and mechanical work in response to applied forces. COYOTE calculated temperature changes in response to heat conduction and enclosure radiation.

The physics coupling was accomplished by developing abstract data types which represented mesh coordinates, temperatures, and mechanical work. Objects of these types were exchanged through the IMPRESARIO interface between the codes at the beginning of each computational step. The applications started with identical finite element meshes and performed their respective calculations in parallel with each other. After the applications finished a step, ALEGRA sent updated mesh coordinates and mechanical work to COYOTE. COYOTE then updated its mesh and volumetric heating data before starting its next step. COYOTE then sent new temperatures to ALEGRA, which updated its material states before starting its next step. In this way the applications maintained consistency with each other, running in lockstep.
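
The lockstep exchange can be sketched in a few lines of Python. This is a simplified illustration of the pattern only, not the IMPRESARIO interface or the real ALEGRA/COYOTE codes; the "physics" is placeholder arithmetic, and the real demonstration ran the codes in parallel rather than alternating as this sketch does.

    # Simplified sketch of lockstep coupling between two solvers, in the spirit
    # of the ALEGRA/COYOTE demonstration described above.  Only the exchange
    # pattern is the point; the update rules are placeholders.

    def mechanical_step(mesh_coords, applied_force):
        """Stand-in mechanical step: displace the mesh and compute mechanical work."""
        new_coords = [x + 0.001 * applied_force for x in mesh_coords]
        work = sum(abs(a - b) for a, b in zip(new_coords, mesh_coords))
        return new_coords, work

    def thermal_step(temperatures, mechanical_work):
        """Stand-in thermal step: update temperatures from the deposited work."""
        return [t + 0.5 * mechanical_work for t in temperatures]

    mesh = [0.0, 1.0, 2.0]            # both codes start from identical meshes
    temps = [300.0, 300.0, 300.0]

    for step in range(3):
        # Each code advances one step, then sends its results to the other
        # before anyone starts the next step, keeping the two in lockstep.
        mesh, work = mechanical_step(mesh, applied_force=10.0)   # -> send mesh, work
        temps = thermal_step(temps, work)                        # -> send temps back
        print(f"step {step}: work={work:.4f}, temps={temps}")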

Madefast (Reference 8)

Another DARPA program was Manufacturing Automation and Design Engineering (MADE), which included a project called Madefast. This was an exercise in geographically distributed design and prototyping, conducted by members of the MADE research community, which tested software and protocols for engineering collaboration over the Internet.

Madefast became an extensive project web that included documentation about the design, the design process, and pointers to information sources, tools, and services. The Madefast pages became a large and complex interconnected web of subprojects. On-line documentation contained mixes of formal information, including geometric models, circuit diagrams, analyses, and test results, and informal information in the form of electronic mail messages, sketches, photographs, and video clips.

Madefast had no formal management structure or central authority. Discussion and consensus determined which groups took responsibility for which aspects of the design and subsystems. Madefast was a research program, and research programs are typically more open and share more information than commercial interactions.

Many of the software tools used in the MADE community and accessed through Madefast were sophisticated special purpose tools demanding experienced users. Madefast showed that when design and analysis tools are made available as services on the Internet, human expertise must also be provided to support the use of those tools.

One example cited was the manufacture of a composite part, which was to be made at the Michigan State University composites center. The easiest way to manufacture the mold was to have it machined at the University of Utah, which had the manufacturing library to generate CNC milling programs from a solid model. It was actually faster to have MSU send a fax of the mold to Utah and have the group at Utah build the mold from scratch than to convert the MSU CAD design into an IGES file and ship it to Utah for conversion.

Madefast concluded that there are needs in a distributed collaborative design environment for:

methods to navigate and organize on-line documentation,

methods for coordinating and managing the efforts of cooperating groups,

methods for helping groups to assimilate the organizational, domain, and sometimes national cultures of team members,

addressing the issue of security, as companies will not send confidential information (design data, project information, or billing information) over the Internet to a service provider if they think it can be intercepted,

standards that allow companies to use their in-house tools with any service provider, and

methods for easy interaction between users and services provided in the design environment.

Marine industry case studies

General Dynamics Electric Boat Division (Reference 9)

Electric Boat (EB) was one of the earliest proponents of SBD and deserves much of the credit for creating the concept. Their business situation dictated that they find a way to maintain the ability to design and build submarines, doing so at the lowest practical costs to the government. The result was the notion of building digital prototypes.

EB has worked at integrating their design systems into a coherent environment and process. They have done this not so much by making everything seamless as by rationalizing their design process. EB primarily uses CATIA to develop design concepts, which it then tests using over 300 analysis applications. The 3D models of their designs are transferred to simulation tools, which also provide visualization of the behaviors and arrangements of the product.

In addition to SBD, EB has participated in extending the information from the design environment into the rest of the shipyard and into their supplier base. Through the MARITECH Shipbuilding Industrial Information Protocol (SHIIP) and Shipbuilding Partners and Suppliers (SPARS) projects, EB is using the National Industrial Information Infrastructure Protocols (NIIIP) to create an agile ship design and construction environment by:

integrating multiple incompatible CAD, PDM, ERP, MES, and EDI systems,

eliminating redundant data entry efforts,

meeting the constraints of limited IT budgets and expertise,

addressing shortening project life spans, the need for rapid reconfiguration, and the need for rapid data/information exchange, and

improving productivity by enabling business processes.

The SHIIP and SPARS solutions are built using the results of the National Industrial Information Infrastructure Protocols (NIIIP) Project, which has worked to develop open industry software protocols for interoperability among heterogeneous computing environments across the national manufacturing base. NIIIP technology is built upon a set of emerging, existing, and de facto standards including:

Internet -- for connectivity,

OMG-CORBA -- for application interoperability,

ISO-STEP -- for representing and passing product data,

Workflow -- for managing processes across the Virtual Enterprise.

NSRDC Leading Edge Advanced Prototyping for Ships (LEAPS) Demonstration (Session Reading 4)

The LEAPS demonstration project was developed to meet the needs of the Navy ship concept assessment community. Many ship concept assessments are performed in isolation, with inconsistent data and inconsistent models, resulting in inconsistent answers because they do not account for technology coupling.

The LEAPS developers implemented an integrated virtual prototyping process for ship concept assessments which encompassed:

mission requirements identification;

concepts development;

performance modeling;

warfare analysis; and,

to a lesser extent, detailed design.

LEAPS enabled the ship concept analysis process to be performed by discipline-area experts with an understanding of the capabilities and limitations of their respective technical software. The project demonstrated integration of software from the following areas:

mission and system/equipment selection applications,

early-stage ship design/CAD modeling tools,

cost analysis tools,

signature analysis applications,

vulnerability analysis applications,

stability analysis applications, and

warfare analysis applications.

The LEAPS effort produced:

an object-oriented, flexible, extensible information meta-model (product model, or "smart product model") which can support collaborative teams performing ship and submarine concept studies

an application programmer's interface (API) to support development of "wrappers" linking applications software to the information model

an initial example of applications software linked via the LEAPS product model

an initial behavior model capability to link results of engineering analyses to dynamic simulations.

various graphical user interfaces, automated meshing tools, and visualization tools.

The LEAPS product model contains the information that is built up during the progression of the study. This allowed decision-makers to query the information and make concept performance and cost comparisons. The study product model also contains notes from the participants, providing documentation of design decisions.

The product model operated over multiple distributed computers and operating systems, while specific applications operated on only one type of computer. LEAPS did not mandate which analysis tools were used, leaving that decision to the analyst. C++ software wrappers were written to provide access to legacy codes, linking those applications to the product model.
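
As a loose illustration of what querying such a study product model might look like, here is a short Python sketch of my own; it is not the LEAPS API, and the concept names, attributes, and numbers are invented.

    # Loose sketch of querying a study product model for concept comparisons.
    # Concept names, attributes, and values are invented for illustration.

    study = {
        "concept_A": {"displacement_t": 9200, "cost_M": 780, "speed_kts": 30,
                      "notes": "baseline hull, conventional propulsion"},
        "concept_B": {"displacement_t": 8400, "cost_M": 730, "speed_kts": 28,
                      "notes": "reduced-signature topside, smaller plant"},
        "concept_C": {"displacement_t": 9900, "cost_M": 860, "speed_kts": 32,
                      "notes": "larger flight deck, added generators"},
    }

    def compare(study, metric):
        """Rank concepts by one metric so decision-makers can see the tradeoff."""
        return sorted(study.items(), key=lambda item: item[1][metric])

    for name, attrs in compare(study, "cost_M"):
        print(f"{name}: cost ${attrs['cost_M']}M, speed {attrs['speed_kts']} kts")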

MARITECH COMPASS (Reference 10)

Reflecting some of the results of the SBD program and changes in the role of CAD systems in the design environment, MARITECH funded the COMPASS (Component Object Model of Products/Processes for an Advanced Shipbuilding System) project. This is a design application based on the "smart" product model, built around a central product model. The central product model provides all participants in a design effort with a comprehensive understanding of the ship's geometry, arrangements, and other attributes. The COMPASS Team is exploring and implementing infrastructure technologies based on Microsoft and Intergraph standards and products. The project goal is to move the shipbuilding industry toward an integrated product and process development (IPPD) environment.

The projects mentioned in this lecture are just a few of the efforts going on to establish the next generation of engineering. Each has some unique, domain-specific problems, but in many cases they are addressing the same problems. Many of these problems are not technical, but organizational, administrative, and process problems.

Also note that, in building up to this discussion of SBD-type projects, we have covered the technologies that make SBD possible and practical. If you were to go looking for SBD itself, you would find a limited set of software that is buzzword compliant, and you would miss the whole point of what the concept really involves.