
The Performance of Object-Oriented Components for High-speed Network Programming

Douglas C. Schmidt

[email protected]

Washington University, St. Louis

1

Introduction

- Distributed object computing (DOC) frameworks are well-suited for certain communication requirements and certain network environments
  - e.g., request/response or oneway messaging over low-speed Ethernet or Token Ring

- However, current DOC implementations exhibit high overhead for other types of requirements and environments
  - e.g., bandwidth-intensive and delay-sensitive streaming applications over high-speed ATM or FDDI

2

Outline

- Outline communication requirements of the distributed medical imaging domain

- Compare performance of several network programming mechanisms:
  - Sockets
  - ACE C++ wrappers
  - CORBA (Orbix)
  - Blob Streaming

- Outline the Blob Streaming architecture and related patterns

- Evaluation and recommendations

3

Distributed Medical Imaging in Project Spectrum

[Diagram: diagnostic stations and modalities (CT, MR, CR) connected over ATM LANs and an ATM MAN to a central Blob Store, cluster Blob Stores, and DX Blob Stores]

4

Distributed Objects in Medical Imaging Systems

[Diagram: distributed objects (Blob Display, Name Server, Blob Router, Blob Processor, Network Time, Image Locator, Image Server, DICOM Printer, Blob Server, Blob Locator) connected via a software bus]

- Blob Servers have the following responsibilities and requirements:
  - Efficiently store/retrieve large medical images (Blobs)
  - Respond to queries from Blob Locators
  - Manage short-term and long-term blob persistence

5

DOC View of Project Spectrum

[Diagram: the Project Spectrum topology with DOC services overlaid: Blob Locators, Name Servers, a Blob Router, and a Time Server mediating access to the central, cluster, and DX Blob Stores over ATM LANs and an ATM MAN]

6

Motivation for Distributed Object Computing

- Simplify application development and interworking, e.g.,
  - CORBA provides higher-level integration than traditional "untyped TCP bytestreams"
  - ACE encapsulates lower-level networking and concurrency systems programming interfaces

- Provide a foundation for higher-level application collaboration
  - e.g., Windows OLE and the OMG Common Object Services Specification (COSS)

- Benefits for distributed programming are similar to those of OO languages for non-distributed programming
  - e.g., encapsulation, interface inheritance, and object-based exception handling

7

CORBA Architecture

[Diagram: CORBA architecture: a client invokes op(args) on an object implementation through the Dynamic Invocation Interface, IDL stubs, and the ORB interface on the client side, and through IDL skeletons and the Object Adapter on the server side, all over the Object Request Broker; common object services shown include Naming, Event, Lifecycle, Security, and Trader]

8

CORBA Components

- The CORBA specification comprises several parts:

  1. An Object Request Broker (ORB)
  2. An Interface Definition Language (IDL)
  3. A Static Invocation Interface (SII)
  4. A Dynamic Invocation Interface (DII)
  5. A Dynamic Skeleton Interface (DSI)

- Other OMG documents describe common object services built upon CORBA
  - e.g., CORBAservices -> Event service, Naming service, Lifecycle service

9

ACE Architecture

[Diagram: ACE layered architecture: general UNIX and Win32 services (process/thread subsystem, virtual memory subsystem, communication subsystem, sockets/TLI, named pipes, STREAM pipes, select/poll, dynamic linking, memory mapping, System V IPC, thread library) wrapped by C APIs and C++ wrappers (SOCK_SAP/TLI_SAP, SPIPE_SAP, FIFO_SAP, Thread Manager, synch wrappers, Mem Map, Shared Malloc, SysV wrappers), topped by frameworks and class categories (Reactor, Service Configurator, Acceptor, Connector, ADAPTIVE Service eXecutive (ASX), Service Handler, CORBA Handler, Log Msg) and distributed services (Name, Token, Logging, and Gateway servers)]

- A set of C++ wrappers, class categories, and frameworks based on design patterns
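For instance, a client using the SOCK_SAP C++ wrapper classes named in the diagram might look roughly like the sketch below. This is illustrative only: the class and method names (ACE_SOCK_Connector, ACE_SOCK_Stream, ACE_INET_Addr, send_n) follow ACE's conventions, but the surrounding setup is an assumption, not code from this talk.

  #include "ace/SOCK_Connector.h"
  #include "ace/SOCK_Stream.h"
  #include "ace/INET_Addr.h"

  // Minimal sketch: actively connect and send one buffer using the
  // ACE C++ socket wrappers instead of raw socket()/connect()/write().
  int send_blob_chunk (const char *host, u_short port,
                       const char *buf, size_t len)
  {
    ACE_INET_Addr server_addr (port, host);  // addressing wrapper
    ACE_SOCK_Connector connector;            // factory for connected streams
    ACE_SOCK_Stream stream;                  // data-transfer wrapper

    if (connector.connect (stream, server_addr) == -1)
      return -1;                             // errno describes the failure

    // send_n() loops until the whole buffer is written or an error occurs.
    ssize_t n = stream.send_n (buf, len);

    stream.close ();
    return n == (ssize_t) len ? 0 : -1;
  }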

10

Motivation for CORBA and ACE on Project Spectrum

- Two crucial issues for the overall communication infrastructure: flexibility and performance

- Flexibility motivates the use of a distributed object computing framework like CORBA to transport many formats of data
  - e.g., HL7, DICOM, Blobs, domain objects, etc.

- Performance requires that we transport this data as quickly as current technology allows

11

Key Research Question

Can CORBA and ACE be used to transfer medical images efficiently over high-speed networks?

- Our goal was to determine this empirically before adopting distributed object computing wholesale

12

Performance Experiments

- Enhanced version of TTCP
  - TTCP measures end-to-end bulk data transfer with acknowledgements
  - Enhanced version tests C sockets, ACE C++ wrappers, CORBA, and Blob Streaming

- Parameters varied
  - 100 Mbytes of data transferred in various chunk sizes
  - Socket queues were 8k (default) and 64k (maximum)
  - Network was 155 Mbps ATM

- Compiler was Sun C++ 4.0.1 at the highest optimization level

13

Network/Host Environment

[Diagram: Bay Networks LattisCell ATM switch (16 ports, OC3 155 Mbps/port, 9,180-byte MTU) connecting SPARCstation 20 Model 712s hosts equipped with ENI ATM adaptors and Ethernet]

14

TTCP Configuration for C and ACE C++ Wrappers

[Diagram: sender and receiver connected through the ATM switch; 1: write(buf) at the sender, 2: forward through the switch, 3: read(buf) at the receiver, 4: ack back to the sender]
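To make the numbered steps concrete, a stripped-down sender loop for a TTCP-style transfer might look like the sketch below. The buffer size, the one-byte acknowledgement, and the function name are assumptions for illustration; they are not taken from the enhanced TTCP code used in these experiments.

  #include <unistd.h>

  #define CHUNK_SIZE (1024 * 1024)   /* 1 Mbyte chunks, as in the tests */

  /* Sender side: write fixed-size chunks (step 1), then wait for a
     final acknowledgement (step 4).  A production sender would also
     handle short writes and retry; this sketch simply fails. */
  int send_bulk_data (int sock, size_t total_bytes)
  {
    static char buf[CHUNK_SIZE];
    size_t sent = 0;

    while (sent < total_bytes)
      {
        size_t n = total_bytes - sent;
        if (n > CHUNK_SIZE)
          n = CHUNK_SIZE;
        if (write (sock, buf, n) != (ssize_t) n)   /* step 1: write(buf) */
          return -1;
        sent += n;
      }

    char ack;
    if (read (sock, &ack, 1) != 1)                 /* step 4: wait for ack */
      return -1;
    return 0;
  }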

15

TTCP Configuration for CORBA Implementation

[Diagram: the sender invokes 1: send(buf) on the TTCP stub, the request is 2: forwarded through the ATM switch to the TTCP skeleton, which 3: delivers send(buf) to the TTCP implementation, and 4: an ack returns to the sender]

16

TTCP Configuration for Blob Streaming

[Diagram: the sender's source Blob Proxy (1: send(buf), 2: connect, 3: write(buf)) transfers data through the ATM switch (4: forward) to the destination Blob Proxy at the Blob Store (5: read(buf), 6: send(buf) via the Blob_Xport stub/skeleton, 7: ack)]

17

Performance over ATM

[Plot: "C, ACE C++, Blob Streaming, and Orbix over ATM"; throughput in Mbits/sec (20 to 65) versus blob chunk size in megabytes (0 to 30), with curves for C, ACE, Blob Streaming, and Orbix at both 64k and 8k socket window sizes]

18

Primary Sources of Overhead

- Data copying

- Demultiplexing

- Memory allocation

- Presentation layer formatting

19

High-Cost Functions

- C and ACE C++ Tests
  - Transferring 64 Mbytes with 1 Mbyte buffers

Test              %Time   #Calls   Name
----------------------------------------------
C sockets          93.9      112   write
(sender)            3.6      110   read
C sockets          93.2   13,085   read
(receiver)          4.5        3   getmsg
ACE C++ wrapper    94.4      112   write
(sender)            3.2      110   read
ACE C++ wrapper    93.9   12,984   read
(receiver)          5.6        3   getmsg

20

High-Cost Functions (cont'd)

- Orbix String and Sequence

Test              %Time   #Calls   Name
----------------------------------------------
Orbix Sequence     53.5      127   write
(sender)           35.1      223   read
                    7.3    1,108   memcpy
Orbix Sequence     85.6   12,846   read
(receiver)         12.4    1,064   memcpy
Orbix String       45.0      127   write
(sender)           35.1      223   read
                   10.8    1,315   strlen
                    6.0    1,108   memcpy
Orbix String       70.7   12,443   read
(receiver)         18.1    2,142   strlen
                   10.0    1,064   memcpy

21

High-Cost Functions (cont'd)

- Blob Streaming

Test              %Time   #Calls   Name
----------------------------------------------
Blob Streaming     48.8      327   write
(sender)           44.8      232   read
                    1.3    2,055   memcpy
Blob Streaming     77.2   12,546   read
(receiver)         16.4   12,734   memcpy
                    1.4      202   write

22

Overview of Blob Streaming

- Blob Streaming provides developers with a uniform interface for operations on multiple types of Binary Large OBjects (Blobs)

- Two primary goals:

  1. Improved abstraction
     - Shield developers from knowledge of blob location (e.g., memory vs. "local" files vs. remote network)

  2. Maximize performance
     - Transport blobs as efficiently as current technology allows

23

Blob Streaming System Architecture

[Diagram: sender and receiver exchange images with a Blob Streaming Server and its Blob Store across the ATM switch via 1: push(image) and 2: pull(image); a control channel (e.g., CORBA or Network OLE) is separate from the data channel (e.g., TCP or lightweight ATM)]

24

Blob Streaming Architecture

- Blob Streaming components allow transparent use of resources through uniform blob interfaces

- Blob Streaming supports the following:
  - Blob location
    - e.g., smart caches to decouple transfers from location algorithms
  - Blob routing
    - e.g., context-based routing
  - Source- and destination-independent blob transport, e.g.,
    - Store and retrieve from remote or local databases
    - Abstract operations like reads/writes may use local file reads/writes or remote reads/writes via sockets (see the sketch below)
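As a rough illustration of that idea, a uniform blob interface might be shaped like the following C++ sketch. The class and method names (Blob, File_Blob, Socket_Blob) are hypothetical stand-ins, not the actual Blob Streaming classes.

  #include <cstdio>
  #include <sys/socket.h>
  #include <sys/types.h>

  // Hypothetical uniform interface: callers read and write blobs without
  // knowing whether the bytes live in a local file or behind a socket.
  class Blob
  {
  public:
    virtual ~Blob () {}
    virtual ssize_t read (void *buf, size_t len) = 0;
    virtual ssize_t write (const void *buf, size_t len) = 0;
  };

  // One concrete source/destination: a local file.
  class File_Blob : public Blob
  {
  public:
    File_Blob (std::FILE *fp) : fp_ (fp) {}
    virtual ssize_t read (void *buf, size_t len)
    { return (ssize_t) std::fread (buf, 1, len, fp_); }
    virtual ssize_t write (const void *buf, size_t len)
    { return (ssize_t) std::fwrite (buf, 1, len, fp_); }
  private:
    std::FILE *fp_;
  };

  // Another: a connected socket descriptor obtained elsewhere.
  class Socket_Blob : public Blob
  {
  public:
    Socket_Blob (int fd) : fd_ (fd) {}
    virtual ssize_t read (void *buf, size_t len)
    { return ::recv (fd_, buf, len, 0); }
    virtual ssize_t write (const void *buf, size_t len)
    { return ::send (fd_, buf, len, 0); }
  private:
    int fd_;
  };

Application code that copies one blob to another can then be written once against the abstract Blob interface, regardless of where either end actually lives.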

25

Blob Streaming Architecture Design Goals

- Goal: decouple applications from the OS platform
  - e.g., applications can be shielded from the fact that the current version is implemented for UNIX
    - Thus, Blob Streaming can be ported to Windows NT or OS/2 without changing applications
  - Platform-specific operations are hidden behind abstract interfaces
    - e.g., Win32 WaitForMultipleObjects and UNIX select

- Advantages
  - Portability and extensibility

26

Blob Streaming Architecture Design Goals (cont'd)

- Goal: application independence from the transport mechanism
  - Switch transports at any stage of development without affecting application code
    - Presently using CORBA and TCP/IP as transport mechanisms
      - However, none of these mechanisms are exposed to programmers
      - e.g., can use Network OLE
    - As transport technology improves, Blob Streaming can change without affecting applications
      - e.g., "direct ATM"

- Advantages
  - Portability, extensibility, and performance tuning

27

Design Patterns in Blob Streaming

[Diagram: a system of strategic and tactical patterns used in Blob Streaming: Reactor, Active Object, Half-Sync/Half-Async, Acceptor, Connector, Router, Service Configurator, Service Stream, Thread-Specific Storage, External Polymorphism, Factory Method, Iterator, Adapter, and Strategy/Bridge]

- Blob Streaming is based upon a system of design patterns

28

The Reactor Pattern

- Intent
  - An object behavioral pattern that decouples event demultiplexing and event handler dispatching from the services performed in response to events

- This pattern resolves the following forces for event-driven software:
  - How to demultiplex multiple types of events from multiple sources of events efficiently within a single thread of control
  - How to extend application behavior without requiring changes to the event dispatching framework
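The dispatching loop shown in the structure diagram on the next slide can be summarized in a few lines of C++. This is a deliberately simplified sketch built on select(): the class and method names mirror the pattern participants (Reactor, Event_Handler, handle_events, register_handler), but the bodies are assumptions, not the ACE implementation, and a real reactor also handles output, signal, and timer events and copes with handlers being removed during dispatch.

  #include <sys/select.h>
  #include <map>

  // Simplified Reactor sketch: one handler per descriptor, input events only.
  class Event_Handler
  {
  public:
    virtual ~Event_Handler () {}
    virtual void handle_input (int handle) = 0;   // called when data arrives
  };

  class Reactor
  {
  public:
    void register_handler (int handle, Event_Handler *eh)
    { handlers_[handle] = eh; }

    void remove_handler (int handle)
    { handlers_.erase (handle); }

    // Block on select() and dispatch each ready descriptor to its handler.
    void handle_events ()
    {
      fd_set read_set;
      FD_ZERO (&read_set);
      int max_handle = -1;
      for (std::map<int, Event_Handler *>::iterator i = handlers_.begin ();
           i != handlers_.end (); ++i)
        {
          FD_SET (i->first, &read_set);
          if (i->first > max_handle)
            max_handle = i->first;
        }

      if (select (max_handle + 1, &read_set, 0, 0, 0) <= 0)
        return;

      for (std::map<int, Event_Handler *>::iterator i = handlers_.begin ();
           i != handlers_.end (); ++i)
        if (FD_ISSET (i->first, &read_set))
          i->second->handle_input (i->first);     // application callback
    }

  private:
    std::map<int, Event_Handler *> handlers_;
  };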

29

Structure of the Reactor Pattern

[Class diagram: a Reactor (handle_events(), register_handler(h), remove_handler(h), expire_timers()) manages n Handles, n Event_Handlers, and a Timer_Queue (schedule_timer(h), cancel_timer(h), expire_timer(h)); the abstract Event_Handler interface (handle_input(), handle_output(), handle_signal(), handle_timeout(), get_handle()) is application-independent, while concrete Event_Handlers are application-dependent]

The annotated dispatch loop reads:

  select (handles);
  foreach h in handles {
    if (h is output handler) h->handle_output ();
    if (h is input handler) h->handle_input ();
    if (h is signal handler) h->handle_signal ();
  }
  this->expire_timers ();

- Participants in the Reactor pattern

30

Collaboration in the Reactor Pattern

[Sequence diagram: in initialization mode, the main program creates the Reactor, registers a concrete Event_Handler with register_handler(callback), and the Reactor extracts its handle via get_handle(); in event handling mode, handle_events() drives select() and, for each event, dispatches handle_input() when data arrives, handle_output() when it is OK to send, handle_signal() when a signal arrives, and handle_timeout() when a timer expires; remove_handler(callback) triggers handle_close() for cleanup]

31

Using the Reactor for Blob Streaming

[Diagram: the Reactor sits above the OS event demultiplexing interface at the kernel level; registered Blob Handlers (Event_Handlers in a handle table) run at the framework level, and a Blob Processor with a Message Queue runs at the application level; the flow is 1: handle_input(), 2: recv_request(msg), 3: putq(msg), 4: getq(msg), 5: svc(msg)]

32

The Active Object Pattern

- Intent
  - Decouples method execution from method invocation and simplifies synchronized access to shared resources by concurrent threads

- This pattern resolves the following forces for concurrent communication software:
  - How to allow blocking operations (such as read and write) to execute concurrently
  - How to simplify concurrent access to shared state
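A minimal way to see the decoupling is a task that owns a message queue and a worker thread: callers enqueue work and return immediately, while the thread dequeues and executes it. The sketch below uses standard C++ threads rather than the ACE Task/Message_Queue classes shown on the following slides, so the names are illustrative only.

  #include <condition_variable>
  #include <functional>
  #include <mutex>
  #include <queue>
  #include <thread>

  // Sketch of an active object: put() enqueues work (method invocation),
  // while a private thread dequeues and runs it (method execution).
  class Active_Task
  {
  public:
    Active_Task () : done_ (false), worker_ (&Active_Task::svc, this) {}

    ~Active_Task ()
    {
      { std::lock_guard<std::mutex> g (lock_); done_ = true; }
      cond_.notify_one ();
      worker_.join ();
    }

    // Invocation side: never blocks on the work itself.
    void put (std::function<void ()> msg)
    {
      { std::lock_guard<std::mutex> g (lock_); queue_.push (std::move (msg)); }
      cond_.notify_one ();
    }

  private:
    // Execution side: runs in its own thread, one message at a time.
    void svc ()
    {
      for (;;)
        {
          std::function<void ()> msg;
          {
            std::unique_lock<std::mutex> g (lock_);
            cond_.wait (g, [this] { return done_ || !queue_.empty (); });
            if (queue_.empty ())
              return;                   // done_ was set and queue drained
            msg = std::move (queue_.front ());
            queue_.pop ();
          }
          msg ();                       // shared state is touched only here
        }
    }

    std::mutex lock_;
    std::condition_variable cond_;
    std::queue<std::function<void ()>> queue_;
    bool done_;
    std::thread worker_;
  };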

33

Structure of the Active Object Pattern in ACE

[Class diagram: the application-independent classes are Event_Handler (handle_input(), handle_output(), handle_exception(), handle_signal(), handle_timeout(), handle_close(), get_handle()=0), Shared_Object (init()=0, fini()=0, info()=0), Service_Object (suspend()=0, resume()=0), and the Task template parameterized by a SYNCH strategy (open()=0, close()=0, put()=0, svc()=0), which owns a Message_Queue<SYNCH>; application-specific services subclass Task]

34

Collaboration in ACE Active Objects

[Diagram: three active Tasks (t1, t2, t3), each with its own Message_Queue; 1: put(msg) on t2, 2: enqueue(msg) onto its queue, 3: svc() runs in t2's thread, 4: dequeue(msg), 5: put(msg) passes the message on to t3]

35

Using the Active Object Pattern for Blob Streaming

[Diagram: as on slide 32, the Reactor dispatches 1: handle_input() to a registered Blob Handler, which 2: recv_request(msg) and 3: putq(msg) onto a Message Queue; the Blob Processor, running as an active object at the application level, 4: getq(msg) and 5: svc(msg)]

36

Half-Sync/Half-Async Pattern

- Intent
  - An architectural pattern that decouples synchronous I/O from asynchronous I/O in a system to simplify programming effort without degrading execution efficiency

- This pattern resolves the following forces for concurrent communication systems:
  - How to simplify programming for higher-level communication tasks
    - These are performed synchronously (via Active Objects)
  - How to ensure efficient lower-level I/O communication tasks
    - These are performed asynchronously (via the Reactor)
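Combining the two earlier sketches gives the shape of the pattern: an asynchronous layer (the Reactor sketch) reads data as it arrives and enqueues messages, and a synchronous layer (the Active_Task sketch) dequeues and processes them in its own thread. A hypothetical handler tying them together might look like this; Blob_Handler and the request format are placeholders, not the actual Blob Streaming classes.

  #include <string>
  #include <unistd.h>
  // Reuses the Event_Handler sketch (async layer) and the Active_Task
  // sketch (sync layer) from the earlier slides.

  // Async half: runs inside the Reactor's event loop; it must never block
  // for long, so it only reads what is available and enqueues it.
  class Blob_Handler : public Event_Handler    // hypothetical name
  {
  public:
    Blob_Handler (Active_Task &processor) : processor_ (processor) {}

    virtual void handle_input (int handle)
    {
      char buf[8 * 1024];
      ssize_t n = ::read (handle, buf, sizeof buf);
      if (n <= 0)
        return;
      std::string msg (buf, (size_t) n);

      // Queueing layer: hand the message to the synchronous half.
      processor_.put ([msg] { /* parse and process the blob request */ });
    }

  private:
    Active_Task &processor_;
  };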

38

Structure of the Half-Sync/Half-Async Pattern

[Diagram: three layers: a synchronous task layer (Sync Task 1, 2, 3), a queueing layer (message queues), and an asynchronous task layer (Async Task fed by external event sources); annotated with 1, 4: read(data), 2: interrupt, and 3: enqueue(data)]

39

Collaborations in the Half-Sync/Half-Async Pattern

[Sequence diagram: in the async phase, the external event source delivers a notification() and the Async Task read(msg)s and work()s on it; in the queueing phase, the message is enqueue(msg)d onto the Message Queue; in the sync phase, the Sync Task dequeues it with read(msg) and work()s on it]

- This illustrates input processing (output processing is similar)

40

Using the Half-Sync/Half-Async Pattern for Blob Streaming

[Diagram: at the async task level, the Reactor dispatches 1: handle_input() to a Blob Handler, which 2: recv_request(msg) and 3: putq(msg) onto a Message Queue at the queueing level; at the sync task level, the Blob Processor 4: getq(msg) and 5: svc(msg)]

41

The Acceptor Pattern

- Intent
  - Decouple the passive initialization of a service from the tasks performed once the service is initialized

- This pattern resolves the following forces for network servers using interfaces like sockets or TLI:

  1. How to reuse passive connection establishment code for each new service
  2. How to make the connection establishment code portable across platforms that may contain sockets but not TLI, or vice versa
  3. How to ensure that a passive-mode descriptor is not accidentally used to read or write data
  4. How to enable flexible policies for creation, connection establishment, and concurrency
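The factory-style decomposition shown on the following slides (make_svc_handler(), accept_svc_handler(), activate_svc_handler()) can be sketched as a small template over BSD sockets. The concrete types, error handling, and reactor hookup are simplified assumptions; only the three-step shape follows the pattern described here.

  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  // SVC_HANDLER must provide: int open (int connected_fd);
  template <class SVC_HANDLER>
  class Acceptor
  {
  public:
    virtual ~Acceptor () {}

    // Initialize the passive-mode endpoint; it is only ever accept()ed on,
    // never used for data transfer (force 3 above).
    int open (unsigned short port)
    {
      listen_fd_ = ::socket (AF_INET, SOCK_STREAM, 0);
      sockaddr_in addr = {};
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = INADDR_ANY;
      addr.sin_port = htons (port);
      if (::bind (listen_fd_, (sockaddr *) &addr, sizeof addr) == -1)
        return -1;
      return ::listen (listen_fd_, 5);
    }

    // Called when a connection event arrives (e.g., from a reactor).
    int handle_input ()
    {
      SVC_HANDLER *sh = make_svc_handler ();     // 1: creation policy
      int fd = accept_svc_handler ();            // 2: connection policy
      if (fd == -1)
        { delete sh; return -1; }
      return activate_svc_handler (sh, fd);      // 3: concurrency policy
    }

  protected:
    // Each step is a virtual hook, so subclasses can vary the policy (force 4).
    virtual SVC_HANDLER *make_svc_handler () { return new SVC_HANDLER; }
    virtual int accept_svc_handler () { return ::accept (listen_fd_, 0, 0); }
    virtual int activate_svc_handler (SVC_HANDLER *sh, int fd)
    { return sh->open (fd); }                    // e.g., register with a reactor

  private:
    int listen_fd_;
  };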

42

Structure of the Acceptor Pattern

[Class diagram: at the reactive layer, the Reactor dispatches to Event_Handler (handle_input()); at the connection layer, the Acceptor template, parameterized by SVC_HANDLER and PEER_ACCEPTOR, provides make_svc_handler(), accept_svc_handler(), activate_svc_handler(), open(), and handle_input(), and the Svc_Handler template, parameterized by PEER_STREAM, provides open(); at the application layer, a Concrete_Acceptor instantiates the Acceptor with a Concrete_Svc_Handler and SOCK_Acceptor, and the Concrete_Svc_Handler uses a SOCK_Stream; handle_input() runs: sh = make_svc_handler(); accept_svc_handler (sh); activate_svc_handler (sh);]

43

Collaboration in the Acceptor Pattern

[Sequence diagram: in the endpoint initialization phase, the server open()s the passive SOCK_Acceptor endpoint, registers the Acceptor with the Reactor via register_handler(acc) (which extracts its handle with get_handle()), and starts the event loop with handle_events()/select(); in the service initialization phase, a connection event triggers the Acceptor's handle_input(), which runs sh = make_svc_handler(); accept_svc_handler(sh); activate_svc_handler(sh), open()s the Svc_Handler, and register_handler(sh)s it for client I/O; in the service processing phase, data events drive handle_input()/svc() until client shutdown or server shutdown invokes handle_close()]

- The Acceptor factory creates, connects, and activates a Svc_Handler

44

Using the Acceptor Pattern for Blob Streaming

[Diagram: the Acceptor acts as the passive listener registered with the Reactor; on each new connection it runs 1: sh = make_svc_handler(), 2: accept_svc_handler(sh), 3: activate_svc_handler(sh), producing active Blob Handler Svc_Handlers that feed the Blob Processor]

45

Evaluation and Recommendations

- Understand communication requirements and network/host environments

- Measure performance empirically before adopting a communication model
  - Low-speed networks often hide performance overhead

- Insist that CORBA implementors provide hooks to manipulate options
  - e.g., setting the socket queue size with ORBeline was hard

- Increase the size of socket queues to the largest value supported by the OS

- Tune the size of transmitted data buffers to match the MTU of the network
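For reference, enlarging the socket queues on a BSD-sockets platform looks roughly like the sketch below; whether a given CORBA implementation exposes a hook to do the same on its internal sockets is exactly the issue raised above. The 64k value mirrors the maximum used in these experiments, and the OS may clamp it.

  #include <sys/socket.h>
  #include <sys/types.h>

  /* Request 64k send and receive queues on a socket;
     returns 0 on success, -1 on failure. */
  int enlarge_socket_queues (int sock)
  {
    int size = 64 * 1024;

    if (setsockopt (sock, SOL_SOCKET, SO_SNDBUF, &size, sizeof size) == -1)
      return -1;
    if (setsockopt (sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof size) == -1)
      return -1;
    return 0;
  }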

46

Evaluation and Recommendations (cont'd)

- Use IDL sequences rather than IDL strings to avoid unnecessary data access (i.e., strlen)

- Use write/read rather than send/recv on SVR4 platforms

- Long-term solution:
  - Optimize DOC frameworks
  - Add streaming support to the CORBA specification

- Near-term solution for CORBA overhead on high-speed networks:
  - e.g., Blob Streaming integrates CORBA with ACE

47

Optimizations

[Diagram: the CORBA architecture from slide 8 (client, IDL stubs, DII, ORB interface, IDL skeleton, Object Adapter, ORB, OS kernels, and network adapters) annotated with where optimizations apply: GIOP transport protocol, data copying, operation demultiplexing, presentation layer, I/O subsystem, and network adapter optimizations]

- To be effective for use with performance-critical applications over high-speed networks, CORBA implementations must be optimized

48

Obtaining ACE

- The ADAPTIVE Communication Environment (ACE) is an OO toolkit designed according to key network programming patterns

- All source code for ACE is freely available
  - Anonymously ftp to wuarchive.wustl.edu
  - Transfer the files /languages/c++/ACE/*.gz and gnu/ACE-documentation/*.gz

- Mailing list
  - [email protected]
  - [email protected]

- WWW URL
  - http://www.cs.wustl.edu/~schmidt/ACE.html

49