
Page 1: Key  design time  challenges

1

Key design time challenges

• Convert commander’s intent, along with static/dynamic environment, into QoS policies

• Quantitatively evaluate & explore complex & dynamic QoS problem & solution spaces to evolve effective solutions

• Assure QoS in face of interactive and/or autonomous adaptation to fluid environment

Pollux & RACE R&D Challenges: Design Time

Goal: Significantly ease task of creating new QoS-enabled information management TSoS & integrating them with existing artifacts in new/larger contexts/constraints

[Slide graphic: an Artifact Generator producing a Configuration Specification, Analysis Tool input, and Code; the Code artifact is illustrated with the following Java session-management excerpt, reflowed here for readability.]

  if (inactiveInterval != -1) {
    int thisInterval =
        (int) (System.currentTimeMillis() - lastAccessed) / 1000;
    if (thisInterval > inactiveInterval) {
      invalidate();
      ServerSessionManager ssm = ServerSessionManager.getManager();
      ssm.removeSession(this);
    }
  }

  private long lastAccessedTime = creationTime;

  /**
   * Return the last time the client sent a request associated with this
   * session, as the number of milliseconds since midnight, January 1, 1970
   * GMT. Actions that your application takes, such as getting or setting
   * a value associated with the session, do not affect the access time.
   */
  public long getLastAccessedTime() {
    return (this.lastAccessedTime);
  }

  this.lastAccessedTime = time;

Page 2: Key  design time  challenges

2

Key run time challenges

• Convert commander’s intent, along with static/dynamic environment, into QoS policies

• Enforce integrated QoS policies at all layers (e.g., application, middleware, OS, transport, network) to support COIs within multiple domains

• Manage resources in the face of intermittent communication connectivity
– e.g., power, mission, environments, silence/chatter

• Compensate for limited resources in tactical environments
– e.g., bandwidth, compute cycles, primary/secondary storage

Goal: Regulating & adapting to (dis)continuous changes in difficult runtime environments

Pollux & RACE R&D Challenges: Run Time

Page 3: Key  design time  challenges

3

Resource Allocation & Control Engine (RACE)

• Resource management framework atop CORBA Component Model (CCM) middleware (CIAO/DAnCE)

• Motivating Applications:

– NASA’s Magnetospheric Multi-scale (MMS) mission
• Spacecraft constellation
• Adaptation to varying:
– Regions of interest (ROI)
– Modes of operation

– Total Ship Computing Environment (TSCE)
• ~1000 nodes
• ~5000 applications
• Task (re)distribution
• Switch modes of operation
• Adaptation to
– Loss of resources
– Changing task priorities

Page 4: Key  design time  challenges

4

RACE MDD Tools – Design Time Challenges

• Carry out commander’s intent by
– focusing on generic functionality in Platform Independent Model (PIM)
– using transformation engine to generate detailed Platform Specific Model (PSM)

[Slide graphic: a Platform Independent Real-Time Policies Model transformed into a Platform Specific (CCM) Real-Time Policies Model.]

Page 5: Key  design time  challenges

5

• Carry out commander’s intent by
– focusing on generic functionality in Platform Independent Model (PIM)
– using transformation engine to generate detailed Platform Specific Model (PSM)

• Explore prob & soln space by
– easily modifying visual model
– passing generated artifacts to Bogor Model Checker
– getting all possible valid & invalid states

RACE MDD Tools – Design Time Challenges

Page 6: Key  design time  challenges

6

• Carry out commander’s intent by
– focusing on generic functionality in Platform Independent Model (PIM)
– using transformation engine to generate detailed Platform Specific Model (PSM)

• Assure QoS by
– performing safety, validity & behavioral checks
– passing set of valid states to RACE middleware

• Explore prob & soln space by
– easily modifying visual model
– passing generated artifacts to Bogor model checker
– getting all possible valid & invalid states

Bogor Model Checker (with RT Extensions)


Distributed Application with RT Config Description

RACE MDD Tools – Design Time Challenges

Page 7: Key  design time  challenges

7

RACE Middleware – Run Time Challenges

[Slide graphic: RACE (Allocators, Controllers, Configurators) sits atop a QoS-enabled middleware infrastructure; QoS Monitors feed it application performance data and Resource Monitors feed it resource utilization data; a component deployment plan drives it to deploy and manage components for applications with time-varying resource and QoS requirements in a system domain with time-varying resource availability.]

• Carry out commander’s intent by executing deployment plan from mission planner

• Enforce QoS policies (at the middleware level) with:
– multiple, pluggable algorithms for allocation & control
– automation of D & C with uniform interfaces

• Manage resources by monitoring & adapting component resource allocation

• Compensate for limited resources by
– migrating/swapping components
– adjusting application parameters (QoS settings)

Page 8: Key  design time  challenges

8

Architectural Overview of RACE

Input Adapter

• Application metadata can be represented in various formats

–XML descriptors

–In memory C++/IDL structure

• Converts application metadata into in memory IDL structure used by RACE

Orchestrator

• Type & requirements of resources may vary for different applications

• Resource utilization overhead may be associated with allocation/control algorithms themselves

• Examines application metadata & selects appropriate allocation & control algorithms based on application characteristics & resource availability

[Slide graphic: inside RACE, an Input Adapter, Orchestrator, Conductor, Allocators, Controllers, Configurators, Historian, and a Centralized QoS Monitor; a deployment plan flows in, components are deployed through the middleware Target Manager, and Application Monitors and Resource Monitors feed back application QoS and system resource utilization from a system domain with time-varying resource availability.]

Page 9: Key  design time  challenges

9

Architectural Overview of RACE

[Slide graphic: the same RACE architectural overview as the previous slide.]

Allocators
• Implementations of resource allocation algorithms
– Simple bin-packing algorithms (see the sketch below)
– Resource-constraint partitioning bin-packing
– Time & space overhead

Controllers
• Implementations of run-time resource management algorithms
– EUCON – rate adaptation
– FMUF – flexible maximum urgency first
• Adapts system behavior in response to varying operating conditions

Configurators
• Configure middleware, operating system, and network parameters
– Middleware threading model
– Priority policy
– OS priority
– Network DiffServ priority
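As a rough illustration of the "simple bin-packing" idea referenced above (not RACE's actual allocator code; the Task/Node types, field names, and the first-fit-decreasing strategy are assumptions made for the sketch):

  // Illustrative first-fit-decreasing bin packing of tasks onto nodes by
  // CPU utilization. Types and fields are hypothetical, not RACE's API.
  #include <algorithm>
  #include <string>
  #include <vector>

  struct Task { std::string name; double cpu_util; };              // fraction of one CPU
  struct Node { std::string name; double capacity; double used; }; // per-node CPU budget

  // Returns false if some task cannot be placed on any node.
  bool first_fit_decreasing(std::vector<Task> tasks, std::vector<Node>& nodes) {
    std::sort(tasks.begin(), tasks.end(),
              [](const Task& a, const Task& b) { return a.cpu_util > b.cpu_util; });
    for (const Task& t : tasks) {
      bool placed = false;
      for (Node& n : nodes) {
        if (n.used + t.cpu_util <= n.capacity) {  // first node with enough headroom
          n.used += t.cpu_util;
          placed = true;
          break;
        }
      }
      if (!placed) return false;                  // allocation infeasible
    }
    return true;
  }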

Page 10: Key  design time  challenges

10

RACE’s Monitoring and Control Framework

[Slide graphic: the Controller takes resource utilization from the Target Manager and application QoS from the Centralized QoS Monitor, computes system-wide adaptation decisions, and hands them to a Centralized Effector, which pushes per-node system parameters out to the nodes.]

Monitors
• Resource utilization monitors
– Measures CPU, memory, & n/w bandwidth utilization
• QoS Monitors
– Measures application end-to-end latency
– Other application-specific monitors can be “hooked in”

Controller
• Responds to variations in resource utilization and application QoS
• Computes system-wide adaptation decisions

Effectors
• Centralized Effector
– Decomposes system-wide adaptation decisions into per-node adaptation decisions
• Nodal Effectors
– Modify nodal parameters based on per-node adaptation decisions
– Modify OS priority of processes hosting application components (see the sketch below)
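The last nodal-effector step above (changing the OS priority of a process hosting a component) could, on a POSIX host, reduce to a single setpriority() call; the sketch below illustrates only that step, and the component-to-pid mapping and the chosen nice value are assumed to come from elsewhere.

  // Minimal sketch of a nodal effector applying a per-node adaptation decision
  // by changing the nice value of the process hosting a component (POSIX).
  #include <sys/resource.h>
  #include <sys/types.h>
  #include <cstdio>

  bool apply_priority_decision(pid_t component_pid, int new_nice_value) {
    // setpriority() adjusts the scheduling priority (nice value) of the process.
    if (setpriority(PRIO_PROCESS, component_pid, new_nice_value) != 0) {
      std::perror("setpriority");
      return false;
    }
    return true;
  }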

Page 11: Key  design time  challenges

11

Performance Evaluation of RACE

• Overhead of the RACE framework:
– Monitoring overhead: 37.97 microseconds
– Control overhead: 800 nanoseconds

[Slide graphic: plots of baseline system performance (without RACE) vs. system performance with RACE.]

Page 12: Key  design time  challenges

12

Applying RACE to DDS-Based DRE Systems

• All DRE systems have architectural features in common

• Adapting RACE to DDS-based DRE systems won’t require major mods

• DDS will make some of RACE’s tasks simpler

– QoS validation & matching

– QoS enforcement

[Slide graphic: two layered stacks connected by a QoS-enabled network. One stack runs Publishers/Subscribers with registered topics, listeners, waitsets, and history caches atop the Data Distribution Service; the other runs stubs/skeletons, containers, and executors atop RT QoS-enabled CCM middleware (CIAO/DAnCE). Both sit on an OS kernel with an RT scheduler, RT CPU reservation mechanism, and RT network subsystem. Tuning knobs at each layer: application QoS parameters, middleware QoS parameters, OS priorities and processor reservation quota, and bandwidth reservation mechanism / DiffServ codepoints.]

Page 13: Key  design time  challenges

13

DDS Implementation Architectures

• Decentralized Architecture

– embedded threads to handle communication, reliability, QoS, etc.

[Slide graphic: two nodes communicating directly over the network.]

Page 14: Key  design time  challenges

14

DDS Implementation Architectures

• Decentralized Architecture
– embedded threads to handle communication, reliability, QoS, etc.

• Federated Architecture
– a separate daemon process to handle communication, reliability, QoS, etc.

[Slide graphic: decentralized nodes communicate directly over the network; federated nodes each communicate through a local daemon.]

Page 15: Key  design time  challenges

15

DDS Implementation Architectures

• Decentralized Architecture
– embedded threads to handle communication, reliability, QoS, etc.

• Federated Architecture
– a separate daemon process to handle communication, reliability, QoS, etc.

• Centralized Architecture
– one single daemon process for the domain

[Slide graphic: decentralized nodes communicate directly; federated nodes each go through a local daemon; in the centralized case, data flows between nodes directly while control traffic goes through the single domain daemon.]

Page 16: Key  design time  challenges

16

Pub/Sub Benchmarking Lessons Learned

[Slide chart: "DDS/GSOAP/JMS/Notification Service Comparison - Latency"; average latency (microseconds) vs. message size (4 to 16,384 bytes) for DDS1, DDS2, DDS3, GSOAP, JMS, and the Notification Service.]

• DDS is significantly faster than the other pub/sub architectures

• Even the slowest DDS implementation was 2x faster than the other pub/sub services

• DDS scales better to larger payloads, especially for simple data types

Page 17: Key  design time  challenges

17

Pub/Sub Benchmarking Lessons Learned

• DDS is significantly faster than the other pub/sub architectures

• Even the slowest DDS implementation was 2x faster than the other pub/sub services

• DDS scales better to larger payloads, especially for simple data types

• DDS implementations are optimized for different use cases & design spaces

• payload size

• # of subscribers

• collocation

[Slide chart: the same latency comparison as the previous slide.]

http://www.dre.vanderbilt.edu/DDS/DDS_RTWS06.pdf

Page 18: Key  design time  challenges

18

Configuration Aspect Problems

[Slide graphic: configuration aspect problems. Middleware developers face documentation & capability synchronization and must evaluate semantic constraints & QoS of specific configurations; CIAO/CCM alone provides ~500 configuration options via XML configuration and property files. Application developers must understand middleware constraints & semantics, which increases accidental complexity; different middleware uses different configuration mechanisms, and there are 21 interrelated QoS policies.]

Page 19: Key  design time  challenges

19

QoS Policies Supported by DDS

• DCPS entities (e.g., topics, data readers/writers) configurable via QoS policies

• QoS tailored to data distribution in tactical information systems

• Request/offered compatibility checked by DDS at Runtime

• Consistency checked by DDS at Runtime

– DEADLINE

• Establishes contract regarding rate at which periodic data is refreshed

– LATENCY_BUDGET

• Establishes guidelines for acceptable end-to-end delays

– TIME_BASED_FILTER

• Mediates exchanges between slow consumers & fast producers

– RESOURCE_LIMITS

• Controls resources utilized by service

– RELIABILITY (BEST_EFFORT, RELIABLE)

• Enables use of real-time transports for data

– HISTORY (KEEP_LAST, KEEP_ALL)

• Controls which (of multiple) data values are delivered

– DURABILITY (VOLATILE, TRANSIENT, PERSISTENT)

• Determines whether data outlive the time when they are written

– … and 15 more …

• Implications for Trustworthiness
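To make the list concrete, the fragment below sketches how a few of these policies (RELIABILITY, HISTORY, DEADLINE, DURABILITY) are set on a DataWriter through the standard DCPS QoS structures, assuming the classic OMG IDL-to-C++ mapping; vendor-specific headers are omitted, the 100 ms deadline is an arbitrary example value, and the status-mask constant name varies across DDS spec revisions.

  // Sketch only: configure a DataWriter's QoS via the standard DCPS structures.
  DDS::DataWriter_var make_reliable_writer(DDS::Publisher_ptr publisher,
                                           DDS::Topic_ptr topic)
  {
    DDS::DataWriterQos qos;
    publisher->get_default_datawriter_qos(qos);

    qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;   // RELIABILITY
    qos.history.kind     = DDS::KEEP_LAST_HISTORY_QOS;      // HISTORY: last value only
    qos.history.depth    = 1;
    qos.deadline.period.sec     = 0;                        // DEADLINE: 100 ms refresh
    qos.deadline.period.nanosec = 100000000;
    qos.durability.kind  = DDS::TRANSIENT_DURABILITY_QOS;   // DURABILITY

    // No listener; status-mask constant name differs across DDS versions.
    return publisher->create_datawriter(topic, qos, 0, DDS::STATUS_MASK_NONE);
  }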

Page 20: Key  design time  challenges

20

DDS QoS Policies

Interactions of QoS Policies have implications for:

• Consistency/Validity, e.g., Deadline period < TimeBasedFilter minimum separation (for a DataReader)

• Compatibility/Connectivity, e.g., best-effort communication offered (by DataWriter), reliable communication requested (by DataReader)

[Slide graphic: a Topic connecting DataWriters and DataReaders whose QoS settings differ: Durability (Volatile vs. Transient), Reliability (Best Effort vs. Reliable), Deadline (10 ms vs. 20 ms), Liveliness (Manual-by-Topic vs. Automatic), and a 15 ms Time-Based Filter, prompting the questions "Will settings be consistent, or will QoS settings need updating?" and "Will data flow, or will QoS settings need updating?"]
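A minimal sketch of the two checks illustrated above, using simplified stand-in types rather than the real DDS API (DDS implementations perform these checks internally): a DataReader's QoS is inconsistent if its DEADLINE period is shorter than its TIME_BASED_FILTER minimum separation, and a writer/reader pair is incompatible if the reader requests RELIABLE while the writer only offers BEST_EFFORT.

  // Illustrative consistency and request/offered (RxO) compatibility checks.
  #include <cstdint>

  enum class Reliability { BestEffort, Reliable };

  struct ReaderQos {
    int64_t deadline_period_ns;   // DEADLINE.period
    int64_t min_separation_ns;    // TIME_BASED_FILTER.minimum_separation
    Reliability requested;
  };
  struct WriterQos { Reliability offered; };

  // Consistency: the deadline period must be at least the filter separation.
  bool reader_qos_consistent(const ReaderQos& r) {
    return r.deadline_period_ns >= r.min_separation_ns;
  }

  // Compatibility: the offered reliability must cover the requested one.
  bool qos_compatible(const WriterQos& w, const ReaderQos& r) {
    return !(r.requested == Reliability::Reliable &&
             w.offered == Reliability::BestEffort);
  }

With the slide's values (a 10 ms deadline against a 15 ms time-based filter), reader_qos_consistent() returns false, which is exactly the kind of inconsistency flagged at design time.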

Page 21: Key  design time  challenges

21

DDS Trustworthiness Needs (1/2)

• Compatibility and Consistency of QoS Settings
– Data needs to flow as intended

• Close software loopholes that might be maliciously exploited

– Fixing at code time untenable
• Implies long turn-around times
• Code, compile, run, check status, iterate
• Introduces accidental complexity

• DDS QoS Modeling Language (DQML) models QoS configurations and allows checking at design/modeling time
– Supports quick and easy fixes by “sharing” QoS policies
– Supports correct-by-construction configurations

– Fixing at run time untenable
• Updating QoS settings on the fly
• Introduces inherent complexity
• Unacceptable for certain systems (e.g., RT, mission critical, provable properties)

Page 22: Key  design time  challenges

22

DDS Trustworthiness Needs (2/2)

• QoS configurations generated automatically
– Eliminate accidental complexities

• Close configuration loopholes for malicious exploitation

– Decouple configurations from application logic

• Refinement of configuration separate from refinement of code

• DQML generates QoS settings files for DDS applications
– Creates consistent configurations
– Promotes separation of concerns
• Configuration changes unentangled with business logic changes
– Increases confidence


Page 23: Key  design time  challenges

23

Typical DDS Application Development

• Business/application logic mixed with QoS configuration code
– Accidental complexity
– Obfuscation of configuration concerns

• DQML decouples QoS configuration from business logic (a sketch follows below)
– Facilitates configuration analysis
– Reduces accidental complexity

[Slide graphic: application source annotated to show DataWriter QoS configuration & DataWriter creation and Publisher QoS configuration & publisher creation interleaved with business logic; QoS configuration separated from business logic = higher-confidence DDS application.]
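A hypothetical sketch (not DQML-generated code) of the separation the slide argues for: every QoS policy choice lives in one function that could be regenerated from a model, while the business-logic side only asks for "the configured QoS". Names and values are invented for illustration, and vendor DDS headers are omitted.

  namespace qos_config {
    // Regenerable-from-a-model QoS setup; callers never see which values were chosen.
    void apply(DDS::DataWriterQos& qos) {
      qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;
      qos.history.kind     = DDS::KEEP_ALL_HISTORY_QOS;
    }
  }

  // Business logic: obtains a configured QoS without embedding any policy values.
  void configure_writer_qos(DDS::Publisher_ptr pub, DDS::DataWriterQos& qos) {
    pub->get_default_datawriter_qos(qos);
    qos_config::apply(qos);   // every policy decision sits behind this call
  }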

Page 24: Key  design time  challenges

24

DQML Design Decisions

No Abortive Errors
• User can ignore constraint errors
• Useful for developing pieces of a distributed application
• Initially focused on flexibility

QoS Associations vs. Containment
• Entities and QoS policies associated via connections rather than containment
• Provides flexibility, reusability
• Eases resolution of constraint violations

Page 25: Key  design time  challenges

25

Use Case: DDS Benchmark Environment (DBE)

• Part of Real-Time DDS Examination & Evaluation Project (RT-DEEP)

• http://www.dre.vanderbilt.edu/DDS

[Slide graphic: DBE topology with multiple DataWriters and DataReaders, each carrying its own QoS settings.]

• Developed by DRE Group at ISIS

• DBE runs Perl scripts to deploy DataReaders and DataWriters onto nodes

• Passes QoS settings files (generated by hand)

• Requirement for testing and evaluating non-trivial QoS configurations

Page 26: Key  design time  challenges

26

DBE Interpreter

Model the Desired QoS Policies via DQML

Invoke the DBE Interpreter

Generates One QoS Settings File for Each DBE DataReader and DataWriter to Use

Have DBE Launch DataReaders and DataWriters with Generated QoS Settings Files

No Manual Intervention

Page 27: Key  design time  challenges

27

DQML Demonstration

• Create DDS entities, QoS policies, and connections

• Run constraint checking

• consistency check

• compatibility check

• fix at design time

• Invoke DBE Interpreter

• automatically generate QoS settings files

Page 28: Key  design time  challenges

28

Future Work

• Incorporate into Larger-Scale Tool Chains
– e.g., Deployment and Configuration Engine (DAnCE) in the CoSMIC Tool Chain

• Incorporate with TRUST Trustworthy Systems
– Combine QoS policies and patterns to provide higher-level services
• Build on DDS patterns [1]
– Continuous data, state data, alarm/event data, hot-swap and failover, controlled data access, filtered by data content

[1] Gordon Hunt, OMG Workshop Presentation, 10-13 July 2006

• Fault-tolerance service (e.g., using ownership/ownership strength, durability policies, multiple readers and writers, hot-swap and failover pattern)

• Security service (e.g., using time based filter, liveliness policies, controlled data access pattern)

• Real-time data service (e.g., using deadline, transport priority, latency budget policies, continuous data pattern)

Page 29: Key  design time  challenges

29

Component QoS Modeling

Platform Independent Component Modeling Language (PICML)

– Captures CCM application development lifecycle

– e.g., Design, Assembly, Packaging, Deployment, etc.

Component QoS Modeling Language (CQML)

– Enhances PICML (uses it as a library)

– Captures Component QoS requirements

– Defines 4 different types of QoS for Port, Component, Connection & Component Assembly (Application)

– Any new type of QoS should conform to these QoS types, i.e., to CQML in general


Security QoS Modeling Language (SQML)

Leverage and enhance to capture security requirements for eDRE applications

CORBA’s Secure Invocation Model

Page 30: Key  design time  challenges

CORBA Security Model v1.8

30

• Security protection based upon policy
– Policy may be domain-specific
– Policy is enforced by the ORB

CCM Security adopts the EJB Security Specification

• The ORB enforces
– Access Control
– Message Protection
– Audit Policy

• The ORB implements
– PEPs (Policy Enforcement Points)
– PDPs (Policy Decision Points)

• The ORB Services implement
– Policy Repository
– Security Protocols
– Authentication Methods
– Cryptographic Algorithms

Page 31: Key  design time  challenges

CCM QoS Levels for Security

31

Address three CCM QoS levels – ports, components and assemblies

SQML provides fine-grained as well as coarse-grained access control and security guarantees

[Slide graphic: a CCM assembly (Detector1, Detector2, Planner1, Planner3, Error Recovery, Effector1, Effector2, and Config components wired through facets, receptacles, event sources, and event sinks, per the CORBA Component Model legend) on which security QoS properties are configured.]

Page 32: Key  design time  challenges

Access Control Granularity

32

• Fine-grained:
– Interface Operation
– Assembly Property
– Component Attribute

• Coarse-grained:
– Interface
– Set of Operations
– Class of Operations (based on Required Rights - corba:gsum)
– Inter-Component Execution Flow (Path in an Assembly)

Page 33: Key  design time  challenges

User-Role-Rights Mapping (Effective Rights)

33

• Responsibility of the System Administrator and defined in the application server deployment site through access control policies

• The roles can be application specific or platform (CCM) specific

• CCM Specific roles: Designer, Developer, Implementer, Assembler, Packager, Deployer, End-User

• Application Specific roles: Administrator, User, Director, Programmer, Manager, etc.

The Users & Groups are shown for completeness. User/Group → Role mappings are defined in the application access policies.

Page 34: Key  design time  challenges

Operation/Interface Classification (Required Rights)

34

• Responsibility of the Component Developer

• Operations/Interfaces are classified according to the standard CORBA family rights [corba:gsum]

• Well-defined Component Interfaces
• Allows for coarse-grained control over operation access
• Used underneath in the container to determine access decisions to the operations
• Effective Rights vs. Required Rights (see the sketch below)

[Slide graphic: rights assignment on a two-way method and rights assignment on an interface.]
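The access decision implied above (grant an operation only when the caller's effective rights cover the operation's required rights) reduces to a subset test; the sketch below uses plain string sets with invented right names, not the CORBA Security API.

  // Access is granted when required rights are a subset of effective rights.
  #include <algorithm>
  #include <set>
  #include <string>

  bool access_granted(const std::set<std::string>& effective_rights,
                      const std::set<std::string>& required_rights) {
    // std::includes performs the subset test on the two sorted sets.
    return std::includes(effective_rights.begin(), effective_rights.end(),
                         required_rights.begin(), required_rights.end());
  }

For example (with these invented names), an operation requiring {"get"} would be granted to a role holding {"get", "set"} but denied to one holding only {"manage"}.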

Page 35: Key  design time  challenges

Policy Definition Rules

35

[Slide graphic: example policy definition rules: allow/deny access to all operations with the same rights; two-level evaluation on operation name & required rights; component attributes have implicit get/set rights; a critical path in the system (part of a functionality/workflow).]

Page 36: Key  design time  challenges

Security QoS Interpreter

• Generate User → Role → Granted-Rights mappings defined by the system administrator

• Generate Operation → Required-Rights mappings for an interface that are determined by the component designer.

• Generate security policy definition files
• Generate method permissions based on the mappings and policy rules
• Generate additional metadata to configure the container

36

Page 37: Key  design time  challenges

Benefits of Security QoS Modeling

• Expresses cross-cutting concerns that can be implemented at the interface level, the component level, and the component assembly (application) level.

• Offloads some responsibility from the ORB through definition of well-formed policies, rule combining, and conflict resolution.

• Provides a higher-level tool for declarative security specification for the deployment of large-scale component-based systems.

• Incorporates security into the QoS aspects of component systems, an important step toward complete QoS modeling of such systems and their trustworthiness.

• Allows modeling of security QoS with much more generality and flexibility than existing solutions (e.g., OpenPMF).

37

Page 38: Key  design time  challenges

Future Work

• Define efficient rule and policy validation and rule combining algorithms.

• Extend the critical path functionality to provide Business Process & Workflow security

• Provide middleware infrastructure support for security in the CCM container through container portable interceptors, leveraging the facilities of the CORBA security service implementation available with TAO

• Enable D & C tools like DAnCE to integrate security QoS properties with application deployment and configure the CCM middleware to enforce them

• Unified QoS Modeling through CQML
– FT, RT, Security, NetworkQoS, Event Channel Configuration conform to CQML
– Any new QoS requirement model should conform to CQML
– DQML can conform to CQML enabling different platforms to be …

38

Page 39: Key  design time  challenges

39

MDD Solutions for Configuration

Options Configuration Modeling Language (OCML) ensures semantic consistency of option configurations

• OCML is used by
– Application developers to configure the middleware for a specific application
– Middleware developers to design the configuration model

• OCML metamodel is platform-independent
• OCML models are platform-specific

• Configuration model validates application model

Page 40: Key  design time  challenges

40

Applying OCML

• Middleware developers specify
– Configuration space
– Constraints

• OCML generates config model

Page 41: Key  design time  challenges

41

Applying OCML

• Middleware developers specify
– Configuration space
– Constraints

• OCML generates config model

• Application developers provide a model of desired options & their values, e.g.,
– Network resources
– Concurrency & connection management strategies

Page 42: Key  design time  challenges

42

Applying OCML

• Middleware developers specify
– Configuration space
– Constraints

• OCML generates config model

• Application developers provide a model of desired options & their values, e.g.,
– Network resources
– Concurrency & connection management strategies

• OCML constraint checker flags incompatible options (see the sketch below) & then
– Synthesizes XML descriptors for middleware configuration
– Generates documentation for middleware configuration
– Validates the configurations
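As a rough, hypothetical illustration of the kind of constraint the OCML checker flags (the option names, values, and the single constraint are invented for the example, not taken from the real CIAO option set):

  // Hypothetical option-constraint check of the kind OCML performs.
  #include <map>
  #include <string>
  #include <vector>

  using Options = std::map<std::string, std::string>;

  // Returns human-readable violations; an empty list means the configuration passes.
  std::vector<std::string> check_constraints(const Options& opts) {
    std::vector<std::string> violations;
    auto value_of = [&](const std::string& key) {
      auto it = opts.find(key);
      return it == opts.end() ? std::string() : it->second;
    };
    // Invented example constraint: a single-threaded reactor cannot be combined
    // with a thread-per-connection concurrency strategy.
    if (value_of("reactor_type") == "single_threaded" &&
        value_of("concurrency") == "thread_per_connection") {
      violations.push_back("single_threaded reactor is incompatible with "
                           "thread_per_connection concurrency");
    }
    return violations;
  }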

Page 43: Key  design time  challenges

43

Supporting DDS QoS Modeling With OCML

• Integrate OCML with DRE system modeling languages
– Enable association of option sets with system model elements
– PICML: ORB/POA/Container; ports using DDS (proposed DDS-4-LWCCM spec)
– DDS-specific ML: DDS entities

• More generation options
– Other config file formats
– Parameters for simulations
– Code blocks

[Slide graphic: a CIAO pub port associated with a DDS option set.]

XML:

  <assemblyImpl>
    <instance xmi:id="ScaleQosket">
      <name>ScaleQosket</name>
      <package href="ScaleQosket.cpd"/>
    </instance>
    <instance xmi:id="LocalResourceManagerComponent">
      <name>LocalResourceManagerComponent</name>
      <package href="LocalResourceManagerComponent.cpd"/>
    </instance>
    ...
    <connection>
      <name>incoming_image_outgoing_image_Evt</name>
      <internalEndpoint>
        <portName>outgoing_image_Evt</portName>
        <instance xmi:idref="LocalReceiver"/>
      </internalEndpoint>
      <internalEndpoint>
        <portName>incoming_image_Evt</portName>
        <instance xmi:idref="MultiReceiver"/>
      </internalEndpoint>
    </connection>
  </assemblyImpl>

C++:

  Listener_var subscriber_listener = new MyListener();
  foo_reader->set_listener(subscriber_listener);

  MyListener::on_data_available(DataReader reader)
  {
    FooSeq_var received_data;
    SampleInfoSeq_var sample_info;

    reader->take(received_data.out(), sample_info.out(),
                 ANY_SAMPLE_STATE, ANY_LIFECYCLE_STATE);

    // Use received_data
    ...
  }

Page 44: Key  design time  challenges

44

Modeling QoS With Design Patterns

Continuous Data• constant updates

• many-to-many

• last value is best

• seamless failover

• Reliability = BEST_EFFORT

• Time-Based Filter = X

• Use keys & multicast

• History = KEEP_LAST, 1

• Ownership = EXCLUSIVE

• Deadline = X
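Expressed against the standard DCPS QoS structures (a hedged fragment assuming the classic DDS C++ mapping, with vendor headers omitted; the pattern's "X" placeholders are shown as an arbitrary 100 ms), the Continuous Data profile above maps roughly to:

  // Continuous Data pattern sketch: best-effort, keep-last-1, exclusive ownership.
  void apply_continuous_data_qos(DDS::DataWriterQos& w, DDS::DataReaderQos& r)
  {
    w.reliability.kind = DDS::BEST_EFFORT_RELIABILITY_QOS;  // Reliability = BEST_EFFORT
    w.history.kind     = DDS::KEEP_LAST_HISTORY_QOS;        // History = KEEP_LAST, 1
    w.history.depth    = 1;
    w.ownership.kind   = DDS::EXCLUSIVE_OWNERSHIP_QOS;      // Ownership = EXCLUSIVE
    w.deadline.period.sec     = 0;                          // Deadline = X (100 ms here)
    w.deadline.period.nanosec = 100000000;

    r.reliability.kind = DDS::BEST_EFFORT_RELIABILITY_QOS;
    r.history.kind     = DDS::KEEP_LAST_HISTORY_QOS;
    r.history.depth    = 1;
    r.ownership.kind   = DDS::EXCLUSIVE_OWNERSHIP_QOS;
    r.deadline.period.sec     = 0;
    r.deadline.period.nanosec = 100000000;
    r.time_based_filter.minimum_separation.sec     = 0;     // Time-Based Filter = X
    r.time_based_filter.minimum_separation.nanosec = 100000000;
  }

The "use keys & multicast" item is handled outside these per-entity QoS structures: keys belong to the topic's data-type definition and multicast is typically a transport/participant configuration.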

Page 45: Key  design time  challenges

45

Modeling QoS With Design Patterns

State Information• persistent data

• occasional mods

• latest & greatest

• must deliver

• must process

• Durability = PERSISTENT

• Lifespan = X

• Reliability = RELIABLE

• Pub History = KEEP_ALL

• Sub History = KEEP_LAST, n
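Similarly, the State Information profile maps onto durability, lifespan, reliability, and history settings; the fragment below is a sketch under the same classic DDS C++ mapping assumptions, with 30 s and a depth of 8 as arbitrary stand-ins for the pattern's "X" and "n".

  // State Information pattern sketch: persistent, reliable, keep-all on the writer.
  void apply_state_info_qos(DDS::DataWriterQos& w, DDS::DataReaderQos& r)
  {
    w.durability.kind  = DDS::PERSISTENT_DURABILITY_QOS;   // Durability = PERSISTENT
    w.lifespan.duration.sec     = 30;                       // Lifespan = X (30 s here)
    w.lifespan.duration.nanosec = 0;
    w.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;     // must deliver
    w.history.kind     = DDS::KEEP_ALL_HISTORY_QOS;         // Pub History = KEEP_ALL

    r.durability.kind  = DDS::PERSISTENT_DURABILITY_QOS;
    r.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;
    r.history.kind     = DDS::KEEP_LAST_HISTORY_QOS;        // Sub History = KEEP_LAST, n
    r.history.depth    = 8;                                 // n (illustrative)
  }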

Page 46: Key  design time  challenges

46

Modeling QoS With Design Patterns

Alarms & Events• asynchronous

• must deliver

• authorized sender

• Liveliness = MANUAL

• Reliability = RELIABLE

• Pub History = KEEP_ALL

• Ownership = EXCLUSIVE

Page 47: Key  design time  challenges

47

Pollux MDD Tools – Design Time Challenges

• Carry out commander’s intent by automated mapping of familiar scenarios to models

• Assure QoS by
– explicit representation in model
– automatic consistency checks

• Explore prob & soln space with
– easily grokable/modifiable visual language
– multiple artifact generators

[Slide graphic: thumbnails of artifacts from earlier slides: the Alarms & Events QoS pattern (shown twice), the XML assembly descriptor excerpt, and the generated-code excerpt.]

Page 48: Key  design time  challenges

48

Pollux Perf. Eval. – Run Time Challenges

• Carry out commander’s intent by DDS getting the right information to the right place at the right time

• Enforce QoS policies (built into DDS implementations)

• Manage resources with
– Resource Limits policy
– Time-Based Filter policy
– Lifespan policy
– History policy
– filter migration to source

• Compensate for limited resources by
– leveraging mutable QoS policies
– detecting & acting on meta-events (built-in QoS policies)

[Slide graphic: a DDS publisher/subscriber example in which a DataWriter publishes pressure/temperature samples S1 through S7 on a Topic to a DataReader, annotated with HISTORY, RELIABILITY, COHERENCY, RESOURCE LIMITS, and LATENCY policies; a second view shows notification of new data objects governed by a time-based filter and deadline timeout, plus notifications for NEW TOPIC, NEW SUBSCRIBER, and NEW PUBLISHER.]