
Page 1: OpenStack and private cloud

OpenStack & Private Cloud

Seungkyu Ahn, Vice President of the OpenStack Korea Community ([email protected]), March 24, 2016

Page 2: OpenStack and private cloud

OpenStack History

• Austin (2010.1) : Oct 21, 2010 – Deprecated (NASA : Nova, Rackspace : Swift)
• Bexar (2011.1) : Feb 3, 2011 – Deprecated
• Cactus (2011.2) : Apr 15, 2011 – Deprecated
• Diablo (2011.3, 2011.3.1) : Sep 22, 2011 – EOL
• Essex (2012.1 – 2012.1.3) : Apr 5, 2012 – EOL
• Folsom (2012.2 – 2012.2.4) : Sep 27, 2012 – EOL
• Grizzly (2013.1 – 2013.1.5) : Apr 4, 2013 – EOL
• Havana (2013.2 – 2013.2.3) : Oct 17, 2013 – EOL
• Icehouse (2014.1 – 2014.1.5) : Apr 17, 2014 – EOL
• Juno (2014.2 – 2014.2.4) : Oct 16, 2014 – EOL
• Kilo (2015.1.1 – 2015.1.3) : Apr 30, 2015 – Security-supported (EOL : 2016.5.2)
• Liberty (Nova version 12.0.2) : Oct 15, 2015 – Current stable release (EOL : 2016.11.17)
• Mitaka (Nova version 13.0.0.0) : Apr 7, 2016
• Newton, Ocata : upcoming

Page 3: OpenStack and private cloud

OpenStack Projects

• Nova (Compute) : austin
• Swift (Object Storage) : austin
• Glance (Image service) : bexar
• Keystone (Identity) : essex
• Horizon (Dashboard) : essex
• Neutron (Networking) : folsom
• Cinder (Block Storage) : folsom
• Ceilometer (Telemetry) : havana
• Heat (Orchestration) : havana
• Trove (Database service) : icehouse
• Sahara (Data processing service) : juno
• Ironic (Bare metal) : havana
• Zaqar (Message service) : icehouse
• Barbican (Key management service) : juno
• Designate (DNS service) : juno
• Manila (Shared File System service) : juno
• Monasca (Monitoring)

Legend (slide color coding) :
– projects we have applied to our service
– projects included in the official release

Page 4: OpenStack and private cloud

OpenStack Conceptual Diagram

Page 5: OpenStack and private cloud

OpenStack Nova Features

• VM Instances
  – list, show
  – create, delete
  – reboot, start, stop
  – pause, unpause
  – rebuild, resize, resize-confirm, resize-revert
• Security
  – secgroup-create, secgroup-delete
  – secgroup-add-rule, secgroup-delete-rule
  – secgroup-list, secgroup-list-rules
• Flavor
  – flavor-list, flavor-create, flavor-delete
• IP
  – floating-ip-list, floating-ip-create, floating-ip-delete, floating-ip-associate, floating-ip-disassociate
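These nova CLI verbs map one-to-one onto python-novaclient calls. A minimal sketch, assuming a Liberty-era client and placeholder credentials, endpoint, and names:

  from novaclient import client

  # "2" is the API version; the credentials and URL are placeholders
  nova = client.Client("2", "admin", "secret", "demo",
                       auth_url="http://controller:5000/v2.0")

  flavor = nova.flavors.find(name="m1.small")         # flavor lookup
  image = nova.images.find(name="cirros")             # image lookup
  server = nova.servers.create("vm1", image, flavor)  # create
  print [s.name for s in nova.servers.list()]         # list
  nova.servers.reboot(server)                         # reboot
  server.delete()                                     # delete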

Page 6: OpenStack and private cloud

OpenStack Cinder Features

• Volume
  – list, show
  – create, delete
  – extend
• Backup
  – backup-list, backup-show
  – backup-create, backup-delete, backup-restore
• Snapshot
  – snapshot-list
  – snapshot-create, snapshot-delete
• QoS
  – qos-list, qos-show
  – qos-create, qos-delete
  – qos-associate, qos-disassociate, qos-get-association
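The cinder verbs follow the same client pattern. A minimal sketch with python-cinderclient, assuming the v2 API and placeholder credentials:

  from cinderclient import client

  cinder = client.Client("2", "admin", "secret", "demo",
                         auth_url="http://controller:5000/v2.0")

  vol = cinder.volumes.create(size=1, name="vol1")  # create a 1 GB volume
  cinder.volumes.extend(vol, 2)                     # extend to 2 GB
  snap = cinder.volume_snapshots.create(vol.id)     # snapshot-create
  backup = cinder.backups.create(vol.id)            # backup-create
  print [v.name for v in cinder.volumes.list()]     # list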

Page 7: OpenStack and private cloud

OpenStack Neutron Features

• Network
  – net-list, net-show, net-create, net-delete
  – subnet-list, subnet-show, subnet-create, subnet-delete
  – net-gateway-list, net-gateway-show, net-gateway-create, net-gateway-delete
  – net-gateway-connect, net-gateway-disconnect
• Router
  – router-list, router-show, router-create, router-delete
  – router-interface-add, router-interface-delete
  – router-gateway-set
• Port
  – port-list, port-show
  – port-create, port-delete
• Loadbalancer
  – lb-pool-list, lb-pool-show, lb-pool-create, lb-pool-delete
  – lb-vip-list, lb-vip-show, lb-vip-create, lb-vip-delete
  – lb-member-list, lb-member-show, lb-member-create, lb-member-delete
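The same goes for neutron through python-neutronclient. A minimal sketch (placeholder credentials and CIDR):

  from neutronclient.v2_0 import client

  neutron = client.Client(username="admin", password="secret",
                          tenant_name="demo",
                          auth_url="http://controller:5000/v2.0")

  net = neutron.create_network({"network": {"name": "net1"}})   # net-create
  neutron.create_subnet({"subnet": {                            # subnet-create
      "network_id": net["network"]["id"],
      "ip_version": 4,
      "cidr": "10.1.0.0/24"}})
  router = neutron.create_router({"router": {"name": "r1"}})    # router-create
  print neutron.list_networks()                                 # net-list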

Page 8: OpenStack and private cloud

OpenStack Neutron Overview

• Neutron
  – L3 High Availability : Virtual Router Redundancy Protocol (VRRP)
  – Bottleneck for east-west traffic : addressed by the Distributed Virtual Router (DVR, also called VDR)

Page 9: OpenStack and private cloud

OpenStack Neutron DVR - OVS

http://docs.openstack.org/liberty/networking-guide/scenario-dvr-ovs.html

Page 10: OpenStack and private cloud

OpenStack Neutron Provider Network - OVS

http://docs.openstack.org/liberty/networking-guide/scenario-provider-ovs.html

Page 11: OpenStack and private cloud

Application architecture (nova)

• Daemon processes (using nova-network)
  – nova-api (controller node)
  – nova-conductor (controller node)
  – nova-scheduler (controller node)
  – nova-compute (compute node)
  – nova-network (compute node)
• Source directory
  – api.py : imports and calls the functions of other process modules directly
  – rpcapi.py : calls other process modules through the MQ (publisher)
  – manager.py : where functions invoked over the MQ land (subscriber)
  – driver.py : abstract class
• Communication paths (a sketch follows below)
  – between different projects : REST API
  – between different processes within one project : MQ
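A hedged sketch of the rpcapi.py (publisher) / manager.py (subscriber) split using oslo.messaging; the topic, server, and method names are illustrative, not Nova's actual ones, and publisher and subscriber would normally run in separate processes:

  from oslo_config import cfg
  import oslo_messaging

  transport = oslo_messaging.get_transport(cfg.CONF)  # e.g. rabbit://...
  target = oslo_messaging.Target(topic='demo-topic', server='compute-1')

  # manager.py side: methods here are invoked from the MQ (subscriber)
  class Manager(object):
      def do_work(self, ctxt, item):
          print 'working on %s' % item

  server = oslo_messaging.get_rpc_server(transport, target, [Manager()],
                                         executor='blocking')
  server.start()  # a real service would follow with server.wait()

  # rpcapi.py side: casts onto the queue and returns immediately (publisher)
  rpc_client = oslo_messaging.RPCClient(transport, target)
  rpc_client.cast({}, 'do_work', item='x')  # fire-and-forget, like nova's casts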

Page 12: OpenStack and private cloud

Create instance flow chart (1/2)

[Flow chart; recoverable call sequence:]

nova-api :
  nova.api.openstack.compute.servers ServersController.create()
  → nova.compute.api API.create()
  → nova.conductor.api ComputeTaskAPI.build_instances()
  → nova.conductor.rpcapi ComputeTaskAPI.build_instances() ③④ cast → MQ

nova-conductor :
  nova.conductor.manager ComputeTaskManager.build_instances() ⑤⑥
  → nova.scheduler.rpcapi SchedulerAPI.select_destinations() ⑦ call

nova-scheduler :
  nova.scheduler.manager SchedulerManager.select_destinations()

nova-conductor :
  nova.compute.rpcapi ComputeAPI.build_and_run_instance() ⑨⑩ cast → MQ

Page 13: OpenStack and private cloud

Create instance flow chart (2/2)

[Flow chart; recoverable call sequence:]

nova-compute :
  MQ ①② → nova.compute.manager ComputeManager.build_and_run_instance()
  → nova.network.neutronv2.api API.setup_networks_on_host()
  ③④ REST API → neutron-api : neutronclient.v2_0.client Client.list_ports(), update_port()
  → nova.virt.libvirt.driver LibvirtDriver.spawn()
  ⑥ REST API → glance
  ⑦ nova.virt.libvirt.firewall IptablesFirewallDriver.setup_basic_filtering()
  ⑧ nova.virt.libvirt.firewall IptablesFirewallDriver.apply_instance_filter()

Page 14: OpenStack and private cloud

Continuous Integration

[Diagram; recoverable workflow around blueprints.launchpad.net/nova, Gerrit, and Jenkins:]

1. Create issue : file a blueprint at blueprints.launchpad.net/nova
2. clone : nova/master → your repo/server
3. branch : nova/bp-localdisk
4. development : run unit tests, git commit
5. git push : submit to Gerrit for review (+2, +1/-1)
6. trigger : Jenkins runs unit tests & tempest
7. merge : back into nova/master; track changes

Page 15: OpenStack and private cloud

Gerrit (Code review system)

Page 16: OpenStack and private cloud

How to Contribute

• Make an account at launchpad.net
• Join the OpenStack developers mailing list & the #openstack-dev IRC channel
• Register with the code review system
• Agree to the CLA (Contributor License Agreement)
• Write blueprints (Gerrit & blueprints.launchpad.net/nova)
• Get the OpenStack code (git clone)
• Set up the gerrit environment
  – git remote add gerrit ssh://[email protected]:29418/openstack/nova.git
• Make a new git branch (git branch)
• Push your code (git push)

Page 17: OpenStack and private cloud

Private Cloud Product Lineup

• Basic products
  – Compute
  – Object Storage
  – Portal
  – Monitoring
  – Metering & Billing
  – Operations
  – Load Balancing
  – Security (Anti-DDoS, IPS, Firewall, IDS, Web Firewall, etc.)
• Extended products
  – DNS, Queue, Database, Hadoop
  – Content Delivery Network (CDN), Shared File System (SFS)
  – Virtual Private Cloud (VPC), Hybrid Cloud

Page 18: OpenStack and private cloud

Hybrid Cloud Architecture

[Architecture diagram; recoverable details:]

• Link : AWS Direct Connect (US East Region), BGP / VLAN, 1 Gbps dual line, Active/Active(Standby), plus an Internet path
• Private DC : Public GW and Private GW behind an L3 switch
• Private US Region : 10.22.0.0/16
  – Subnet A : 10.22.1.0/24 (VMs)
  – Subnet B : 10.22.2.0/24 (VMs)
• AWS US Region : VPC 10.0.0.0/16
  – Subnet A : 10.0.1.0/24 (VMs)
  – Subnet B : 10.0.2.0/24 (VMs)
• EC2 / S3 : 10.123.123.xx

Page 19: OpenStack and private cloud

Starting the project

• Draw the big picture of the network before starting
• Develop with a minimal team (pizza team), building the necessary features first
• Keep the product roadmap current and shared at all times
• Minimize operations headcount through automation
• Organize small project units of developers and network engineers
• At least one person who understands S/W, N/W, and architecture is needed
• Product Manager == the decision maker
• Developers must be able to understand other developers' code as well

Page 20: OpenStack and private cloud

Assumptions we want to change

• A virtual machine is not a dedicated physical server.
  – It can go down at any time; bringing it back quickly is what matters
  – Build redundancy into the application architecture
• Physical server network redundancy is unnecessary.
  – Economies of scale
• At small scale, plain virtualization is the more efficient answer.
  – With only 100 or 200 physical servers, you do not need a cloud
• Performance comparisons between cloud management systems are meaningless.
  – VM performance is governed by the H/W, hypervisor, and OS

Page 21: OpenStack and private cloud

Appendix : Python techniques used in OpenStack

Seungkyu Ahn
John Haan
Yoon Doyoul
Sean Lee
Hyangii
Inhye Park
Joseph Park ([email protected])

Page 22: OpenStack and private cloud

OpenStack Application Architecture

• Communication between projects : REST API (HTTP request)
• Communication between processes (modules) within a project : AMQP (Advanced Message Queuing Protocol)

[Diagram; recoverable details: Nova-API talks to Glance-API over REST (through python-glance-client); Nova-API, Nova-Scheduler, Nova-Conductor, and Nova-Compute talk to one another over AMQP through the queue. In-process call example: from nova.compute import api; self._api = api.API(). Source layout: nova / api / compute / api.py, manager.py]

Page 23: OpenStack and private cloud

OpenStack Application Architecture (continued)

• api package : the process module itself (nova-api)
• api.py : the file used to call another process's queue (nova-conductor, nova-compute)
• manager.py : the file that subscribes to the queue (nova-conductor, nova-compute)

Page 24: OpenStack and private cloud

Dynamically importing modules

• import os

  import sys

  def import_module(import_str):
      __import__(import_str)          # may raise ImportError
      return sys.modules[import_str]  # may raise KeyError

  os = import_module("os")
  os.getcwd()

• import versioned_submodule

  module = 'mymodule.%s' % version
  module = '.'.join((module, submodule))
  import_module(module)

• import class

  import_value = "nova.db.sqlalchemy.models.NovaBase"
  mod_str, _sep, class_str = import_value.rpartition('.')
  import_module(mod_str)
  novabase_class = getattr(sys.modules[mod_str], class_str)
  novabase_class()

Page 25: OpenStack and private cloud

Data Access Object (DAO)

• nova.db.base.Base

  def __init__(...):
      self.db = import_module(db_driver)

• nova.db.api.py

  _BACKEND_MAPPING = {'sqlalchemy': 'nova.db.sqlalchemy.api'}
  IMPL = concurrency.TpoolDbapiWrapper(CONF, backend_mapping=_BACKEND_MAPPING)
  ...
  def instance_update(...):
      IMPL.instance_update(...)

• Manager(base.Base)

  ...
  self.db.instance_update(...)

db_driver is the "nova.db" package; the __init__ of the nova.db package does from nova.db.api import *, so self.db == nova.db.api.

Page 26: OpenStack and private cloud

Using Configuration

• nova.db.sqlalchemy.api.py

  from oslo.config import cfg
  ...
  CONF = cfg.CONF
  # then used as e.g. CONF.compute_topic

• oslo.config.cfg.py

  CONF = ConfigOpts()

• Opt
  – name = the option name
  – type = StrOpt, IntOpt, FloatOpt, BoolOpt, ListOpt, DictOpt, IPOpt, MultiOpt, MultiStrOpt
  – dest = the name matching the corresponding ConfigOpts property
  – default = the default value

• ConfigOpts(collections.Mapping)

  def __init__(self):
      self._opts = {}  # dict of dicts of (opt:, override:, default:)

  def __getattr__(self, name):
      # called only when the attribute does not actually exist
      return self._get(name)
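A minimal runnable sketch of this flow with oslo.config (the option names are illustrative):

  from oslo_config import cfg

  opts = [
      cfg.StrOpt('compute_topic', default='compute',
                 help='topic the compute service listens on'),
      cfg.IntOpt('workers', default=4),
  ]

  CONF = cfg.CONF
  CONF.register_opts(opts)
  CONF(args=[])  # parse the command line / config files

  print CONF.compute_topic  # 'compute', resolved through ConfigOpts.__getattr__
  print CONF.workers        # 4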

Page 27: OpenStack and private cloud

Policy administration for OpenStack (1/3)

Assign per-user roles through Keystone.
The policy.json file defines which roles may execute each API.
The nova and cinder source checks the policy before the actual API runs.

[Example] nova's shelve feature

# keystone
- create a user through keystone
- create the role to be granted
- grant the role to the user

  root@MGMT-SET2:~# keystone role-list
  +-------+-------+
  | id    | name  |
  +-------+-------+
  | admin | admin |
  | user  | user  |
  +-------+-------+

  root@MGMT-SET2:~# keystone user-role-list
  +-------+-------+----------+-----------+
  | id    | name  | user_id  | tenant_id |
  +-------+-------+----------+-----------+
  | admin | admin | 210b71.. | 7559375.. |
  | user  | user  | 210b71.. | 7559375.. |
  +-------+-------+----------+-----------+

1. Define the roles  2. Apply a role to each user

John Haan

Page 28: OpenStack and private cloud

Policy administration for OpenStack (2/3)

# nova
- set per-API role permissions in /etc/nova/policy.json
- check the API action in nova/policy.py

  def check_policy(context, action, target, scope='compute'):
      _action = '%s:%s' % (scope, action)
      nova.policy.enforce(context, _action, target)

/etc/nova/policy.json :

  "context_is_admin": "role:admin",
  "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
  ...
  "compute:shelve": "admin_or_owner",

Page 29: OpenStack and private cloud

Policy administration for OpenStack (3/3)

# nova
- define a decorator function in nova/compute/api.py that checks the policy
- apply the policy-check decorator in front of the shelve method

  def policy_decorator(scope):
      """Check corresponding policy prior of wrapped method to execution."""
      def outer(func):
          @functools.wraps(func)
          def wrapped(self, context, target, *args, **kwargs):
              check_policy(context, func.__name__, target, scope)
              return func(self, context, target, *args, **kwargs)
          return wrapped
      return outer

  wrap_check_policy = policy_decorator(scope='compute')

  @wrap_check_policy
  def shelve(self, context, instance):
      """Shelve an instance...

Page 30: OpenStack and private cloud

Decorator Function in OpenStack (1/3)

What is a decorator function?
- A function that takes another function as an argument and adds other behavior around it without modifying the original function

Example

  import time

  def elapsed_time(func):
      def decorated(*args, **kwargs):
          start = time.time()
          func(*args, **kwargs)
          end = time.time()
          print "Elapsed time: %f" % (end - start)
      return decorated

  @elapsed_time
  def hello():
      print 'hello'

[Diagram: a plain call invokes the actual function with (a, b, c) and returns its response; with a decorator, the call is routed through the decorator, which holds a reference to the actual function, runs its own statements around ref(a1, b1, c1), and returns the response.]

▶ elapsed_time() records the time before and after hello() runs, via the decorator, without changing what hello() itself does.

John Haan

Page 31: OpenStack and private cloud

Decorator Function in OpenStack (2/3)

Use in OpenStack
- add a check that runs before an API function executes, as a decorator
- for example, before shelve() runs, a decorator checks whether the instance is locked

[target API function]

  @check_instance_lock
  def shelve(self, context, instance):
      """Shelve an instance."""

[decorator function]

  def check_instance_lock(function):
      def inner(self, context, instance, *args, **kwargs):
          if instance['locked'] and not context.is_admin:
              raise exception.InstanceIsLocked(instance_uuid=instance['uuid'])
          return function(self, context, instance, *args, **kwargs)
      return inner

▶ In nova's API, a decorator function performs a check before each method executes.

▶ It takes the API method as an argument and calls the inner method, which raises an exception if the instance is locked.

Page 32: OpenStack and private cloud

Decorator Function in OpenStack (3/3)

  def require_admin_context(f):
      def wrapper(*args, **kwargs):
          nova.context.require_admin_context(args[0])
          return f(*args, **kwargs)
      return wrapper

  @require_admin_context
  def service_get_by_compute_host(context, host):
      ...

• Comparison with the OOP Decorator pattern
  – A slightly different approach from the GoF Decorator
  – GoF Decorator pattern : adds decoration through inheritance (OOP)
  – Actually closer to GoF's Template Method pattern (also OOP-style)
  – Similar to AspectJ or Spring AOP (Aspect Oriented Programming), though the Python version is simpler

Page 33: OpenStack and private cloud

Routes.Mapper.resource (1/2)

• Maps REST API requests to the controllers of an API module
• routes.mapper.resource registers mappers for the REST verbs (GET, POST, PUT, DELETE) in just a few lines
• The example below registers tests as a resource, so REST API requests for tests are forwarded to testController

  >>> from routes import Mapper
  >>> test_map = Mapper()
  >>> test_map.resource("test", "tests", controller="testController")
  >>> print test_map
  Route name          Methods Path
                      POST    /tests.:(format)
                      POST    /tests
  formatted_tests     GET     /tests.:(format)
  tests               GET     /tests
  formatted_new_test  GET     /tests/new.:(format)
  new_test            GET     /tests/new
                      PUT     /tests/:(id).:(format)
                      PUT     /tests/:(id)
                      DELETE  /tests/:(id).:(format)
                      DELETE  /tests/:(id)
  formatted_edit_test GET     /tests/:(id)/edit.:(format)
  edit_test           GET     /tests/:(id)/edit
  formatted_test      GET     /tests/:(id).:(format)
  test                GET     /tests/:(id)

Yoon Doyoul

Page 34: OpenStack and private cloud

Routes.Mapper.resource (2/2)

• Mapper usage in OpenStack Nova
• As the nova-api service loads, every controller defined in the API is registered with the mapper
• Not only the predefined APIs but also the controllers defined under the nova/api/openstack/compute/contrib directory are registered automatically

nova/api/openstack/compute/__init__.py :

  if init_only is None or 'limits' in init_only:
      self.resources['limits'] = limits.create_resource()
      mapper.resource("limit", "limits",
                      controller=self.resources['limits'])

  if init_only is None or 'flavors' in init_only:
      self.resources['flavors'] = flavors.create_resource()
      mapper.resource("flavor", "flavors",
                      controller=self.resources['flavors'],
                      collection={'detail': 'GET'},
                      member={'action': 'POST'})

Page 35: OpenStack and private cloud

API Extensions (1/2)

• The OpenStack Nova API makes it easy to add APIs, under the name "API extensions"
• Add an API in the prescribed form under the nova/api/openstack/contrib directory and it is recognized and registered automatically
• Create a file under the /contrib directory, write a class that inherits the ExtensionDescriptor object from nova.api.openstack.extensions, and implement the matching controller
• As the nova-api service loads, it runs the ExtensionManager, which automatically registers the files defined under the /contrib directory with the API

Yoon Doyoul

Page 36: OpenStack and private cloud

API Extensions (2/2)

  from nova.api.openstack import extensions
  from nova.api.openstack import wsgi

  class TestController(object):
      def create(self, req, body):
          pass
      def delete(self, req, id):
          pass
      def show(self, req, id):
          pass
      def index(self, req):
          pass

  class TempController(wsgi.Controller):
      @wsgi.action('os-stop')
      def _stop_test(self, req, id, body):
          pass

  class Tests(extensions.ExtensionDescriptor):
      """Test Code."""
      name = "Tests"
      alias = "os-tests"

      def get_resources(self):
          resources = []
          res = extensions.ResourceExtension('os-tests', TestController())
          resources.append(res)
          return resources

      def get_controller_extensions(self):
          controller = TempController()
          extension = extensions.ControllerExtension(self, 'os-tests',
                                                     controller)
          return [extension]

Page 37: OpenStack and private cloud

Usages Pipeline for OpenStack API Servers (1/3)

1. Performs pre-processing before the user's actual request is handled.
2. Pre-loads the resources the API server needs.
3. Implements add-on features such as rate limiting and health checks.

[Diagram: REST requests pass, in order, through a chain of filters (authentication, logging, restrictions, cache/meta lookups, request limiting, backed by DB/LDAP) before reaching the API server.]
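Before the api-paste.ini examples on the next slides, here is a generic WSGI filter sketch (not from the deck; the name and limit are illustrative) showing the wrapping idea each paste filter implements:

  def size_limit_filter(app, max_body=1024 * 1024):
      """Wrap a WSGI app, rejecting requests whose body exceeds max_body."""
      def middleware(environ, start_response):
          length = int(environ.get('CONTENT_LENGTH') or 0)
          if length > max_body:
              start_response('413 Request Entity Too Large',
                             [('Content-Type', 'text/plain')])
              return ['Request is too large.']
          return app(environ, start_response)
      return middleware

  # composing a pipeline is just repeated wrapping, exactly what
  # _load_pipeline() on slide 39 does:  app = size_limit_filter(auth(app))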

Sean Lee

Page 38: OpenStack and private cloud

Usages Pipeline for OpenStack API Servers (2/3)

Use in the OpenStack APIs
- declare each filter to use in the api-paste.ini file and compose them into a pipeline, processed in order
- which pipeline to use is chosen in each OpenStack component's configuration, e.g. auth_strategy = keystone

[Example] api-paste.ini for Nova

  [composite:openstack_compute_api_v2]
  use = call:nova.api.auth:pipeline_factory
  noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
  keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
  keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2

[Example] Filters declared in the pipeline

  [filter:ratelimit]
  paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
  limits = (POST, "*", .*, 15, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)

  [filter:sizelimit]
  paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

Page 39: OpenStack and private cloud

Usages Pipeline for OpenStack API Servers (3/3)

nova/api/auth.py :

  def pipeline_factory(loader, global_conf, **local_conf):
      """A paste pipeline replica that keys off of auth_strategy."""
      pipeline = local_conf[CONF.auth_strategy]
      if not CONF.api_rate_limit:
          limit_name = CONF.auth_strategy + '_nolimit'
          pipeline = local_conf.get(limit_name, pipeline)
      pipeline = pipeline.split()
      return _load_pipeline(loader, pipeline)

  def _load_pipeline(loader, pipeline):
      filters = [loader.get_filter(n) for n in pipeline[:-1]]
      app = loader.get_app(pipeline[-1])
      filters.reverse()
      for filter in filters:
          app = filter(app)
      return app

nova/api/sizelimit.py :

  class RequestBodySizeLimiter(wsgi.Middleware):
      """Limit the size of incoming requests."""

      def __init__(self, *args, **kwargs):
          super(RequestBodySizeLimiter, self).__init__(*args, **kwargs)

      @webob.dec.wsgify(RequestClass=wsgi.Request)
      def __call__(self, req):
          if req.content_length > CONF.osapi_max_request_body_size:
              msg = _("Request is too large.")
              raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg)
          if req.content_length is None and req.is_body_readable:
              limiter = LimitingReader(req.body_file,
                                       CONF.osapi_max_request_body_size)
              req.body_file = limiter
          return self.application

1. use = call:nova.api.auth:pipeline_factory actually calls pipeline_factory in nova/api/auth.py
2. The 'auth_strategy' value from the configuration is loaded to decide which pipeline to use
3. Each filter_factory is loaded in the order declared in the pipeline; e.g. paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory actually calls the RequestBodySizeLimiter class in nova/api/sizelimit.py

Page 40: OpenStack and private cloud

Lambda in Python (1/3)

• A condensed function with no name
• An ordinary def names the function so it can be reused later
• A lambda defines the function in a single line without naming it

  >>> square = lambda x: x*x
  >>> cube = lambda x: x*x*x
  >>> print square(2)
  4
  >>> print cube(2)
  8

Hyangii

Page 41: OpenStack and private cloud

Lambda in Python (2/3)

• A lambda with no arguments
• Lazy evaluation : the call is deferred until the computation is actually needed

  x = lambda: sum(range(1, 4))
  print x()   # 6

Page 42: OpenStack and private cloud

Lambda in Python (3/3)

• Uses an argument-less lambda to fetch the token
• ceilometerclient/client.py

  def _do_authenticate(self, http_client):
      token = self.opts.get('token') or self.opts.get('auth_token')
      endpoint = self.opts.get('endpoint')
      if not (token and endpoint):
          project_id = (self.opts.get('project_id') or
                        self.opts.get('tenant_id'))
          project_name = (self.opts.get('project_name') or
                          self.opts.get('tenant_name'))
          ks_kwargs = { ... }
          # retrieve session
          ks_session = _get_keystone_session(**ks_kwargs)
          token = lambda: ks_session.get_token()
          endpoint = (self.opts.get('endpoint') or
                      _get_endpoint(ks_session, **ks_kwargs))
      self.opts['token'] = token
      self.opts['endpoint'] = endpoint

  def token_and_endpoint(self, endpoint_type, service_type):
      token = self.opts.get('token')
      if callable(token):
          token = token()
      return token, self.opts.get('endpoint')

Page 43: OpenStack and private cloud

Load Drivers/Extensions/Filter using stevedore (1/3)

How to add your own Driver, Extension API, or Filter using the OpenStack stevedore library

step 1 : implement the new driver file.

  # cinder/volume/driver/new_driver.py
  class SimpleDriver:
      def get_name(self):
          return "This is Simple Driver"

step 2 : implement the new extension/filter file.

  # cinder/scheduler/filters/new_filter.py
  class SimpleFilter:
      def get_name(self):
          return "This is Simple Filter"

Inhye Park

Page 44: OpenStack and private cloud

Load Drivers/Extensions/Filter using stevedore (2/3)

step 3 : load the files implemented above using stevedore.

  from stevedore import driver
  from stevedore import extension

  mydriver = driver.DriverManager(namespace="cinder.volume.driver",
                                  name='simple_driver')
  myextension = extension.ExtensionManager(namespace="cinder.scheduler.filters")

- this loads the driver "simple_driver" registered under the namespace "cinder.volume.driver"
- and loads the filters registered under the namespace "cinder.scheduler.filters" (including "simple_filter")

step 4 : wire the new extension/filter information into the setup.cfg file.

  # setup.cfg
  [metadata]
  name = cinder

  [files]
  packages = cinder

  [entry_points]
  cinder.volume.driver =
      simple_driver = cinder.volume.driver.new_driver:SimpleDriver
  cinder.scheduler.filters =
      simple_filter = cinder.scheduler.filters.new_filter:SimpleFilter

Page 45: OpenStack and private cloud

Load Drivers/Extensions/Filter using stevedore (3/3)

step 5 : install the project from source.

  # cd cinder
  # python setup.py install

step 6 : the entry_points are generated automatically in the source directory, as follows.

  # /usr/lib/python2.7/dist-packages/cinder-2014.1.3.egg-info/entry_points.txt
  [cinder.volume.driver]
  simple_driver = cinder.volume.driver.new_driver:SimpleDriver

  [cinder.scheduler.filters]
  simple_filter = cinder.scheduler.filters.new_filter:SimpleFilter

step 7 : point the config at the newly implemented filter.

  # /etc/cinder/cinder.conf
  scheduler_default_filters = simple_filter

Page 46: OpenStack and private cloud

neutron DVR (distributed virtual router) (1/5)

before DVR : the qRouter exists only on the network node

[Diagram: a single qROUTER on the network node connects network1 and network2 to the external network; all east-west traffic between the two networks and all north-south traffic to the outside funnels through it.]

Joseph Park

Page 47: OpenStack and private cloud

neutron DVR (distributed virtual router) (2/5)

after DVR

[Diagram: each compute node runs its own distributed qROUTER. On compute node 1, VM1 (with a floating ip) on network1 sends north-south traffic straight out through the local distributed router; on compute node 2, VM121 (without a floating ip) on network2 reaches the outside via the central qROUTER that remains on the network node. East-west traffic between network1 and network2 flows directly between the distributed routers over the internal network.]

Page 48: OpenStack and private cloud

neutron DVR (distributed virtual router) (3/5)

[Diagram: the user (CLI) asks the Neutron server to add new routers; the compute nodes (CN1, ...) are notified over the message queue, add the router ports to br-int, and gather subnet info.]

L3 agent configuration on CN1 :

  [DEFAULT]
  debug = True
  l3_agents_per_router = 3
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  ovs_use_veth = False
  use_namespaces = True
  external_network_bridge = br-ex
  agent_mode = dvr
  l3_agent_manager = neutron.agent.l3_agent.L3NATAgentWithStateReport
  vrrp_confs = $state_path/vrrp

[ neutron/agent/l3/agent.py ]

  def router_added_to_agent(self, context, payload):
      LOG.debug('Got router added to agent :%r', payload)
      self.routers_updated(context, payload)
  ...

  def routers_updated(self, context, routers):
      """Deal with routers modification and creation RPC message."""
      LOG.debug('Got routers updated notification :%s', routers)
      if routers:
          # This is needed for backward compatibility
          if isinstance(routers[0], dict):
              routers = [router['id'] for router in routers]
          for id in routers:
              update = queue.RouterUpdate(id, queue.PRIORITY_RPC)
              self._queue.add(update)

  def get_ports_by_subnet(self, context, **kwargs):
      """DVR: RPC called by dvr-agent to get all ports for subnet."""
      subnet_id = kwargs.get('subnet_id')
      LOG.debug("DVR: subnet_id: %s", subnet_id)
      filters = {'fixed_ips': {'subnet_id': [subnet_id]}}
      return self.plugin.get_ports(context, filters=filters)

Page 49: OpenStack and private cloud

neutron DVR (distributed virtual router) (4/5)

[Diagram: compute node 1 runs br-int, br-tun, br-ex, the distributed qROUTER, and a FIP namespace (30.1.0.31); ethN leads to the NETNODE and ethX to the INTERNET.]

- VM112 : 30.1.1.3, with floating ip 10.1.0.145
- VM121 : 30.1.2.7, without floating ip
- a VM with a floating ip sends all outbound packets via the FIP agent (br-ex and ethX)
- a VM without a floating ip sends all outbound packets via the SNAT agent

CN1 is configured with
  L3 : agent_mode = dvr
  OVS : enable_distributed_routing = True

Page 50: OpenStack and private cloud

neutron DVR (distributed virtual router) (5/5)

[Diagram: compute node 1 (br-int, br-tun, br-ex, distributed qROUTER, FIP namespace) and the network node (br-int, br-tun, br-ex, qDHCP, central qROUTER with SNAT) are linked via ethN; ethX on each node leads to the INTERNET. Recoverable addresses: VM112 30.1.1.3 (floating ip 10.1.0.145), VM121 30.1.2.7; router ports 30.1.1.1, 30.1.2.1, 30.1.2.4; FIP/SNAT side 10.1.0.148, 10.1.0.149, 10.1.0.32, 10.1.0.21. The diagram traces the packet flow with a floating ip (out through the local FIP namespace) and without a floating ip (out through SNAT on the network node).]

The NETWORK node is configured with
  agent_mode = dvr_snat
  OVS : enable_distributed_routing = True