Resource on OpenStack
• OpenStack allocates an instance's resources out of the compute nodes' resources
• Resource-related concepts in OpenStack:
1. Quotas – the logical resources each user is allowed to consume
2. Flavors – the unit (size template) in which instances are created
3. Overcommit – a feature that uses virtualization to expose more logical resources than the physical resources actually available
OpenStack Quota
• Quotas are operational limits.
• A concept for making the most of cloud resources on a per-tenant basis
• A logical concept at the tenant and tenant-user level
• Quota management is provided by several services:
- OpenStack Compute Service
- OpenStack Block Storage Service
- OpenStack Networking Service
• Typically, the default quota settings are changed when a tenant needs more than 10 volumes or 1 TB on a compute node
▼ OpenStack Dashboard – Quota Usage
• Manage Compute Service Quotas
Managed with the nova quota-* commands
# To view & update quota values for an existing tenant
$ nova quota-defaults
$ nova quota-class-update --key value default
$ nova quota-class-update --instances 15 default
$ tenant=$(keystone tenant-list | awk '/tenantName/ {print $2}')
$ nova quota-show --tenant $tenant
$ nova quota-update --quotaName quotaValue tenantID
$ nova quota-update --floating-ips 20 $tenant
$ nova quota-show --tenant $tenant
$ nova help quota-update
# To view & update quota values for an existing tenant user
The "nova absolute-limits" command shows the current quota values together with current usage
$ nova absolute-limits --tenant tenantName
$ tenant=$(keystone tenant-list | awk '/tenantName/ {print $2}')
$ nova quota-show --user $tenantUser --tenant $tenant
$ nova quota-update --user $tenantUser --quotaName quotaValue $tenant
$ nova quota-update --user $tenantUser --floating-ips 12 $tenant
$ nova quota-show --user $tenantUser --tenant $tenant
• Manage Compute Service Quotas – default Quotas
▲ Compute quota descriptions
Quota name                   Description
cores                        Number of instance cores (VCPUs) allowed per tenant.
fixed-ips                    Number of fixed IP addresses allowed per tenant. This number must be equal to or greater than the number of allowed instances.
floating-ips                 Number of floating IP addresses allowed per tenant.
injected-file-content-bytes  Number of content bytes allowed per injected file.
injected-file-path-bytes     Length of injected file path.
injected-files               Number of injected files allowed per tenant.
instances                    Number of instances allowed per tenant.
key-pairs                    Number of key pairs allowed per user.
metadata-items               Number of metadata items allowed per instance.
ram                          Megabytes of instance RAM allowed per tenant.
security-groups              Number of security groups per tenant.
security-group-rules         Number of rules per security group.
• Manage Compute Service Quotas – default Quotas on the Dashboard
1. Log in as an administrator
2. Admin
3. System Panel
4. System Info
5. Default Quotas
• Manage Compute Service Quotas – test
- Attempting an operation that exceeds the configured quota fails, printing an error message
- Once a quota is fully consumed, the corresponding button is disabled and no further instances can be launched
• Manage Block Storage Service Quotas
Managed with the cinder quota-* commands
▼ Block Storage Quotas
# To view quota values
$ cinder quota-defaults TENANT_ID
$ cinder quota-show TENANT_NAME
$ cinder quota-usage TENANT_ID
Property name  Defines the number of
gigabytes Volume gigabytes allowed for each tenant.
snapshots Volume snapshots allowed for each tenant.
volumes Volumes allowed for each tenant.
# To edit & update quota values
1. Clear per-tenant quota limits
$ cinder quota-delete tenantID
2. For a new project:
/etc/cinder/cinder.conf, "quota" settings
3. For an existing tenant:
$ tenant=$(keystone tenant-list | awk '/tenantName/ {print $2}')
$ cinder quota-update --quotaName NewValue tenantID
$ cinder quota-update --volumes 15 $tenant
$ cinder quota-show tenant01
◀ /etc/cinder/cinder.conf
• Manage Networking Service Quotas
1. Basic quota configuration
2. Configure per-tenant quotas
Basic quota configuration: all tenants have the same quota values
▼ neutron.conf
Configure per-tenant quotas
1. Set the quota_driver option in neutron.conf
quota_driver = neutron.db.quota_db.DbQuotaDriver
(With this setting, the quota command API becomes available.)
2. quotas extension
# To list the Networking extensions & show information for the quotas extension
$ neutron ext-list -c alias -c name
$ neutron ext-show quotas
$ neutron quota-list
$ neutron quota-show --tenant_id 6f88036c45344d9999a1f971e4882723
$ neutron quota-show
# To update quota values for a specified tenant
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --network 5
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --subnet 5 --port 20
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 -- --floatingip 20
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --network 3 --subnet 3 --port 3 -- --floatingip 3 --router 3
# To reset quota values for a specified tenant back to the defaults
$ neutron quota-delete --tenant_id 6f88036c45344d9999a1f971e4882723
Deleted quota: 6f88036c45344d9999a1f971e4882723
$ neutron quota-show --tenant_id 6f88036c45344d9999a1f971e4882723   # verify
OpenStack Flavors
• Define the sizes of virtual hardware that can be allocated to Nova compute instances (virtual hardware templates)
• Specified as the instance's "size" when an instance is created
▲ default Flavors
Flavor VCPUs Disk (in GB) RAM (in MB)
m1.tiny 1 1 512
m1.small 1 20 2048
m1.medium 2 40 4096
m1.large 4 80 8192
m1.xlarge 8 160 16384
• Optimal flavors (an example)
Physical resources: 4 cores, 60 GB memory
1. Assuming the CPU supports Hyper-Threading: 4 cores * 2 = 8 vCPU cores
2. RAM 60 GB (really 60 GB minus some overhead) = 8 cores * 7.5 GB (7.5 GB of RAM per core)
3. Overcommit (default 1:16):
- 8 cores * 16 = 64 cores
- 7.5 GB / 16 = 480 MB
64 vCPUs, 480 MB per VM
Nice flavors:
64 vCPU, 60 GB RAM (480 MB per vCPU), 10 GB local storage
1 vCPU / 480 MB / 10 GB (small) ◀ default (basic)
2 vCPU / 960 MB / 20 GB (medium)
4 vCPU / 1.8 GB / 40 GB (large)
8 vCPU / 3.6 GB / 80 GB (extra large)
16 vCPU / 7.2 GB / 160 GB (extra extra large)
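The doubling ladder above can be sketched as a quick calculation (an illustrative sketch only; the base flavor of 1 vCPU / 480 MB / 10 GB comes from the example, and the slide rounds the larger RAM figures down to 1.8/3.6/7.2 GB):

```python
# Sketch: build the example flavor ladder by doubling a base flavor per tier.
BASE = {"vcpus": 1, "ram_mb": 480, "disk_gb": 10}  # from the example above
TIERS = ["small", "medium", "large", "extra large", "extra extra large"]

def flavor_ladder(base, tiers):
    flavors = []
    for i, name in enumerate(tiers):
        factor = 2 ** i  # each tier doubles the one before it
        flavors.append({
            "name": name,
            "vcpus": base["vcpus"] * factor,
            "ram_mb": base["ram_mb"] * factor,
            "disk_gb": base["disk_gb"] * factor,
        })
    return flavors

for f in flavor_ladder(BASE, TIERS):
    print(f)
```

Keeping each tier an exact multiple of the base flavor lets instances of any size pack onto a host without stranding leftover vCPU or RAM.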
# Create a flavor
$ nova flavor-list
$ nova flavor-create FLAVOR_NAME FLAVOR_ID RAM_IN_MB ROOT_DISK_IN_GB NUMBER_OF_VCPUS
$ nova flavor-create --is-public true m1.extra_tiny auto 256 0 1 --rxtx-factor .1
$ nova help flavor-create
$ nova flavor-access-add FLAVOR TENANT_ID
# Delete a flavor
$ nova flavor-delete FLAVOR_ID
OpenStack OverCommit
• Compute nodes support overcommitting CPU and RAM
• Instances can be allocated more resources than physically exist
• An OpenStack cloud can run a larger number of instances by accepting some loss of per-instance performance
▼ Default overcommit ratios
CPU allocation ratio – 16:1
RAM allocation ratio – 1.5:1
DISK allocation ratio – 1:1
• Number of instances that can be created on a compute node:
(OR * PC) / VS
OR – CPU overcommit ratio (virtual cores per physical core)
PC – number of physical cores
VS – number of cores per instance
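Plugging the default ratio into the formula above reproduces the capacity numbers used throughout this deck (a sketch; the function name is ours):

```python
def max_instances(overcommit_ratio, physical_cores, vcpus_per_instance):
    """(OR * PC) / VS: CPU-wise instance capacity of one compute node."""
    return (overcommit_ratio * physical_cores) // vcpus_per_instance

# Default 16:1 CPU overcommit on a 4-core compute node:
print(max_instances(16, 4, 1))  # 64 one-vCPU instances
print(max_instances(16, 4, 4))  # 16 four-vCPU instances
```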
• nova.conf (*_allocation_ratio)
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0
• Overcommit & Nova Scheduler
- Overcommit is managed by the Nova scheduler
- The scheduler performs scheduling through the concept of filters
Filters related to overcommit:
CoreFilter (cpu_allocation_ratio)
RamFilter (ram_allocation_ratio)
DiskFilter (disk_allocation_ratio)
• Configure
CPU overcommitting: use powers of two (1:2, 1:4, 1:8, …)
RAM overcommitting: using memory overcommitting is not recommended
Disk overcommitting: not recommended
• Overcommit considerations
- Overcommitting at the OpenStack level
- Overcommitting at the KVM level
- Overcommitting at the libvirt level
• Quick test
▲ OpenStack Dashboard
Memory overrun: once the overcommitted 23 GB of memory was exceeded, creating an instance failed with an error
• Example
▼ Hardware spec
Host                       OS                       CPU      RAM    DISK    Hypervisor     Role
Test PC                    X                        4 cores  16GB   1.8TB   ESXi 5.5       Host
OpenStack Controller node  Ubuntu Server 14.04 LTS  4 cores  16GB   80GB    KVM (libvirt)  Guest
Network node               Ubuntu Server 14.04 LTS  1 core   2GB    80GB    KVM (libvirt)  Guest
Compute node               Ubuntu Server 14.04 LTS  4 cores  60GB   80GB    KVM (libvirt)  Guest
Block storage node         Ubuntu Server 14.04 LTS  1 core   2GB    61GB    KVM (libvirt)  Guest
▼ Overcommitted resources
vCPU: 4 (physical) * 16 = 64 (logical)
Memory: 15 GB (physical, after overhead) * 1.5 ≈ 23 GB (logical)
Disk: 1.8 TB (physical) * 1 = 1.8 TB (logical)
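Each logical figure above is just physical capacity times the allocation ratio (a sketch; 15 GB is the slide's usable RAM after host overhead, and the resulting 22.5 GB is rounded to 23 GB on the slide):

```python
def logical_capacity(physical, allocation_ratio):
    """Logical capacity exposed to instances = physical capacity * ratio."""
    return physical * allocation_ratio

print(logical_capacity(4, 16))     # vCPU: 64 logical cores
print(logical_capacity(15, 1.5))   # RAM: 22.5 GB logical (slide rounds to 23 GB)
print(logical_capacity(1.8, 1.0))  # Disk: 1.8 TB logical (no overcommit)
```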
• Caveats & references
http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/sect-Virtualization-Tips_and_tricks-
Overcommitting_with_KVM.html
https://doc.opensuse.org/products/draft/SLES/SLES-kvm_sd_draft/book.kvm.html
http://serverascode.com/2013/02/20/overcommitting-with-kvm.html
OpenStack Scheduling & Filter
• Compute uses nova-scheduler to decide where compute and volume requests are serviced
• Several configuration options are available
• Scheduler – nova-scheduler communicates with the other Nova components through the queue and the central DB
• Queue
- The queue is essential to scheduling
- Every compute node periodically reports its available resources and hardware specs to nova-scheduler through the queue
- The compute scheduler is configured as the Filter Scheduler
AvailabilityZoneFilter
– Hosts are in the requested availability zone
RamFilter
– Hosts have sufficient RAM available
ComputeFilter
– Hosts are capable of servicing the request
Nova VM Provisioning ▶
• Filter Scheduler – through filtering and weighting, produces the placement decision that lets a new instance be created
• Filtering & Weights
- Filtering is performed using filter properties
- Standard filter classes live in nova.scheduler.filters
- The Filter Scheduler operates on a weighted-hosts basis; configure it with the scheduler_weight_classes option
* Weighted hosts: with RamWeigher, the host with the larger weight takes priority
- Because repeatedly filtering and weighting hosts consumes resources, there are options to tune how this is done
- You can also write your own filter algorithm and plug it in
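The filter-then-weight flow described above can be illustrated with a toy scheduler (a sketch only, not Nova's real implementation; the host records and names are invented):

```python
# Toy filter scheduler: drop hosts that cannot fit the request (RamFilter-style),
# then weight the survivors by free RAM (RamWeigher-style) and pick the best.
def ram_filter(host, request, ram_allocation_ratio=1.5):
    # A host passes if the requested RAM fits within its overcommitted capacity.
    usable = host["total_ram_mb"] * ram_allocation_ratio - host["used_ram_mb"]
    return request["ram_mb"] <= usable

def schedule(hosts, request):
    candidates = [h for h in hosts if ram_filter(h, request)]
    if not candidates:
        return None  # no host can take the instance
    # Weighting: the host with the most free RAM wins.
    return max(candidates, key=lambda h: h["total_ram_mb"] - h["used_ram_mb"])

hosts = [
    {"name": "node1", "total_ram_mb": 16384, "used_ram_mb": 16000},
    {"name": "node2", "total_ram_mb": 16384, "used_ram_mb": 4096},
]
print(schedule(hosts, {"ram_mb": 2048})["name"])  # node2 (most free RAM)
```

Note how the filter phase only answers yes/no per host, while the weight phase ranks the survivors; Nova chains many such filters before weighting.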
• http://docs.openstack.org/icehouse/training-guides/content/operator-computer-node.html
• http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
OpenStack Scaling
• HORIZONTAL SCALING
• OpenStack itself is designed to be horizontally scalable
• During scale-out and load balancing, groups communicate with each other over the message bus
• Flavors help carry out resource-efficient scale-out
▼ OpenStack default flavors
• http://docs.openstack.org/openstack-ops/content/scaling.html
Flavor VCPUs Disk (in GB) RAM (in MB)
m1.tiny 1 1 512
m1.small 1 20 2048
m1.medium 2 40 4096
m1.large 4 80 8192
m1.xlarge 8 160 16384