Nejsem nasr… štvanej! ("I'm not piss… er, ticked off!")


Nejsem nasr… štvanej!
Slony I: PostgreSQL master-slave replication

Why replicate a database?
That's what the KIV/DS course is for :-) Replication for PostgreSQL has always been very tempting. It has attracted not only programmers, but above all the operators of database systems, who use it for backups and for distributing load.

Projects dealing with database replication in the PostgreSQL environment:
- PostgreSQL Replicator
- PG Replication
- DB Balancer
- Usogres
- eRServer
- RServ improved
In the contrib directory of the standard PostgreSQL distribution we also find the rserv and dbmirror projects.

Why?
Why do so many projects exist that do almost the same thing?

Isn't it better to properly polish one project than to start yet another new one? Well, programmers (and database programmers in particular) are quite picky when it comes to the programming language being used.

Slony I
And here we have yet another project!

- it is written in C, a language proven in practice and trouble-free in the open-source community

- it has been well received by both the developer and the user community

- Slony should also unify the current efforts on the topic of replication

Slony is interesting..
Databases can be replicated not only from the master database, but also from one another.

From a performance point of view, it is best to use the master database only for write access.

Replication queries from the slave databases put unnecessary load on the master database and, what is worse, they can hold locks during their replication that block the master database's write operations into the replication log.
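
A minimal sketch of the read/write split implied by the two points above, assuming the application sends writes to the master and read-only queries to a slave so that the master handles little beyond writes. The connection strings and the accounts table are hypothetical.

```python
import psycopg2

MASTER_DSN = "dbname=app host=db-master"   # hypothetical connection strings
SLAVE_DSN = "dbname=app host=db-slave1"

def get_connection(readonly: bool):
    # route read-only work to a replica, everything else to the master
    return psycopg2.connect(SLAVE_DSN if readonly else MASTER_DSN)

# write path: goes to the master only
with get_connection(readonly=False) as conn, conn.cursor() as cur:
    cur.execute("INSERT INTO accounts(name) VALUES (%s)", ("alice",))

# read path: served by a slave replica
with get_connection(readonly=True) as conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM accounts")
    print(cur.fetchone()[0])
```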

Main features of Slony I
- successor to eRServer
- asynchronous replication: it is not guaranteed that at any given moment all nodes of the cluster hold identical data
- trigger based: triggers created on the replicated tables record the changed data into a replication log; the alternative is a replication mechanism built into the database server itself (a sketch of the idea follows this list)
- cascading slaves: databases can be replicated not only from the master database, but also from one another
- batch mode: changes are not applied to the slave nodes transaction by transaction in the order they were executed on the master node; instead, transactions are replayed on the slaves in batches, where one batch usually represents a certain time interval on the master node
- store and forward: changes are first stored in the log and only then transferred to the other nodes; the nodes access only the log, never the original tables
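
A minimal sketch of the trigger-based, store-and-forward idea under illustrative names (repl_log, log_change, accounts); this is not Slony's actual internal trigger or log tables, just the shape of the mechanism: a trigger appends every change to a journal table, and replication later reads only the journal.

```python
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS repl_log (
    log_id     bigserial PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,     -- 'INSERT' | 'UPDATE' | 'DELETE'
    row_data   jsonb,                    -- new row (old row for DELETE)
    logged_at  timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO repl_log(table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
        RETURN OLD;
    END IF;
    INSERT INTO repl_log(table_name, operation, row_data)
    VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS accounts_log ON accounts;
CREATE TRIGGER accounts_log
AFTER INSERT OR UPDATE OR DELETE ON accounts
FOR EACH ROW EXECUTE PROCEDURE log_change();
"""

# run the DDL on the (hypothetical) master database
with psycopg2.connect("dbname=master host=db1") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```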

Main features of Slony I (continued)
- pull based: a slave node asks the master node to send it a batch of transactions; the master node never pushes anything anywhere on its own (a sketch of the pull loop follows this list)
- optimized for LAN: a large network bandwidth between the individual nodes is assumed, so no operations are performed that would reduce the size of the transaction batch to be transferred; if one row is changed several times within a single batch, the redundant changes are not collapsed before the data is transferred
- online replication system: related to the previous point; the design assumes that the individual nodes are highly available to one another during normal operation
- manual failover: in case of a crash, a node can be turned from a slave into the master; this operation is not performed automatically
- switchover: a similar operation to the previous one, but without losing the data of the most recent transactions
- switchback: the opposite of switchover
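
A minimal sketch of the pull-based, batch-mode behaviour, reusing the illustrative repl_log journal from the previous sketch: the slave asks the master for everything newer than the last entry it applied and replays the whole batch in one transaction. The DSNs are hypothetical and every replicated table is assumed to have an id primary key.

```python
import time
import psycopg2

MASTER_DSN = "dbname=master host=db1"   # hypothetical connection strings
SLAVE_DSN = "dbname=slave host=db2"

def pull_one_batch(last_applied: int) -> int:
    """Ask the master for journal entries newer than last_applied and replay them."""
    with psycopg2.connect(MASTER_DSN) as master, psycopg2.connect(SLAVE_DSN) as slave:
        with master.cursor() as mc, slave.cursor() as sc:
            mc.execute(
                "SELECT log_id, table_name, operation, row_data "
                "FROM repl_log WHERE log_id > %s ORDER BY log_id",
                (last_applied,),
            )
            for log_id, table, op, row in mc.fetchall():
                # naive replay: delete the old version, re-insert the new one
                sc.execute(f"DELETE FROM {table} WHERE id = %s", (row["id"],))
                if op != "DELETE":
                    cols = ", ".join(row)
                    vals = ", ".join(["%s"] * len(row))
                    sc.execute(
                        f"INSERT INTO {table} ({cols}) VALUES ({vals})",
                        list(row.values()),
                    )
                last_applied = log_id
    return last_applied       # both transactions commit when the blocks exit

last = 0
while True:                   # the slave pulls; the master never pushes
    last = pull_one_batch(last)
    time.sleep(10)
```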

Clusters
Databases associated for the purpose of replication are organized into clusters. One database can be a member of several clusters at the same time; conversely, a cluster can have exactly one controlling master database. In Slony terminology, the individual databases joined into a cluster are called nodes (NODE). The master database of the cluster must be on the node with number 1, and it is very important.

Individual slave nodes can be added to and removed from the cluster, but the cluster-master status cannot be moved from one node to another without having to tear down the whole cluster. Recreating a cluster, however, is not that demanding an operation. The master status of a database in the cluster has nothing to do with the source of data for replication: data can also be replicated between individual slave databases, or in the slave-to-master direction. Slony is a single-master replication system, which means that if we replicate a set of tables, that set can be written to on exactly one NODE. It does not matter whether that node has the master or the slave status in the cluster.
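
A tiny illustration (with made-up node numbers) of the structure just described: a cluster is a group of numbered nodes, each replication set has exactly one origin node that may be written to, and the remaining nodes subscribe to the set, possibly cascading from another slave rather than from the origin directly.

```python
cluster = {
    "name": "demo_cluster",
    "nodes": [1, 2, 3, 4],        # node 1 is the controlling master of the cluster
    "sets": [
        {
            "id": 1,
            "origin": 1,          # the only node where this set may be written to
            # receiver -> provider; node 4 cascades from slave node 2
            "subscriptions": {2: 1, 3: 1, 4: 2},
        }
    ],
}

for s in cluster["sets"]:
    assert s["origin"] in cluster["nodes"], "each set has exactly one origin node"
    for receiver, provider in s["subscriptions"].items():
        assert provider in cluster["nodes"] and receiver != s["origin"]
        print(f"set {s['id']}: node {receiver} pulls from node {provider}")
```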

Agents
A program that passes messages between the individual databases and processes the messages addressed to it. In its operation it resembles, for example, an SMTP mail server.

For each database we need exactly one agent per cluster. If a database is a member of several clusters, you will need to run several agents. Each agent connects to its local database and every 10 seconds checks the event table, whose entries it then tries to process. By default, SYNC events are generated every 60 seconds. Because at the moment both of our agents are connected only to their local databases, locally generated events pile up in the table with nobody to process them. On top of that, every five minutes the agent also deletes events that have already been forwarded, and after 30 minutes it even follows this up with a VACUUM.
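
A minimal sketch of the agent's polling loop with the intervals from the slide (10 s event poll, 60 s SYNC, 5 min cleanup). The events table and its columns are illustrative, not Slony's real internal tables, and handle_event() is a placeholder.

```python
import time
import psycopg2

POLL_EVERY = 10        # seconds between checks of the event table
SYNC_EVERY = 60        # seconds between generated SYNC events
CLEANUP_EVERY = 300    # seconds between deletions of forwarded events

def handle_event(ev_id: int, ev_type: str) -> None:
    print(f"processing event {ev_id} of type {ev_type}")   # placeholder

def agent_loop(local_dsn: str) -> None:
    conn = psycopg2.connect(local_dsn)
    conn.autocommit = True
    last_sync = last_cleanup = 0.0
    while True:
        now = time.time()
        with conn.cursor() as cur:
            # 1. pick up events that have not been processed yet
            cur.execute(
                "SELECT ev_id, ev_type FROM events "
                "WHERE processed = false ORDER BY ev_id"
            )
            for ev_id, ev_type in cur.fetchall():
                handle_event(ev_id, ev_type)
                cur.execute(
                    "UPDATE events SET processed = true WHERE ev_id = %s", (ev_id,)
                )
            # 2. periodically generate a SYNC event for the other nodes
            if now - last_sync >= SYNC_EVERY:
                cur.execute("INSERT INTO events(ev_type) VALUES ('SYNC')")
                last_sync = now
            # 3. periodically throw away events that have already been forwarded
            if now - last_cleanup >= CLEANUP_EVERY:
                cur.execute("DELETE FROM events WHERE processed = true")
                last_cleanup = now
        time.sleep(POLL_EVERY)

agent_loop("dbname=node1 host=db1")    # hypothetical local database
```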

And now a few facts from the official website..

Master to multiple cascaded slaves
The basic structure of the systems combined in a Slony-I installation is a master with one or more slave nodes. Not all slave nodes must receive the replication data directly from the master. Every node that receives the data from a valid source can be configured to be able to forward that data to other nodes. There are three distinct ideas behind this capability.

The first is scalability. One database, especially the master that receives all the update transactions from the client applications, has only a limited capability to satisfy the slave nodes' queries during the replication process. In order to satisfy the need for a large number of read-only slave systems, it must be possible to cascade. The master does not have to serve all the slave servers itself.

The second idea is to limit the required network bandwidth for a backup site while keeping the ability to have multiple slaves at the remote location.

The third idea is to be able to configure failover scenarios. In a master to multiple slaves configuration, it is unlikely that all slave nodes are in exactly the same synchronization status when the master fails. To ensure that one slave can be promoted to the master, it is necessary that all remaining systems can agree on the status of the data. Since a committed transaction cannot be rolled back, this status is undoubtedly the most recent sync status of all remaining slave nodes. The delta between this one and every other node must be generated easily and quickly, and applied at least to the new master (if that is not the same system) before the promotion can occur. It does not matter if some node goes down.

Nodes, sets and forwarding
The Slony-I replication system can replicate tables and sequence numbers. Replicating sequence numbers is not entirely without problems. Table and sequence objects are logically grouped into sets. Every set should contain a group of objects that is independent from the other objects originating from the same master. In short, all tables that have relationships that could be expressed as foreign key constraints, and all the sequences used to generate any serial numbers in these tables, should be contained in one and the same set.
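
A small illustration, with made-up numbers, of the failover agreement described above: the most advanced remaining node defines the target sync status, and every other node only needs the delta up to that point.

```python
# hypothetical: remaining node id -> last SYNC event it has applied
last_applied_sync = {2: 1041, 3: 1044, 4: 1039}

new_master = max(last_applied_sync, key=last_applied_sync.get)
target = last_applied_sync[new_master]
print(f"promote node {new_master}, which is at sync {target}")

for node, sync in last_applied_sync.items():
    if node != new_master:
        # the delta this node still has to replay around the promotion
        print(f"node {node} must still apply SYNC events {sync + 1}..{target}")
```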

Exchanging messages
All configuration changes, like adding nodes, subscribing or unsubscribing sets, adding a table to a set and so forth, are communicated through the system as events. An event is generated by inserting the event information into a table and notifying all listeners on the same. SYNC messages are communicated with the same mechanism. The Slony-I system configuration contains, for every node, information about which other nodes it will query for which events.
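
A minimal sketch of the "insert into an event table, then notify the listeners" mechanism, using PostgreSQL LISTEN/NOTIFY through psycopg2. The events table and the slony_event channel name are illustrative, not Slony's internal ones.

```python
import select
import psycopg2

def emit_event(conn, ev_type: str) -> None:
    # producer side: store the event, then wake up everyone listening
    with conn.cursor() as cur:
        cur.execute("INSERT INTO events(ev_type) VALUES (%s)", (ev_type,))
        cur.execute("NOTIFY slony_event")

def listen_for_events(dsn: str) -> None:
    # consumer side: block until notified, then go read the event table
    conn = psycopg2.connect(dsn)
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute("LISTEN slony_event")
    while True:
        if select.select([conn], [], [], 10) == ([], [], []):
            continue                       # timeout, just wait again
        conn.poll()
        while conn.notifies:
            conn.notifies.pop(0)
            process_new_events(conn)

def process_new_events(conn) -> None:
    with conn.cursor() as cur:
        cur.execute("SELECT ev_id, ev_type FROM events WHERE processed = false")
        for ev_id, ev_type in cur.fetchall():
            print("got event", ev_id, ev_type)
            cur.execute("UPDATE events SET processed = true WHERE ev_id = %s", (ev_id,))

listen_for_events("dbname=node2 host=db2")   # hypothetical local node
```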

In conclusion
Slony allows a topology with one publisher and several subscribers, i.e. a one-way distribution of changes. It is, however, possible for a single PostgreSQL instance to be both a subscriber and a publisher. Slony also blocks data changes at the subscribers. This functionality is provided by adding a trigger that uses the deny_access() function, which blocks all INSERT, DELETE and UPDATE operations that are not invoked by the Slony program.
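
A simplified stand-in for the deny_access() idea just described (not Slony's actual trigger): on a subscriber, any direct INSERT, UPDATE or DELETE on a replicated table is rejected unless a session setting that a replication process would set is present. The names deny_direct_write, replication.active and accounts are illustrative.

```python
import psycopg2

DDL = """
CREATE OR REPLACE FUNCTION deny_direct_write() RETURNS trigger AS $$
BEGIN
    -- a (hypothetical) replication process sets this GUC before applying a batch
    IF current_setting('replication.active', true) = 'on' THEN
        IF TG_OP = 'DELETE' THEN
            RETURN OLD;
        END IF;
        RETURN NEW;
    END IF;
    RAISE EXCEPTION 'table % is replicated: direct % is not allowed on a subscriber',
        TG_TABLE_NAME, TG_OP;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS accounts_deny ON accounts;
CREATE TRIGGER accounts_deny
BEFORE INSERT OR UPDATE OR DELETE ON accounts
FOR EACH ROW EXECUTE PROCEDURE deny_direct_write();
"""

# install the trigger on the (hypothetical) subscriber database
with psycopg2.connect("dbname=slave host=db2") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
        # a replication session would first run: SET replication.active = 'on';
```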