Slony-I Replication – Your Initial Setup

Slony-I is a popular PostgreSQL replication system. Basically, it allows you to automatically copy data from one database to other databases. It can also do a couple of other things, but in this article we will concentrate on replication and provide a step-by-step guide on how to configure your first Slony-I cluster on Windows. The setup will involve a master node and two slave nodes. The master node holds the database that will be replicated, and the slaves get updated as the master is modified.

Terminology

  • Node – Each database is referred to as a node. Think of a node as a specific database running in a specific database instance.
  • Cluster – A collection of nodes.
  • Daemon – A process that handles communications for a node.
  • Node’s configuration file – A text file containing database connection information.
  • Engine – An instance of a daemon, identifiable by its node’s configuration file.
  • Service – A Windows service that can run multiple engines.
  • Replication set – Holds information about the database objects that will participate in the replication.
  • Event – An event represents any change in a cluster’s state, such as modifications to the cluster’s configuration or modifications to data that is under replication. Events are exchanged between the nodes via the daemons, allowing the nodes to update their state.

Architecture

Slony-I is trigger-based, which means that triggers installed on every replicated table intercept every incoming DML operation. These triggers store the changed data in an internal event queue, and the changes are later communicated to the subscribed nodes. The communication between the nodes is handled by the daemons.
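
To give a rough idea of the mechanism, here is a conceptual sketch in plain PostgreSQL of a trigger that captures DML into a queue table. This is only an illustration of the idea, not Slony-I’s internal schema or triggers; the names dml_event_queue and capture_dml are made up for the example.

-- Conceptual sketch only: NOT Slony-I's internal implementation.
-- A simple queue table holding one row per captured DML operation.
CREATE TABLE dml_event_queue (
    event_id   bigserial PRIMARY KEY,
    table_name text NOT NULL,
    operation  text NOT NULL,   -- 'INSERT', 'UPDATE' or 'DELETE'
    row_data   text             -- textual snapshot of the affected row
);

-- Trigger function that records every change made to the table it is attached to.
CREATE OR REPLACE FUNCTION capture_dml() RETURNS trigger AS $$
BEGIN
    INSERT INTO dml_event_queue (table_name, operation, row_data)
    VALUES (TG_TABLE_NAME, TG_OP, COALESCE(NEW, OLD)::text);
    RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- Attach the trigger to the example table replicated later in this article.
CREATE TRIGGER table_a_capture
AFTER INSERT OR UPDATE OR DELETE ON public.table_a
FOR EACH ROW EXECUTE PROCEDURE capture_dml();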

Configuration

slon.exe

This process represents a daemon. You need to run an instance of slon.exe for each node that you want in your replication network. In this example we have 3 nodes, so we will be running 3 instances of slon.exe, one on each node.

Example of Slony-I cluster setup

There are two ways to run this daemon: either manually from the command line or by setting it up as a Windows service. Both require an association with a daemon/node configuration file.

Daemon/Node configuration file

The configuration file holds information relevant to establishing a connection to a database.

  • master.conf:

cluster_name='slony_cluster'
conn_info='dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres'

  • slave1.conf:

cluster_name='slony_cluster'
conn_info='dbname=slave1 host=192.168.1.2 port=5432 user=postgres password=postgres'

  • slave2.conf:

cluster_name='slony_cluster'
conn_info='dbname=slave2 host=192.168.1.3 port=5432 user=postgres password=postgres'

Alternative 1 – starting slon.exe from the command line

This is the most basic form of running the daemon. The only parameter it needs to start is -f, followed by the path to a configuration file.

From the master node's command line you should run:

slon.exe -f master.conf

From slave 1’s command line you should run:

slon.exe -f slave1.conf

From slave 2’s command line you should run:

slon.exe -f slave2.conf

Alternative 2 – starting slon.exe as a service

If you don’t want to manually run the slon.exe executable every time your node starts, you can instead configure it as a service. slon.exe comes with additional functionality to configure itself as a service.

This procedure involves two steps: service registration and engine registration.

1. Service registration

In order to register your Slony-I service, you must issue the following command:

slon.exe -regservice servicename

where servicename will be the name of your service. Be aware, though, that Windows has some limitations on the characters allowed in service names. For this reason, and also for the sake of consistency, I’d recommend that you stick to some convention. At my company, for example, we name the services sl-nodename, where nodename corresponds to the name of the node the service is running on.

From the master node’s command line you should run:

slon.exe -regservice sl-master

From slave 1’s command line you should run:

slon.exe -regservice sl-slave1

From slave 2’s command line you should run:

slon.exe -regservice sl-slave2

2. Engine registration

If you managed to follow the above steps correctly, you should have one service registered per node. That’s good. However, you might have noticed that we haven’t mentioned any connection to a database so far, and the services won’t be of much use until we do. The way to do this is by “adding engines” to the service. What are engines, you may ask? Think of engines as separate instances of the slon.exe daemon, each with its own configuration file. To add an engine:

From the master node’s command line you should run:

slon.exe -addengine sl-master master.conf

From slave 1’s command line you should run:

slon.exe -addengine sl-slave1 slave1.conf

From slave 2’s command line you should run:

slon.exe -addengine sl-slave2 slave2.conf

When the commands run successfully, they will respond with a message: “Engine added”.

From now on, all your service administration can be done from the standard Windows services management console, which can be opened from the command line of each node with the following command:

services.msc

From there you can start and stop your service along with all the engines it manages.
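
If you prefer the command line, you can also start and stop a registered service with the standard net commands; for example, using the service name registered above on the master:

net start sl-master
net stop sl-master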

slonik.exe

Once you have the services up and running for all your nodes, it’s time to start telling Slony-I how you want your nodes to interact with each other. So far, they are all just nodes with no specific roles and no replication rules.

All the configuration is done by means of scripts written in a scripting language called slonik. This language comes with its own interpreter, slonik.exe, which you feed with configuration scripts. The idea is to build one configuration script per node and run it through slonik.exe. If everything goes well, that is all you need to get your replication going.

The master node’s script

The master’s configuration script will create and configure the only replication set we will use for this example, but you can have several replication sets living together.

The script for this is quite simple:

  • master.slonik:

cluster name = slony_cluster;
node 1 admin conninfo = 'dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres';
init cluster (id=1, comment = 'master node');
create set (id=1, origin=1, comment= 'my replication set');
set add table (set id=1, origin=1, id=1, fully qualified name = 'public.table_a', comment= '');

A brief explanation of the commands involved:

cluster name = slony_cluster;

This command defines the name of the cluster.

node 1 admin conninfo = 'dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres';

This command defines that node number 1 (we usually assign this value to the master) can be reached at the location specified by conninfo.

The two commands above belong to the preamble of the script and they do not cause any direct action by themselves. They define the context in which the entire script will execute.

init cluster (id=1, comment = 'master node');

This command initializes the first node in a new Slony-I replication cluster. The initialization process consists of creating the cluster namespace, loading Slony-I’s base tables, functions, and procedures into it, and initializing the node.

create set (id=1, origin=1, comment= 'my replication set');

This command creates the replication set. A replication set can be thought of as a set of tables grouped together by some criteria. You first create the set, and then add tables to it.

The id parameter is the ID of the set to be created.

The origin parameter refers to the origin node of the set. In this case it originates in node 1.

set add table (set id=1, origin=1, id=1, fully qualified name = 'public.table_a', comment= '');

This command adds a table to a given replication set.

The set id parameter specifies the id of the replication set, in this case 1.

The origin parameter specifies the origin node for the set, in this case 1.

The id parameter specifies the id for the table. This id has to be unique across all your replication sets.

With this simple script, we initialized the cluster, created the first node, and told Slony-I that this node should be considered as the origin for our only replication set. We also added a single table called table_a to our replication set.
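
If you had more tables to replicate, you would add one set add table line per table, each with its own unique table id. For example, for a hypothetical second table public.table_b (not part of this article’s setup), the line could look like this:

set add table (set id=1, origin=1, id=2, fully qualified name = 'public.table_b', comment= 'second table');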

The slave node’s script

  • slave1.slonik:

cluster name=slony_cluster;
node 1 admin conninfo = 'dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres';
node 2 admin conninfo = 'dbname=slave1 host=192.168.1.2 port=5432 user=postgres password=postgres';
store node (id = 2, comment = 'slave1', EVENT NODE = 1);
store path (server = 1, client = 2, conninfo = 'dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres');
store path (server = 2, client = 1, conninfo = 'dbname=slave1 host=192.168.1.2 port=5432 user=postgres password=postgres');
store listen (origin = 1, provider = 1, receiver = 2);
store listen (origin = 2, provider = 2, receiver = 1);
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);

A brief explanation of the commands involved:

cluster name=slony_cluster;
node 1 admin conninfo = 'dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres';
node 2 admin conninfo = 'dbname=slave1 host=192.168.1.2 port=5432 user=postgres password=postgres';

The first 3 commands belong to the preamble. They define which cluster this script should apply to, and the connection information for the different nodes.

store node (id = 2, comment = 'slave1', EVENT NODE = 1);

This command initializes a new node and adds it to the configuration of an existing cluster.

The id and comment parameters are self-explanatory.

The event node parameter is the ID of the node used to create the configuration event that tells all existing nodes about the new node. In this case, the ID of the master, which is 1.

store path (server = 1, client = 2, conninfo = 'dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres');

This command teaches the daemon at node 2 (slave1) how to connect to node 1 (the master).

store path (server = 2, client = 1, conninfo = 'dbname=slave1 host=192.168.1.2 port=5432 user=postgres password=postgres');

This command teaches the daemon at node 1 (the master) how to connect to node 2 (slave1).

store listen (origin = 1, provider = 1, receiver = 2);

This command tells Slony-I that node 2 wants to listen for events originating at node 1, and to take them directly from node 1.

store listen (origin = 2, provider = 2, receiver = 1);

This command tells Slony-I that node 1 wants to listen for events originating at node 2, and to take them directly from node 2.

The commands above basically teach Slony-I how the nodes communicate with each other: what to expect and where to expect it from.

subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);

This command tells Slony-I that we want node 2 to subscribe to replication set 1, receiving its events from node 1 as the provider. The forward = yes option makes node 2 store replication log data so that it could later act as a provider itself, even though that is not strictly necessary for this basic setup.

I leave it up to the reader to build the configuration script for the second slave. It should be as easy as replacing the connection information and the node id with the correct ones; a possible sketch is shown below for reference.
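
Here is what slave2.slonik could look like, assuming the second slave is registered as node 3 (the node id is my assumption; any unused id will do):

  • slave2.slonik:

cluster name = slony_cluster;
node 1 admin conninfo = 'dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres';
node 3 admin conninfo = 'dbname=slave2 host=192.168.1.3 port=5432 user=postgres password=postgres';
store node (id = 3, comment = 'slave2', EVENT NODE = 1);
store path (server = 1, client = 3, conninfo = 'dbname=master host=192.168.1.1 port=5432 user=postgres password=postgres');
store path (server = 3, client = 1, conninfo = 'dbname=slave2 host=192.168.1.3 port=5432 user=postgres password=postgres');
store listen (origin = 1, provider = 1, receiver = 3);
store listen (origin = 3, provider = 3, receiver = 1);
subscribe set (id = 1, provider = 1, receiver = 3, forward = yes);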

Running the scripts

From the master node's command line you should run:

slonik.exe master.slonik

From slave 1’s command line you should run:

slonik.exe slave1.slonik

From slave 2’s command line you should run:

slonik.exe slave2.slonik

And that would be pretty much it.

Once the scripts are executed, the nodes will be configured, and the replication should have already started.

Please bear in mind that before running any scripts, all the nodes should already have an empty skeleton of the master’s database model. To do this, you can run a schema-only pg_dump on the master and restore it on the slaves. Or you can run Vertabelo-generated scripts against the master and slave nodes :)
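
For example, assuming the connection details used throughout this article and that the empty slave databases already exist, a schema-only dump and restore could look like this:

pg_dump --schema-only --host=192.168.1.1 --username=postgres master > master_schema.sql
psql --host=192.168.1.2 --username=postgres --dbname=slave1 -f master_schema.sql
psql --host=192.168.1.3 --username=postgres --dbname=slave2 -f master_schema.sql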

Final Words

Well, this is basically it for a simple, single-master, multiple-slave Slony-I configuration.

There are many other interesting things that can be done with Slony-I like cascaded replication and defining real failover procedures, but we can dig into those subjects in a future article.

I hope you found it useful and please don’t hesitate to contact me if you have any doubts, problems, or comments.
