Vagrant box with Severalnines database cluster deployment scripts

Prerequisites

General

Using Vagrant's multi-VM environment makes it painless to launch several VMs that form a cluster, so you can quickly start experimenting with the various database clusters and Severalnines ClusterControl in a contained environment.

NOTE: The Vagrant box file is 390MB and can take some time to download.
(Mar/14): ClusterControl 1.2.5, MySQL Cluster 7.3, Percona XtraDB Cluster 5.6, Codership 5.6, MongoDB 2.4.x
(May/13): MySQL Cluster 7.2.12, Codership Galera 5.5.29 (Source distribution, wsrep_23.7.3.r3853), Percona XtraDB Cluster 5.5.30-30.2 (Percona Server (GPL), Release 30.2, wsrep_23.7.4.r3843), MongoDB 2.4.x

This vagrant box comes with a set of Severalnines database cluster deployment scripts to bootstrap the following:

  • 3 node Galera Cluster for MySQL (Codership and Percona builds).
  • 4 node MySQL Cluster: 2 SQL nodes, 2 MGM nodes and 2 data nodes.
  • 1 sharded MongoDB Cluster with config servers and routers on 3 hosts.

This vagrant box provides database deployment scripts pre-configured for these hosts:

  • 10.10.10.10 db1
  • 10.10.10.20 db2
  • 10.10.10.30 db3
  • 10.10.10.40 db4
  • 10.10.10.100 cc (ClusterControl node)

You can change the initial database configuration settings to scale up or down depending on the VM instance size. The default virtual disk limit is set to 50GB.

  • The MySQL installation dir is /usr/local/mysql and the root password is root123. The initial InnoDB buffer pool is set to 128M for the Galera cluster; for MySQL Cluster, DataMemory is 256M and IndexMemory is 25M.

  • The ClusterControl application is available at http://localhost:8080/clustercontrol; log on with admin@localhost.xyz and password admin.

  • MySQL ports

localhost:3306 -> db1:3306
localhost:3307 -> db2:3306
localhost:3308 -> db3:3306
localhost:3309 -> db4:3306
localhost:3310 -> db5:3306

You can grant access to the db nodes from localhost with:
mysql> grant all privileges on *.* to 'root'@'10.0.2.%' identified by 'root123';
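
With that grant in place you can, for example, reach db2 from the host through its forwarded port (this assumes a MySQL client is installed on the host machine; host connections arrive from the 10.0.2.x NAT network, which the grant above covers):

$ mysql -h 127.0.0.1 -P 3307 -u root -proot123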

  • Mongo ports (router)
localhost:17017 -> db1:27017
localhost:17018 -> db2:27017
localhost:17019 -> db3:27017
localhost:17020 -> db4:27017
localhost:17021 -> db5:27017
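
For example, to open a shell against the router on db1 from the host (assuming the mongo shell is installed on the host machine):

$ mongo --host 127.0.0.1 --port 17017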
  • An HAProxy node should be deployed on 10.10.10.100 using the default setup for port 33306 (firewall forwarding rules). You can access the HAProxy admin page at http://localhost:9600

  • Galera Cluster requires 3 instances + 1 for ClusterControl

  • MySQL Cluster requires 4 instances + 1 for ClusterControl

  • MongoDB sharded requires 3 instances + 1 for ClusterControl

Deploy Galera Cluster

Add the Severalnines (s9s) vagrant box to your collection.

$ vagrant box add s9s_cc "https://drive.google.com/uc?export=download&id=0B-J3zLLFd9HMS2Zfak96UUhoa0U"
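
You can verify that the box was registered:

$ vagrant box list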

Create a vagrant environment.

$ mkdir s9s_galera
$ cd s9s_galera

Create a multi-VM Vagrantfile which launches at least 4 instances (3 Galera nodes plus the ClusterControl node).

# VAGRANT v2 file
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.

  # Every Vagrant virtual environment requires a box to build off of.
  # ClusterControl node (also used for HAProxy)
  config.vm.define "cc" do |cc|
    cc.vm.box = "s9s_cc"
    cc.vm.host_name = "clustercontrol"
    cc.vm.network :forwarded_port, guest: 80, host: 8080
    cc.vm.network :forwarded_port, guest: 33306, host: 33306
    cc.vm.network :forwarded_port, guest: 9600, host: 9600
    cc.vm.network :private_network, ip: "10.10.10.100"
    cc.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", "512"]
    end
  end

  # db1 (10.10.10.10), db2 (10.10.10.20) and db3 (10.10.10.30)
  3.times do |n|
    config.vm.define "db" + (1 + n).to_s do |db|
      db.vm.box = "s9s_cc"
      db.vm.host_name = "db" + (1 + n).to_s
      db.vm.network :forwarded_port, guest: 3306, host: 3306 + n
      db.vm.network :forwarded_port, guest: 27017, host: 17017 + n
      db.vm.network :private_network, ip: "10.10.10." + ((1 + n) * 10).to_s
      db.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", "512"]
      end
    end
  end
end
If you are running an older Vagrant release that still uses the v1 API, the equivalent Vagrantfile looks like this:

Vagrant::Config.run do |config|
  # clustercontrol, haproxy
  config.vm.define :cc do |cc_config|
    cc_config.vm.host_name = "clustercontrol"
    cc_config.vm.box = "s9s_cc"
    cc_config.vm.forward_port 80, 8080
    cc_config.vm.forward_port 33306, 33306
    cc_config.vm.forward_port 9600, 9600
    cc_config.vm.network :hostonly, "10.10.10.100"
    cc_config.vm.customize ["modifyvm", :id, "--memory", 512]
  end

  config.vm.define :db1 do |db1_config|
    db1_config.vm.host_name = "db1"
    db1_config.vm.box = "s9s_cc"
    db1_config.vm.forward_port 3306, 3306
    db1_config.vm.forward_port 27017, 17017
    db1_config.vm.network :hostonly, "10.10.10.10"
    db1_config.vm.customize ["modifyvm", :id, "--memory", 512]
  end

  config.vm.define :db2 do |db2_config|
    db2_config.vm.host_name = "db2"
    db2_config.vm.box = "s9s_cc"
    db2_config.vm.forward_port 3306, 3307
    db2_config.vm.forward_port 27017, 17018
    db2_config.vm.network :hostonly, "10.10.10.20"
    db2_config.vm.customize ["modifyvm", :id, "--memory", 512]
  end

  config.vm.define :db3 do |db3_config|
    db3_config.vm.host_name = "db3"
    db3_config.vm.box = "s9s_cc"
    db3_config.vm.forward_port 3306, 3308
    db3_config.vm.forward_port 27017, 17019
    db3_config.vm.network :hostonly, "10.10.10.30"
    db3_config.vm.customize ["modifyvm", :id, "--memory", 512]
  end
end

Start the instances.

$ vagrant up
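
You can check that all machines booted:

$ vagrant status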

Wait until all instances are up and running and then logon to the ClusterControl node.

$ vagrant ssh cc
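
From the ClusterControl node you can confirm that the database nodes are reachable over the private network, for example:

$ ping -c 1 10.10.10.10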

Deploy Galera Cluster.

$ cd s9s-galera-2.2.0-codership/mysql/scripts/install
$ ./deploy.sh 2>&1 | tee -a cc.log

Enter 'yes' when prompted and wait until the deployment script completes; you should then see something like the output below.

Also, have a look at ClusterControl, our monitoring and management tool.
Open your browser to http://<clustercontrol-host>/cmonapi to register your cluster.
Logon with your email address and password 'admin'.
Enter this ClusterControl API token: <API TOKEN> when prompted.
If you still prefer the classic/legacy CMON GUI then open your browser to http://<clustercontrol-host>/cmon

You need the ClusterControl API token later to register/add your Galera cluster to the ClusterControl application.

Next open your browser to http://localhost:8080/clustercontrol to use the ClusterControl UI.

Logon with user admin@localhost.xyz and password 'admin'.

On the top menu there is a link called "Cluster Registrations". Click on it, enter the ClusterControl API token from above, and set the ClusterControl API URL to https://10.10.10.100/cmonapi.
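
As a quick sanity check you can also query the cluster size directly from the ClusterControl node (this assumes the root account is allowed to connect from the ClusterControl host, e.g. via a grant like the one shown earlier):

$ mysql -h 10.10.10.10 -u root -proot123 -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

A healthy 3 node Galera cluster reports a wsrep_cluster_size of 3.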

Install HAProxy for the Galera Cluster

$ cd s9s-galera-2.2.0-codership/mysql/scripts/install/haproxy
$ ./install_haproxy.sh 10.10.10.100 debian galera
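
Once HAProxy is running, connections to the host's port 33306 are forwarded to the ClusterControl node and load balanced across the Galera nodes, for example:

$ mysql -h 127.0.0.1 -P 33306 -u root -proot123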

Deploy MySQL Cluster

If you have already added the vagrant box you can skip this step.

$ vagrant box add s9s_cc "https://drive.google.com/uc?export=download&id=0B-J3zLLFd9HMS2Zfak96UUhoa0U"

Create a new test environment

$ mkdir s9s_cluster
$ cd s9s_cluster

Create a multi-VM Vagrantfile which launches at least 5 instances.
For the data nodes we need a little more base memory, so we set those instances to allocate 768MB.

Vagrant::Config.run do |config|
  # clustercontrol, haproxy
  config.vm.define :cc do |cc_config|
    cc_config.vm.host_name = "clustercontrol"
    cc_config.vm.box = "s9s_cc"
    cc_config.vm.forward_port 80, 8080
    cc_config.vm.forward_port 33306, 33306
    cc_config.vm.forward_port 9600, 9600
    cc_config.vm.network :hostonly, "10.10.10.100"
    cc_config.vm.customize ["modifyvm", :id, "--memory", 512]
  end

  config.vm.define :db1 do |db1_config|
    db1_config.vm.host_name = "db1"
    db1_config.vm.box = "s9s_cc"
    db1_config.vm.forward_port 3306, 3306
    db1_config.vm.forward_port 27017, 17017
    db1_config.vm.network :hostonly, "10.10.10.10"
    db1_config.vm.customize ["modifyvm", :id, "--memory", 512]
  end

  config.vm.define :db2 do |db2_config|
    db2_config.vm.host_name = "db2"
    db2_config.vm.box = "s9s_cc"
    db2_config.vm.forward_port 3306, 3307
    db2_config.vm.forward_port 27017, 17018
    db2_config.vm.network :hostonly, "10.10.10.20"
    db2_config.vm.customize ["modifyvm", :id, "--memory", 512]
  end

  config.vm.define :db3 do |db3_config|
    db3_config.vm.host_name = "db3"
    db3_config.vm.box = "s9s_cc"
    db3_config.vm.forward_port 3306, 3308
    db3_config.vm.forward_port 27017, 17019
    db3_config.vm.network :hostonly, "10.10.10.30"
    db3_config.vm.customize ["modifyvm", :id, "--memory", 768]
  end

  config.vm.define :db4 do |db4_config|
    db4_config.vm.host_name = "db4"
    db4_config.vm.box = "s9s_cc"
    db4_config.vm.forward_port 3306, 3309
    db4_config.vm.forward_port 27017, 17020
    db4_config.vm.network :hostonly, "10.10.10.40"
    db4_config.vm.customize ["modifyvm", :id, "--memory", 768]
  end
end

Start the instances.

$ vagrant up

Wait until all instances are up and running and then logon to the ClusterControl node.

$ vagrant ssh cc

Deploy MySQL Cluster.

$ cd mysqlcluster-72/cluster/scripts/install
$ ./deploy.sh 2>&1 | tee -a cc.log

Enter 'yes' when prompted and wait until the deployment script completes; you should then see something like the output below.

Also, have a look at ClusterControl, our monitoring and management tool.
Open your browser to http://<clustercontrol-host>/cmonapi to register your cluster.
Logon with your email address and password 'admin'.
Enter this ClusterControl API token: <API TOKEN> when prompted.
If you still prefer the classic/legacy CMON GUI then open your browser to http://<clustercontrol-host>/cmon

You need the ClusterControl API token later to register/add your MySQL Cluster to the ClusterControl application.

Next open your browser to http://localhost:8080/clustercontrol to use the ClusterControl UI.

Logon with user admin@localhost.xyz and password 'admin'.

On the top menu there is a link called "Cluster Registrations". Click on it, enter the ClusterControl API token from above, and set the ClusterControl API URL to https://10.10.10.100/cmonapi.
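
You can also check the cluster state with the ndb_mgm management client from the ClusterControl node (this assumes a management node was placed on db1; adjust the connect string to match your deployment):

$ ndb_mgm -c 10.10.10.10 -e show

All data nodes, management nodes and SQL nodes should show up as connected.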

Deploy MongoDB Cluster

If you have already added the vagrant box you can skip this step.

$ vagrant box add s9s_cc "https://drive.google.com/uc?export=download&id=0B-J3zLLFd9HMS2Zfak96UUhoa0U"

Create a new test environment

$ mkdir s9s_mongo
$ cd s9s_mongo

Create the same Vagrantfile as used by the Galera cluster earlier.

Start the instances.

$ vagrant up

Wait until all instances are up and running and then logon to the ClusterControl node.

$ vagrant ssh cc

Deploy MongoDB Cluster.

$ cd s9s-mongodb-1.0.0/mongodb/scripts/install
$ ./deploy.sh 2>&1 | tee -a cc.log

Enter 'yes' when prompted and wait until the deployment script completes; you should then see something like the output below.

Also, have a look at ClusterControl, our monitoring and management tool.
Open your browser to http://<clustercontrol-host>/cmonapi to register your cluster.
Logon with your email address and password 'admin'.
Enter this ClusterControl API token: <API TOKEN> when prompted.
If you still prefer the classic/legacy CMON GUI then open your browser to http://<clustercontrol-host>/cmon

You need the ClusterControl API token later to register/add your MongoDB cluster to the ClusterControl application.

Next open your browser to http://localhost:8080/clustercontrol to use the ClusterControl UI.

Logon with user admin@localhost.xyz and password 'admin'.

On the top menu there is a link called "Cluster Registrations". Click on it, enter the ClusterControl API token from above, and set the ClusterControl API URL to https://10.10.10.100/cmonapi.
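
To verify the sharded setup you can connect to one of the routers and print the shard status (assuming a mongos router listens on the default port 27017 on db1):

$ mongo --host 10.10.10.10 --port 27017 --eval "sh.status()"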

Limitations

  • The user 'admin@localhost.xyz' will not receive any database configuration packages by email. Instead, download the configuration package by copying its link and fetching it with, for example, 'wget'.

  • There are features in ClusterControl that require a license.

  • The default root disk volume limit is 50GB.
