Set Up Internal and External Load Balancers
Next we are going to add two AWS load balancers: one internal, for our database nodes, and one external, to load balance client requests across our HTTP servers.
You can opt to install HAProxy (https://github.com/severalnines/haproxy) instead; however, you would then need to handle failover for it yourself. With Amazon's Elastic Load Balancers (ELBs) we don't need to worry about that, although ELBs are less configurable than HAProxy and lack some of its other features.
Install the Galera http health check scripts
First, on each Galera instance, we are going to install an HTTP health check script that uses the Galera node's state to determine whether the node should be classified as up or down.
$ cd ~/s9s-galera-2.1.0/mysql/scripts/install
$ wget http://severalnines.com/downloads/install-lbcheck.tar.gz
$ tar zxvf install-lbcheck.tar.gz
$ cd install-lbcheck
$ ./install-lbcheck.sh debian galera
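The exact script that install-lbcheck.sh puts in place may differ, but the core idea is simple: a Galera node should report healthy (HTTP 200) only when its wsrep_local_state is 4 ("Synced"). A minimal sketch of that decision logic, with the MySQL lookup stubbed out so it can run standalone:

```shell
#!/bin/sh
# Sketch of the health check verdict (assumption: the real install-lbcheck
# script works along these lines -- it queries MySQL, roughly
#   mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state'" | awk '{print $2}'
# and serves the result over HTTP on port 9200 for the load balancer).
galera_verdict() {
    state="$1"   # wsrep_local_state: 4 = Synced, 2 = Donor/Desynced, ...
    if [ "$state" = "4" ]; then
        echo "HTTP/1.1 200 OK"                   # node counted as up
    else
        echo "HTTP/1.1 503 Service Unavailable"  # node counted as down
    fi
}

galera_verdict 4
galera_verdict 2
```

Only the 200-vs-non-200 distinction matters to the ELB; any non-200 response takes the node out of the pool.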
The next step is to set up the internal load balancer, making sure all three instances are registered with it and that the health check is working.
Internal Load Balancer
The internal load balancer will load balance MySQL connections across our Galera nodes.
Make sure to enable ‘Create an Internal load balancer’.
The HTTP health check ping port for the Galera instances is 9200. The ping path is not relevant in our case, so just leave it at the default.
Select the private subnet for the internal load balancer. Next create a security group.
Since we're load balancing MySQL connections, you need to open port 3306.
Select your three Galera instances for the load balancer pool.
Review and finally create your internal load balancer.
In a few minutes you should see that all of your Galera instances have been registered and are now in the load balancer's pool of healthy instances.
Note: Because the set of IP addresses associated with a load balancer can change over time, you should never point at a specific IP address directly; instead, use the load balancer's DNS name (via a CNAME record), or use Amazon Route 53 to create a hosted zone.
This applies to the MySQL user and the hostname that is granted for the load balancer. In our example we are going to use the IP address; in a production environment, however, a CNAME is much easier to use.
Note 2: The hostname in a MySQL grant can be at most 60 characters. The AWS-autogenerated part of the internal LB's DNS name, which sits on top of your specified LB name, is often longer than 50 characters. So in our case we only had about 7 characters left for the LB name if we wanted to use the internal DNS name in grants.
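To see why the autogenerated DNS name doesn't fit, just count its characters; the internal LB name from this walkthrough is already 63 characters long, over MySQL's 60-character host limit:

```shell
# The internal load balancer's DNS name from this walkthrough:
lb_dns="internal-int-lb-vpc-1396244671.ap-northeast-1.elb.amazonaws.com"
# The mysql.user Host column is CHAR(60), so this name cannot be used in a grant:
echo "${#lb_dns}"   # prints 63
```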
Create a MySQL User for the Internal Load Balancer
Next we create a user for connections through the load balancer, which the web/HTTP servers will use.
On one of the Galera nodes, install the MySQL client (if not already available):
$ sudo apt-get install mysql-client-core-5.5
Look up the IP address of the load balancer host:
$ nslookup internal-int-lb-vpc-1396244671.ap-northeast-1.elb.amazonaws.com
Server: 10.0.0.2
Address: 10.0.0.2#53

Non-authoritative answer:
Name: internal-int-lb-vpc-1396244671.ap-northeast-1.elb.amazonaws.com
Address: 10.0.1.17
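If you want to script this lookup rather than read the answer off by eye, something like the following works. This is a sketch: it parses the sample nslookup output shown above; in practice you would pipe the live `nslookup` output in instead.

```shell
# Sample nslookup output from this walkthrough, as a stand-in for a live query:
lookup_output='Server: 10.0.0.2
Address: 10.0.0.2#53

Non-authoritative answer:
Name: internal-int-lb-vpc-1396244671.ap-northeast-1.elb.amazonaws.com
Address: 10.0.1.17'

# Take the first Address line after the "Non-authoritative answer" marker,
# skipping the DNS server's own address at the top:
lb_ip=$(printf '%s\n' "$lookup_output" | awk '/^Non-authoritative/{f=1} f && /^Address:/{print $2; exit}')
echo "$lb_ip"   # prints 10.0.1.17
```

Bear in mind that a load balancer's IP addresses can change over time, so resolve the name again when needed rather than hard-coding the result.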
Add a MySQL user
$ mysql -uroot -p<your root password>
mysql> grant all privileges on *.* to 'lb'@'10.0.1.17' identified by 'lb';
NOTE: The IP address is used instead of the internal DNS name, since the latter is longer than 60 characters...
Verify that the load balancer works from the ClusterControl node:
$ mysql -ulb -plb -hinternal-int-lb-vpc-1396244671.ap-northeast-1.elb.amazonaws.com
$ mysql -ulb -plb -h10.0.1.17
Before creating the external load balancer, let's create one or two web servers to be used in its pool.
Allocate Elastic IPs and associate them with the instances in order to reach the internet.
Install Apache with the MySQL PHP driver to test the DB connection:
$ sudo apt-get update
$ sudo apt-get install apache2 libapache2-mod-php5 php5-mysql
$ sudo service apache2 restart
In the next and final post we'll create a simple PHP test file on each web instance to verify that the external load balancer, web servers and database cluster are working properly.
External Load Balancer
Leave health check as is.
Select the public subnet.
Select the previously created 'Web' security group.
Select your web servers/instances for the load balancer's pool.
Review the configuration and create the external load balancer.
In a few minutes you should see that your web instances are registered and marked as healthy in the load balancer's pool.