Deploy three MySQL 9.7 EA standalone instances concurrently with dbbot v0.14.0
The previous article explained what dbbot is and where the project fits. This article starts the real lab work: deploying three MySQL 9.7 EA standalone instances concurrently on three hosts.
There is a small coincidence this time: Rocky Linux 9.7 + MySQL 9.7. A “double 9.7”. It is just a nice matching number, but both are new enough to be interesting, and both fit within dbbot’s support boundaries. I will use this double-9.7 setup as the main test environment for later posts.
This environment will stay around. Later, I will use it to look through the 9.7 changes that DBAs should care about early. So this article is really a lab setup article: put the baseline machines in place first.
But one engineering problem comes first: MySQL 9.7 is still an EA package, and it is not in dbbot’s official support matrix. We need a way for a package outside the matrix to still go through automation.
1. Why EA packages used to break automation
Many DBAs know this pattern:
- Oracle releases an EA package first, and its name differs from the GA package.
- A company may rename, repackage, or add extra checks around that EA package internally.
- Existing automation scripts only recognize “official package names in the support matrix”, so they do not recognize the new file.
In the end, there were usually only two options: temporarily edit the playbook, or distribute the binary manually. It can work, but it leaves an ugly review trail. Which package did this host actually use? Was the checksum verified? Did basedir overwrite an old directory? Was replication validated automatically? All of that ends up relying on human memory.
By default, dbbot does not allow a package name it does not recognize to enter the main deployment flow. That is deliberate restraint for stability. But completely blocking the path is not realistic either. DBAs often need to validate EA packages or internally repackaged builds.
2. What controlled opening did v0.14.0 add?
v0.14.0 added several variables that explicitly tell the playbook: “For this run, allow a MySQL Server main package outside the matrix.”
- fcs_allow_custom_mysql_package: allow a custom package. The default is false; it must be enabled explicitly.
- mysql_custom_package: the package filename in the local downloads directory. .tar.gz and .tar.xz are supported.
- mysql_custom_package_checksum_type / mysql_custom_package_checksum: checksum validation still applies.
- mysql_software_dir: isolate basedir so the EA package does not overwrite an existing directory.
- mysql_version: the real version must still be specified; it is not guessed from the package name.
This does not turn off safety checks: the switch only overrides package-name recognition for the MySQL Server main package. Version checks, target OS checks, and topology checks still apply. In InnoDB Cluster / Router scenarios, MySQL Shell and Router packages still come from the support matrix and cannot be bypassed through this switch.
The easily underestimated variable here is mysql_software_dir. I strongly recommend giving every custom package its own basedir instead of sharing 9.7.0 with a formal package. On the three hosts in this test, there was already:
/database/mysql/base/9.7.0
/database/mysql/3306
After deployment, each host gained:
/database/mysql/base/9.7.0-EA
/database/mysql/3308
In the final state, both 3306 and 3308 systemd services are active on each machine. Ports and directories do not collide. That is the value of directory isolation: when something goes wrong, you can tell at a glance whether an instance uses a formal package or an EA package.
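The naming convention behind that isolation is mechanical: version plus a channel tag. A small sketch of the idea (mysql_basedir is my own illustrative helper, not a dbbot function):

```shell
# Hypothetical helper, not part of dbbot: derive an isolated basedir
# from a version string and an optional channel tag such as "EA".
mysql_basedir() {
    printf '/database/mysql/base/%s%s\n' "$1" "${2:+-$2}"
}

mysql_basedir 9.7.0       # → /database/mysql/base/9.7.0
mysql_basedir 9.7.0 EA    # → /database/mysql/base/9.7.0-EA
```

Formal packages and EA packages then never compete for the same directory by construction.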
3. Lab environment
| Node | IP | OS | Deployment |
|---|---|---|---|
| node-1 | 192.168.161.11 | Rocky Linux 9.7 | MySQL 9.7 EA standalone, port 3308 |
| node-2 | 192.168.161.12 | Rocky Linux 9.7 | MySQL 9.7 EA standalone, port 3308 |
| node-3 | 192.168.161.13 | Rocky Linux 9.7 | MySQL 9.7 EA standalone, port 3308 |
There is no replication relationship between the three machines. This run simply applies the same single_node.yml concurrently to three hosts, producing one independent EA instance per host. Later, different feature articles can use one host at a time without affecting the others.
Why 3308? These hosts already had 3306 instances running. I kept them intentionally, which also validates that dbbot multi-instance deployment does not conflict across ports.
The public demo passwords use dbbot defaults (Dbbot_<user>@8888 / Dbbot_<linux_user>@9999) for the lab only. In production, use your own SSH keys and database passwords.
1. Download dbbot v0.14.0
cd /tmp
dbbot_version="v0.14.0"
curl -fL -O "https://github.com/fanderchan/dbbot/releases/download/${dbbot_version}/dbbot-${dbbot_version}.tar.gz"
tar -zxvf "dbbot-${dbbot_version}.tar.gz" -C /usr/local/
/usr/local/dbbot/bin/dbbotctl env setup
source ~/.bashrc
ansible-playbook --version
In production, always pin the version instead of letting CI run latest. The same “script that worked last time” can produce different results across releases.
2. Prepare the MySQL 9.7 EA package
MySQL 9.7 EA is not currently in the regular dev.mysql.com/get/Downloads directory, so fetch it from snapshots:
cd /usr/local/dbbot/mysql_ansible/downloads
curl -fL -O "https://downloads.mysql.com/snapshots/pb/mysql-9.7.0-csa/mysql-9.7.0-csa-linux-glibc2.28-x86_64.tar.xz"
md5sum mysql-9.7.0-csa-linux-glibc2.28-x86_64.tar.xz
The MD5 from this test run:
bf7a16834c99d447630fd917b5f32529 mysql-9.7.0-csa-linux-glibc2.28-x86_64.tar.xz
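When this check runs in a script rather than by eye, a small guard can compare the computed hash against the expected one and stop on mismatch. A minimal sketch (verify_md5 is my own helper name, not a dbbot command):

```shell
# verify_md5 FILE EXPECTED_MD5 — succeed only if the file's MD5 matches.
verify_md5() {
    actual="$(md5sum "$1" | awk '{print $1}')"
    [ "${actual}" = "$2" ]
}

# Usage against the package from this test run:
# verify_md5 mysql-9.7.0-csa-linux-glibc2.28-x86_64.tar.xz \
#     bf7a16834c99d447630fd917b5f32529 || { echo "checksum mismatch" >&2; exit 1; }
```

dbbot performs its own checksum validation during the playbook run; a pre-check like this simply fails earlier, before any deployment starts.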
If you have an internally repackaged file, for example a renamed build from an artifact repository, put it in the same downloads directory and change mysql_custom_package plus the corresponding checksum. You do not need to edit the playbook. This matters: automation scripts should not degrade into one-off scripts just because of one EA test.
3. Write the inventory
/usr/local/dbbot/mysql_ansible/inventory/hosts.ini:
[dbbot_mysql]
192.168.161.11 ansible_user=root ansible_ssh_pass="'Dbbot@2026'"
192.168.161.12 ansible_user=root ansible_ssh_pass="'Dbbot@2026'"
192.168.161.13 ansible_user=root ansible_ssh_pass="'Dbbot@2026'"
[all:vars]
ansible_python_interpreter=auto_silent
Check connectivity first:
cd /usr/local/dbbot/mysql_ansible/playbooks
ansible -i ../inventory/hosts.ini dbbot_mysql -m ping
Continue only after all three hosts return pong. Automation deployment should never push forward with unresolved SSH, Python, or sudo issues. Once a later playbook task fails, the root cause is often several layers away from the error message and becomes much harder to debug.
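In a script, that gate can be made mechanical. One option (a sketch of my own, not a dbbot feature) is to save the ad-hoc output and count the SUCCESS lines against the expected host count:

```shell
# ping_gate OUTPUT_FILE EXPECTED_HOSTS — succeed only if every host
# answered the ad-hoc ping with SUCCESS.
ping_gate() {
    [ "$(grep -c 'SUCCESS' "$1")" -eq "$2" ]
}

# Example:
# ansible -i ../inventory/hosts.ini dbbot_mysql -m ping > /tmp/ping.out
# ping_gate /tmp/ping.out 3 || { echo "SSH/Python not ready" >&2; exit 1; }
```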
4. Write an isolated variable file
Do not edit default variables for one EA test. Create a standalone file and load it with -e @vars/mysql-9-7-ea-single-node.yml during deployment. This keeps the experiment isolated from future standard deployments.
/usr/local/dbbot/mysql_ansible/playbooks/vars/mysql-9-7-ea-single-node.yml:
mysql_version: "9.7.0"
mysql_port: 3308
mysql_software_dir: "/database/mysql/base/9.7.0-EA"
fcs_auto_download_packages: false
fcs_allow_custom_mysql_package: true
mysql_custom_package: "mysql-9.7.0-csa-linux-glibc2.28-x86_64.tar.xz"
mysql_custom_package_checksum_type: "md5"
mysql_custom_package_checksum: "bf7a16834c99d447630fd917b5f32529"
fcs_allow_dbbot_default_passwd: true
dbbot_confirmation_input: "confirm"
The variable file is intentionally short. Except for the port, basedir, and custom package settings that this run must change, everything else uses defaults. In the lab, fcs_allow_dbbot_default_passwd: true temporarily allows the public default password, which can be changed after deployment.
These lines are the ones to watch:
mysql_software_dir: "/database/mysql/base/9.7.0-EA"
fcs_allow_custom_mysql_package: true
mysql_custom_package: "mysql-9.7.0-csa-linux-glibc2.28-x86_64.tar.xz"
mysql_custom_package_checksum_type: "md5"
fcs_allow_custom_mysql_package must be enabled explicitly. mysql_custom_package must be a local filename, without any path separators, and the suffix must be .tar.gz or .tar.xz. This may look fussy, but it prevents arbitrary paths from being injected into the extraction process. A controlled switch still needs boundaries.
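The rule itself is easy to reproduce. Re-implemented as a shell sketch (my illustration of the constraint, not dbbot’s actual validation code):

```shell
# valid_custom_package NAME — accept only a bare filename ending in
# .tar.gz or .tar.xz, with no path components.
valid_custom_package() {
    case "$1" in
        */*) return 1 ;;                  # reject anything with a path separator
        *.tar.gz|*.tar.xz) return 0 ;;    # accepted archive suffixes
        *) return 1 ;;
    esac
}
```

A name like ../evil.tar.xz is rejected before the suffix is even considered, which is exactly the injection path the constraint closes off.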
5. Deploy all three hosts concurrently with one command
cd /usr/local/dbbot/mysql_ansible/playbooks
ansible-playbook \
-i ../inventory/hosts.ini \
single_node.yml \
-e @vars/mysql-9-7-ea-single-node.yml
For CI or scripts, use the explicit portable Ansible path to avoid interference from any other Python or Ansible installed in the environment:
cd /usr/local/dbbot/mysql_ansible/playbooks
ANSIBLE_HOST_KEY_CHECKING=False \
python3 /usr/local/dbbot/portable-ansible/ansible-playbook \
-i ../inventory/hosts.ini \
single_node.yml \
-e @vars/mysql-9-7-ea-single-node.yml
single_node.yml runs against the three hosts in [dbbot_mysql]. Ansible runs hosts concurrently by default (up to its forks limit, which defaults to 5), so the three machines move forward at the same time.
During execution, dbbot shows that it is using the custom package path:
TASK [Use custom MySQL Server package metadata (local)] ************************
ok: [192.168.161.11 -> localhost]
ok: [192.168.161.12 -> localhost]
ok: [192.168.161.13 -> localhost]
TASK [Check mysql-9.7.0-csa-linux-glibc2.28-x86_64.tar.xz checksum when configured (local)] ***
ok: [192.168.161.11 -> localhost]
ok: [192.168.161.12 -> localhost]
ok: [192.168.161.13 -> localhost]
TASK [../roles/mysql_server : Unarchive MySQL install package to /database/mysql/base/9.7.0-EA] ***
changed: [192.168.161.11]
changed: [192.168.161.12]
changed: [192.168.161.13]
Final PLAY RECAP:
192.168.161.11 : ok=86 changed=26 unreachable=0 failed=0
192.168.161.12 : ok=86 changed=26 unreachable=0 failed=0
192.168.161.13 : ok=86 changed=26 unreachable=0 failed=0
Playbook run took 0 days, 0 hours, 1 minutes, 32 seconds
The ok and changed counts match across the three hosts. That is what a concurrent standalone deployment should look like: every host runs the same playbook, and none of them is “primary”. Post-deploy checks also complete:
MySQL instance post-check passed on 192.168.161.11.
MySQL instance post-check passed on 192.168.161.12.
MySQL instance post-check passed on 192.168.161.13.
Ninety-two seconds from empty environment to three independent MySQL 9.7 EA standalone instances with post-checks. There is no gray area of “the task ran, but we do not know whether the result is right.”
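In CI, the same judgment can be automated by gating on the saved PLAY RECAP: every host must report failed=0 and unreachable=0 before anything downstream runs. A hedged sketch:

```shell
# recap_ok RECAP_FILE — fail if any host reports a non-zero
# unreachable or failed count in the saved PLAY RECAP.
recap_ok() {
    ! grep -Eq 'unreachable=[1-9]|failed=[1-9]' "$1"
}

# Example:
# ansible-playbook ... | tee /tmp/run.log
# recap_ok /tmp/run.log || { echo "deployment failed on some host" >&2; exit 1; }
```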
6. Verify the result
First check services on each host. The original 3306 and the new 3308 should both be active:
systemctl is-active mysql3306 mysql3308
All three hosts return:
active active
Then confirm the version, port, and directories of the 3308 instance on each host:
/database/mysql/base/9.7.0-EA/bin/mysql \
-h127.0.0.1 -P3308 -uadmin -pDbbot_admin@8888 \
-NBe "select @@hostname, @@port, @@version, @@basedir, @@datadir;"
The three hosts return:
r9-01.iaas.local 3308 9.7.0 /database/mysql/base/9.7.0-EA/ /database/mysql/3308/data/
r9-02.iaas.local 3308 9.7.0 /database/mysql/base/9.7.0-EA/ /database/mysql/3308/data/
r9-03.iaas.local 3308 9.7.0 /database/mysql/base/9.7.0-EA/ /database/mysql/3308/data/
The command above puts basedir, the admin user, and the password directly on the command line. That is clear for a demo, but daily login does not need to be this clumsy. By default, dbbot can create fast login commands for the mysql user; see fcs_create_mysql_fast_login. The idea is similar to switching to the PostgreSQL user and running psql:
su mysql
db3308
Switch to the mysql user, type db<port>, and enter the matching instance directly. No basedir path to remember, and no password written on the command line. This is especially useful when multiple ports coexist.
The basedir lands cleanly in 9.7.0-EA and does not pollute the 9.7.0 directory used by the original 3306 instance. That is the practical meaning of the earlier directory-isolation point.
At this point, the three independent MySQL 9.7 EA 3308 standalone instances are running. More importantly, the variable file, package name, checksum, port, directory, and execution log are all preserved. When you need to review this later, one variable file explains how the host was created.
Boundaries of the custom package capability
To close the loop, this switch only affects the MySQL Server main package. Everything else stays the same:
| Item | Affected | Notes |
|---|---|---|
| MySQL Server main package name | Yes | Custom local package allowed; must be .tar.gz or .tar.xz |
| mysql_version | No | The real version must still be specified; it is not guessed from the package name |
| Target OS checks | No | A host that does not satisfy the 9.7 glibc2.28 requirement still fails |
| Topology checks | No | Replication, MGR, and InnoDB Cluster keep their own prerequisites |
| MySQL Shell / Router packages | No | Still come from the support matrix |
| Checksum validation | Yes, if configured | Recommended every time |
My own habits are simple:
- Prefer standard package names and checksums from the support matrix for formal production deployments.
- Enable fcs_allow_custom_mysql_package only for EA, internal tests, and internally repackaged builds.
- Always set a separate mysql_software_dir for custom packages. Do not share a basedir with a formal package.
- Configure checksums whenever possible, especially when packages have moved through object storage, artifact repositories, or bastion hosts.
The three 9.7 EA lab machines are ready
At this point, the lab environment is stable:
- Three Rocky Linux 9.7 hosts deployed concurrently
- One independent MySQL 9.7 EA standalone instance per host, on port 3308
- basedir, datadir, binlog, and config all follow the dbbot default layout
- Existing 3306 instances still run beside them, with no version conflict
Later articles in the 9.7 series will build on this environment. That avoids reinstalling everything in every post and avoids repeating why 9.7 EA is available for the tests. The three hosts are independent, so different feature tests can run separately without interfering with each other. The next article will start digging into the 9.7 changes I care about, beginning with the items already called out in the release notes.