# ClickHouse Variables and Parameters
This article summarizes the clickhouse_ansible variables that are changed most often and have the greatest effect on results, so you can quickly locate the right configuration entry before deployment, backup, and restore.
## 1. General variables
File: `playbooks/vars/common_config.yml`
| Variable | Default value | Function |
|---|---|---|
| `clickhouse_version` | 23.6.1 | ClickHouse version; three- or four-segment version numbers are supported |
| `clickhouse_install_debug_package` | false | Whether to additionally fetch and install clickhouse-common-static-dbg |
| `clickhouse_default_password` | Dbbot_default@8888 | Password of the default user |
| `clickhouse_uid` / `clickhouse_gid` | 18123 / 18123 | Fixed UID/GID for the ClickHouse user and group |
| `fcs_auto_download_packages` | true | Whether nodes automatically download the ClickHouse installation packages |
| `fcs_set_hostname` | true | Whether to modify the target host's hostname |
| `fcs_allow_dbbot_default_passwd` | false | Whether to allow execution to continue with the public default password |
| `use_clickhouse_keeper` | false | false uses standalone ZooKeeper; true uses ClickHouse Keeper |
| `clickhouse_enable_ssl` | false | Whether to enable SSL |
| `deploy_require_manual_confirm` | true | Whether to require console confirmation before deployment |
Description:
- `clickhouse_install_debug_package: false` is the current recommended default and can significantly reduce download and distribution time.
- It is recommended to keep `clickhouse_uid`/`clickhouse_gid` identical between the source cluster and the disaster-recovery cluster to avoid permission mismatches during NFS recovery.
- The public default password of `clickhouse_default_password` is `Dbbot_default@8888`.
- With the default `fcs_allow_dbbot_default_passwd: false`, the deployment-, backup-, and restore-related playbooks intercept the exposed default password in the `pre_tasks` stage.
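Putting the general variables together, a minimal override of `common_config.yml` might look like the sketch below; the password value is illustrative and must be replaced with your own:

```yaml
# playbooks/vars/common_config.yml — illustrative overrides
clickhouse_version: "23.6.1"
clickhouse_install_debug_package: false   # recommended default; skips clickhouse-common-static-dbg
clickhouse_default_password: "S0me_Strong!Passw0rd"   # replace the public default
fcs_allow_dbbot_default_passwd: false     # keep false so the public default is intercepted
use_clickhouse_keeper: false              # false = standalone ZooKeeper
clickhouse_enable_ssl: false
```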
## 2. Cluster variables
File: `playbooks/vars/cluster_config.yml`
| Variable | Default value | Function |
|---|---|---|
| `clickhouse_cluster_name` | example_3shards_2replicas | Cluster name |
| `clickhouse_tcp_port_base` | 9000 | ClickHouse TCP port baseline |
| `clickhouse_http_port_base` | 8123 | ClickHouse HTTP port baseline |
| `clickhouse_interserver_http_port_base` | 9009 | Inter-replica HTTP port baseline |
| `clickhouse_mysql_port_base` | 9004 | MySQL-protocol-compatible port baseline |
| `clickhouse_postgresql_port_base` | 9005 | PostgreSQL-protocol-compatible port baseline |
| `clickhouse_aux_port_stride` | 10 | Port stride for multi-instance protocol-compatible ports |
Port calculation rules:
- The main ports increase by 1 per `instance_id`.
- The MySQL/PostgreSQL protocol-compatible ports increase by `instance_id` multiplied by `clickhouse_aux_port_stride`.
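The rules above can be sketched as a small helper, assuming `instance_id` starts at 0 so the first instance gets the base ports (the document does not state the starting index):

```python
def instance_ports(instance_id,
                   tcp_base=9000, http_base=8123, interserver_base=9009,
                   mysql_base=9004, postgresql_base=9005, aux_stride=10):
    """Port layout per the rules above; instance_id assumed to start at 0."""
    return {
        # main ports increase by 1 per instance
        "tcp": tcp_base + instance_id,
        "http": http_base + instance_id,
        "interserver_http": interserver_base + instance_id,
        # protocol-compat ports step by clickhouse_aux_port_stride,
        # so instance 1 gets 9014/9015 instead of colliding with 9005
        "mysql": mysql_base + instance_id * aux_stride,
        "postgresql": postgresql_base + instance_id * aux_stride,
    }
```

The stride of 10 keeps the MySQL and PostgreSQL ports of neighboring instances from overlapping, since the two bases are only one apart.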
## 3. Backup variables
File: `playbooks/vars/backup_config.yml`
| Variable | Default value | Function |
|---|---|---|
| `backup_databases` / `backup_tables` | [] | Objects to back up; at least one of the two must be set |
| `backup_mode` | full | Backup mode; supports full / incremental |
| `backup_base_batch_id` | "" | Baseline batch ID for incremental backups |
| `backup_storage_disk` | backup_nfs | Name of the backup disk on the ClickHouse side |
| `backup_mount_dir` | /backup | NFS mount directory |
| `backup_checkpoint_mode` | file | `safe_ts` recording mode |
| `backup_require_replicated_tables` | true | Whether to block backups of non-replicated local tables |
| `backup_allow_partial_cluster` | false | Whether to allow `--limit` to back up only some of the nodes |
Production suggestions:
- In production, prefer a fixed `backup_batch_id`.
- Run `setup_nfs_client_mount_rc_local.yml` and `prepare_backup_disk.yml` to completion before backing up.
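As an illustration, an incremental backup of a single database could be configured as follows; the database name and batch ID are made-up example values:

```yaml
# playbooks/vars/backup_config.yml — illustrative incremental backup
backup_databases: ["analytics"]        # example value; at least one of databases/tables must be set
backup_mode: "incremental"
backup_base_batch_id: "20240101_full"  # example baseline batch for the incremental chain
backup_storage_disk: "backup_nfs"
backup_mount_dir: "/backup"
backup_require_replicated_tables: true # block non-replicated local tables
```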
## 4. Restore variables
File: `playbooks/vars/restore_config.yml`
| Variable | Default value | Function |
|---|---|---|
| `restore_batch_id` | "" | Target restore batch ID; required |
| `restore_manifest_file` | "" | Manifest path; located automatically by default |
| `restore_to_all_replicas` | true | Whether all replicas participate in the restore |
| `restore_allow_non_empty_tables` | false | Whether to allow restoring into non-empty tables |
| `restore_enable_two_phase_mv_compat` | true | Whether to enable two-phase materialized-view (MV) compatible restore |
| `restore_allow_partial_cluster` | false | Whether to allow `--limit` to restore only some of the nodes |
| `restore_require_manual_confirm` | true | Whether to require console confirmation before restoring |
| `restore_mount_dir` | /backup | Mount directory on the restore target side |
Be sure to do two things before restoring:
- Mount the NFS share on the restore target cluster.
- Run `prepare_backup_disk.yml` on the restore target cluster.
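A typical restore run would then set the variables along these lines; the batch ID is an example value:

```yaml
# playbooks/vars/restore_config.yml — illustrative restore run
restore_batch_id: "20240101_full"      # required; example value
restore_manifest_file: ""              # empty = locate the manifest automatically
restore_to_all_replicas: true
restore_allow_non_empty_tables: false  # refuse to restore into non-empty tables
restore_require_manual_confirm: true
restore_mount_dir: "/backup"
```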
## 5. NFS and cleanup variables
The default values for the NFS server/client are written directly in the playbooks. The core values are as follows:
| Variable | Default value | Function |
|---|---|---|
| `nfs_server_ip` | 198.51.100.162 | NFS server IP |
| `nfs_export_dir` | /srv/nfs/clickhouse_backup | Export directory on the server |
| `nfs_mount_point` | /backup | Mount directory on the client |
| `dbbot_inventory_purpose` | nfs_server / backup | Guard value distinguishing NFS server and client inventories |
Cleanup variables are located in: `playbooks/vars/uninstall_config.yml`
| Variable | Default value | Function |
|---|---|---|
| `uninstall_require_manual_confirm` | true | Whether to require confirmation before uninstalling |
| `uninstall_purge_clickhouse_config` | true | Whether to delete the ClickHouse configuration |
| `uninstall_purge_clickhouse_data` | true | Whether to delete the ClickHouse data directory |
| `uninstall_purge_clickhouse_logs` | true | Whether to delete the ClickHouse log directory |
| `uninstall_remove_backup_mount` | false | Whether to unmount /backup and remove the rc.local mount script |
| `uninstall_purge_zookeeper` | true | Whether to delete the standalone ZooKeeper service and its directories |
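For example, to uninstall while preserving the data directory and the /backup mount, the defaults could be adjusted like this (a sketch; flag combinations beyond the table above are not verified here):

```yaml
# playbooks/vars/uninstall_config.yml — illustrative: keep data and backup mount
uninstall_require_manual_confirm: true
uninstall_purge_clickhouse_config: true
uninstall_purge_clickhouse_data: false   # preserve the data directory
uninstall_purge_clickhouse_logs: true
uninstall_remove_backup_mount: false     # keep /backup mounted and the rc.local script
uninstall_purge_zookeeper: true
```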
## 6. Restore validation variables
File: `playbooks/vars/validate_restore_config.yml`
| Variable | Default value | Function |
|---|---|---|
| `validate_source_group` | clickhouse_backup | Source cluster inventory group |
| `validate_target_group` | clickhouse_restore | Target cluster inventory group |
| `validate_source_query_host` | 127.0.0.1 | Query address on the source side |
| `validate_target_query_host` | 127.0.0.1 | Query address on the target side |
| `validate_fail_on_missing_pairs` | true | Whether to fail when the source/target lacks a matching shard replica |
| `validate_checks` | [] | List of validation checks |
When using this playbook, you need to pass both inventories at the same time, for example:
```shell
ansible-playbook \
  -i ../inventory/hosts.backup.ini \
  -i ../inventory/hosts.restore.ini \
  validate_restore_consistency.yml
```
For tables with TTL, it is recommended to put a fixed-time-window WHERE condition in `validate_checks` rather than directly comparing whole-table row counts.
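The exact schema of `validate_checks` entries is defined by the playbook and not documented here; purely as a hypothetical sketch, a time-windowed count check might look like the following (the field names `name`, `database`, `table`, and `where` are assumptions, not confirmed by this document):

```yaml
# Hypothetical validate_checks entry — field names are assumptions
validate_checks:
  - name: "events_first_week_jan"
    database: "analytics"
    table: "events"
    # Fixed window with constant bounds, so TTL expiry between the
    # source and target queries cannot make the counts diverge
    where: "event_time >= '2024-01-01 00:00:00' AND event_time < '2024-01-08 00:00:00'"
```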