Building a 3-Master, 3-Replica Redis Cluster with docker-compose

Redis Cluster is a core feature of Redis that groups multiple Redis instances into one distributed database, providing automatic data sharding and load balancing. In a Redis cluster, the keyspace is divided into 16384 hash slots, each assigned to a Redis instance; when a client sends a request, the cluster routes it to the instance that owns the key's slot, yielding distributed storage and queries. A cluster improves performance and availability and offers more flexible scaling options for different workloads. Note, however, that sharding adds operational complexity and can hurt the performance of some operations (for example, multi-key commands across slots), so evaluate it against your specific use case. This article walks through building a 3-master, 3-replica Redis cluster with docker-compose; since this is only for testing, a single server is used here to simulate all six nodes.
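To make the slot routing concrete, here is a minimal bash sketch of the CRC16 (XMODEM) hash that Redis uses to map a key to one of the 16384 slots. The function name `key_slot` is our own, and this sketch ignores `{...}` hash tags, which real Redis would honor:

```shell
# key_slot KEY -> prints the cluster slot (CRC16-XMODEM mod 16384).
# Sketch only: hash tags ({...}) are not handled; Redis would hash
# just the part between the braces.
key_slot() {
  local key=$1 crc=0 i b c
  for ((i = 0; i < ${#key}; i++)); do
    printf -v c '%d' "'${key:i:1}"    # character -> byte value
    (( crc ^= c << 8 ))
    for ((b = 0; b < 8; b++)); do     # standard CRC-16/XMODEM bit loop
      if (( crc & 0x8000 )); then
        (( crc = ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        (( crc = (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo $(( crc % 16384 ))
}
```

For example, `key_slot name` prints 5798, which matches the redirect seen later in this article when `set name lin` is routed to the node owning slot 5798.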

Deploying the Redis Cluster

1. Create a redis-cluster directory to hold the docker-compose.yaml file and the Redis configuration files.

[root@localhost ~]# cd /
[root@localhost /]# mkdir redis-cluster
[root@localhost /]# cd /redis-cluster
[root@localhost redis-cluster]#

2. Create the data mount directories.

[root@localhost redis-cluster]# mkdir -p redis-master-1/{conf,data}
[root@localhost redis-cluster]# mkdir -p redis-master-2/{conf,data}
[root@localhost redis-cluster]# mkdir -p redis-master-3/{conf,data}
[root@localhost redis-cluster]# mkdir -p redis-slave-1/{conf,data}
[root@localhost redis-cluster]# mkdir -p redis-slave-2/{conf,data}
[root@localhost redis-cluster]# mkdir -p redis-slave-3/{conf,data}
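If your shell is bash, the six commands above can be collapsed into a single nested brace expansion:

```shell
# Create conf/ and data/ for all six nodes in one command (bash brace expansion)
mkdir -p redis-{master,slave}-{1,2,3}/{conf,data}
```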

3. In that directory, create a file named docker-compose.yaml and add the following content:

[root@localhost redis-cluster]# vim docker-compose.yaml
version: "3"
services:
  redis-master-1:
    image: redis:latest
    container_name: redis-master-1
    command: redis-server /redis-config/redis-master-1.conf
    volumes:
      - ./redis-master-1/data:/data
      - ./redis-master-1/conf/redis-master-1.conf:/redis-config/redis-master-1.conf
    ports:
      - "6379:6379"
      - "16379:16379"
    networks:
      redis-cluster:

  redis-master-2:
    image: redis:latest
    container_name: redis-master-2
    command: redis-server /redis-config/redis-master-2.conf
    volumes:
      - ./redis-master-2/data:/data
      - ./redis-master-2/conf/redis-master-2.conf:/redis-config/redis-master-2.conf
    ports:
      - "6380:6380"
      - "16380:16380"
    networks:
      redis-cluster:

  redis-master-3:
    image: redis:latest
    container_name: redis-master-3
    command: redis-server /redis-config/redis-master-3.conf
    volumes:
      - ./redis-master-3/data:/data
      - ./redis-master-3/conf/redis-master-3.conf:/redis-config/redis-master-3.conf
    ports:
      - "6381:6381"
      - "16381:16381"
    networks:
      redis-cluster:

  redis-slave-1:
    image: redis:latest
    container_name: redis-slave-1
    command: redis-server /redis-config/redis-slave-1.conf
    volumes:
      - ./redis-slave-1/data:/data
      - ./redis-slave-1/conf/redis-slave-1.conf:/redis-config/redis-slave-1.conf
    ports:
      - "6382:6382"
      - "16382:16382"
    networks:
      redis-cluster:

  redis-slave-2:
    image: redis:latest
    container_name: redis-slave-2
    command: redis-server /redis-config/redis-slave-2.conf
    volumes:
      - ./redis-slave-2/data:/data
      - ./redis-slave-2/conf/redis-slave-2.conf:/redis-config/redis-slave-2.conf
    ports:
      - "6383:6383"
      - "16383:16383"
    networks:
      redis-cluster:

  redis-slave-3:
    image: redis:latest
    container_name: redis-slave-3
    command: redis-server /redis-config/redis-slave-3.conf
    volumes:
      - ./redis-slave-3/data:/data
      - ./redis-slave-3/conf/redis-slave-3.conf:/redis-config/redis-slave-3.conf
    ports:
      - "6384:6384"
      - "16384:16384"
    networks:
      redis-cluster:

networks:
  redis-cluster:
    driver: bridge

This docker-compose.yaml file defines six Redis services: three masters (redis-master-1, redis-master-2, redis-master-3) and three replicas (redis-slave-1, redis-slave-2, redis-slave-3). It also defines a network named redis-cluster.

4. In each node's conf directory, create the corresponding Redis configuration file (redis-master-1.conf, redis-master-2.conf, redis-master-3.conf, redis-slave-1.conf, redis-slave-2.conf, redis-slave-3.conf) and copy the following content into it:

  • redis-master-1.conf

    port 6379
    protected-mode no
    bind 0.0.0.0
    cluster-enabled yes
    cluster-config-file nodes-6379.conf
    cluster-node-timeout 15000
    cluster-announce-ip 192.168.0.180
    cluster-announce-port 6379
    cluster-announce-bus-port 16379
    appendonly yes
  • redis-master-2.conf

    port 6380
    protected-mode no
    bind 0.0.0.0
    cluster-enabled yes
    cluster-config-file nodes-6380.conf
    cluster-node-timeout 15000
    cluster-announce-ip 192.168.0.180
    cluster-announce-port 6380
    cluster-announce-bus-port 16380
    appendonly yes
  • redis-master-3.conf

    port 6381
    protected-mode no
    bind 0.0.0.0
    cluster-enabled yes
    cluster-config-file nodes-6381.conf
    cluster-node-timeout 15000
    cluster-announce-ip 192.168.0.180
    cluster-announce-port 6381
    cluster-announce-bus-port 16381
    appendonly yes
  • redis-slave-1.conf

    port 6382
    protected-mode no
    bind 0.0.0.0
    cluster-enabled yes
    cluster-config-file nodes-6382.conf
    cluster-node-timeout 15000
    cluster-announce-ip 192.168.0.180
    cluster-announce-port 6382
    cluster-announce-bus-port 16382
    appendonly yes
  • redis-slave-2.conf

    port 6383
    protected-mode no
    bind 0.0.0.0
    cluster-enabled yes
    cluster-config-file nodes-6383.conf
    cluster-node-timeout 15000
    cluster-announce-ip 192.168.0.180
    cluster-announce-port 6383
    cluster-announce-bus-port 16383
    appendonly yes
  • redis-slave-3.conf

    port 6384
    protected-mode no
    bind 0.0.0.0
    cluster-enabled yes
    cluster-config-file nodes-6384.conf
    cluster-node-timeout 15000
    cluster-announce-ip 192.168.0.180
    cluster-announce-port 6384
    cluster-announce-bus-port 16384
    appendonly yes

The configuration parameters for each node are explained below:

  • port: the node's port, i.e. the port used for client communication

  • protected-mode: protected mode. If set to yes, only loopback connections from the local machine are allowed and other hosts cannot connect. For production Redis services, we recommend protected-mode yes for safety; with it enabled, you must configure the bind parameter or a password via requirepass so that your application servers can still reach Redis.

  • bind 0.0.0.0: accept connections from all available network interfaces

  • cluster-enabled: whether cluster mode is enabled

  • cluster-config-file: the cluster state file, written and maintained by the node itself

  • cluster-node-timeout: the timeout, in milliseconds, after which an unreachable node is considered failed

  • cluster-announce-ip: the IP address the node announces to the cluster

  • cluster-announce-port: the client port the node announces (the mapped port)

  • cluster-announce-bus-port: the cluster bus port the node announces

  • appendonly: AOF persistence mode
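Since the six files differ only in their port numbers, they can also be generated in one loop instead of copied by hand. This is a sketch assuming bash and the same announce IP as above (192.168.0.180; change it to your server's IP):

```shell
# Generate the configs for redis-{master,slave}-{1..3}; ports 6379..6384 in order
ANNOUNCE_IP=192.168.0.180   # change to your server's IP
port=6379
for role in master-1 master-2 master-3 slave-1 slave-2 slave-3; do
  mkdir -p "redis-${role}/conf"   # no-op if the directory already exists
  cat > "redis-${role}/conf/redis-${role}.conf" <<EOF
port ${port}
protected-mode no
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes-${port}.conf
cluster-node-timeout 15000
cluster-announce-ip ${ANNOUNCE_IP}
cluster-announce-port ${port}
cluster-announce-bus-port 1${port}
appendonly yes
EOF
  port=$((port + 1))
done
```

The bus port is simply the client port plus 10000, which for ports 6379-6384 is the same as prefixing a 1.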

5. Start the Redis cluster with the following command:

[root@localhost redis-cluster]# docker-compose up -d

6. List the containers to confirm they are running normally:

[root@localhost redis-cluster]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                                       NAMES
d75ce0c86eca   redis:latest   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:6382->6379/tcp, :::6382->6379/tcp   redis-slave-1
717c30655dd9   redis:latest   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:6384->6379/tcp, :::6384->6379/tcp   redis-slave-3
ac533661e714   redis:latest   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:6383->6379/tcp, :::6383->6379/tcp   redis-slave-2
91e68abe9f09   redis:latest   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   redis-master-1
fd22c969ba94   redis:latest   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:6380->6379/tcp, :::6380->6379/tcp   redis-master-2
9ad3a6670b23   redis:latest   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:6381->6379/tcp, :::6381->6379/tcp   redis-master-3

7. Enter the redis-master-1 container and create the cluster:

[root@localhost redis-cluster]# docker exec -it redis-master-1 /bin/sh
# redis-cli --cluster create 192.168.0.180:6379 192.168.0.180:6380 192.168.0.180:6381 192.168.0.180:6382 192.168.0.180:6383 192.168.0.180:6384 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.0.180:6383 to 192.168.0.180:6379
Adding replica 192.168.0.180:6384 to 192.168.0.180:6380
Adding replica 192.168.0.180:6382 to 192.168.0.180:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: c85d6dad7e8490b5ec0bc626cfc9651559b1a33a 192.168.0.180:6379
  slots:[0-5460] (5461 slots) master
M: 8fce72829499c9069f70e9f964335e13e8e36bcd 192.168.0.180:6380
  slots:[5461-10922] (5462 slots) master
M: c32e1d66093d88ecc3610c9f9ff964366c283733 192.168.0.180:6381
  slots:[10923-16383] (5461 slots) master
S: 2847e4df97dda6674ccf82d4eb28656a2713299e 192.168.0.180:6382
  replicates c32e1d66093d88ecc3610c9f9ff964366c283733
S: cb5c15aa1ca64aa06b1d262786a931bd62994bfc 192.168.0.180:6383
  replicates c85d6dad7e8490b5ec0bc626cfc9651559b1a33a
S: 7e4afb9a2f021a39877937d903d42cd7f2a20491 192.168.0.180:6384
  replicates 8fce72829499c9069f70e9f964335e13e8e36bcd
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.0.180:6379)
M: c85d6dad7e8490b5ec0bc626cfc9651559b1a33a 192.168.0.180:6379
  slots:[0-5460] (5461 slots) master
  1 additional replica(s)
S: cb5c15aa1ca64aa06b1d262786a931bd62994bfc 192.168.0.180:6383
  slots: (0 slots) slave
  replicates c85d6dad7e8490b5ec0bc626cfc9651559b1a33a
S: 7e4afb9a2f021a39877937d903d42cd7f2a20491 192.168.0.180:6384
  slots: (0 slots) slave
  replicates 8fce72829499c9069f70e9f964335e13e8e36bcd
S: 2847e4df97dda6674ccf82d4eb28656a2713299e 192.168.0.180:6382
  slots: (0 slots) slave
  replicates c32e1d66093d88ecc3610c9f9ff964366c283733
M: 8fce72829499c9069f70e9f964335e13e8e36bcd 192.168.0.180:6380
  slots:[5461-10922] (5462 slots) master
  1 additional replica(s)
M: c32e1d66093d88ecc3610c9f9ff964366c283733 192.168.0.180:6381
  slots:[10923-16383] (5461 slots) master
  1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Parameter notes:

  • --cluster-replicas: the number of replicas to create for each master (1 here, so 3 masters and 3 replicas)

All 16384 hash slots in the cluster have been assigned.

Testing the Cluster

1. Now that we have successfully created a Redis cluster, we can test it by connecting a Redis client and running a few commands. First, check that the cluster is running:

# use the -c flag so cluster redirects are followed (prevents routing failures)
[root@localhost redis-cluster]# docker exec -it redis-master-1 redis-cli -c -p 6379
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:394
cluster_stats_messages_pong_sent:417
cluster_stats_messages_sent:811
cluster_stats_messages_ping_received:412
cluster_stats_messages_pong_received:394
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:811

If cluster_state shows fail, you may not have assigned slots yet, or not all 16384 slots were assigned.

2. View all node information:

127.0.0.1:6379> cluster nodes
cb5c15aa1ca64aa06b1d262786a931bd62994bfc 192.168.0.180:6383@16383 slave c85d6dad7e8490b5ec0bc626cfc9651559b1a33a 0 1683450096936 1 connected
7e4afb9a2f021a39877937d903d42cd7f2a20491 192.168.0.180:6384@16384 slave 8fce72829499c9069f70e9f964335e13e8e36bcd 0 1683450098943 2 connected
2847e4df97dda6674ccf82d4eb28656a2713299e 192.168.0.180:6382@16382 slave c32e1d66093d88ecc3610c9f9ff964366c283733 0 1683450097000 3 connected
8fce72829499c9069f70e9f964335e13e8e36bcd 192.168.0.180:6380@16380 master - 0 1683450097000 2 connected 5461-10922
c32e1d66093d88ecc3610c9f9ff964366c283733 192.168.0.180:6381@16381 master - 0 1683450097938 3 connected 10923-16383
c85d6dad7e8490b5ec0bc626cfc9651559b1a33a 192.168.0.180:6379@16379 myself,master - 0 1683450098000 1 connected 0-5460

Note the slave, master, and myself keywords in the output:

Keyword   Meaning
slave     replica (backup) node
master    master node
myself    the node the current connection is attached to

From the node information above, the master/replica distribution in the cluster is:

master               slave
6379       ->        6383  
6380       ->        6384  
6381       ->        6382    


3. Query the slot allocation:

127.0.0.1:6379> cluster slots
1) 1) (integer) 0
  2) (integer) 5460
  3) 1) "192.168.0.180"
     2) (integer) 6379
     3) "c85d6dad7e8490b5ec0bc626cfc9651559b1a33a"
  4) 1) "192.168.0.180"
     2) (integer) 6383
     3) "cb5c15aa1ca64aa06b1d262786a931bd62994bfc"
2) 1) (integer) 5461
  2) (integer) 10922
  3) 1) "192.168.0.180"
     2) (integer) 6380
     3) "8fce72829499c9069f70e9f964335e13e8e36bcd"
  4) 1) "192.168.0.180"
     2) (integer) 6384
     3) "7e4afb9a2f021a39877937d903d42cd7f2a20491"
3) 1) (integer) 10923
  2) (integer) 16383
  3) 1) "192.168.0.180"
     2) (integer) 6381
     3) "c32e1d66093d88ecc3610c9f9ff964366c283733"
  4) 1) "192.168.0.180"
     2) (integer) 6382
     3) "2847e4df97dda6674ccf82d4eb28656a2713299e"

The slot allocation in this cluster is: slots [0-5460] on the 6379 node, [5461-10922] on the 6380 node, and [10923-16383] on the 6381 node.
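The slot ranges above can be expressed as a small lookup helper, handy for predicting which master a given slot number will land on; the function name `node_for_slot` is our own for this sketch:

```shell
# node_for_slot SLOT -> prints the port of the master that owns that slot,
# mirroring the three ranges reported by CLUSTER SLOTS above
node_for_slot() {
  local slot=$1
  if   [ "$slot" -ge 0 ] && [ "$slot" -le 5460 ]; then echo 6379
  elif [ "$slot" -le 10922 ];                     then echo 6380
  elif [ "$slot" -le 16383 ];                     then echo 6381
  else echo "invalid slot" >&2; return 1
  fi
}
```

For instance, slot 5798 (the slot of the key name written in the next step) falls in [5461-10922], so `node_for_slot 5798` prints 6380.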

4. Write and read values in the cluster with the following commands:

127.0.0.1:6379> keys *
(empty array)
127.0.0.1:6379> set name lin
-> Redirected to slot [5798] located at 192.168.0.180:6380
OK
192.168.0.180:6380> get name
"lin"

Note: the key above hashed to slot 5798, which belongs to the 6380 node, so the write was redirected there; after the SET completes, the connected node has changed to 192.168.0.180:6380. The subsequent GET is served by the node that owns the slot, so the connection does not change again. Next, write data on another node:

[root@localhost redis-cluster]# docker exec -it redis-master-3 redis-cli -c -p 6381
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> set weight 70
OK
192.168.0.180:6381> get weight
"70"

Here the written key hashes to a slot that falls within this node's own range, so the data is stored directly with no redirect. This completes the 3-master, 3-replica Redis cluster built with docker-compose.

Originally published on the WeChat official account 面试技术 (Interview Tech): Building a 3-master, 3-replica Redis cluster with docker-compose.


Compiled by 极客之家. Link: https://www.bmabk.com/index.php/post/186837.html
