Common Docker commands

Basic commands

/var/lib/docker/  #Docker's default storage path
systemctl start docker #start the Docker service
systemctl stop docker #stop the Docker service
systemctl enable docker #start the Docker service at boot
docker stats #show a live stream of container resource usage
docker info #show system-wide Docker information

docker images #list images
docker pull IMAGE_NAME #pull an image
#remove all images
docker rmi $(docker images -q)
#remove <none> (dangling) images
docker images --no-trunc | grep none | awk '{print $3}' | xargs -r docker rmi
docker save IMAGE_NAME > /home/NEW_IMAGE_NAME.tar #save an image to a tar archive
docker load < /home/IMAGE_FILE #load an image from a tar archive
docker tag IMAGE_NAME:TAG NEW_IMAGE_NAME:TAG #rename (re-tag) an image

docker ps -a #list all containers
docker ps -l #show the most recently created container
docker start CONTAINER (name or ID) #start a stopped container
docker logs CONTAINER #fetch a container's logs
docker logs -f CONTAINER #follow the log output
docker rm $(docker ps -a -q) #remove all containers
docker rm CONTAINER #remove a single container
docker inspect CONTAINER #show detailed container information
docker top CONTAINER #show the processes running in a container

The run command

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Common options:
--name=: assign a name to the container; if omitted, a random name is generated
-d: run the container in the background and print the container ID (a detached container exits right away if nothing keeps its foreground process alive, e.g. a bare ubuntu image, which looks like a failed start)
-i: run in interactive mode, usually combined with -t (interactive foreground container)
-t: allocate a pseudo-TTY, usually combined with -i (interactive foreground container)
-P: publish all exposed ports to random host ports (uppercase P)
-p: publish a specific host:container port mapping (lowercase p)
-v: bind mount a volume

# -d run the mysql container in the background
# -p 3310:3306 map container port 3306 to host port 3310
# -v mount host directory /home/mysql/conf to /etc/mysql/conf.d in the container
# -v mount host directory /home/mysql/data to /var/lib/mysql in the container
# -e MYSQL_ROOT_PASSWORD=123456 set the root password to 123456
# --name mysql01 name the container mysql01
docker run -d -p 3310:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7

Entering a running container

Two ways:
docker exec -it CONTAINER_ID /bin/bash
docker attach CONTAINER_ID
(docker exec is recommended: leaving an attach session with exit stops the container, while leaving an exec session does not)

Exiting a container

Two ways:
exit: leaves the container and stops it
Ctrl+P+Q: leaves the container without stopping it

Creating a new image from a container

Syntax: docker commit -m="COMMIT_MESSAGE" -a="AUTHOR" CONTAINER_ID IMAGE_NAME:TAG
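For example, a minimal sketch (the container ID and names here are illustrative):
docker commit -m="add vim" -a="zzw" 1f3c1e0c8f2a mycentos:1.1 #commit a modified container as a new image
docker images #the new mycentos:1.1 should now appear in the list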

Data volume containers

--volumes-from: mount the volumes of the given (data volume) container into this container.
# Run docker03 mounting docker01's volumes; docker01 and docker03 share the same volumes, so a change made in docker03's volume is also visible in docker01.
docker run -it --name docker03 --volumes-from docker01 zzw/centos:1.0
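This assumes docker01 already exists and owns a volume; a minimal sketch to create it first (the mount point /shared-data is illustrative):
docker run -it --name docker01 -v /shared-data zzw/centos:1.0 #docker01 provides the volume that docker03 inherits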

Dockerfile

FROM [image]      #base image (centos, etc.)
MAINTAINER [name] #image author, name + email
RUN [command] #command to run while building the image
ADD [src] [dest] #copy files into the image; tar archives (e.g. a tomcat tarball) are extracted automatically
WORKDIR [path] #set the working directory inside the container
VOLUME [path] #declare a mount point to persist container files
EXPOSE [port] #declare the port the container listens on
CMD [command] #command run when a container starts; only the last CMD takes effect, and it can be replaced from the command line
ENTRYPOINT [command] #command run when a container starts; command-line arguments are appended to it
ONBUILD [INSTRUCTION] #register a trigger that runs automatically when this image is used as a base in another FROM
COPY [src] [dest] #copy files or directories from a source path to a destination path inside the container
ENV [key] [value] #set an environment variable visible to processes running in the container

# Create a Dockerfile (mydockerfile-centos) with the following content:
FROM centos:7
MAINTAINER ZZW<zzw@qq.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "---end---"
CMD /bin/bash

#Build the image mycentos:0.1 from the mydockerfile-centos file
#-f path to the Dockerfile
#-t name (and tag) of the image
#. sets the build context to the current directory
docker build -f mydockerfile-centos -t mycentos:0.1 .

#Show the image's build history
docker history mycentos:0.1

#ENTRYPOINT
Create a new file dockerfile-cmd:
FROM centos:7
ENTRYPOINT ["ls","-a"]

#Build the image
docker build -f dockerfile-cmd -t testcmd .
docker run testcmd #lists the file names in the current directory
docker run testcmd -l #lists full file details (permissions etc.); the argument is appended, so the command actually executed is ls -al

#CMD
Create a new file dockerfile-cmd:
FROM centos:7
CMD ["ls","-a"]
docker build -f dockerfile-cmd -t testcmd .
docker run testcmd #lists the file names in the current directory
docker run testcmd -l #fails: the argument replaces the whole CMD, and -l by itself is not an executable command
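Since arguments after the image name replace the whole CMD instead of appending to it, the full command has to be supplied to get the detailed listing:
docker run testcmd ls -al #replace the entire CMD to get the long listing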

#tomcat + jdk image
#ADD extracts tar archives automatically

FROM centos:7
MAINTAINER ZZW<zzw@qq.com>

ADD ./jdk-8u281-linux-x64.tar.gz /usr/local/
ADD ./apache-tomcat-8.5.99.tar.gz /usr/local/

RUN yum -y install vim

ENV MYPATH /usr/local
WORKDIR $MYPATH

ENV JAVA_HOME /usr/local/jdk1.8.0_281
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-8.5.99
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin

EXPOSE 8080

CMD /usr/local/apache-tomcat-8.5.99/bin/startup.sh && tail -f /usr/local/apache-tomcat-8.5.99/logs/catalina.out

#Build the image
docker build -t diytomcat .

#Start the zzwtomcat container (the webapps directory in the image is /usr/local/apache-tomcat-8.5.99/webapps)
docker run -d -p 9090:8080 --name zzwtomcat -v /home/tomcat/test:/usr/local/apache-tomcat-8.5.99/webapps/test -v /home/tomcat/logs/:/usr/local/apache-tomcat-8.5.99/logs diytomcat

Publishing images

Docker Hub

http://hub.docker.com/ #requires a registered account
docker login -u USERNAME -p PASSWORD #log in to Docker Hub
docker push IMAGE_NAME:TAG #push an image to Docker Hub
docker pull IMAGE_NAME #pull an image locally
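Docker Hub normally requires the image to be tagged with your account name before pushing; a sketch (yourname is a placeholder):
docker tag mycentos:0.1 yourname/mycentos:0.1 #re-tag with your Docker Hub username
docker push yourname/mycentos:0.1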

Alibaba Cloud image registry

1. Log in to the Alibaba Cloud console
2. Open the Container Registry service
3. Create a namespace
4. Create an image repository
5. Set the access credentials
6. Push the image to Alibaba Cloud
Just follow the steps shown on the official page.
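The console shows the exact commands for your repository; they generally follow this pattern (region, namespace, and account name here are placeholders):
docker login --username=youraccount registry.cn-hangzhou.aliyuncs.com #log in to the Alibaba Cloud registry
docker tag mycentos:0.1 registry.cn-hangzhou.aliyuncs.com/yournamespace/mycentos:0.1
docker push registry.cn-hangzhou.aliyuncs.com/yournamespace/mycentos:0.1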

Docker networking

The docker0 network

On the host:
ip addr

docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat02 tomcat
docker exec -it tomcat01 ip addr
If this fails with:
OCI runtime exec failed: exec failed: unable to start container process: exec: "ip": executable file not found in $PATH: unknown
Fix:
Enter the container: docker exec -it tomcat01 /bin/bash
Inside the container, run: apt update && apt install -y iproute2 && apt-get install -y iputils-ping
Detach without stopping the container: Ctrl+P+Q
Then run docker exec -it tomcat01 ip addr again
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
126: eth0@if127: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:2/64 scope link
valid_lft forever preferred_lft forever

The Linux host can ping a container's IP.
Every container started gets its own IP address. As soon as Docker is installed there is a docker0 interface working in bridge mode, connected to containers through veth-pair interfaces.
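Each container adds a veth interface on the host side, paired with eth0 inside the container; a quick way to see them on the host:
ip addr | grep veth #one veth* entry per running container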
After starting the containers:

tomcat01 pings tomcat02 (172.18.0.3):
docker exec -it tomcat01 ping 172.18.0.3
The ping succeeds:
[root@iZuf62nk2jdk5sw0yzab16Z logs]# docker exec -it tomcat01 ping 172.18.0.3
PING 172.18.0.3 (172.18.0.3) 56(84) bytes of data.
64 bytes from 172.18.0.3: icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from 172.18.0.3: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.18.0.3: icmp_seq=3 ttl=64 time=0.043 ms

--link (not recommended)

docker run -d -P --name tomcat03 --link tomcat02 tomcat
docker exec -it tomcat03 ping tomcat02 #pings succeed, but tomcat02 cannot ping tomcat03 by name

docker network ls #list networks
docker network inspect bridge #show network details

Inspecting the network

docker exec -it tomcat03 cat /etc/hosts #look at tomcat03's hosts file
[root@iZuf62nk2jdk5sw0yzab16Z logs]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.3 tomcat02 eb27d6339be9
172.18.0.4 ab5996b2e84a

--link simply adds an entry to the container's hosts file
# Create a network; bridge: bridge driver, subnet: address range, gateway: gateway address
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
[root@iZuf62nk2jdk5sw0yzab16Z logs]# docker network ls
NETWORK ID NAME DRIVER SCOPE
c8dfd4fd86da bridge bridge local
679fd6a4f48a host host local
ce0c01e2141b mynet bridge local # the custom network
9baa05c12cec none null local
# Use the custom network
docker run -d -P --name tomcat04 --net mynet tomcat
docker run -d -P --name tomcat05 --net mynet tomcat
docker exec -it tomcat04 ping tomcat05 #pings succeed; the custom network resolves container names

Connecting networks

Connectivity across different subnets

Connect tomcat01 to the mynet network:
docker network connect mynet tomcat01
docker exec -it tomcat01 ping tomcat04 #pings succeed
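To double-check, inspecting the network should now list tomcat01 among its containers:
docker network inspect mynet #tomcat01 appears under "Containers" with a second IP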

Single-host Redis cluster

# Create a subnet
docker network create redis --subnet 172.38.0.0/16
# Generate six redis configs with a script
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >>/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done

for port in $(seq 1 6); \
do \
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; \
done

#Enter the redis-1 container
docker exec -it redis-1 /bin/sh

# Create the cluster
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1

#Open the redis-cli prompt
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:89
cluster_stats_messages_pong_sent:93
cluster_stats_messages_sent:182
cluster_stats_messages_ping_received:88
cluster_stats_messages_pong_received:89
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:182
# List the cluster nodes
127.0.0.1:6379> cluster nodes
0eed9fb0e030949c0009b327fe71ee396cf4db17 172.38.0.12:6379@16379 master - 0 1711433603596 2 connected 5461-10922
012ec944df134a04ef0289dd990d03598f7b16e1 172.38.0.14:6379@16379 slave 00277877f2527d9609a20ceb9bf23d6f9e035547 0 1711433604000 4 connected
00277877f2527d9609a20ceb9bf23d6f9e035547 172.38.0.13:6379@16379 master - 0 1711433605000 3 connected 10923-16383
356536ca20561b96e5ad47d58aba8fcb7b88cbc9 172.38.0.15:6379@16379 slave 01f83065027133ebecd00e546970a4e7ab4aaa6e 0 1711433603997 5 connected
978743c69a6ccaf102d111e793d0239935ce7c4d 172.38.0.16:6379@16379 slave 0eed9fb0e030949c0009b327fe71ee396cf4db17 0 1711433604000 6 connected
01f83065027133ebecd00e546970a4e7ab4aaa6e 172.38.0.11:6379@16379 myself,master - 0 1711433603000 1 connected 0-5460
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
#In another terminal, stop the redis-3 container
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"
#List the cluster nodes again:
#172.38.0.13:6379@16379 is now master,fail
#172.38.0.14:6379@16379 is now myself,master
172.38.0.14:6379> cluster nodes
01f83065027133ebecd00e546970a4e7ab4aaa6e 172.38.0.11:6379@16379 master - 0 1711434024000 1 connected 0-5460
978743c69a6ccaf102d111e793d0239935ce7c4d 172.38.0.16:6379@16379 slave 0eed9fb0e030949c0009b327fe71ee396cf4db17 0 1711434023968 6 connected
356536ca20561b96e5ad47d58aba8fcb7b88cbc9 172.38.0.15:6379@16379 slave 01f83065027133ebecd00e546970a4e7ab4aaa6e 0 1711434024969 5 connected
012ec944df134a04ef0289dd990d03598f7b16e1 172.38.0.14:6379@16379 myself,master - 0 1711434022000 7 connected 10923-16383
0eed9fb0e030949c0009b327fe71ee396cf4db17 172.38.0.12:6379@16379 master - 0 1711434023000 2 connected 5461-10922
00277877f2527d9609a20ceb9bf23d6f9e035547 172.38.0.13:6379@16379 master,fail - 1711433943096 1711433942000 3 connected

Packaging a Spring Boot microservice with Docker

Write a Dockerfile in the project root

FROM java:8
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]

Put the jar and the Dockerfile in the same directory, then run:

docker build -t springboot-docker:1.0 .
docker run -d -p 8080:8080 --name springboot-docker springboot-docker:1.0
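A quick smoke test against the published port (the path depends on your application):
curl http://localhost:8080/ #should return a response from the Spring Boot app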

Portainer

docker run -d \
--name portainer \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /data/portainer/data:/data \
-p 9000:9000 \
--cpus 0.5 \
--memory 200M \
--restart on-failure \
portainer/portainer

Docker Swarm

Swarm is Docker's cluster management tool: it manages multiple Docker hosts and provides load balancing, failover, scaling out and in, rolling upgrades, and rollbacks.
This setup needs 4 machines, each with Docker installed and joined into the swarm cluster.

docker swarm init #initialize the swarm; this machine becomes the first manager node
docker swarm init --advertise-addr 10.45.232.126 #advertise the machine's private address

docker swarm join --token SWMTKN-1-******************** 10.45.232.126:2377 #join the swarm; run on the other nodes
docker node ls #list the nodes; run on a manager
docker swarm join-token worker #print the join token for worker nodes
docker swarm join-token manager #print the join token for manager nodes


#Remove a master node
docker swarm leave -f #run on the machine that should leave the swarm

#Create a Docker Swarm network (if not already created)
docker network create --driver overlay redis-net

docker node update --availability drain j1c3rzz8w3pytmphh2h3on7p4 #mark a node unavailable (drain); run on a manager
docker node demote j1c3rzz8w3pytmphh2h3on7p4 #demote a node; run on a manager
docker node rm j1c3rzz8w3pytmphh2h3on7p4 #remove a node by ID or hostname
docker node rm -f fvbo2vmlx82gzzjc96ta13z4 #force-remove a node

node commands

hostnamectl set-hostname master1 #change the machine's hostname
docker node demote #demote one or more nodes from swarm manager
docker node inspect #show detailed information on one or more nodes
docker node ls #list the nodes in the swarm
docker node promote #promote one or more nodes to swarm manager
docker node ps #list tasks running on one or more nodes, defaulting to the current node
docker node rm #remove one or more nodes from the swarm
docker node update #update a node

docker service

Creating application services

docker service create	create a service
docker service inspect	show detailed information on one or more services
docker service logs	fetch the logs of a service
docker service ls	list services
docker service rm	remove one or more services
docker service scale	set the number of replicas of a service
docker service update	update a service
docker service rollback	revert a service to its pre-update configuration

# Create a service
docker service create --name tomcat01 --replicas 4 --publish 8080:8080 tomcat:8.5.53-jre8-alpine
# List services
docker service ls
# Show the tasks of a single service
docker service ps tomcat01
# Scale in or out via update
docker service update --replicas 3 tomcat01
# Scale out via scale
docker service scale tomcat01=5

Web UI (visualizer)

docker pull dockersamples/visualizer:latest  #pull the visualizer image

docker run -itd --name visualizer -p 9001:8080 -e HOST=10.45.232.126 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer:latest
#or run it as a service instead, reachable through any node IP in the cluster plus the port
docker service create \
--name=viz \
--publish=9001:8080/tcp \
--constraint=node.role==manager \
--mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
dockersamples/visualizer

docker stack

A stack is a higher-level orchestration layer on top of swarm: it manages multiple services and provides load balancing, failover, scaling out and in, rolling upgrades, and rollbacks.

docker stack deploy	deploy a new stack or update an existing one
docker stack ls	list existing stacks
docker stack ps	list the tasks in a stack
docker stack rm	remove one or more stacks
docker stack services	list the services in a stack

Deploying a Redis cluster

Machines:

10.45.232.126    master1
10.45.232.127    master2
10.45.232.128    master3
10.45.232.129    master4

Create the Docker Swarm network (if not already created)

docker network create --driver overlay redis-net

Set each node's hostname

#Run on each machine with its own name; these must match master1-master4 used in redis-stack.yml below
hostnamectl set-hostname master1

[root@iZm5eda25wlr41jilwmuzdZ redis]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
676ij6puaiyhznxcxa8gus2qh * master1 Ready Active Reachable 26.0.0
wgzd8qz4pfoczrxdc3e3aga6s master2 Ready Active Leader 26.0.0
7qnka9orpr2ksz3lw73t6tb76 master3 Ready Active Reachable 26.0.0
l2e0n4t8958fqc79enzcqrkfe master4 Ready Active Reachable 26.0.0

Run on every machine

#Create the local directories
mkdir -p /data/redis/{7001,7002,7003,7004,7005,7006,7007,7008}/{data,conf} && chmod 777 -R /data/

Create the network

docker network create --driver overlay redis_net

Create the config file

Log in to the manager server (10.45.232.126), go to the /data/redis/ directory, and create a redis-stack.yml file.

version: "3.8"
services:
  redis7001:
    image: redis
    container_name: redis7001
    #set the container hostname
    hostname: redis7001
    restart: always
    #privileged: true
    #mount directories, equivalent to docker run -v HOST_DIR:CONTAINER_DIR
    volumes:
      - /data/redis/7001/data:/data
      - /data/redis/7001/conf:/conf
    #startup command, equivalent to docker run IMAGE:TAG COMMAND; connect with: redis-cli -h 10.45.232.126 -p 6379 -a Dszn@2020
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 10.45.232.126 --cluster-announce-port 7001 --cluster-announce-bus-port 17001 --io-threads-do-reads yes --io-threads 2
    ports:
      - "7001:6379"
      - "17001:16379"
    #environment variables, equivalent to docker run -e
    environment:
      - TZ=Asia/Shanghai
    networks:
      - redis_net
    deploy:
      placement:
        constraints:
          - node.hostname == master1
          - node.role == manager

  redis7002:
    image: redis
    container_name: redis7002
    #set the container hostname
    hostname: redis7002
    restart: always
    volumes:
      - /data/redis/7002/data:/data
      - /data/redis/7002/conf:/conf
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 10.45.232.126 --cluster-announce-port 7002 --cluster-announce-bus-port 17002 --io-threads-do-reads yes --io-threads 2
    ports:
      - "7002:6379"
      - "17002:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - redis_net
    deploy:
      placement:
        constraints:
          - node.hostname == master1

  redis7003:
    image: redis
    container_name: redis7003
    #set the container hostname
    hostname: redis7003
    restart: always
    #privileged: true
    #mount directories, equivalent to docker run -v HOST_DIR:CONTAINER_DIR
    volumes:
      - /data/redis/7003/data:/data
      - /data/redis/7003/conf:/conf
    #startup command, equivalent to docker run IMAGE:TAG COMMAND; connect with: redis-cli -h 10.45.232.127 -p 6379 -a Dszn@2020
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 10.45.232.127 --cluster-announce-port 7003 --cluster-announce-bus-port 17003 --io-threads-do-reads yes --io-threads 2
    ports:
      - "7003:6379"
      - "17003:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - redis_net
    deploy:
      placement:
        constraints:
          - node.hostname == master2
          - node.role == manager

  redis7004:
    image: redis
    container_name: redis7004
    #set the container hostname
    hostname: redis7004
    restart: always
    volumes:
      - /data/redis/7004/data:/data
      - /data/redis/7004/conf:/conf
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 10.45.232.127 --cluster-announce-port 7004 --cluster-announce-bus-port 17004 --io-threads-do-reads yes --io-threads 2
    ports:
      - "7004:6379"
      - "17004:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - redis_net
    deploy:
      placement:
        constraints:
          - node.hostname == master2

  redis7005:
    image: redis
    container_name: redis7005
    #set the container hostname
    hostname: redis7005
    restart: always
    #privileged: true
    #mount directories, equivalent to docker run -v HOST_DIR:CONTAINER_DIR
    volumes:
      - /data/redis/7005/data:/data
      - /data/redis/7005/conf:/conf
    #startup command, equivalent to docker run IMAGE:TAG COMMAND; connect with: redis-cli -h 10.45.232.128 -p 6379 -a Dszn@2020
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 10.45.232.128 --cluster-announce-port 7005 --cluster-announce-bus-port 17005 --io-threads-do-reads yes --io-threads 2
    ports:
      - "7005:6379"
      - "17005:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - redis_net
    deploy:
      placement:
        constraints:
          - node.hostname == master3
          - node.role == manager

  redis7006:
    image: redis
    container_name: redis7006
    #set the container hostname
    hostname: redis7006
    restart: always
    volumes:
      - /data/redis/7006/data:/data
      - /data/redis/7006/conf:/conf
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 10.45.232.128 --cluster-announce-port 7006 --cluster-announce-bus-port 17006 --io-threads-do-reads yes --io-threads 2
    ports:
      - "7006:6379"
      - "17006:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - redis_net
    deploy:
      placement:
        constraints:
          - node.hostname == master3

  redis7007:
    image: redis
    container_name: redis7007
    #set the container hostname
    hostname: redis7007
    restart: always
    #privileged: true
    #mount directories, equivalent to docker run -v HOST_DIR:CONTAINER_DIR
    volumes:
      - /data/redis/7007/data:/data
      - /data/redis/7007/conf:/conf
    #startup command, equivalent to docker run IMAGE:TAG COMMAND; connect with: redis-cli -h 10.45.232.129 -p 6379 -a Dszn@2020
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 10.45.232.129 --cluster-announce-port 7007 --cluster-announce-bus-port 17007 --io-threads-do-reads yes --io-threads 2
    ports:
      - "7007:6379"
      - "17007:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - redis_net
    deploy:
      placement:
        constraints:
          - node.hostname == master4
          - node.role == manager

  redis7008:
    image: redis
    container_name: redis7008
    #set the container hostname
    hostname: redis7008
    restart: always
    volumes:
      - /data/redis/7008/data:/data
      - /data/redis/7008/conf:/conf
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 10.45.232.129 --cluster-announce-port 7008 --cluster-announce-bus-port 17008 --io-threads-do-reads yes --io-threads 2
    ports:
      - "7008:6379"
      - "17008:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - redis_net
    deploy:
      placement:
        constraints:
          - node.hostname == master4

#declare the networks
networks:
  #name of the network used by the services
  redis_net:
    #network driver, bridge or overlay; the default is bridge
    driver: overlay
    #false (default): compose creates the network itself, named directory_networkname; true: use an externally created network, which must be created manually
    external: true

#declare named volumes used by the services
volumes:
  mysqldata:
    #false (default): compose creates the volume itself, named directory_volumename; true: use an externally created volume, which must be created manually
    external: false
# Check the cluster state
redis-cli --cluster check 10.45.232.126:7001

Deploy with docker stack

docker stack deploy -c redis-stack.yml redis

Create the Redis cluster

docker ps
[root@iZm5eda25wlr41jilwmuzdZ redis]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a420eea48702 redis:latest "docker-entrypoint.s…" 26 minutes ago Up 26 minutes 6379/tcp redis_redis7002
a72a447cedb6 redis:latest "docker-entrypoint.s…" 26 minutes ago Up 26 minutes 6379/tcp redis_redis7001
#Enter the container
docker exec -it a72a447cedb6 /bin/bash
cd /usr/local/bin/

redis-cli --cluster create 10.45.232.126:7001 10.45.232.126:7002 10.45.232.127:7003 10.45.232.127:7004 10.45.232.128:7005 10.45.232.128:7006 10.45.232.129:7007 10.45.232.129:7008 --cluster-replicas 1 --cluster-yes

redis-cli -c #start redis-cli in cluster mode
127.0.0.1:6379> cluster nodes #list the cluster nodes

Deploying a ZooKeeper cluster

Create the directories

#Run on all three machines
docker pull zookeeper
rm -rf /data/zookeeper/* && mkdir -p /data/zookeeper/2181/{data,conf,datalog} && chmod 777 -R /data/

Create the network

docker network create --driver overlay zoo_net

zoo_stack.yml

version: '3.8'

services:
  zoo1:
    image: zookeeper
    restart: always
    networks:
      - zoo_net
    hostname: zoo1
    ports: # published port
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - /data/zookeeper/2181/conf:/conf
      - /data/zookeeper/2181/data:/data
      - /data/zookeeper/2181/datalog:/datalog
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.hostname==master4"

  zoo2:
    image: zookeeper
    restart: always
    networks:
      - zoo_net
    hostname: zoo2
    ports: # published port
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - /data/zookeeper/2181/conf:/conf
      - /data/zookeeper/2181/data:/data
      - /data/zookeeper/2181/datalog:/datalog
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.hostname==master2"

  zoo3:
    image: zookeeper
    restart: always
    networks:
      - zoo_net
    hostname: zoo3
    ports: # published port
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    volumes:
      - /data/zookeeper/2181/conf:/conf
      - /data/zookeeper/2181/data:/data
      - /data/zookeeper/2181/datalog:/datalog
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.hostname==master3"

#declare the network
networks:
  #name of the network used by the services
  zoo_net:
    #network driver, bridge or overlay; the default is bridge
    driver: overlay
    #false (default): compose creates the network itself, named directory_networkname; true: use an externally created network, which must be created manually
    external: true

Start the cluster

docker stack deploy -c zoo_stack.yml zookeeper

Check the cluster state

#List the containers
[root@master4 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea14cde83b3b zookeeper:latest "/docker-entrypoint.…" 57 seconds ago Up 57 seconds 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp zookeeper_zoo1.1.zige4m021tf64y7ckp58sftxl
8e19019d6f04 redis:latest "docker-entrypoint.s…" 25 hours ago Up 25 hours 6379/tcp redis_redis7007.1.h5t8rvd62gtkfxt2ew0xhtg3x
99814e563766 redis:latest "docker-entrypoint.s…" 25 hours ago Up 25 hours 6379/tcp redis_redis7008.1.ihi3t2bj6ll0nqcb1230pt6l5
#Enter the container
[root@master4 ~]# docker exec -it ea14cde83b3b /bin/bash
root@zoo1:/apache-zookeeper-3.9.2-bin# cd bin
root@zoo1:/apache-zookeeper-3.9.2-bin/bin# ls
README.txt zkCli.cmd zkEnv.cmd zkServer.cmd zkServer.sh zkSnapshotComparer.sh zkSnapshotRecursiveSummaryToolkit.sh zkSnapShotToolkit.sh zkTxnLogToolkit.sh
zkCleanup.sh zkCli.sh zkEnv.sh zkServer-initialize.sh zkSnapshotComparer.cmd zkSnapshotRecursiveSummaryToolkit.cmd zkSnapShotToolkit.cmd zkTxnLogToolkit.cmd
#Check the cluster state
root@zoo1:/apache-zookeeper-3.9.2-bin/bin# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower #this node is a follower

Test setting a value

[root@master4 ~]# docker exec -it ea14cde83b3b /bin/bash 
root@zoo1:/apache-zookeeper-3.9.2-bin# cd bin/
root@zoo1:/apache-zookeeper-3.9.2-bin/bin# zkCli.sh # 进入客户端
#create /test
[zk: localhost:2181(CONNECTED) 4] create /test
Created /test
#get /test: still null
[zk: localhost:2181(CONNECTED) 5] get /test
null
#set /test to 11
[zk: localhost:2181(CONNECTED) 6] set /test 11
[zk: localhost:2181(CONNECTED) 7] get /test
11

##On another machine in the cluster
[root@master2 ~]# docker exec -it b97847daa0c5 /bin/bash
root@zoo2:/apache-zookeeper-3.9.2-bin# cd bin/
root@zoo2:/apache-zookeeper-3.9.2-bin/bin# zkCli.sh

[zk: localhost:2181(CONNECTED) 0] get /test
11
[zk: localhost:2181(CONNECTED) 1] set /test 444
[zk: localhost:2181(CONNECTED) 2] get /test
444

The ZooKeeper cluster is working.

Deploying a MongoDB cluster

docker pull mongo

Create the directories

#Run on all three machines
docker pull mongo
rm -rf /data/mongo/* && mkdir -p /data/mongo/data/db && chmod 777 -R /data/

Create the network

docker network create --driver overlay mongo_net

mongo_stack.yml

node.hostname==iot-32 must match a hostname from your own docker node ls output

version: '3.8'
services:
  mongo1:
    image: mongo
    command: mongod --storageEngine wiredTiger --profile=1 --port 27017 --bind_ip_all --slowms=50 --replSet mongo --dbpath /data/db
    ports:
      - "27017:27017"
    networks:
      - mongo_net
    volumes:
      - /data/mongo/data/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==iot-32
      # endpoint_mode: dnsrr
      # resources:
      #   limits:
      #     cpus: "2"
      #     memory: 1200M
      #   reservations:
      #     cpus: "0.8"
      #     memory: 800M
  mongo2:
    image: mongo
    command: mongod --storageEngine wiredTiger --profile=1 --port 27017 --bind_ip_all --slowms=50 --replSet mongo --dbpath /data/db
    ports:
      - "27018:27017"
    networks:
      - mongo_net
    volumes:
      - /data/mongo/data/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==iot-33
      # endpoint_mode: dnsrr
  mongo3:
    image: mongo
    command: mongod --storageEngine wiredTiger --profile=1 --port 27017 --bind_ip_all --slowms=50 --replSet mongo --dbpath /data/db
    ports:
      - "27019:27017"
    networks:
      - mongo_net
    volumes:
      - /data/mongo/data/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==iot-34
      # endpoint_mode: dnsrr
networks:
  mongo_net:
    driver: overlay
    external: true

Deploy with docker stack

docker stack deploy -c mongo_stack.yml mongo_cluster

Initialize the cluster

#Enter the container
[root@iot-32 mongo]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
71e093a630c1 mongo:latest "docker-entrypoint.s…" 11 minutes ago Up 11 minutes 27017/tcp mongo_cluster_mongo1.1.hsbms8e5n9likecljqin8po78

docker exec -it 71e093a630c1 /bin/bash

# Open the mongo shell
mongosh
root@71e093a630c1:/# mongosh
Current Mongosh Log ID: 661c9b3dab16df54397b2da8
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.2.2
Using MongoDB: 7.0.8
Using Mongosh: 2.2.2
**************************************************************************************************************
# Define the replica set config
test> config={_id:"mongo",members:[{_id:0,host:"mongo1:27017"},{_id:1,host:"mongo2:27017"},{_id:2,host:"mongo3:27017","arbiterOnly" : true}]}
{
_id: 'mongo',
members: [
{ _id: 0, host: 'mongo1:27017' },
{ _id: 1, host: 'mongo2:27017' },
{ _id: 2, host: 'mongo3:27017', arbiterOnly: true }
]
}
# Initialize
test> rs.initiate(config)
{ ok: 1 }
# Check the replica set status
mongo [direct: secondary] test> rs.status()
{
set: 'mongo',
date: ISODate('2024-04-15T03:14:32.851Z'),
...
members: [
{
_id: 0,
name: 'mongo1:27017',
...
{
_id: 1,
name: 'mongo2:27017',
...
},
{
_id: 2,
name: 'mongo3:27017',
...
}
],
ok: 1,
...
}
# Next, adjust the member priorities so the primary does not change after a cluster restart
# Load the current config into a variable
mongo [direct: secondary] test>cfg=rs.config()
{
_id: 'mongo',
version: 1,
term: 1,
members: [
{
_id: 0,
host: 'mongo1:27017',
arbiterOnly: false,
buildIndexes: true,
hidden: false,
priority: 1,
tags: {},
secondaryDelaySecs: Long('0'),
votes: 1
},
{
_id: 1,
host: 'mongo2:27017',
arbiterOnly: false,
buildIndexes: true,
hidden: false,
priority: 1,
tags: {},
secondaryDelaySecs: Long('0'),
votes: 1
},
{
_id: 2,
host: 'mongo3:27017',
arbiterOnly: true,
buildIndexes: true,
hidden: false,
priority: 0,
tags: {},
secondaryDelaySecs: Long('0'),
votes: 1
}
],
protocolVersion: Long('1'),
writeConcernMajorityJournalDefault: true,
settings: {
chainingAllowed: true,
heartbeatIntervalMillis: 2000,
heartbeatTimeoutSecs: 10,
electionTimeoutMillis: 10000,
catchUpTimeoutMillis: -1,
catchUpTakeoverDelayMillis: 30000,
getLastErrorModes: {},
getLastErrorDefaults: { w: 1, wtimeout: 0 },
replicaSetId: ObjectId('661c9b91e68c823d44507a9a')
}
}
# Raise mongo1's priority
mongo [direct: primary] test> cfg.members[0].priority = 3
3
# Raise mongo2's priority
mongo [direct: primary] test> cfg.members[1].priority = 2
2
# Reload the config with the new priorities
mongo [direct: primary] test> rs.reconfig(cfg)
{
ok: 1,
'$clusterTime': {
clusterTime: Timestamp({ t: 1713150935, i: 1 }),
signature: {
hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
keyId: Long('0')
}
},
operationTime: Timestamp({ t: 1713150935, i: 1 })
}
# Check the new priorities
mongo [direct: primary] test> rs.config()
{
_id: 'mongo',
version: 2,
term: 1,
members: [
{
_id: 0,
host: 'mongo1:27017',
arbiterOnly: false,
buildIndexes: true,
hidden: false,
priority: 3,
tags: {},
secondaryDelaySecs: Long('0'),
votes: 1
},
{
_id: 1,
host: 'mongo2:27017',
arbiterOnly: false,
buildIndexes: true,
hidden: false,
priority: 2,
tags: {},
secondaryDelaySecs: Long('0'),
votes: 1
},
{
_id: 2,
host: 'mongo3:27017',
arbiterOnly: true,
buildIndexes: true,
hidden: false,
priority: 0,
tags: {},
secondaryDelaySecs: Long('0'),
votes: 1
}
],
protocolVersion: Long('1'),
writeConcernMajorityJournalDefault: true,
settings: {
chainingAllowed: true,
heartbeatIntervalMillis: 2000,
heartbeatTimeoutSecs: 10,
electionTimeoutMillis: 10000,
catchUpTimeoutMillis: -1,
catchUpTakeoverDelayMillis: 30000,
getLastErrorModes: {},
getLastErrorDefaults: { w: 1, wtimeout: 0 },
replicaSetId: ObjectId('661c9b91e68c823d44507a9a')
}
}

Creating users

// Switch to the admin database
use admin

// Create a user with cluster administration privileges
db.createUser({
user: "clusterAdmin",
pwd: "clusterPassword",
roles: [ { role: "clusterAdmin", db: "admin" } ]
})

// Create a user with database administration privileges
db.createUser({
user: "dbAdmin",
pwd: "dbPassword",
roles: [ { role: "dbAdmin", db: "admin" } ]
})

// Create a user with read/write privileges
db.createUser({
user: "readWriteUser",
pwd: "rwPassword",
roles: [ { role: "readWrite", db: "yourDatabase" } ]
})

# Create a database
mongo [direct: primary] admin> use iot
switched to db iot

mongo [direct: primary] iot> show tables

mongo [direct: primary] iot> db.createCollection("sensor_data")
{ ok: 1 }
mongo [direct: primary] iot> show tables
sensor_data
mongo [direct: primary] iot> show dbs
admin 188.00 KiB
config 264.00 KiB
iot 8.00 KiB
local 876.00 KiB
runoob 40.00 KiB
mongo [direct: primary] iot>

db.runoob.insert({title: 'MongoDB 教程',
description: 'MongoDB 是一个 Nosql 数据库',
by: '菜鸟教程',
url: 'http://www.runoob.com',
tags: ['mongodb', 'database', 'NoSQL'],
likes: 100
})
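Reading the document back verifies the write (same collection as above):
db.runoob.find() //should print the document inserted above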

Deploying MySQL

docker pull mysql

Create the directories

#Run on the iot-34 machine
docker pull mysql
rm -rf /data/mysql/* && mkdir -p /data/mysql/{data,config} && chmod 777 -R /data/

Create the network

docker network create --driver overlay mysql_net

mysql_stack.yml

node.hostname==iot-34 must match a hostname from your own docker node ls output

version: "3.8"
services:
  mysql:
    image: mysql
    hostname: mysql
    restart: always
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: xxxxxx #set the root password
    networks:
      - mysql_net
    volumes:
      - /data/mysql/data:/var/lib/mysql
      - /data/mysql/config:/etc/mysql/mysql.conf.d
    ports:
      - "3306:3306"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == iot-34
networks:
  mysql_net:
    driver: overlay
    external: true

Start

docker stack deploy -c mysql_stack.yml mysql
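Once the service is running, a quick connectivity check from any machine that can reach iot-34 (requires a mysql client):
mysql -h iot-34 -P 3306 -u root -p #enter the root password set in mysql_stack.yml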

Deploying EMQX (MQTT broker)

docker pull emqx

Create the directories

#Run on the iot-33 machine
docker pull emqx
rm -rf /data/emqx/* && mkdir -p /data/emqx/{data,log} && chmod 777 -R /data/

Create the network

docker network create --driver overlay emqx_net

emqx_stack.yml

version: "3.8"
services:
  emqx:
    image: emqx
    hostname: emqx
    restart: always
    networks:
      - emqx_net
    volumes:
      - /data/emqx/data:/opt/emqx/data
      - /data/emqx/log:/opt/emqx/log
    ports:
      - "18083:18083"
      - "1883:1883"
      - "4370:4370"
      - "5369:5369"
      - "8083:8083"
      - "8084:8084"
      - "8883:8883"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == iot-33
networks:
  emqx_net:
    driver: overlay
    external: true

Start

docker stack deploy -c emqx_stack.yml emqx

Connection test

Test with the MQTTX client tool.

Dashboard address:
http://iot-33:18083
Default username and password:
admin
public
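With the MQTTX CLI the check looks roughly like this (topic name is illustrative; flags as in recent MQTTX releases):
mqttx sub -t 'testtopic/1' -h iot-33 -p 1883 #subscribe in one terminal
mqttx pub -t 'testtopic/1' -m 'hello' -h iot-33 -p 1883 #publish in another; the subscriber should print hello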

Deploying RabbitMQ

docker pull rabbitmq

Create the directories

#Run on the iot-32 machine (the service is constrained to iot-32 below)
docker pull rabbitmq
rm -rf /data/rabbitmq/* && mkdir -p /data/rabbitmq/{data,conf} && chmod 777 -R /data/

Create the network

docker network create --driver overlay rabbitmq_net

rabbitmq_stack.yml

version: "3.8"
services:
  rabbitmq:
    image: rabbitmq
    hostname: rabbitmq
    restart: always
    networks:
      - rabbitmq_net
    volumes:
      - /data/rabbitmq/data:/var/lib/rabbitmq
      - /data/rabbitmq/conf:/etc/rabbitmq/conf.d
    ports:
      - "15691:15691"
      - "15692:15692"
      - "25672:25672"
      - "5672:5672"
      - "15672:15672" #management UI port
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == iot-32
networks:
  rabbitmq_net:
    driver: overlay
    external: true

Start

docker stack deploy -c rabbitmq_stack.yml rabbitmq

Enable the management UI

# Enter the container
docker exec -it a7f72708b24b /bin/bash
# Enable the management plugin
rabbitmq-plugins enable rabbitmq_management
# Silence the feature-flag warnings
rabbitmqctl enable_feature_flag all

# Edit the config file so the overview page shows totals
## The file name differs between versions, so a * glob is used
root@idooy-rabbit:/# cat /etc/rabbitmq/conf.d/*management_agent.disable_metrics_collector.conf
management_agent.disable_metrics_collector = true

root@idooy-rabbit:/# cd /etc/rabbitmq/conf.d/
root@idooy-rabbit:/etc/rabbitmq/conf.d# echo management_agent.disable_metrics_collector = false > *management_agent.disable_metrics_collector.conf

root@idooy-rabbit:/etc/rabbitmq/conf.d# exit

docker restart CONTAINER_ID

# Management UI address
http://iot-32:15672
Username and password:
guest
guest
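Note that the default guest account is only allowed to connect from localhost; for remote logins it is usually easiest to create a new user inside the container, roughly (name and password are placeholders):
rabbitmqctl add_user admin yourpassword
rabbitmqctl set_user_tags admin administrator
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"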