Author: 李镇伟

How to integrate Argo CD with LDAP or Gitea authentication

Background

By default, Argo CD accounts are added by editing the argocd-cm ConfigMap, and after adding an account you still have to set its password with the argocd CLI (as sketched below). That is tedious, so to make life easier we can plug in LDAP authentication or Gitea's OAuth2 authentication instead.
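
For comparison, the manual flow being replaced looks roughly like this (a sketch, with alice as a placeholder account name):

# declare a local account in argocd-cm
kubectl -n argocd patch configmap argocd-cm --patch '{"data":{"accounts.alice":"login"}}'
# then set its password with the argocd CLI
argocd account update-password --account alice --new-password '<new-password>'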

This article focuses on LDAP, because Gitea does not expose group information to Dex, while LDAP can return group membership.

Keywords: argocd ldap dex

The picture tells the story

As the diagram above shows, the integration works mainly by configuring two ConfigMaps: argocd-cm and argocd-rbac-cm.

Below we walk through how to write those configuration files. Installing Gitea and LDAP is out of scope here; installing Argo CD itself is simply:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Configuring the LDAP integration

Create a file called ldap-patch-dex.yaml.

Note: there is a nasty gotcha here: the DN in userAttr must be uppercase ("DN", not "dn"), which the official docs never mention.

apiVersion: v1
data:
  dex.config: |
    connectors:
    - type: ldap
      name: "Unified Account Center"
      id: ldap
      config:
        # LDAP server address
        host: ${LDAP_HOST}:${LDAP_PORT}
        insecureNoSSL: true
        insecureSkipVerify: true
        # Name of the argocd-secret key that stores the LDAP bindDN
        bindDN: "$dex.ldap.bindDN"
        # Name of the argocd-secret key that stores the LDAP bind password
        bindPW: "$dex.ldap.bindPW"
        usernamePrompt: "Username"
        # LDAP user search attributes
        userSearch:
          baseDN: "ou=XXXX,dc=XXX,dc=com"
          filter: "(objectClass=person)"
          username: uid
          idAttr: uid
          emailAttr: mail
          nameAttr: cn
        # LDAP group search attributes
        groupSearch:
          baseDN: "dc=XXX,dc=com"
          filter: "(objectClass=groupOfUniqueNames)"
          userAttr: DN
          groupAttr: uniqueMember
          nameAttr: cn

Apply the patch:

kubectl -n argocd patch configmaps argocd-cm --patch "$(cat ldap-patch-dex.yaml)"

For bindDN and bindPW above, we store a read-only account in the argocd-secret Secret, like this:

kubectl -n argocd patch secrets argocd-secret --patch "{\"data\":{\"dex.ldap.bindPW\":\"$(echo -n my-password | base64 -w 0)\"}}"

kubectl -n argocd patch secrets argocd-secret --patch "{\"data\":{\"dex.ldap.bindDN\":\"$(echo -n 'CN=ldapuser,OU=Service Accounts,OU=Resource,DC=mydomain,DC=local' | base64 -w 0)\"}}"
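
To sanity-check what landed in the Secret, you can decode it back out with kubectl's go-template base64decode:

kubectl -n argocd get secret argocd-secret -o go-template='{{index .data "dex.ldap.bindDN" | base64decode}}'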

Setting group permissions (only LDAP returns groups; the Gitea integration cannot)

Edit the argocd-rbac-cm ConfigMap. The example below makes the "administrators" group an admin:

kubectl edit configmaps -n argocd argocd-rbac-cm

apiVersion: v1
data:
  policy.csv: |
    g, administrators, role:admin
  policy.default: role:readonly
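
policy.csv also supports finer-grained p rules. A sketch (developers is a hypothetical LDAP group that may view and sync applications but nothing else):

apiVersion: v1
data:
  policy.csv: |
    p, role:dev, applications, get, */*, allow
    p, role:dev, applications, sync, */*, allow
    g, developers, role:dev
    g, administrators, role:admin
  policy.default: role:readonly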

After editing, restart the argocd-server and dex pods (your pod name suffixes will differ):

kubectl delete pod -n argocd argocd-dex-server-7857b96dbb-s596m
kubectl delete pod -n argocd argocd-server-559f498454-fl5d2

Demo



Not recommended: Gitea OAuth2 integration

I do not recommend the Gitea OAuth2 integration, because there is no way to map groups: every user who logs in this way gets whatever policy.default grants. Perhaps that will change someday, but group information could not be obtained when this article was written.

1. In Gitea, create an OAuth2 application with the redirect URI, and obtain the clientID and clientSecret.

Note: Argo CD's redirect URI always ends with the fixed suffix /api/dex/callback.

2. Create a gitea-patch-dex.yaml with the following content:

apiVersion: v1
data:
  accounts.drone: apiKey,login
  dex.config: |-
    connectors:
    - type: gitea
      name: Gitea
      id: gitea
      config:
        baseURL: https://<gitea-domain>
        redirectURI: https://<argocd-domain>/api/dex/callback
        clientID: <clientID from step 1>
        clientSecret: <clientSecret from step 1>

3. Apply the configuration and restart dex:

kubectl -n argocd patch configmaps argocd-cm --patch "$(cat gitea-patch-dex.yaml)"

kubectl delete pod -n argocd argocd-dex-server-7857b96dbb-s596m


Quickly deploying LDAP with docker-compose

Background

Developers use a lot of tooling: Git, SonarQube, MinIO, Rancher, and so on. It clearly does not scale for every tool to maintain its own accounts and permissions. As users, we want one account that logs into all the software, the way one WeChat account works for every Tencent game. As administrators, we want front-end, back-end, and QA permissions kept separate, changed in one place, and synchronized to every tool. So let's try deploying LDAP quickly for that.

Prerequisites

An Ubuntu system with docker and docker-compose installed.

Architecture diagram

docker-compose.yml

Create a docker-compose.yml file with the following content and start it with docker-compose up -d:

version: '3'
 
services:
    ldap-service:
        image: osixia/openldap:1.5.0
        container_name: ldap-service
        restart: always
        hostname: ldap.zhenwei.local
        environment:
            - LDAP_ORGANISATION=zhenwei.li.Co.,Ltd.
            - LDAP_DOMAIN=yourdomain.com
            - LDAP_ADMIN_PASSWORD=<admin-password>
            - LDAP_READONLY_USER=true
            - LDAP_READONLY_USER_USERNAME=lzwread
            - LDAP_READONLY_USER_PASSWORD=<readonly-password>
            - LDAP_CONFIG_PASSWORD=<readonly-password>
            - LDAP_TLS_VERIFY_CLIENT=never
        networks:
            server:
        ports:
          - "389:389"
          - "636:636"
        volumes:
            - /home/zhenwei/ldap/database:/var/lib/ldap
            - /home/zhenwei/ldap/config:/etc/ldap/slapd.d
    ldap-backup:
        image: osixia/openldap-backup:1.5.0
        container_name: ldap-backup
        restart: always
        environment:
            - LDAP_ORGANISATION=zhenwei.li.Co.,Ltd.
            - LDAP_BACKUP_CONFIG_CRON_EXP="0 2 * * *"
            - LDAP_DOMAIN=yourdomain.com
            - LDAP_ADMIN_PASSWORD=<admin-password>
            - LDAP_READONLY_USER=true
            - LDAP_READONLY_USER_USERNAME=lzwread
            - LDAP_READONLY_USER_PASSWORD=<readonly-password>
            - LDAP_CONFIG_PASSWORD=<readonly-password>
        volumes:
            - /home/zhenwei/ldap/database:/var/lib/ldap
            - /home/zhenwei/ldap/config:/etc/ldap/slapd.d
            - /home/zhenwei/ldap/backup:/data/backup
        networks:
            server:
    phpldap-service:
        image: osixia/phpldapadmin:0.9.0
        container_name: phpldap-service
        restart: always
        environment:
            - PHPLDAPADMIN_LDAP_HOSTS=10.80.3.249
            - PHPLDAPADMIN_HTTPS=false
        networks:
          server:
        ports:
          - "3081:80"
        volumes:
            - /home/zhenwei/ldap/phpadmin-data:/var/www/phpldapadmin
        depends_on:
            - ldap-service
 
    ldap-ltb:
        image: accenture/adop-ldap-ltb:0.1.0
        container_name: ldap-ltb
        restart: always
        networks:
          server:
        ports:
          - "8095:80"
        environment:
            - LDAP_LTB_URL=ldap://ldap-service:389
            - LDAP_LTB_BS=dc=zhenwei.li,dc=com
            - LDAP_LTB_PWD=<admin-password>
            - LDAP_LTB_DN=cn=admin,dc=zhenwei.li,dc=com
        depends_on:
            - ldap-service
        volumes:
            - /home/zhenwei/ldap/ltb-config:/usr/share/self-service-password/conf
networks:
  server:
#    external: true
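
To smoke-test the directory, you can bind as the read-only user. The DN below is an assumption based on the settings above (the osixia image creates the read-only user as cn=<username> under the base DN):

ldapsearch -x -H ldap://localhost:389 \
  -D "cn=lzwread,dc=yourdomain,dc=com" -w '<readonly-password>' \
  -b "dc=yourdomain,dc=com" "(objectClass=*)"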

An electron + Drone CI + MinIO pipeline

Background

Our electron app is finished, and we want every tag a developer pushes to automatically build a deb file and upload it to MinIO, so ops can grab the deb and deploy it on Ubuntu. Our existing stack already includes Drone CI, MinIO, and Python, hence this design. Installing and configuring vault, ldap, minio, and harbor is covered in other articles on this site, so it is omitted here.


Architecture diagram

How it works:

1. A front-end developer pushes electron code to the git server.

2. The git server notifies drone-server via a webhook that an event occurred. This article only tests webhooks fired by publishing a tag, but many other trigger types can be configured.

3. On receiving the notification, drone-server starts a job container with Node.js and Python in the k8s cluster where drone-runner lives.

4. The job container packages a deb file with electron-forge make.

5. The job container uploads the deb file to MinIO through the MinIO Python SDK.


Writing the Drone plugin

The first step toward the goal above is writing a Drone plugin.

The plugin image is based on a Node.js 16 Debian system, with the tools in the list below preinstalled. Note: I use the Huawei mirror, and as of December 9, 2021 the newest electron it carried was 16.0.2, so be sure to pin the version.

Tools:
rpm
python3-pip
python3
fakeroot
electron@v16.0.2 
electron-prebuilt-compile
electron-forge 
dpkg
MinIO Python SDK

The plugin consists of three files: main.py, Dockerfile, and requirements.txt, detailed below.

main.py

The script first reads its environment variables, then replaces the version field in package.json with the git tag. It runs yarn install and yarn make, finds the files to upload via the environment variables, and uploads them to MinIO through the Python SDK. Full code below.

#main.py
import json
import os
import subprocess

from minio import Minio
from minio.error import S3Error

endpoint = "minio.sfere.local"
access_key = "bababa"
secret_key = "bababa"
bucket = "electronjs"
folder_path = "/drone/src/out/make/deb/x64"
suffix = "deb"
tag = "0.0.0"


def find_file_by_suffix(target_dir, target_suffix="deb"):
    find_res = []
    target_suffix_dot = "." + target_suffix
    walk_generator = os.walk(target_dir)
    for root_path, dirs, files in walk_generator:
        if len(files) < 1:
            continue
        for file in files:
            file_name, suffix_name = os.path.splitext(file)
            if suffix_name == target_suffix_dot:
                find_res.append(os.path.join(root_path, file))
    return find_res


def get_environment():
    global endpoint, access_key, secret_key, bucket, suffix, tag

    if "PLUGIN_ENDPOINT" in os.environ:
        endpoint = os.environ["PLUGIN_ENDPOINT"]
    if "PLUGIN_ACCESS_KEY" in os.environ:
        access_key = os.environ["PLUGIN_ACCESS_KEY"]
    if "PLUGIN_SECRET_KEY" in os.environ:
        secret_key = os.environ["PLUGIN_SECRET_KEY"]
    if "PLUGIN_BUCKET" in os.environ:
        bucket = os.environ["PLUGIN_BUCKET"]
    if "PLUGIN_SUFFIX" in os.environ:
        suffix = os.environ["PLUGIN_SUFFIX"]
    if "PLUGIN_TAG" in os.environ:
        tag = os.environ["PLUGIN_TAG"]


def yarn_make():
    with open('./package.json', 'r', encoding='utf8') as fp:
        json_data = json.load(fp)
    json_data['version'] = tag
    with open('./package.json', 'w', encoding='utf8') as fp:
        json.dump(json_data, fp, ensure_ascii=False, indent=2)
    print('package version replaced with ' + tag)
    print(subprocess.run("yarn install", shell=True))
    print(subprocess.run("yarn make", shell=True))


def upload_file():
    file_list = find_file_by_suffix(folder_path, suffix)
    # Create the MinIO client; secure=False because our server speaks plain http
    client = Minio(
        endpoint=endpoint,
        access_key=access_key,
        secure=False,
        secret_key=secret_key,
    )

    # Create the bucket if it does not exist yet
    found = client.bucket_exists(bucket)
    if not found:
        client.make_bucket(bucket)
    else:
        print("Bucket '" + bucket + "' already exists")

    # Upload the files to the bucket
    for file in file_list:
        name = os.path.basename(file)
        client.fput_object(
            bucket, name, file,
        )
        print(
            "'" + file + "' is successfully uploaded as "
                         "object '" + name + "' to bucket '" + bucket + "'."
        )


if __name__ == "__main__":
    get_environment()
    yarn_make()
    try:
        upload_file()
    except S3Error as exc:
        print("error occurred.", exc)

Dockerfile

Start from a Node 16 Debian image, install the tools listed earlier from Chinese mirrors, and make our Python script the entrypoint. When finished, build the image with docker build -t drone-electron-minio-plugin:0.1.0 . and push it to the private registry.

FROM node:16-buster
RUN npm config set registry https://mirrors.huaweicloud.com/repository/npm/ \
    && npm config set disturl https://mirrors.huaweicloud.com/nodejs \
    && npm config set sass_binary_site https://mirrors.huaweicloud.com/node-sass \
    && npm config set phantomjs_cdnurl https://mirrors.huaweicloud.com/phantomjs \
    && npm config set chromedriver_cdnurl https://mirrors.huaweicloud.com/chromedriver \
    && npm config set operadriver_cdnurl https://mirrors.huaweicloud.com/operadriver \
    && npm config set electron_mirror https://mirrors.huaweicloud.com/electron/ \
    && npm config set python_mirror https://mirrors.huaweicloud.com/python \
    && npm config set canvas_binary_host_mirror https://npm.taobao.org/mirrors/node-canvas-prebuilt/ \
    && npm install -g npm@8.2.0 \
    && yarn config set registry https://mirrors.huaweicloud.com/repository/npm/ \
    && yarn config set disturl https://mirrors.huaweicloud.com/nodejs \
    && yarn config set sass_binary_site https://mirrors.huaweicloud.com/node-sass \
    && yarn config set phantomjs_cdnurl https://mirrors.huaweicloud.com/phantomjs \
    && yarn config set chromedriver_cdnurl https://mirrors.huaweicloud.com/chromedriver \
    && yarn config set operadriver_cdnurl https://mirrors.huaweicloud.com/operadriver \
    && yarn config set electron_mirror https://mirrors.huaweicloud.com/electron/ \
    && yarn config set python_mirror https://mirrors.huaweicloud.com/python \
    && yarn config set canvas_binary_host_mirror https://npm.taobao.org/mirrors/node-canvas-prebuilt/ \
    && yarn global add electron@v16.0.2 electron-forge electron-prebuilt-compile \
    && sed -i "s@http://ftp.debian.org@https://repo.huaweicloud.com@g" /etc/apt/sources.list \
    && sed -i "s@http://security.debian.org@https://repo.huaweicloud.com@g" /etc/apt/sources.list \
    && sed -i "s@http://deb.debian.org@https://repo.huaweicloud.com@g" /etc/apt/sources.list \
    && apt update \
    && apt install -y fakeroot dpkg rpm python3 python3-pip
ADD . .
RUN pip3 install -r ./requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
WORKDIR /drone/src
ENTRYPOINT ["python3", "/main.py"]

requirements.txt

minio==7.1.2

The electron repository changes

We add a .drone.yml file to the electron repository and make a few small changes to package.json; sketches of both follow.

package.json
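
The original screenshot of the package.json changes is not reproduced here. A minimal sketch of the fields that matter for this pipeline, assuming electron-forge's deb maker (names are placeholders): the plugin overwrites the version field with the git tag, and yarn make needs a make script plus the maker-deb config.

{
  "name": "my-electron-app",
  "version": "0.0.0",
  "scripts": {
    "make": "electron-forge make"
  },
  "config": {
    "forge": {
      "makers": [
        { "name": "@electron-forge/maker-deb", "config": {} }
      ]
    }
  }
}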

.drone.yml

The Drone CI pipeline file, which uses the plugin image we built in the previous section.
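
The original file appears only as a screenshot, so here is a hedged sketch. It assumes the image was pushed to a hypothetical private registry path, and relies on Drone's convention of passing settings to the container as PLUGIN_* environment variables, which is what main.py reads:

kind: pipeline
type: kubernetes
name: electron-deb

trigger:
  event:
    - tag

steps:
  - name: build-and-upload
    image: registry.example.local/drone-electron-minio-plugin:0.1.0
    settings:
      endpoint: minio.sfere.local
      access_key:
        from_secret: minio_access_key
      secret_key:
        from_secret: minio_secret_key
      bucket: electronjs
      suffix: deb
      tag: ${DRONE_TAG}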


Pipeline demo

Steps done manually

Steps the pipeline automates

MinIO distributed bare-metal installation (illustrated)

Background & architecture

Single-node MinIO cannot add nodes and cannot use the versioning feature, so we started using distributed MinIO. It can be deployed with docker, kubernetes, or on bare metal; here we install on bare metal, with the architecture shown below.

1. Preparation

Four Ubuntu 18 machines, each with identical OS, CPU, memory, and disk capacity. The disks for MinIO must be formatted as XFS and mounted at /mnt/disk. Four domain names are configured, in order:

minio1.sfere.local  minio2.sfere.local minio3.sfere.local minio4.sfere.local

Editor's note: I deviate slightly from the official docs here: each of my servers has a single disk mounted for MinIO, while the docs mount four disks per server.

One server with nginx installed, whose domain name is minio.sfere.local.

Editor's note: if you have no DNS, add all five names to the hosts files of these five machines, and to the hosts file of your test machine as well.
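
For example (the IP addresses below are placeholders for your own machines):

192.168.1.11 minio1.sfere.local
192.168.1.12 minio2.sfere.local
192.168.1.13 minio3.sfere.local
192.168.1.14 minio4.sfere.local
192.168.1.10 minio.sfere.local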


2. Install the MinIO program (same steps on all four machines)

1. Go to the official download directory https://dl.min.io/server/minio/release/linux-amd64/ and download the latest deb file.

For example, I downloaded https://dl.min.io/server/minio/release/linux-amd64/minio_20211124231933.0.0_amd64.deb

2. Copy the file to the four servers and install it with dpkg.

3. sudo vi /etc/systemd/system/minio.service and comment out ProtectProc=invisible. That directive requires kernel 5.8 or later, which our Ubuntu 18 systems do not have.
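
A non-interactive way to make the same edit (assuming the unit file path above):

sudo sed -i 's/^ProtectProc=invisible/#ProtectProc=invisible/' /etc/systemd/system/minio.service
sudo systemctl daemon-reload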

4. Add the minio-user user and group. Note: this differs slightly from the official docs, which at the time had a typo spelling minio-user as miniouser.

sudo groupadd -r minio-user
sudo useradd -M -r -g minio-user minio-user
sudo chown minio-user:minio-user /mnt/disk

5. Create the environment file:

sudo nano /etc/default/minio

# Set the hosts and volumes MinIO uses at startup
# The command uses MinIO expansion notation {x...y} to denote a
# sequential series.
#
# The following example covers four MinIO hosts
# with 4 drives each at the specified hostname and drive locations.
 
MINIO_VOLUMES="http://minio{1...4}.sfere.local/mnt/disk/minio"
 
# Set all MinIO server options
#
# The following explicitly sets the MinIO Console listen address to
# port 9001 on all network interfaces. The default behavior is dynamic
# port selection.
 
MINIO_OPTS="--console-address :9001"
 
# Set the root username. This user has unrestricted permissions to
# perform S3 and administrative API operations on any resource in the
# deployment.
#
# Defer to your organization's requirements for the superadmin user name.
 
MINIO_ROOT_USER=minioadmin
 
# Set the root password
#
# Use a long, random, unique string that meets your organization's
# requirements for passwords.
 
MINIO_ROOT_PASSWORD=sfere!lzw!2021
 
# Set to the URL of the load balancer for the MinIO deployment
# This value *must* match across all MinIO servers. If you do
# not have a load balancer, set this value to any *one* of the
# MinIO hosts in the deployment as a temporary measure.
# This is our nginx server address
MINIO_SERVER_URL="http://minio.sfere.local"
 
MINIO_IDENTITY_LDAP_TLS_SKIP_VERIFY=on
MINIO_IDENTITY_LDAP_SERVER_INSECURE=on
MINIO_IDENTITY_LDAP_STS_EXPIRY=24h
MINIO_IDENTITY_LDAP_SERVER_ADDR=${LDAP_SERVER_ADDR}
MINIO_IDENTITY_LDAP_LOOKUP_BIND_DN=${LDAP_READONLY_BIND_DN}
MINIO_IDENTITY_LDAP_LOOKUP_BIND_PASSWORD=${LDAP_READONLY_BIND_PASSWORD}
MINIO_IDENTITY_LDAP_USER_DN_SEARCH_BASE_DN=${LDAP_USER_SEARCH_BASE_DN}
MINIO_IDENTITY_LDAP_USER_DN_SEARCH_FILTER=(&(objectClass=inetOrgPerson)(uid=%s))
MINIO_IDENTITY_LDAP_GROUP_SEARCH_BASE_DN=${LDAP_GROUP_SEARCH_BASE_DN}
MINIO_IDENTITY_LDAP_GROUP_SEARCH_FILTER=(&(objectclass=groupOfUniqueNames))

6. Start the MinIO service and check that it is running:

sudo systemctl start minio.service
sudo systemctl status minio.service
journalctl -f -u minio.service

nginx configuration

Add a minio.conf under /etc/nginx/conf.d:

upstream minio {
    server minio1.sfere.local:9000;
    server minio2.sfere.local:9000;
    server minio3.sfere.local:9000;
    server minio4.sfere.local:9000;
}
 
upstream console {
    ip_hash;
    server minio1.sfere.local:9001;
    server minio2.sfere.local:9001;
    server minio3.sfere.local:9001;
    server minio4.sfere.local:9001;
}
 
server {
        listen       80;
        listen  [::]:80;
        server_name  minio.sfere.local;
 
        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
 
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
 
            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;
 
            proxy_pass http://minio;
        }
}
server {
        listen       9001;
        listen  [::]:9001;
        server_name  minio.sfere.local;
 
        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
 
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;
 
            # This is necessary to pass the correct IP to be hashed
            real_ip_header X-Real-IP;
 
            proxy_connect_timeout 300;
 
            # To support websocket
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
 
            chunked_transfer_encoding off;
 
            proxy_pass http://console;
        }
}

Using the mc client to grant LDAP users superadmin or regular rights

docker run --rm -it --entrypoint=/bin/sh minio/mc
 
mc config host add minio http://minio.sfere.local minioadmin 'sfere!lzw!2021' --api S3v4
  
mc admin policy list minio
  
mc admin policy set minio consoleAdmin user=cn=李镇伟,ou=test-department,ou=NJ-Dev,ou=SFERE-RD,dc=sfere-elec,dc=com
mc admin policy set minio readwrite group=cn=jira-software-users,dc=sfere-elec,dc=com
mc admin policy set minio consoleAdmin group=cn=超级用户,dc=sfere-elec,dc=com

Accessing the web UI

Visiting http://minio.sfere.local/ redirects automatically to http://minio.sfere.local:9001/login

Reference:

https://docs.min.io/minio/baremetal/installation/deploy-minio-distributed.html

Connecting Dockerized MinIO to LDAP

Background

The official LDAP integration docs are too scattered and genuinely hard for newcomers, so I reorganized them into one pass you can deploy in a single go.

Running a MinIO server with docker

1. First run a recent MinIO with docker, setting MinIO's root username and password (formerly called accessKey and secretKey) and the LDAP server information.

2. Replace the ${} placeholders in the command below with your own LDAP details.

docker run --rm -p 7000:9000 -p 7001:7001 --name minio1 \
  -e "MINIO_ROOT_USER=minio" \
  -e "MINIO_ROOT_PASSWORD=minio123" \
  -e "MINIO_IDENTITY_LDAP_TLS_SKIP_VERIFY=on" \
  -e "MINIO_IDENTITY_LDAP_SERVER_INSECURE=on" \
  -e "MINIO_IDENTITY_LDAP_STS_EXPIRY=24h" \
  -e "MINIO_IDENTITY_LDAP_SERVER_ADDR=${LDAP域名}" \
  -e "MINIO_IDENTITY_LDAP_LOOKUP_BIND_DN=${LDAP只读账户}" \
  -e "MINIO_IDENTITY_LDAP_LOOKUP_BIND_PASSWORD=${LDAP只读账户的密码}" \
  -e "MINIO_IDENTITY_LDAP_USER_DN_SEARCH_BASE_DN=${LDAP用户搜索域}" \
  -e "MINIO_IDENTITY_LDAP_USER_DN_SEARCH_FILTER=(&(objectClass=inetOrgPerson)(uid=%s))" \
  -e "MINIO_IDENTITY_LDAP_GROUP_SEARCH_BASE_DN=${LDAP组搜索域}" \
  -e "MINIO_IDENTITY_LDAP_GROUP_SEARCH_FILTER=(&(objectclass=groupOfUniqueNames)(uniquemember=%d))" \
  minio/minio:RELEASE.2021-11-24T23-19-33Z server /data --console-address ":7001"

Running the mc client with docker

1. Run the MinIO client and enter the container:

docker run -it --entrypoint=/bin/sh minio/mc

2. Configure the connection from the client to the server:

mc config host add minio http://${SERVER_IP}:7000 minio minio123 --api S3v4

3. List the policies available on the MinIO server:

mc admin policy list minio

4. Attach a policy to a user or a group:

mc admin policy set minio consoleAdmin user=cn=李镇伟,ou=XXX,ou=XXX,ou=XXX,dc=XXX
mc admin policy set minio consoleAdmin group=cn=南京测试部,dc=XXX

Open a browser and log in with an LDAP account

Since I granted myself the superadmin policy here, all features are visible after login.


Connecting to MinIO from Python with an LDAP account and downloading a file

import threading
import time

from progress.bar import Bar
from minio import Minio
from minio.credentials import LdapIdentityProvider

# The STS endpoint is normally just the MinIO server address
sts_endpoint = "minio.lzw.local"

# LDAP username.
ldap_username = "your-ldap-username"

# LDAP password.
ldap_password = "your-ldap-password"

provider = LdapIdentityProvider(sts_endpoint, ldap_username, ldap_password)

# secure=False because the server is plain http; credentials come from LDAP
client = Minio(sts_endpoint, secure=False, credentials=provider)

# Download a file as a test, with a progress bar (see the function below)
bucket_name = "your-bucket-name"
object_name = "your-object-name"

def get_object_with_progress(client, bucket_name, object_name):
    try:
        data = client.get_object(bucket_name, object_name)
        total_length = int(data.headers.get('content-length'))
        bar = Bar(object_name, max=total_length / 1024 / 1024, fill='*', check_tty=False,
                  suffix='%(percent).1f%% - %(eta_td)s')
        with open('./' + object_name, 'wb') as file_data:
            for d in data.stream(1024 * 1024):
                bar.next(1)
                file_data.write(d)
        bar.finish()
    except Exception as err:
        print(err)


# Call the download only after the function has been defined
get_object_with_progress(client, bucket_name, object_name)


# Optional: a thread that animates a progress bar while other code downloads;
# it expects a module-level boolean download_flag set to False when done.
class ProgressThread(threading.Thread):
    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run(self):
        print("starting to download: " + self.name)
        global download_flag
        max_number = 100
        bar = Bar(self.name, max=max_number, check_tty=False)
        for i in range(max_number):
            # Do some work
            if download_flag is False:
                bar.next(max_number - i)
                bar.finish()
                break
            else:
                time.sleep(2)
                bar.next()
        print("\ndownload finished: " + self.name)

A simple example of hooking fluentd up to Elasticsearch

Background

I recently wanted to learn how Elasticsearch and fluentd work together. fluentd uses far fewer resources than logstash, hence this article.

Quick Elasticsearch install (via ECK)

Reference:

https://www.elastic.co/guide/en/cloud-on-k8s/1.8/k8s-deploy-eck.html

First install the ECK operator:

kubectl create -f https://download.elastic.co/downloads/eck/1.8.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml

Once the commands finish, watch the operator logs with:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

Install Elasticsearch

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.15.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF

After installation, get the password for the default elastic user:

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

Once deployed, you can port-forward the Elasticsearch service for testing from outside the cluster:

kubectl port-forward service/quickstart-es-http 9200

Then install Kibana

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.15.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

You can port-forward the Kibana service for outside testing the same way:

kubectl port-forward service/quickstart-kb-http 5601

Install fluentd

Write a fluentd.yaml with the content below, replacing the password, then apply it with kubectl apply -f fluentd.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: kube-logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "quickstart-es-http.default.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "我是密码!注意替换"
          - name: FLUENT_ELASTICSEARCH_SSL_VERSION
            value: "TLSv1_2"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
          - name: FLUENT_UID
            value: "0"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Deploy a test log generator (you can delete it when done)

kubectl -n kube-logging apply -f - <<"EOF"
apiVersion: apps/v1
kind: Deployment
metadata:
 name: log-generator
spec:
 selector:
   matchLabels:
     app.kubernetes.io/name: log-generator
 replicas: 1
 template:
   metadata:
     labels:
       app.kubernetes.io/name: log-generator
   spec:
     containers:
     - name: nginx
       image: banzaicloud/log-generator:0.3.2
EOF

Adding an index pattern in Kibana and viewing logs

The screenshots tell the story here: in Kibana, add an index pattern matching the fluentd indices (with this daemonset's defaults they are typically named logstash-*), then browse the logs in Discover.

References:

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes
https://docs.fluentd.org/output/elasticsearch
https://github.com/fluent/fluentd-kubernetes-daemonset
https://medium.com/kubernetes-tutorials/cluster-level-logging-in-kubernetes-with-fluentd-e59aa2b6093a

Reaching the Raspberry Pi in your home through an intranet tunnel

Background

You have a server with a public IP, and you also bought a Raspberry Pi that is gathering dust at home. One day you decide the Pi should earn its keep: you want to build arm images on it from the office. Hence this article: how to reach your home Raspberry Pi through an intranet tunnel. (Note: this is for testing and fun only. Never put it into production without adding proper user permissions and auditing, which are not covered here.)

Tools used: docker, rtty, rttys

rtty is a handy intranet-tunneling remote-terminal tool; thanks to its open-source author. Links:

https://gitee.com/zhaojh329/rtty#/zhaojh329/rtty/blob/master/CROSS_COMPILE.md

On the public server

1. Download the rttys 4.0.0 build to the server: https://gitee.com/zhaojh329/rttys/attach_files/837471/download/rttys-linux-amd64-4.0.0.tar.gz

2. Unpack rttys and enter the directory.

3. Build a docker image from this Dockerfile:

FROM ubuntu:20.04
ADD . .
CMD ["./rttys"]

4. Build and run rttys with:

docker build -t rttys .
docker run --name rttys -d -p 5912:5912 -p 5913:5913 rttys

On the Raspberry Pi (client)

1. Download the rtty 8.0.0 source: https://gitee.com/zhaojh329/rtty/attach_files/837470/download/rtty-8.0.0.tar.gz

2. Build and install the client program:

mv rtty-8.0.0 rtty && cd rtty && mkdir build && cd build
cmake .. && sudo make install

3. Run the client, pointing it at the server IP, e.g. 192.168.0.243 below:

sudo rtty -I 'lizhenwei' -h 192.168.0.243 -p 5912 -a -v


Access your Raspberry Pi from a browser

Visit the server IP on port 5913, register a user, then click the console button to get an ssh session on the Pi.

TDengine install, Python client test, and DBeaver integration

Overview

I have recently been looking at the TDengine database and thinking about how to combine it with our edge clusters. This article covers:

Server: Ubuntu 18, TDengine installed from a deb file, host IP 192.168.0.13, default username and password.

Client: a Python client in a container, which can run on another machine or in a K8S cluster.

GUI: DBeaver with a JDBC driver added, so TDengine can be used from a graphical tool.


Installing the TDengine server

The docs currently recommend bare-metal Linux installation over docker, so that is what we do here.

1. Download the installer from https://www.taosdata.com/assets-download/TDengine-server-2.2.0.5-Linux-x64.deb

2. Copy the file to the 192.168.0.13 server and install it with sudo dpkg -i TDengine-server-2.2.0.5-Linux-x64.deb

3. Start the database with sudo systemctl start taosd

4. Following the commands at https://www.taosdata.com/cn/getting-started/#Quick%20Start you can create a test database and table.
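
For example, a quick session in the taos CLI (statements adapted from the quick-start guide, so treat them as a sketch):

taos> create database demo;
taos> use demo;
taos> create table t (ts timestamp, speed int);
taos> insert into t values (now, 10);
taos> select * from t;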


The Python client container

Here we package the Python client as a docker container. One odd quirk: whichever TDengine connection method you use, you must point it at a taos.cfg path and install the standalone client program.

Create a Dockerfile, a test file test.py, and a requirements.txt (requirements.txt is only needed for the faker library).

The Dockerfile:

FROM python:3.9.7
ADD . .
RUN wget https://www.taosdata.com/assets-download/TDengine-client-2.2.0.5-Linux-x64.tar.gz
RUN tar -xzf TDengine-client-2.2.0.5-Linux-x64.tar.gz
WORKDIR /TDengine-client-2.2.0.5
RUN ./install_client.sh
WORKDIR /
RUN git clone --depth 1 https://github.com/taosdata/TDengine.git
RUN pip install -r requirements.txt  -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN pip install ./TDengine/src/connector/python -i https://pypi.tuna.tsinghua.edu.cn/simple
CMD ["python","test.py"]

The main test file, test.py:

import time
 
import taos
from faker import Faker
 
fake = Faker()
print()
# On Linux the client config lives in /etc/taos (the original used a Windows path)
conn = taos.connect(host="192.168.0.13", user="root", password="taosdata", config="/etc/taos")
c1 = conn.cursor()
import datetime
 
# Create the database (only needed once)
# c1.execute('create database db')
c1.execute('use db')
print("start time")
print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
 
# # Create the table (only needed once)
# c1.execute('create table tb (ts timestamp, temperature int, humidity float)')
# # Insert a single row
start_time = datetime.datetime(2018, 11, 1)
# affected_rows = c1.execute("insert into tb_ts_kv_t values (\'%s\', 'DEVICE', '1eb3c1d1d5db250860b6553ce011990', 'DCT', '1623200', True, '123', '456', 3.0)" %start_time)
# Insert rows in a loop
time_interval = datetime.timedelta(seconds=1)
sqlcmd = ['insert into tb_ts_kv_t values']
for irow in range(1, 1000):
    sqlcmd = ['insert into tb_ts_kv_t values']
    start_time += time_interval
    sqlcmd.append("('" + str(start_time) + "', 'DEVICE', '"+fake.company()+"', '"+fake.name()+"', '" + str(
        irow) + "', True, '"+fake.word()+"', '456', " + str(irow * 1.2) + ")")
    exestring = ' '.join(sqlcmd)
    # sqlcmd.append('(\'%s\', %d, %f)' %(start_time, irow, irow*1.2))
    affected_rows = c1.execute(exestring)
print("结束时间")
print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
 
c1.execute('select * from tb_ts_kv_t')
for data in c1:
    print("ts=%s, key=%s, value=%f" % (data[0], data[4], data[-1]))

requirements.txt:

Faker==9.3.1

Connecting from DBeaver

1. Click Database -> Driver Manager, then click New.

2. Fill in the driver settings:

Driver Name:  TDengine
Driver Type:  Generic
Class Name:   com.taosdata.jdbc.TSDBDriver
Default Port: 6030

3. Click Add Artifact and enter:

Group ID:    com.taosdata.jdbc
Artifact ID: taos-jdbcdriver
Version:     RELEASE

4. Click the Find Class button next to Driver Class, select com.taosdata.jdbc.TSDBDriver, and click OK.

5. Create a new TDengine connection with:

JDBC URL: jdbc:TAOS://192.168.0.13/db
Username: root
Password: taosdata

Implementing helm template in Python

Background

As everyone knows, helm template <chart> -f values.yaml > <output file> renders Go templates, filling the {{ .Values.XXX }} parameters into the files. I needed to reproduce this behavior in Python; here is my final implementation.


sample.tmpl, the template to be filled:

{{ .Values.Count }} items are made of {{ .Values.Material }}
{{ .Values.Material }} items are made of {{ .Values.Material }}
{{ .Values.Material }} items are made of {{ .Values.Count }}
{{ .Values.mqtt.server }} dadasdjsaijaid

values.yml, the parameter file:

Count: 14
Material: Wool
mqtt:
  server: 172.15.62.2

The Python code:

import re

from ruamel import yaml


def traverse(dic, path=None):
    if not path:
        path = []
    if isinstance(dic, dict):
        for x in dic.keys():
            local_path = path[:]
            local_path.append(x)
            for b in traverse(dic[x], local_path):
                yield b
    else:
        yield path, dic


def template_render(source_file, values_file, dest_file):
    with open(source_file, 'r') as source:
        origin = source.read()

    with open(values_file, 'r', encoding='utf-8') as values:
        result = yaml.load_all(values.read(), Loader=yaml.Loader)
        yaml_dict = list(result)[0]
    for x in traverse(yaml_dict):
        # Build a pattern like {{ .Values.mqtt.server }}; escape the dotted
        # key path so dots in key names match literally
        match = r"\{\{ \.Values\." + re.escape('.'.join(x[0])) + r" \}\}"
        origin = re.sub(match, str(x[1]), origin)

    with open(dest_file, 'w+') as dest:
        dest.write(origin)


if __name__ == '__main__':
    template_render('sample.tmpl', 'values.yml', 'result.yaml')

result.yaml, the rendered output:

14 items are made of Wool
Wool items are made of Wool
Wool items are made of 14
172.15.62.2 dadasdjsaijaid

Installing Elasticsearch with ECK and testing APM integration

Goals

I used to install Elasticsearch with helm, but Elasticsearch now recommends installing on K8S with ECK, so let's give it a try.

We will install ECK, elasticsearch, kibana, and apm on an existing K8S cluster, turn off ssl, and expose the apps through a LoadBalancer.

Then we test hooking golang up to apm.

ECK setup

1. Install the operator:

kubectl create -f https://download.elastic.co/downloads/eck/1.7.1/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/1.7.1/operator.yaml

2. Install Elasticsearch:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.14.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF

3. Install Kibana:

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.14.1
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

4. Install APM:

cat <<EOF | kubectl apply -f -
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm-server-quickstart
  namespace: default
spec:
  version: 7.14.1
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

5. Expose Kibana externally and turn off ssl:

kubectl edit kibanas.kibana.k8s.elastic.co quickstart. Only the key spec section is shown here:

spec:
  count: 1
  elasticsearchRef:
    name: quickstart
  enterpriseSearchRef:
    name: ""
  http:
    service:
      metadata: {}
      spec:
        type: LoadBalancer
    tls:
      selfSignedCertificate:
        disabled: true

6. Expose the apm server externally:

kubectl edit apmserver.apm.k8s.elastic.co/apm-server-quickstart

The changes are the same as for Kibana above.


7. Get the Kibana login username and password

The default username is elastic.

Get the default password with:

kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode }}'

8. Get the apm-server secret-token:

kubectl get secret/apm-server-quickstart-apm-token -o go-template='{{index .data "secret-token" | base64decode}}'

Testing APM server connectivity from golang

1. Set the environment variables:

# Service name; defaults to the executable name if unset
export ELASTIC_APM_SERVICE_NAME=

# APM server address
export ELASTIC_APM_SERVER_URL=http://localhost:8200

# The token obtained in the previous step
export ELASTIC_APM_SECRET_TOKEN=

# Optional; a label identifying the environment
export ELASTIC_APM_ENVIRONMENT=

2. Write the golang test code, main.go:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
	"go.elastic.co/apm/module/apmgorilla"
)

func helloHandler(w http.ResponseWriter, req *http.Request) {
	fmt.Fprintf(w, "Hello, %s!\n", mux.Vars(req)["name"])
}
func main() {
	r := mux.NewRouter()
	r.HandleFunc("/hello/{name}", helloHandler)
	r.Use(apmgorilla.Middleware())
	log.Fatal(http.ListenAndServe(":8000", r))
}
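
To run the test (assuming Go modules; the module name apmtest is arbitrary):

go mod init apmtest
go mod tidy
go run main.go &
curl http://localhost:8000/hello/world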

3. Check the APM data in Kibana: you should see a service named main with some data, as in the screenshot, proving the APM connection works.

