GitOps Delivery Based on KubeVela

Author: Dong Tianxin (Fog)
Review & proofreading: streams, beads
Edit & Typography: Wen Yan

KubeVela is a simple, easy-to-use, and highly extensible cloud-native application management and delivery platform. It lets developers quickly define and deliver modern microservice applications on Kubernetes without needing to know any details of the underlying Kubernetes infrastructure.

The OAM model behind KubeVela naturally solves management issues such as composing and organizing the complex resources involved in building an application, as well as modeling day-2 operation and maintenance strategies. This means that KubeVela, combined with GitOps, can manage complex large-scale applications and rein in the system complexity that comes with growing teams and systems.

What is GitOps

The core idea of GitOps is to store declarative descriptions of the infrastructure and application configuration required by a system in a Git repository, together with an automated process that updates the environment to match the latest configuration whenever the repository changes.

This allows developers to deploy applications automatically by simply changing the code and configuration in the Git repository. GitOps brings many benefits to application development, such as:

* Increased productivity. Automated continuous deployment shortens the average deployment time and improves development efficiency.
* A lower deployment barrier for developers. By pushing code instead of container configuration, developers can deploy easily without knowing the internals of Kubernetes.
* Traceable change records. Managing the cluster through Git makes every change trackable and strengthens the audit trail.
* Recoverable clusters. A cluster can be restored through Git's rollback and branching capabilities.

KubeVela and GitOps

As a declarative application delivery control plane, KubeVela naturally supports GitOps as a usage mode and makes the benefits of GitOps more tangible to users, providing an end-to-end application delivery and management experience that includes:

* Application delivery workflows (CD pipelines): KubeVela supports describing procedural application delivery in GitOps mode, rather than only declaring a final state;
* Handling of dependencies and topologies during deployment;
* A unified, higher-level abstraction over the semantics of existing GitOps tools, simplifying application delivery and management;
* Unified declaration, deployment, and service binding of cloud services;
* Out-of-the-box delivery strategies (canary, blue-green release, etc.);
* Out-of-the-box hybrid-cloud/multi-cloud deployment strategies (placement rules, cluster filtering rules, etc.);
* Kustomize-style patches for describing deployment differences in multi-environment delivery, without having to learn any details of Kustomize itself;
* ......

In this article, we will focus on the steps for delivering directly using KubeVela in GitOps mode.

GitOps workflow

The GitOps workflow is divided into CI and CD parts:

* CI (Continuous Integration): continuous integration builds the business application's code, builds container images, and pushes them to an image registry. There are many mature CI tools, such as GitHub Actions and Travis, which are common in open-source projects, as well as Jenkins and Tekton, which are common in enterprises. In this article we use GitHub Actions to complete the CI step; of course, you can use another CI tool instead, since KubeVela can integrate the CI process with GitOps under any tool.
* CD (Continuous Delivery): continuous deployment automatically updates the configuration in the cluster, for example rolling the latest images from the image registry out to the cluster. There are currently two main approaches to CD:
1) Push-based: push-mode CD is mainly implemented by configuring the CI pipeline. It requires sharing the cluster's access credentials with CI so that the pipeline can push changes to the cluster by command. See our earlier blog post on continuous delivery with Jenkins + KubeVela (link at the end of the article).
2) Pull-based: pull-mode CD monitors changes in the repository (code or configuration) from inside the cluster and synchronizes those changes into the cluster. Compared with push mode, the cluster actively pulls updates, which avoids exposing cluster credentials outside the cluster. This is the focus of this article.

Delivery targets fall into two roles, which we describe separately:

  1. Infrastructure delivery for platform administrators/operators: users update infrastructure configuration in the cluster, such as system-level software, security policies, storage, and networking, by directly updating the configuration files in the repository.
  2. Delivery for end developers: once a user's code is merged into the application code repository, it automatically triggers an update in the cluster, making application iteration more efficient. Combined with KubeVela features such as canary release, traffic shifting, and multi-cluster deployment, this forms an even more powerful automated release capability.

Delivery for platform administrators/operators

As shown in the diagram, platform administrators/operators do not need to care about application code. They only need to prepare a Git config repository and deploy a KubeVela configuration file once. All subsequent configuration changes for applications and infrastructure can be made by updating the Git config repository directly, making every change traceable.

Prepare the config repository

The complete configuration can be found in example repository 1 (see the links at the end of this article).

In this example, we will deploy a MySQL database as the project's infrastructure, plus a business application that uses the database. The directory structure of the config repository is as follows:

* clusters/ contains the KubeVela GitOps configuration for the cluster. Users need to deploy the files under clusters/ to the cluster manually. This is a one-time bootstrap operation; once completed, KubeVela automatically watches the config repository for file changes and updates the configuration in the cluster. clusters/apps.yaml watches for changes to all applications under apps/, and clusters/infra.yaml watches for changes to all infrastructure under infrastructure/.
* apps/ contains the configuration of all business applications, in this case a business application that uses the database.
* infrastructure/ contains infrastructure-related configuration and policies, in this case the MySQL database.

├── apps
│   └── my-app.yaml
├── clusters
│   ├── apps.yaml
│   └── infra.yaml
└── infrastructure
    └── mysql.yaml

KubeVela recommends using the directory structure above to manage your GitOps repository: clusters/ holds the KubeVela GitOps bootstrap configuration that must be deployed to the cluster manually, while apps/ and infrastructure/ hold your application and infrastructure configuration respectively. By separating applications from the underlying infrastructure configuration, you can manage your deployment environment more sensibly and isolate the impact of application changes.

clusters/directory

First, let's look at the clusters/ directory, which holds the bootstrap configuration for integrating KubeVela with GitOps.

Take clusters/infra.yaml for example:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: infra
spec:
  components:
  - name: database-config
    type: kustomize
    properties:
      repoType: git
      # Replace this with the git configuration repository address you need to listen to
      url: https://github.com/FogDong/KubeVela-GitOps-Infra-Demo
      # If this is a private repository, you also need to associate a git secret
      # secretRef: git-secret
      # Interval for automatically pulling the configuration; set to 10 minutes here since infrastructure changes infrequently
      pullInterval: 10m
      git:
        # Branch listening for changes
        branch: main
      # Path to monitor changes, pointing to files in the infrastructure directory in the repository
      path: ./infrastructure

apps.yaml is almost identical to infra.yaml, except for the directory being watched: in apps.yaml, the value of properties.path is ./apps, meaning that file changes under the apps/ directory are watched instead.
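As a sketch, assuming the same repository URL as in infra.yaml above, clusters/apps.yaml could look like this (the exact file in example repository 1 may differ slightly, e.g. in the pull interval):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apps
  namespace: default
spec:
  components:
  - name: apps
    type: kustomize
    properties:
      repoType: git
      url: https://github.com/FogDong/KubeVela-GitOps-Infra-Demo
      pullInterval: 10m
      git:
        branch: main
      # Watch the apps/ directory instead of infrastructure/
      path: ./apps
```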

The GitOps bootstrap files in the clusters/ folder need to be deployed to the cluster manually once during initialization. After that, KubeVela automatically watches the configuration files in the apps/ and infrastructure/ directories and syncs updates periodically.

apps/directory

The apps/ directory holds the application configuration file. This is a simple application with database information and an Ingress configured. The application connects to a MySQL database and starts a service: the default path shows the current version number, and the /db path lists the contents of the current database.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app
  namespace: default
spec:
  components:
    - name: my-server
      type: webservice
      properties:
        image: ghcr.io/fogdong/test-fog:master-cba5605f-1632714412
        port: 8088
        env:
          - name: DB_HOST
            value: mysql-cluster-mysql.default.svc.cluster.local:3306
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mysql-secret
                key: ROOT_PASSWORD
      traits:
        - type: ingress
          properties:
            domain: testsvc.example.com
            http:
              /: 8088

This application uses KubeVela's built-in component type webservice and attaches an ingress trait. By declaring operational capabilities in the application, you can assemble the underlying Deployment, Service, and Ingress in a single file and manage the application more easily.

infrastructure/directory

The infrastructure/ directory holds infrastructure-related configuration. Here we use the mysql controller (see the links at the end of this article) to deploy a MySQL cluster.

Note: make sure your cluster has a Secret named mysql-secret that declares the MySQL password under the ROOT_PASSWORD key.
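For example, such a Secret could be created with a manifest like the following (the password value is a placeholder you must replace):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: default
type: Opaque
stringData:
  # Consumed by the MysqlCluster (via secretName) and by the app's DB_PASSWORD env var
  ROOT_PASSWORD: <your mysql root password>
```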

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: mysql
  namespace: default
spec:
  components:
    - name: mysql-controller
      type: helm
      properties:
        repoType: helm
        url: https://presslabs.github.io/charts
        chart: mysql-operator
        version: "0.4.0"
    - name: mysql-cluster
      type: raw
      dependsOn:
        - mysql-controller
      properties:
        apiVersion: mysql.presslabs.org/v1alpha1
        kind: MysqlCluster
        metadata:
          name: mysql-cluster
        spec:
          replicas: 1
          # Associated secret name
          secretName: mysql-secret

This MySQL application uses KubeVela's workflow capability. The workflow is divided into two steps: the first deploys the MySQL controller; once the controller is deployed and running correctly, the second step deploys the MySQL cluster. The ordering is declared through the dependsOn field on the mysql-cluster component.

Deploy the files under the clusters/ directory

After writing the files above and storing them in the Git config repository, we need to manually deploy the KubeVela GitOps configuration files under clusters/ to the cluster.

First, deploy clusters/infra.yaml to the cluster. You can see that it automatically brings up the MySQL deployment from the infrastructure/ directory:

$ kubectl apply -f clusters/infra.yaml
$ vela ls
APP     COMPONENT           TYPE        TRAITS  PHASE   HEALTHY STATUS  CREATED-TIME
infra   database-config     kustomize           running healthy         2021-09-26 20:48:09 +0800 CST
mysql   mysql-controller    helm                running healthy         2021-09-26 20:48:11 +0800 CST
└─      mysql-cluster       raw                 running healthy         2021-09-26 20:48:11 +0800 CST

Next, deploy clusters/apps.yaml to the cluster. You can see that it automatically brings up the application deployment from the apps/ directory:

$ kubectl apply -f clusters/apps.yaml
$ vela ls
APP     COMPONENT           TYPE        TRAITS  PHASE   HEALTHY STATUS  CREATED-TIME
apps    apps                kustomize           running healthy         2021-09-27 16:55:53 +0800 CST
infra   database-config     kustomize           running healthy         2021-09-26 20:48:09 +0800 CST
my-app  my-server           webservice  ingress running healthy         2021-09-27 16:55:55 +0800 CST
mysql   mysql-controller    helm                running healthy         2021-09-26 20:48:11 +0800 CST
└─      mysql-cluster       raw                 running healthy         2021-09-26 20:48:11 +0800 CST

So far, by deploying the KubeVela GitOps configuration files, we have automatically brought up both the application and the database in the cluster.

curl the application's Ingress, and you can see that the current version is 0.1.5 and the service has successfully connected to the database:

$ kubectl get ingress
NAME        CLASS    HOSTS                 ADDRESS         PORTS   AGE
my-server   <none>   testsvc.example.com   <ingress-ip>    80      162m
$ curl -H "Host:testsvc.example.com" http://<ingress-ip>
Version: 0.1.5
$ curl -H "Host:testsvc.example.com" http://<ingress-ip>/db
User: KubeVela
Description: It's a test user

Modify Configuration

After the first deployment, we can update the application's configuration in the cluster by modifying the configuration in the config repository.

Modify the domain in the application's ingress trait:

...
      traits:
        - type: ingress
          properties:
            domain: kubevela.example.com
            http:
              /: 8089

After a while, check the Ingress in the cluster again:

NAME        CLASS    HOSTS                 ADDRESS         PORTS   AGE
my-server   <none>   kubevela.example.com  <ingress-ip>    80      162m

You can see that Ingress's Host address has been successfully updated.

In this way, we can easily update the configuration in the cluster by updating the files in the Git configuration repository.

Delivery for end developers

For end developers, an application code repository is needed in addition to the KubeVela config repository. As shown in the diagram, after a user updates the code in the application code repository, a CI pipeline must be configured to automatically build an image and push it to the image registry. KubeVela watches the registry for the latest image, automatically updates the image configuration in the config repository, and finally updates the application configuration in the cluster. This gives users the effect of having the cluster updated automatically after they update their code.

Prepare the code repository

Prepare a code repository containing some source code and a corresponding Dockerfile.

The code connects to a MySQL database and starts a simple service: the default path shows the current version number, and the /db path lists the contents of the current database.

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        _, _ = fmt.Fprintf(w, "Version: %s\n", VERSION)
    })
    http.HandleFunc("/db", func(w http.ResponseWriter, r *http.Request) {
        rows, err := db.Query("select * from userinfo;")
        if err != nil {
            _, _ = fmt.Fprintf(w, "Error: %v\n", err)
            return
        }
        defer rows.Close()
        for rows.Next() {
            var username string
            var desc string
            err = rows.Scan(&username, &desc)
            if err != nil {
                _, _ = fmt.Fprintf(w, "Scan Error: %v\n", err)
            }
            _, _ = fmt.Fprintf(w, "User: %s \nDescription: %s\n\n", username, desc)
        }
    })
    if err := http.ListenAndServe(":8088", nil); err != nil {
        panic(err.Error())
    }
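The accompanying Dockerfile might look roughly like the following sketch (the base images and build flags here are assumptions; the actual Dockerfile in example repository 2 may differ):

```dockerfile
# Multi-stage build: compile the Go service, then ship a minimal runtime image.
FROM golang:1.17 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/server .

FROM alpine:3.14
COPY --from=builder /bin/server /bin/server
# The service listens on 8088, matching the webservice component's port
EXPOSE 8088
CMD ["/bin/server"]
```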

We want the latest image to be built automatically and pushed to the image registry whenever a user commits a change. This CI step can be implemented with GitHub Actions, Jenkins, or other CI tools. In this example, we use GitHub Actions for continuous integration. The specific code files and configuration can be found in example repository 2 (see the links at the end of this article).
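Assuming images are tagged master-&lt;short sha&gt;-&lt;unix timestamp&gt; (matching the tag in the application manifest and the filterTags pattern shown later), a minimal GitHub Actions workflow could be sketched like this; the action versions and registry are assumptions, not the exact workflow from example repository 2:

```yaml
name: Build and Push Image
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Compute image tag
        id: tag
        # Tag format: master-<8-char sha>-<unix timestamp>
        run: echo "::set-output name=tag::master-${GITHUB_SHA::8}-$(date +%s)"
      - uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: <your image>:${{ steps.tag.outputs.tag }}
```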

Configure Git credentials

After a new image is pushed to the image registry, KubeVela detects it and updates the Application configuration file in the repository and in the cluster. Therefore, we need a Secret with Git credentials so that KubeVela can commit to the Git repository. Deploy the following file, replacing the username and password with your Git username and password (or token):

apiVersion: v1
kind: Secret
metadata:
  name: git-secret
type: kubernetes.io/basic-auth
stringData:
  username: <your username>
  password: <your password>

Prepare the config repository

The config repository is largely the same as the one used for the operators' scenario above; we simply add image-registry-related configuration. See example repository 1 for the configuration (links at the end of the article).

Modify clusters/apps.yaml so that it watches both application file changes under apps/ in the repository and image updates in the image registry:

...
  imageRepository:
    # Image address
    image: <your image>
    # If this is a private image registry, you can create and associate a registry secret via `kubectl create secret docker-registry`
    # secretRef: imagesecret
    filterTags:
      # Filter image tags
      pattern: '^master-[a-f0-9]+-(?P<ts>[0-9]+)'
      extract: '$ts'
    # Use the policy to pick out the latest image tag for the update
    policy:
      numerical:
        order: asc
    # Extra commit message
    commitMessage: "Image: {{range .Updated.Images}}{{println .}}{{end}}"

Modify the image field in apps/my-app.yaml and append the comment # {"$imagepolicy": "default:apps"}. KubeVela uses this comment to locate and update the image field; default:apps is the namespace and name of the GitOps configuration above.

spec:
  components:
    - name: my-server
      type: webservice
      properties:
        image: ghcr.io/fogdong/test-fog:master-cba5605f-1632714412 # {"$imagepolicy": "default:apps"}

After applying the updated clusters/ files containing the image registry configuration to the cluster, we can update the application just by modifying the code.

Modify Code

Change VERSION in the code file to 0.1.6 and modify the data written to the database:

const VERSION = "0.1.6"
...
func InsertInitData(db *sql.DB) {
    stmt, err := db.Prepare(insertInitData)
    if err != nil {
        panic(err)
    }
    defer stmt.Close()
    _, err = stmt.Exec("KubeVela2", "It's another test user")
    if err != nil {
        panic(err)
    }
}

Commit the change to the code repository, and you can see the CI pipeline we configured start to build an image and push it to the image registry.

By watching the image registry, KubeVela then updates my-app under apps/ in the config repository with the latest image tag.

At this point, you can see a commit from kubevelabot in the config repository; the commit message is prefixed with Update image automatically. You can also append any information you need to the commit message through the commitMessage field, using templates such as {{range .Updated.Images}}{{println .}}{{end}}.

Note that if you want to keep the code and configuration in the same repository, you need to filter out commits from kubevelabot to prevent the pipeline from building repeatedly. You can filter them in CI with the following configuration:

jobs:
  publish:
    if: "!contains(github.event.head_commit.message, 'Update image automatically')"

Check the applications in the cluster again, and you can see that after a while the image of the my-app application has been updated.

KubeVela polls the config repository and the image registry at the interval you configure:

* When a configuration file in the Git repository is updated, KubeVela updates the application in the cluster based on the latest configuration.
* When a new tag is pushed to the image registry, KubeVela filters out the latest image tag according to the policy you configured and commits it to the Git repository. When that file in the config repository changes, KubeVela repeats the first step and updates the application in the cluster, completing the automated deployment loop.

curl the corresponding Ingress to view the current version and database information:

$ kubectl get ingress
NAME        CLASS    HOSTS                 ADDRESS         PORTS   AGE
my-server   <none>   kubevela.example.com  <ingress-ip>    80      162m
$ curl -H "Host:kubevela.example.com" http://<ingress-ip>
Version: 0.1.6
$ curl -H "Host:kubevela.example.com" http://<ingress-ip>/db
User: KubeVela
Description: It's a test user
User: KubeVela2
Description: It's another test user

Version has been successfully updated! So far, we've done everything from changing the code to automatically deploying to the cluster.

Summary

On the operations side, whenever the configuration of a piece of infrastructure (such as a database) or an application's configuration items needs updating, you only need to modify the files in the config repository; KubeVela automatically syncs the configuration into the cluster, simplifying the deployment process.

On the development side, after a user modifies the code in the code repository, KubeVela automatically updates the image in the config repository, thereby updating the application's version.

By combining with GitOps, KubeVela accelerates the entire application development-to-deployment process.

Check out the KubeVela project's official homepage and documentation to learn more!

Related Links
1) Continuous delivery of applications using Jenkins + KubeVela:
https://kubevela.io/zh/blog/2...
2) Example repository 1:
https://github.com/oam-dev/sa...
3) mysql controller:
https://github.com/bitpoke/my...
4) Example repository 2:
https://github.com/oam-dev/sa...


Tags: Kubernetes Cloud Native Alibaba Cloud

Posted by dombrorj on Sun, 07 Nov 2021 04:24:20 +0530