This article describes, in a practical way, an overall design for implementing and integrating a set of related software to drive DevOps within a hybrid cloud environment. It covers the architecture along with technical details and steps.
Index
1. Architecture design
2. Use-case showcase
1) CI/CD pipeline with Jenkins, Maven, Container, GitHub and Jenkinsfile for a Java application, driven by “Pipeline as Code”
2) CI/CD pipeline with Jenkins driven by project relationships
3) Bitbucket commit triggers Terraform to build and destroy an AWS instance with Jenkins
4) CI/CD pipeline for a Python application driven by “Pipeline as Code”
5) CI/CD pipeline for a Node.js application driven by “Pipeline as Code”
6) Use JFrog Artifactory to manage artifacts from snapshot to release stage
7) CI/CD pipeline for a Java application driven by “Pipeline as Code” plus JFrog Artifactory, based on use case #1
3. Installation and configuration
1) JDK and JRE Version: 8u181-linux-x64
2) Jenkins Version: 2.138.1
3) Docker Version: 18.06.1-ce
4) Kubernetes Version: 1.11.3-0.x86_64
5) Terraform Version: v0.11.8
6) JFrog Artifactory Version: 6.4.1-60401900
4. Monitoring
5. Some concepts
Architecture design
The design below is deployed entirely in the AWS cloud and is suitable for websites.
The design below allows a VMware private cloud to share the common DevOps services running in the AWS public cloud.
1. Activate the com.snc.cloud.mgmt plugin, which is CMP v2 (the Cloud Management Product in the ServiceNow Jakarta release), to retrieve full cloud information so that self-service provisioning requests can select from listed values, e.g. VPCs and security groups.
2. Resource provisioning and configuration management for anything above the container level will be carried out by Terraform and Puppet.
Use-case showcase
1. CI/CD pipeline with Jenkins, Maven, Container, GitHub and Jenkinsfile for a Java application
A pipeline is defined by a Jenkinsfile, whether it is created through the classic UI, the Blue Ocean UI, or stored in an SCM repository. Before Pipeline became available in Jenkins 2.0, the “Conditional BuildStep” plugin was a good way to run build steps like a pipeline. I will use the simple-java-maven-app to try this out. The setup consists of two parts: creating a branch called ‘java’ in my GitHub repository and creating a ‘Pipeline’ type project in Jenkins. The Jenkinsfile is at https://github.com/samwjq/devops/blob/java/jenkins/Jenkinsfile-no-artifactory.
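For reference, the core of the pipeline’s Build stage can be reproduced locally with a plain docker command. This is a rough sketch based on the console log that follows; the mount paths will differ on your machine:
# Run the same containerized Maven build the pipeline runs, mounting
# the current project and the shared local Maven cache into the container:
docker run --rm \
  -v "$PWD":/app -w /app \
  -v "$HOME/.m2":/root/.m2 \
  maven:3-alpine mvn -B -DskipTests clean package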
We can see the pipeline-style output below. Jenkins has actually done a lot of work.
Checking out Revision 24edc86693d63d30abcb04006d09515684bdc901 (origin/java)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 24edc86693d63d30abcb04006d09515684bdc901
Commit message: "clean up java branch 2"
First time build. Skipping changelog.
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
[pipeline-maven] Running shell script
+ docker inspect -f . maven:3-alpine
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 997:994 -v /root/.m2:/root/.m2 -w /var/lib/jenkins/workspace/pipeline-maven -v /var/lib/jenkins/workspace/pipeline-maven:/var/lib/jenkins/workspace/pipeline-maven:rw,z -v /var/lib/jenkins/workspace/pipeline-maven@tmp:/var/lib/jenkins/workspace/pipeline-maven@tmp:rw, -e ******** maven:3-alpine cat
$ docker top c979b78d31a68236f9f766cda3f9c43ef635168f7adb558a6433a2e3cce8e6e9 -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] sh
[pipeline-maven] Running shell script
+ mvn -B -DskipTests clean package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building my-app 1.0-SNAPSHOT
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.728 s
[INFO] Finished at: 2018-10-02T06:28:50Z
[INFO] Final Memory: 10M/150M
[INFO] ------------------------------------------------------------------------
[Pipeline] junit
Recording test results
...
$ docker stop --time=1 5cf138edb86a3bc737bb0eec912597a08b57235d5b2a0c8f801f10d6cfa75730
$ docker rm -f 5cf138edb86a3bc737bb0eec912597a08b57235d5b2a0c8f801f10d6cfa75730
...
[Pipeline] End of Pipeline
Finished: SUCCESS
2. CI/CD pipeline with Jenkins driven by project relationships
Rather than using the Jenkinsfile method above, the pipeline can be driven by relationships between projects, where each job triggers the next one downstream.
The first job, build-docker-image:
Build:
Output:
Deploy to the testing environment in the Kubernetes cluster
Build:
kubectl delete deployment nginx
sleep 10
kubectl create deployment nginx --image=samwjq/nginx
Output:
[k8s-deploy-test-env] $ /bin/sh -xe /tmp/jenkins4923601192141532266.sh
+ kubectl delete deployment nginx
deployment.extensions "nginx" deleted
+ sleep 10
+ kubectl create deployment nginx --image=samwjq/nginx
deployment.apps/nginx created
Test the web content with curl. I will keep the Kubernetes NodePort service running so that curl can reach the nginx content.
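A minimal sketch of such a check, assuming the NodePort service is named nginx as in the commands used elsewhere in this article; the grep string assumes the default nginx welcome page:
# Look up the NodePort assigned to the nginx service, fetch the page
# and verify it contains the expected content:
NODE_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://localhost:${NODE_PORT}" | grep -q "Welcome to nginx" && echo "content OK"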
Output:
If the content is as expected, deploy the image to the staging environment in the Kubernetes cluster.
Build:
kubectl create deployment nginx --image=samwjq/nginx --namespace=staging
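The staging deployment can then be verified as below; this is a sketch using the namespace and deployment names from the command above:
# Wait for the staging rollout to finish and list the pods:
kubectl rollout status deployment/nginx --namespace=staging
kubectl get pods --namespace=staging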
3. Bitbucket commit triggers Terraform to build and destroy an AWS instance with Jenkins
Configure the Build Environment as Terraform in the Jenkins job. Below is the job's output:
Started by user admin
Building in workspace /var/lib/jenkins/workspace/firstone
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://bitbucket.local:7990/scm/~sam.wen/devops.git # timeout=10
Fetching upstream changes from http://bitbucket.local:7990/scm/~sam.wen/devops.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials
 > git fetch --tags --progress http://bitbucket.local:7990/scm/~sam.wen/devops.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 70ce9d0076ad1cb1c0d9ca6ed8c380c89916b4a0 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 70ce9d0076ad1cb1c0d9ca6ed8c380c89916b4a0
Commit message: "change ec2 size"
 > git rev-list --no-walk 70ce9d0076ad1cb1c0d9ca6ed8c380c89916b4a0 # timeout=10
[terraform] $ /usr/local/bin/terraform get
[terraform] $ /usr/local/bin/terraform apply -input=false -state=/var/lib/jenkins/workspace/firstone/terraform-plugin/terraform-plugin.tfstate
aws_instance.sam1: Refreshing state... (ID: i-018e41b3a5c3baebd)
aws_instance.sam1: Creating...
...
  vpc_security_group_ids.#: "" => ""
aws_instance.sam1: Still creating... (10s elapsed)
aws_instance.sam1: Creation complete after 42s (ID: i-0e16084dc4881a928)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

[terraform] $ /usr/local/bin/terraform destroy -input=false -state=/var/lib/jenkins/workspace/firstone/terraform-plugin/terraform-plugin.tfstate --force
aws_instance.sam1: Refreshing state... (ID: i-0e16084dc4881a928)
aws_instance.sam1: Destroying... (ID: i-0e16084dc4881a928)
aws_instance.sam1: Still destroying... (ID: i-0e16084dc4881a928, 10s elapsed)
....
aws_instance.sam1: Destruction complete after 2m10s

Destroy complete! Resources: 1 destroyed.
Finished: SUCCESS
4. CI/CD pipeline for a Python application driven by “Pipeline as Code”
The Jenkinsfile in the Git repository is the “pipeline as code”. Refer to this link: https://github.com/samwjq/devops/blob/python/jenkins/Jenkinsfile
The unit test cases are written in Python: https://github.com/samwjq/devops/blob/python/sources/test_calc.py
+ py.test --verbose --junit-xml test-reports/results.xml sources/test_calc.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.14, pytest-3.5.0, py-1.5.3, pluggy-0.6.0 -- /usr/local/bin/python
cachedir: .pytest_cache
rootdir: /var/lib/jenkins/workspace/pipeline-python, inifile:
plugins: cov-2.5.1
collecting ... collected 5 items
sources/test_calc.py::TestCalc::test_add_floats PASSED [ 20%]
sources/test_calc.py::TestCalc::test_add_integers PASSED [ 40%]
sources/test_calc.py::TestCalc::test_add_string_and_integer PASSED [ 60%]
sources/test_calc.py::TestCalc::test_add_string_and_number PASSED [ 80%]
sources/test_calc.py::TestCalc::test_add_strings PASSED [100%]
5. CI/CD pipeline for a Node.js application driven by “Pipeline as Code”
You need to update the source code from the Jenkins tutorial to set the environment, otherwise npm fails with “npm installation failure, permission denied, mkdir '/.npm'”.
Check it out at https://github.com/samwjq/devops/blob/nodejs/jenkins/Jenkinsfile
environment {
    npm_config_cache = 'npm-cache'
    HOME = '.'
}
The pipeline can pause and wait for your response. I can see the Node.js webpage, so I hit “Proceed”.
6. Use JFrog Artifactory to manage artifacts from snapshot to release stage
The source code is located at: https://github.com/samwjq/devops/tree/master/maven-artifactory
Configure the post-build action to push the artifacts to the Artifactory repository.
[INFO] Packaging webapp
[INFO] Assembling webapp [multi3] in [/var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/target/multi3-3.7-SNAPSHOT]
[INFO] Processing war project
[INFO] Copying webapp resources [/var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/src/main/webapp]
[INFO] Webapp assembled in [44 msecs]
[INFO] Building war: /var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/target/multi3-3.7-SNAPSHOT.war
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ multi3 ---
[INFO] Installing /var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/target/multi3-3.7-SNAPSHOT.war to /var/lib/jenkins/.m2/repository/org/jfrog/test/multi3/3.7-SNAPSHOT/multi3-3.7-SNAPSHOT.war
[INFO] Installing /var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/pom.xml to /var/lib/jenkins/.m2/repository/org/jfrog/test/multi3/3.7-SNAPSHOT/multi3-3.7-SNAPSHOT.pom
[INFO] Artifactory Build Info Recorder: Saving Build Info to '/var/lib/jenkins/workspace/jfrog-maven/maven-example/target/build-info.json'
[INFO] Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/org/jfrog/test/multi2/3.7-SNAPSHOT/multi2-3.7-SNAPSHOT.jar
...
[INFO] Artifactory Build Info Recorder: Deploying build info ...
[INFO] Deploying build descriptor to: http://localhost:8081/artifactory/api/build
[INFO] Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/jfrog-maven/6
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Simple Multi Modules Build 3.7-SNAPSHOT ............ SUCCESS [ 0.645 s]
[INFO] Multi 1 ............................................ SUCCESS [ 4.110 s]
[INFO] Multi 2 ............................................ SUCCESS [ 2.183 s]
[INFO] Multi 3 3.7-SNAPSHOT ............................... SUCCESS [ 1.145 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
Assuming you are satisfied with the snapshot (development) build, you can proceed to the release build.
This will do the following (a rough command-line sketch follows the list):
1. Deploy the artifacts to the release-local repository.
2. Update the Maven POM files with the new version number, from 3.7 to 3.8.
3. Create a new branch in the Git repository to keep the source code of version 3.7.
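The Jenkins Artifactory plugin drives this release staging through its UI. For illustration only, a rough command-line equivalent using plain Git and the versions-maven-plugin (the repository name and version numbers below are the ones from this example) would be:
# Keep the 3.7 sources on a branch, deploy the release build to the
# release repository, then bump the version for the next development cycle:
git checkout -b release-3.7
mvn versions:set -DnewVersion=3.7
mvn clean deploy -DaltDeploymentRepository=release::default::http://localhost:8081/artifactory/libs-release-local
mvn versions:set -DnewVersion=3.8-SNAPSHOT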
7. CI/CD pipeline for a Java application driven by “Pipeline as Code” plus Artifactory
I have already used Artifactory successfully with a local Maven installation in use case #6, but what about using Artifactory with containerized Maven?
Building on use case #1, I go a step further and use JFrog Artifactory by defining the Artifactory server in the Jenkinsfile, as below.
node {
    def server = Artifactory.server('artifactory1')
    def rtMaven = Artifactory.newMavenBuild()
    def buildInfo

    stage ('Artifactory configuration') {
        rtMaven.deployer releaseRepo: 'libs-release-local', snapshotRepo: 'libs-snapshot-local', server: server
        rtMaven.resolver releaseRepo: 'libs-release', snapshotRepo: 'libs-snapshot', server: server
        buildInfo = Artifactory.newBuildInfo()
    }

    stage ('Exec Maven') {
        docker.image('maven:3-alpine').inside {
            withEnv(['JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk', 'MAVEN_HOME=/usr/share/maven']) {
                rtMaven.run pom: 'pom.xml', goals: 'clean install', buildInfo: buildInfo
            }
        }
    }

    stage ('Publish build info') {
        server.publishBuildInfo buildInfo
    }
}
For Java builds, this feels like a fairly complete setup, as it combines a scripted (DSL) pipeline, containerized Maven and JFrog Artifactory.
Installation and configuration
- JDK Version: 8u181-linux-x64
[devops ~]# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.rpm"
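The downloaded RPM can then be installed in the usual way (a sketch; the file name matches the download above):
# Install the Oracle JDK from the downloaded RPM:
[devops ~]# rpm -ivh jdk-8u181-linux-x64.rpm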
- Jenkins Version: 2.138.1
You need to install Oracle Java first. Installing and initially setting up Jenkins is super simple: visit the console URL for the first time, choose a set of plugins, and Jenkins automatically downloads and installs them. Set your password and you are ready to go!
Configure the SCM to trigger a build after each code commit. You need to install the Git integration plugin for GitHub. The screenshot below is for GitHub.
- Docker Version: 18.06.1-ce
Following the link https://docs.docker.com/install/linux/docker-ce/centos/#install-docker-ce-1 to install Docker from the repository is quite simple, and you can run your first container with one command.
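For reference, the repository-based install from that page boils down to a few commands (a sketch of the documented steps; package versions may differ):
# Add the Docker CE repository and install the engine:
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
sudo systemctl start docker && sudo systemctl enable docker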
# sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
d1725b59e92d: Pull complete
Digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
- Kubernetes Version: 1.11.3-0.x86_64
Kubernetes is a cluster and orchestration engine for Docker containers: open-source software used to orchestrate and manage containers in a cluster environment.
Install the packages on the master and the slaves with similar steps.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Initialize the configuration on the master.
[devops ~]# kubeadm init    (add --pod-network-cidr=10.244.0.0/16 here, which is necessary if you use the flannel CNI)
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
I0926 00:07:39.567779 27838 kernel_validator.go:81] Validating kernel version
I0926 00:07:39.567852 27838 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
....
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You must install a pod network add-on so that your pods can communicate with each other. Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is Running in the output of kubectl get pods --all-namespaces.
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.xxxx:6443 --token 7m0lra.e0hy37u --discovery-token-ca-cert-hash sha256:71f824dbfd3376229
The above steps pull the images needed to run Kubernetes:
[devops ~]# docker images
REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64                v1.11.3   be5a6e1ecfa6   2 weeks ago    97.8MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.3   3de571b6587b   2 weeks ago    187MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.3   ca1f38854f74   2 weeks ago    56.8MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.3   a710d6a92519   2 weeks ago    155MB
k8s.gcr.io/coredns                         1.1.3     b3b94275d97c   4 months ago   45.6MB
k8s.gcr.io/etcd-amd64                      3.2.18    b8df3b177be2   5 months ago   219MB
k8s.gcr.io/pause                           3.1       da86e6ba6ca1   9 months ago   742kB
You can enable and restart the kubelet service now.
systemctl enable kubelet && systemctl restart kubelet
To run Kubernetes as a cluster, you need the network abstraction, also called the CNI. To use Weave as the provider, run the following command. It will load a pod and configure some new virtual network interfaces.
[devops ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
If the Kubernetes setup and the CNI are both healthy, you should see the following output.
[devops ~]# kubectl get pods -n kube-system -o=wide
NAME                             READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE
coredns-78fcdf6894-fz6j7         1/1     Running   0          17h   10.32.0.3   devops
coredns-78fcdf6894-gw8hk         1/1     Running   0          17h   10.32.0.2   devops
etcd-devops                      1/1     Running   4          17h   10.xxx      devops
kube-apiserver-devops            1/1     Running   4          17h   10.xxx      devops
kube-controller-manager-devops   1/1     Running   4          17h   10.xxx      devops
kube-proxy-rwl6m                 1/1     Running   3          17h   10.xxx      devops
kube-scheduler-devops            1/1     Running   4          17h   10.xxx      devops
weave-net-hs77s                  2/2     Running   3          16h   10.xxx      devops
Finally we can join the other slave nodes into the cluster.
[devops-slave ~]# kubeadm join 10.xxxx:6443 --token 7m0lra.e0hy37u --discovery-token-ca-cert-hash sha256:71fxxx
...
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devops-slave" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[devops ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-l634hi1RlU-m2NQrYPs4pHMZFB9jcL7tvObIiB8PjtE   6m    system:bootstrap:idohb5   Approved,Issued

[devops ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
devops         Ready    master   1d    v1.11.3
devops-slave   Ready             13m   v1.11.3
Here are some errors you might encounter.
If the CNI's IP pool conflicts with the node's local IP range, you will see the following.
kubectl -n kube-system logs -f coredns-78fcdf6894-fz6j7
E0804 12:57:16.794093       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0804 12:57:16.795146       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:320: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0804 12:57:16.796230       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
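One possible fix, an assumption on my part rather than a step taken in this setup, is to reset the master and re-initialize it with a pod CIDR that does not overlap the host network:
# Tear down the kubeadm state and re-init with a non-overlapping pod CIDR:
kubeadm reset
kubeadm init --pod-network-cidr=10.244.0.0/16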
If the master's token has expired, you will see the following when joining a slave to the cluster.
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace Unauthorized
By default the token expires after 24 hours, so generate it again:
kubeadm token create --print-join-command
Now we are ready to run our first container; let's try Nginx.
NodePort is one of the Service types used to expose your app between pods or to the outside world.
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80

[devops~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        1d
nginx        NodePort    10.106.88.58   <none>        80:31103/TCP   15m
Describe the pod and you will see its events. Normally pods run on the workers/slaves.
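The command looks like this (the pod name comes from kubectl get pods):
# Show the pod's details, including its event history:
[devops ~]# kubectl describe pod nginx-78f5d695bd-g7glf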
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  20m   default-scheduler      Successfully assigned default/nginx-78f5d695bd-g7glf to devops-slave
  Normal  Pulling    20m   kubelet, devops-slave  pulling image "nginx"
  Normal  Pulled     20m   kubelet, devops-slave  Successfully pulled image "nginx"
  Normal  Created    20m   kubelet, devops-slave  Created container
  Normal  Started    20m   kubelet, devops-slave  Started container
Play around within the Nginx pod.
[devops ~]# kubectl exec nginx-78f5d695bd-g7glf -it "/bin/bash"
- Terraform Version: v0.11.8
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. With its providers, you can manage most infrastructure through one interface, such as AWS, Azure, VMware, Kubernetes, MySQL and Palo Alto Networks. Refer to this page for the list of providers: https://www.terraform.io/docs/providers
Use Bitbucket as the repository for the Terraform files: clone the repository and add some .tf files. For AWS, you can choose to use an IAM role, local credentials or an access key.
If you want to use the “Jenkins Terraform Integration plugin” rather than running the terraform command directly, stick to Terraform v0.10.8, which does not come with the interactive approval feature.
[devops ~]# cat testec2.tf
provider "aws" {
  region = "ap-southeast-2"
}

resource "aws_instance" "sam1" {
  ami           = "ami-7847sidkdh"
  instance_type = "t2.micro"
  user_data     = "${file("user_data.sh")}"
}
[devops ~]# cat user_data.sh
#!/bin/bash
yum update

[devops ~]# terraform init
[devops ~]# terraform apply
....
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Being able to specify the user_data for AWS during provisioning is quite handy. You will see a new instance provisioned in the AWS console. There is also a state file which saves all the information about the resources the .tf file created, for later use such as destroy (see the sketch below).
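Because the state file records what was created, tearing the instance down again is a single command. A sketch, where -force skips the interactive confirmation in this Terraform version, as in the Jenkins log earlier:
# Destroy the resources recorded in the default terraform.tfstate:
[devops ~]# terraform destroy -force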
- JFrog Artifactory Version: 6.4.1-60401900
As more and more artifacts are built, a tool like JFrog Artifactory becomes necessary for managing the repositories and dependencies.
wget https://bintray.com/jfrog/artifactory-rpms/rpm -O bintray-jfrog-artifactory-rpms.repo
sudo mv bintray-jfrog-artifactory-rpms.repo /etc/yum.repos.d/
sudo yum install jfrog-artifactory-oss
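After the install, start the service before visiting the UI (a sketch; I am assuming the RPM registers an artifactory service unit, which it did in my setup):
# Start Artifactory and enable it on boot:
sudo systemctl start artifactory && sudo systemctl enable artifactory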
After installation, when you access the GUI for the first time on port 8081, a wizard guides you through the initial setup, such as setting the admin password. Install the following plugin to allow Jenkins to communicate with Artifactory.
Configure the Artifactory details in the “Configure System” section of “Manage Jenkins”.
Monitoring
Monitor Kubernetes with Splunk.
Install the Test Results Analyzer plugin and monitor the testing.
Some concepts
1) Continuous Integration: developers continuously commit and merge their code to the repositories, which triggers builds, and the changes are automatically tested. Even a small team of 5 people might trigger 100 builds daily.
2) Continuous Deployment/Delivery: normally run after continuous integration, this part automates the release of changes into the production environment. In practice it can be triggered by clicking a button or sending a confirmation to Slack.
3) CI/CD pipeline: the automated sequence of stages (build, test, release, deploy) that every code change flows through from commit to production, as demonstrated in the use cases above.
— Copyright for this article reserved by itact.com.au —