This article describes a practical overall design for implementing and integrating a set of related software to drive DevOps within a hybrid cloud environment. It covers the architecture along with technical details and steps.


Index


1. Architecture design
2. Use cases

1) CI/CD pipeline with Jenkins, Maven, containers, GitHub and a Jenkinsfile for a Java application, driven by “Pipeline as Code”
2) CI/CD pipeline with Jenkins driven by project relationships
3) Bitbucket commit triggers Terraform to build and destroy an AWS instance with Jenkins
4) CI/CD pipeline for a Python application driven by “Pipeline as Code”
5) CI/CD pipeline for a Node.js application driven by “Pipeline as Code”
6) Use JFrog Artifactory to manage artifacts from the snapshot to the release stage
7) CI/CD pipeline for a Java application driven by “Pipeline as Code” plus JFrog Artifactory, building on case #1

3. Installation and configuration

1) JDK and JRE Version: 8u181-linux-x64
2) Jenkins Version: 2.138.1
3) Docker Version: 18.06.1-ce
4) Kubernetes Version: 1.11.3-0.x86_64
5) Terraform Version: v0.11.8
6) JFrog Artifactory Version: 6.4.1-60401900

4. Monitoring

5. Some concepts

 

Architecture design


The design below is deployed entirely in the AWS cloud and is suitable for websites.
devops1
The design below allows the VMware private cloud to share the DevOps common services running in the AWS public cloud.
devops2

1. Activate the com.snc.cloud.mgmt plugin, which is CMPv2 (Cloud Management Product of the ServiceNow Jakarta release), to retrieve full cloud information and allow self-service provisioning requests by selecting from listed values, e.g. VPC and security groups.
2. Resource provisioning and configuration management for anything above the container level will be carried out by Terraform and Puppet.

 

Use cases


1. CI/CD pipeline with Jenkins, Maven, containers, GitHub and a Jenkinsfile for a Java application

A pipeline is defined by a Jenkinsfile, whether it is created through the simple UI, the Blue Ocean UI or an SCM repository. Before Pipeline became available in Jenkins 2.0, the “Conditional BuildStep” plugin was a great way to run build steps like a pipeline. I will use the simple-java-maven-app to try this out. The setup consists of just two parts: creating a branch called ‘java’ in my GitHub repository and creating a ‘Pipeline’ type project in Jenkins. The Jenkinsfile is at https://github.com/samwjq/devops/blob/java/jenkins/Jenkinsfile-no-artifactory.
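
For reference, a minimal Jenkinsfile for this kind of job, patterned on the simple-java-maven-app tutorial, looks roughly like the sketch below (the deliver script is part of that sample repo):

pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'    // share the local Maven repo to cache dependencies
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'    // record the test results
                }
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'    // delivery script from the sample repo
            }
        }
    }
}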

jenkins-pipeline-maven-java

jenkins-pipeline-maven-java-project

We can see the pipeline-style output below. Jenkins actually does a lot of work behind the scenes.

jenkins-pipeline-maven-java-log1
Some of the output:

Checking out Revision 24edc86693d63d30abcb04006d09515684bdc901 (origin/java)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 24edc86693d63d30abcb04006d09515684bdc901
Commit message: "clean up java branch 2"
First time build. Skipping changelog.
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
[pipeline-maven] Running shell script
+ docker inspect -f . maven:3-alpine
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 997:994 -v /root/.m2:/root/.m2 -w /var/lib/jenkins/workspace/pipeline-maven -v /var/lib/jenkins/workspace/pipeline-maven:/var/lib/jenkins/workspace/pipeline-maven:rw,z -v /var/lib/jenkins/workspace/pipeline-maven@tmp:/var/lib/jenkins/workspace/pipeline-maven@tmp:rw, -e ******** maven:3-alpine cat
$ docker top c979b78d31a68236f9f766cda3f9c43ef635168f7adb558a6433a2e3cce8e6e9 -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] sh
[pipeline-maven] Running shell script
+ mvn -B -DskipTests clean package
[INFO] Scanning for projects...
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Building my-app 1.0-SNAPSHOT
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.728 s
[INFO] Finished at: 2018-10-02T06:28:50Z
[INFO] Final Memory: 10M/150M
[INFO] ------------------------------------------------------------------------
[Pipeline] junit
Recording test results
...
$ docker stop --time=1 5cf138edb86a3bc737bb0eec912597a08b57235d5b2a0c8f801f10d6cfa75730
$ docker rm -f 5cf138edb86a3bc737bb0eec912597a08b57235d5b2a0c8f801f10d6cfa75730
...
[Pipeline] End of Pipeline
Finished: SUCCESS

 

2. CI/CD pipeline with Jenkins driven by project relationships

Rather than using the pipeline method above, the pipeline can be driven by the relationships between Jenkins projects.
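
For reference, the same chaining can also be expressed in Pipeline syntax with an upstream trigger; a minimal sketch (the job name follows the screenshots below):

pipeline {
    agent any
    triggers {
        // start this job whenever the upstream 'build-docker-image' job succeeds
        upstream(upstreamProjects: 'build-docker-image', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Deploy to test') {
            steps {
                sh 'kubectl create deployment nginx --image=samwjq/nginx'
            }
        }
    }
}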

jenkins-pipeline-byprojects

build-docker-image
Build:
jenins-docker-build-push
Output:
jenkins-image-build

Deploy to the testing environment in the Kubernetes cluster
Build:

kubectl delete deployment nginx 
sleep 10
kubectl create deployment nginx --image=samwjq/nginx

Output:

[k8s-deploy-test-env] $ /bin/sh -xe /tmp/jenkins4923601192141532266.sh
+ kubectl delete deployment nginx
deployment.extensions "nginx" deleted
+ sleep 10
+ kubectl create deployment nginx --image=samwjq/nginx
deployment.apps/nginx created

Test the web content with curl. I will keep the Kubernetes NodePort service running so that curl can access the Nginx content.
Output:
jenkins-check-content
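
For reference, a scripted version of this check might look like the following; the node IP and the expected string are placeholders:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -s http://<node-ip>:${NODE_PORT} | grep "Welcome to nginx" && echo "content OK"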

If the content is as expected, deploy the image to the staging environment in the Kubernetes cluster.
Build:

kubectl create deployment nginx --image=samwjq/nginx --namespace=staging

Output:
jenkins-deploy-staging
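
Note that this assumes the staging namespace already exists; if not, create it first:

kubectl create namespace staging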

 

3. Bitbucket commit triggers Terraform to build and destroy an AWS instance with Jenkins

Configure the Build Environment as Terraform in the Jenkins job. Below is the Jenkins job output:

Started by user admin
Building in workspace /var/lib/jenkins/workspace/firstone
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://bitbucket.local:7990/scm/~sam.wen/devops.git # timeout=10
Fetching upstream changes from http://bitbucket.local:7990/scm/~sam.wen/devops.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials 
 > git fetch --tags --progress http://bitbucket.local:7990/scm/~sam.wen/devops.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 70ce9d0076ad1cb1c0d9ca6ed8c380c89916b4a0 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 70ce9d0076ad1cb1c0d9ca6ed8c380c89916b4a0
Commit message: "change ec2 size"
 > git rev-list --no-walk 70ce9d0076ad1cb1c0d9ca6ed8c380c89916b4a0 # timeout=10
[terraform] $ /usr/local/bin/terraform get
[terraform] $ /usr/local/bin/terraform apply -input=false -state=/var/lib/jenkins/workspace/firstone/terraform-plugin/terraform-plugin.tfstate
aws_instance.sam1: Refreshing state... (ID: i-018e41b3a5c3baebd)
aws_instance.sam1: Creating...
...
  vpc_security_group_ids.#:     "" => ""
aws_instance.sam1: Still creating... (10s elapsed)
aws_instance.sam1: Creation complete after 42s (ID: i-0e16084dc4881a928)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
[terraform] $ /usr/local/bin/terraform destroy -input=false -state=/var/lib/jenkins/workspace/firstone/terraform-plugin/terraform-plugin.tfstate --force
aws_instance.sam1: Refreshing state... (ID: i-0e16084dc4881a928)
aws_instance.sam1: Destroying... (ID: i-0e16084dc4881a928)
aws_instance.sam1: Still destroying... (ID: i-0e16084dc4881a928, 10s elapsed)
....
aws_instance.sam1: Destruction complete after 2m10s

Destroy complete! Resources: 1 destroyed.
Finished: SUCCESS

 

4. CI/CD pipeline for a Python application driven by “Pipeline as Code”

The Jenkinsfile in the Git repository is the “code of the pipeline”. Refer to this link: https://github.com/samwjq/devops/blob/python/jenkins/Jenkinsfile
The unit test cases are written in Python: https://github.com/samwjq/devops/blob/python/sources/test_calc.py

+ py.test --verbose --junit-xml test-reports/results.xml sources/test_calc.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.14, pytest-3.5.0, py-1.5.3, pluggy-0.6.0 -- /usr/local/bin/python
cachedir: .pytest_cache
rootdir: /var/lib/jenkins/workspace/pipeline-python, inifile:
plugins: cov-2.5.1
collecting ... collected 5 items

sources/test_calc.py::TestCalc::test_add_floats PASSED                   [ 20%]
sources/test_calc.py::TestCalc::test_add_integers PASSED                 [ 40%]
sources/test_calc.py::TestCalc::test_add_string_and_integer PASSED       [ 60%]
sources/test_calc.py::TestCalc::test_add_string_and_number PASSED        [ 80%]
sources/test_calc.py::TestCalc::test_add_strings PASSED                  [100%]
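
The Jenkinsfile linked above boils down to something like the sketch below. The py.test command matches the log output above; the agent image is an assumption here (qnib/pytest is the image used in the Jenkins tutorial this example follows):

pipeline {
    agent {
        docker {
            image 'qnib/pytest'    // assumed test image with pytest preinstalled
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'py.test --verbose --junit-xml test-reports/results.xml sources/test_calc.py'
            }
            post {
                always {
                    junit 'test-reports/results.xml'    // publish the JUnit-style report
                }
            }
        }
    }
}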

 

5. CI/CD pipeline for a Node.js application driven by “Pipeline as Code”

You need to update the source code from the Jenkins tutorial and set the environment as below, to avoid the npm installation failure “permission denied, mkdir ‘/.npm’”.

Check it out at https://github.com/samwjq/devops/blob/nodejs/jenkins/Jenkinsfile

    environment {
        npm_config_cache = 'npm-cache'
        HOME = '.'
    }

The pipeline can pause and wait for your response. I can see the Node.js web page, so I hit “Proceed”.
jenkins-nodejs2
jenkins-nodejs1
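
The pause is implemented with an input step in the Jenkinsfile; the relevant fragment looks roughly like this (the script paths follow the Jenkins tutorial layout):

stage('Deliver') {
    steps {
        sh './jenkins/scripts/deliver.sh'    // starts the Node.js app
        input message: 'Finished using the web site? (Click "Proceed" to continue)'
        sh './jenkins/scripts/kill.sh'       // stops the app after confirmation
    }
}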

 

6. Use JFrog Artifactory to manage artifacts from the snapshot to the release stage

The source code is located at: https://github.com/samwjq/devops/tree/master/maven-artifactory
Configure the post-build action to push the artifact to the Artifactory repository.

[INFO] Packaging webapp
[INFO] Assembling webapp [multi3] in [/var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/target/multi3-3.7-SNAPSHOT]
[INFO] Processing war project
[INFO] Copying webapp resources [/var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/src/main/webapp]
[INFO] Webapp assembled in [44 msecs]
[INFO] Building war: /var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/target/multi3-3.7-SNAPSHOT.war
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ multi3 ---
[INFO] Installing /var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/target/multi3-3.7-SNAPSHOT.war to /var/lib/jenkins/.m2/repository/org/jfrog/test/multi3/3.7-SNAPSHOT/multi3-3.7-SNAPSHOT.war
[INFO] Installing /var/lib/jenkins/workspace/jfrog-maven/maven-example/multi3/pom.xml to /var/lib/jenkins/.m2/repository/org/jfrog/test/multi3/3.7-SNAPSHOT/multi3-3.7-SNAPSHOT.pom
[INFO] Artifactory Build Info Recorder: Saving Build Info to '/var/lib/jenkins/workspace/jfrog-maven/maven-example/target/build-info.json'
[INFO] Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/org/jfrog/test/multi2/3.7-SNAPSHOT/multi2-3.7-SNAPSHOT.jar
...
[INFO] Artifactory Build Info Recorder: Deploying build info ...
[INFO] Deploying build descriptor to: http://localhost:8081/artifactory/api/build
[INFO] Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/jfrog-maven/6
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Simple Multi Modules Build 3.7-SNAPSHOT ............ SUCCESS [  0.645 s]
[INFO] Multi 1 ............................................ SUCCESS [  4.110 s]
[INFO] Multi 2 ............................................ SUCCESS [  2.183 s]
[INFO] Multi 3 3.7-SNAPSHOT ............................... SUCCESS [  1.145 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS

arti-snapshot-local

Once you are satisfied with the snapshot (development) build, you can proceed to the release build.

jenkins-artifactory-release

This will do the following:

1. Deploy the artifacts to the release-local repo.

2. Update the Maven POM file with the new version number, from 3.7 to 3.8.

3. Create a new branch in the Git repo to keep the source code of version 3.7.

 

7. CI/CD pipeline for a Java application driven by “Pipeline as Code” plus Artifactory

I successfully used Artifactory with a local Maven installation in use case #6 above, but how about using Artifactory with containerized Maven?
Building on use case #1, I go a step further and use JFrog Artifactory by defining the Artifactory server in the Jenkinsfile as below.

node {
    def server = Artifactory.server('artifactory1')
    def rtMaven = Artifactory.newMavenBuild()
    def buildInfo

    stage ('Artifactory configuration') {
        rtMaven.deployer releaseRepo: 'libs-release-local', snapshotRepo: 'libs-snapshot-local', server: server
        rtMaven.resolver releaseRepo: 'libs-release', snapshotRepo: 'libs-snapshot', server: server
        buildInfo = Artifactory.newBuildInfo()
    }

    stage ('Exec Maven') {
        docker.image('maven:3-alpine').inside {
            withEnv(['JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk', 'MAVEN_HOME=/usr/share/maven']) {
                rtMaven.run pom: 'pom.xml', goals: 'clean install', buildInfo: buildInfo
            }
        }
    }

    stage ('Publish build info') {
        server.publishBuildInfo buildInfo
    }
}

For Java builds, I think this is quite a complete final setup, as I have used a scripted (DSL) pipeline, containerized Maven and JFrog Artifactory.
arti-containerized-maven-by-dsl

Installation and configuration


  • JDK Version: 8u181-linux-x64

[devops ~]# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.rpm"
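
Then install the downloaded RPM and verify it (the file name assumes the default from the URL above):

[devops ~]# rpm -ivh jdk-8u181-linux-x64.rpm
[devops ~]# java -version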

jenkins-global-tool-config


 

  • Jenkins Version: 2.138.1

You need to install Oracle Java first. Installing and initially setting up Jenkins is super simple: visit the console URL for the first time and, after you choose a set of plugins, Jenkins automatically downloads and installs them. Set your password and you are ready to go!
jenkins1

Configure the SCM to trigger a build after each code commit. You need to install the Git integration plugin for GitHub. The screenshot below is for GitHub.

jenkins-github


 

  • Docker Version: 18.06.1-ce

Following the link https://docs.docker.com/install/linux/docker-ce/centos/#install-docker-ce-1 to install Docker from the repository is quite simple; the steps are sketched below, and then you can run your first container with one command.
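
In short, the commands from that page are roughly the following (for CentOS; a sketch, not a substitute for the official instructions):

[devops ~]# sudo yum install -y yum-utils
[devops ~]# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[devops ~]# sudo yum install -y docker-ce
[devops ~]# sudo systemctl start docker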

# sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
d1725b59e92d: Pull complete
Digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

 

  • Kubernetes Version: 1.11.3-0.x86_64

Kubernetes is a cluster and orchestration engine for Docker containers: an open-source tool used to orchestrate and manage Docker containers in a cluster environment.
Install the packages on the master and the slaves with similar steps.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Initialize the configuration on the master.

[devops ~]# kubeadm init    (you should add --pod-network-cidr=10.244.0.0/16, which is necessary if you use the flannel CNI)
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
 [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
I0926 00:07:39.567779 27838 kernel_validator.go:81] Validating kernel version
I0926 00:07:39.567852 27838 kernel_validator.go:96] Validating kernel config
 [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
....
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/

You must install a pod network add-on so that your pods can communicate with each other. Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is Running in the output of kubectl get pods --all-namespaces.

You can now join any number of machines by running the following on each node
as root:

 kubeadm join 10.xxxx:6443 --token 7m0lra.e0hy37u --discovery-token-ca-cert-hash sha256:71f824dbfd3376229

The above steps pull the images required to run Kubernetes:

[devops ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy-amd64 v1.11.3 be5a6e1ecfa6 2 weeks ago 97.8MB
k8s.gcr.io/kube-apiserver-amd64 v1.11.3 3de571b6587b 2 weeks ago 187MB
k8s.gcr.io/kube-scheduler-amd64 v1.11.3 ca1f38854f74 2 weeks ago 56.8MB
k8s.gcr.io/kube-controller-manager-amd64 v1.11.3 a710d6a92519 2 weeks ago 155MB
k8s.gcr.io/coredns 1.1.3 b3b94275d97c 4 months ago 45.6MB
k8s.gcr.io/etcd-amd64 3.2.18 b8df3b177be2 5 months ago 219MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 9 months ago 742kB

You can enable and restart the kubelet service now.

systemctl enable kubelet && systemctl restart kubelet

To run Kubernetes as a cluster, you need a network abstraction, also called a CNI (Container Network Interface). To use Weave Net as the provider, run the following command. It will load a pod and configure some new virtual network interfaces.

[devops ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

cni-github-03

If the Kubernetes setup and the CNI are both healthy, you should see the following output.

[devops ~]# kubectl get pods -n kube-system -o=wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-fz6j7 1/1 Running 0 17h 10.32.0.3 devops
coredns-78fcdf6894-gw8hk 1/1 Running 0 17h 10.32.0.2 devops
etcd-devops              1/1 Running 4 17h 10.xxx devops
kube-apiserver-devops    1/1 Running 4 17h 10.xxx devops
kube-controller-manager-devops 1/1 Running 4 17h 10.xxx devops
kube-proxy-rwl6m         1/1 Running 3 17h 10.xxx devops
kube-scheduler-devops    1/1 Running 4 17h 10.xxx devops
weave-net-hs77s          2/2 Running 3 16h 10.xxx devops

Finally, we can join the other slave nodes to the cluster.

[devops-slave ~]# kubeadm join 10.xxxx:6443 --token 7m0lra.e0hy37u --discovery-token-ca-cert-hash sha256:71fxxx
...
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devops-slave" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[devops ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR                 CONDITION
node-csr-l634hi1RlU-m2NQrYPs4pHMZFB9jcL7tvObIiB8PjtE   6m        system:bootstrap:idohb5   Approved,Issued
[devops ~]# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
devops         Ready     master    1d        v1.11.3
devops-slave   Ready     <none>    13m       v1.11.3

Here are some errors you might encounter.
If the CNI's IP pool conflicts with the local IP of the node, you will see the following:

kubectl -n kube-system logs -f coredns-78fcdf6894-fz6j7
E0804 12:57:16.794093       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0804 12:57:16.795146       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:320: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0804 12:57:16.796230       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

If the master token has expired, you will see the following when joining a slave to the cluster:

[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
Unauthorized

By default the token expires after 24 hours, so generate the token again:

kubeadm token create --print-join-command

Now we are ready to run the first container; let's try Nginx.
NodePort is one of the service types used to expose your app between pods or externally.

kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
[devops~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
nginx NodePort 10.106.88.58 <none> 80:31103/TCP 15m

Describe the pod and you will see the events. Normally pods will run on the workers/slaves.

Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 20m default-scheduler Successfully assigned default/nginx-78f5d695bd-g7glf to devops-slave
 Normal Pulling 20m kubelet, devops-slave pulling image "nginx"
 Normal Pulled 20m kubelet, devops-slave Successfully pulled image "nginx"
 Normal Created 20m kubelet, devops-slave Created container
 Normal Started 20m kubelet, devops-slave Started container

nginx-container

Play around within the Nginx pod.

[devops ~]# kubectl exec nginx-78f5d695bd-g7glf -it "/bin/bash"

 

  • Terraform Version: v0.11.8

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. With its providers, you can manage most of your infrastructure through one interface, including AWS, Azure, VMware, Kubernetes, MySQL and Palo Alto Networks. Refer to this page for the list of providers: https://www.terraform.io/docs/providers
Use Bitbucket as the repository for the Terraform files, clone the repository and add some .tf files. For AWS, you can choose to use an IAM role, local credentials or an access key.

If you want to use the “Jenkins Terraform Integration plugin” rather than using the terraform command directly, you can just use Terraform v0.10.8, which does not come with the interactive approval feature.

[devops ~]# cat testec2.tf
provider "aws" {
  region     = "ap-southeast-2"
}

resource "aws_instance" "sam1" {
  ami           = "ami-7847sidkdh"
  instance_type = "t2.micro"
  user_data = "${file("user_data.sh")}"
}
[devops ~]# cat user_data.sh
#!/bin/bash
yum update 
[devops ~]# terraform init 
[devops ~]# terraform apply
 ....Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

It is very handy that you can specify the user_data for AWS when provisioning. You will see a new instance provisioned in the AWS console. There is a state file which saves all the information about the resources the .tf file created, for future usage such as destroy.
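
For reference, the day-to-day commands against that state are roughly:

[devops ~]# terraform plan       # preview changes against the saved state
[devops ~]# terraform destroy    # tear down the resources recorded in the state (prompts for confirmation, or use -force in v0.11)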


 

  • JFrog Artifactory Version: 6.4.1-60401900

As more and more artifacts are built, it becomes necessary to use a tool like JFrog Artifactory to manage the repositories and dependencies.

wget https://bintray.com/jfrog/artifactory-rpms/rpm -O bintray-jfrog-artifactory-rpms.repo
sudo mv bintray-jfrog-artifactory-rpms.repo /etc/yum.repos.d/
sudo yum install jfrog-artifactory-oss

After installation, when accessing the GUI for the first time on port 8081, a wizard guides the initial setup, such as the admin password. Install the following plugin to allow Jenkins to communicate with Artifactory.
jenkins-artifactory
Configure the Artifactory details in the “Configure System” section of “Manage Jenkins”.

Monitoring


Monitor Kubernetes with Splunk

k8s-overview

Install the Test Results Analyzer plugin and monitor the testing.

jenkins-test-monitoring

 

Some concepts


1) Continuous Integration: developers continuously commit and merge their code into the repositories. This triggers builds, and the changes are automatically tested. Even a small team of 5 people might trigger 100 builds daily.
2) Continuous Deployment/Delivery: normally following continuous integration, this part automates the release of changes into the production environment. In practice this is triggered by clicking a button or sending a confirmation to Slack, as sketched after the diagram below.
3) CI/CD pipeline:
realworld-pipeline-flow
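
In Jenkins, that manual gate is typically an input step; a minimal fragment (the deployment script is hypothetical):

stage('Deploy to production') {
    steps {
        input message: 'Deploy to production?'    // the manual "button click" gate
        sh './deploy-prod.sh'                     // hypothetical deployment script
    }
}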

— Copyright for this article reserved by itact.com.au —
