• Extending AWS EKS Cluster IPs

    For most Kubernetes clusters running on EKS, there is a fear of running out of IPs, since each pod gets one IP address from the VPC. For large enterprise clusters, this is a real problem. So how do we solve it? We can leverage an AWS VPC feature, secondary CIDR ranges, combined with a customized AWS VPC CNI configuration.

    Secondary IPs

    For those who are not aware, AWS released a feature back in 2017 that allows you to extend your VPC with secondary CIDRs. This means that if your primary CIDR is 10.2.0.0/16, you now have the ability to use another, secondary CIDR such as 172.14.0.0/16 with the same VPC. We will take advantage of this to extend our EKS cluster, combined with the AWS VPC CNI's ability to use a custom CNI configuration.

    The Cluster

    We will be creating a VPC that has a primary CIDR of 10.0.0.0/16 and an additional secondary CIDR of 172.2.0.0/16. In this VPC, we will create three private subnets: two belonging to 10.0.0.0/16 and the other to 172.2.0.0/16.

    resource "aws_vpc" "eks_vpc" {
      cidr_block = "10.0.0.0/16" 
      enable_dns_support = true // These settings are needed for a private EKS cluster
      enable_dns_hostnames = true
      tags = {
        "kubernetes.io/cluster/${var.cluster_name}" = "shared"
      }
    }
    ...
    resource "aws_vpc_ipv4_cidr_block_association" "secondary_cidr" {
      vpc_id     = aws_vpc.eks_vpc.id
      cidr_block = "172.2.0.0/16"
    }
    ...
    resource "aws_subnet" "private_1" {
      vpc_id     = aws_vpc.eks_vpc.id
      availability_zone = "us-east-1a"
      cidr_block = "10.0.3.0/24"
      tags = {
        Name = "private_1"
        "kubernetes.io/cluster/${var.cluster_name}" = "shared" # EKS applies this tag to cluster subnets automatically; we set it explicitly here.
      }
    }
    
    resource "aws_subnet" "private_2" {
      vpc_id     = aws_vpc.eks_vpc.id
      availability_zone = "us-east-1b"
      cidr_block = "10.0.4.0/24"
      tags = {
        Name = "private_2"
        "kubernetes.io/cluster/${var.cluster_name}" = "shared" 
      }
    }
    
    #This is the secondary CIDR subnet.
    resource "aws_subnet" "private_3" {
      vpc_id     = aws_vpc.eks_vpc.id
      availability_zone = "us-east-1a" # The secondary-CIDR subnet must be in the same AZ as the node's primary ENI for the AWS VPC CNI to use its IPs.
      cidr_block = "172.2.3.0/24"
      tags = {
        Name = "private_3"
        "kubernetes.io/cluster/${var.cluster_name}" = "shared" 
      }
    }
    

    Now that our VPC has been set up, let's go ahead and create our EKS cluster, launching it into the private_1 and private_2 subnets, both belonging to the 10.0.0.0/16 CIDR. For our demo, we will launch our worker nodes into one of the subnets in us-east-1a.

    resource "aws_eks_cluster" "test_cluster" {
      name     = var.cluster_name
      role_arn = aws_iam_role.cluster.arn
    
      vpc_config {
        subnet_ids = [aws_subnet.private_1.id, aws_subnet.private_2.id] # a minimum of 2 subnets is required
        security_group_ids = [aws_security_group.cluster.id]
      }
    }
    ...
    resource "aws_launch_template" "eks-cluster-worker-nodes" {
      iam_instance_profile        {
        arn  = aws_iam_instance_profile.workers-node.arn
      }
      image_id                    = data.aws_ami.eks-worker.id
      instance_type               = "t3.medium"
      key_name                    = "mykey.pem"
      vpc_security_group_ids      = [aws_security_group.workers-node.id]
      user_data                   = "${base64encode(local.workers-node-userdata)}"
      lifecycle {
        create_before_destroy = true
      }
    }
    
    resource "aws_autoscaling_group" "eks-cluster-worker-nodes-spot" {
      ...
      mixed_instances_policy {
        ...
        launch_template {
          launch_template_specification {
            launch_template_id = "${aws_launch_template.eks-cluster-worker-nodes.id}" 
            version = "$Latest"
          }
    
          override {
            instance_type = "t3.medium"
          }
        }
      }
       ...
    }
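
    The launch template above references local.workers-node-userdata, which is elided here. As a rough sketch, assuming the standard EKS-optimized AMI (whose /etc/eks/bootstrap.sh script and flags are part of that AMI and not shown in the original), it might look something like this:

    locals {
      # Sketch only: bootstrap the node against the cluster defined above.
      # /etc/eks/bootstrap.sh and its flags come from the EKS-optimized AMI.
      workers-node-userdata = <<USERDATA
    #!/bin/bash
    set -o xtrace
    /etc/eks/bootstrap.sh ${var.cluster_name} \
      --apiserver-endpoint '${aws_eks_cluster.test_cluster.endpoint}' \
      --b64-cluster-ca '${aws_eks_cluster.test_cluster.certificate_authority.0.data}'
    USERDATA
    }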
    

    To connect to the cluster, we will need our aws-auth config and kubeconfig, as well as an ENIConfig, which tells the AWS VPC CNI which subnet to use for a particular node. These will be generated from Terraform.

    locals {
      kubeconfig = <<KUBECONFIG
    apiVersion: v1
    clusters:
    - cluster:
        server: ${aws_eks_cluster.test_cluster.endpoint}
        certificate-authority-data: ${aws_eks_cluster.test_cluster.certificate_authority.0.data}
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: aws
      name: aws
    current-context: aws
    kind: Config
    preferences: {}
    users:
    - name: aws
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: aws-iam-authenticator
          args:
            - "token"
            - "-i"
            - "${var.cluster_name}"
    KUBECONFIG
    
      config_map_aws_auth = <<CONFIGMAPAWSAUTH
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: ${aws_iam_role.workers-node.arn}
          username: system:node:
          groups:
            - system:bootstrappers
            - system:nodes
    CONFIGMAPAWSAUTH
    
    }
    
    resource "local_file" "kubeconfig" {
      content  = "${local.kubeconfig}"
      filename = "kubeconfig"
    }
    
    resource "local_file" "aws_auth" {
      content  = "${local.config_map_aws_auth}"
      filename = "awsauth.yaml"
    }
    resource "local_file" "eni-a" {
      content  = "${local.eni_a}"
      filename = "eni-${aws_subnet.private_1.availability_zone}.yaml"
    }
    ...
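
    The content of local.eni_a is elided above. As a sketch, assuming the ENIConfig format documented for AWS VPC CNI custom networking, it could look something like this (the resource is named after the AZ because we point ENI_CONFIG_LABEL_DEF at the zone label later in the demo):

    locals {
      # Sketch only: ENIConfig telling the CNI to use the secondary-CIDR subnet
      # and the worker security group for pod ENIs in us-east-1a.
      eni_a = <<ENICONFIG
    apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata:
      name: ${aws_subnet.private_3.availability_zone}
    spec:
      subnet: ${aws_subnet.private_3.id}
      securityGroups:
        - ${aws_security_group.workers-node.id}
    ENICONFIG
    }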
    

    DEMO

    Once terraform apply is complete, these files will be generated and should be applied following the procedure below:

    • Export KUBECONFIG to point at the generated kubeconfig file, e.g. export KUBECONFIG=$(pwd)/kubeconfig.

    • Run kubectl apply -f eni-us-east-1a.yaml to create the ENIConfig resource, which tells the CNI which subnet to place worker pods on, then update the CNI daemonset:

      kubectl apply -f eni-us-east-1a.yaml
      kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
      kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone
      
    • Run kubectl apply -f awsauth.yaml so the worker nodes can join the cluster.

    • Once joined, you should see pods getting IPs from the 172.2.3.0/24 subnet rather than from the primary ENI's subnet.

    $ kubectl get pods -n kube-system -owide
    NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE                         NOMINATED NODE   READINESS GATES
    aws-node-7tv4z             1/1     Running   0          2m     10.0.3.232    ip-10-0-3-232.ec2.internal   <none>           <none>
    coredns-69bc49bfdd-s5t75   1/1     Running   0          3m9s   172.2.3.218   ip-10-0-3-232.ec2.internal   <none>           <none>
    coredns-69bc49bfdd-wk48q   1/1     Running   0          3m9s   172.2.3.230   ip-10-0-3-232.ec2.internal   <none>           <none>
    kube-proxy-fm564           1/1     Running   0          2m     10.0.3.232    ip-10-0-3-232.ec2.internal   <none>           <none>
    

    What happened here is that L-IPAMD (the CNI's IP address management daemon) attaches an additional ENI and, instead of allocating secondary IPs from the primary ENI's subnet, allocates them from the subnet specified in the ENIConfig associated with the node. Note that the subnet in the ENIConfig must be in the same AZ as the subnet of the primary ENI.

  • Kubernetes, PodSecurityPolicy And Kubeadm

    I will assume you have a little background on PodSecurityPolicy, and now it's time to set up your cluster with the PodSecurityPolicy admission controller enabled. For our setup, here is the kubeadm config:

    apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      AppArmor: true
    cpuManagerPolicy: static
    systemReserved:
      cpu: 500m
      memory: 256M
    kubeReserved:
      cpu: 500m
      memory: 256M
    ---
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        enable-admission-plugins:  PodSecurityPolicy,LimitRanger,ResourceQuota,AlwaysPullImages,DefaultStorageClass
    

    Let’s start the cluster,

    $ sudo kubeadm init --config kubeadm.yaml
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    #join a worker node to the master
    $ sudo kubeadm join 10.100.11.231:6443 --token 6yel1a.ce3le6eel3kfnxsz --discovery-token-ca-cert-hash sha256:99ee2e4ea302c5270f2047c7a0093533b69105a8c91bf20f48b230dce9fd3f3a
    $ kubectl get no
    NAME               STATUS     ROLES    AGE    VERSION
    ip-10-100-11-199   NotReady   <none>   109s   v1.13.1
    ip-10-100-11-231   NotReady   master   3m3s   v1.13.1
    

    As you can see, our cluster nodes are not ready. That is because we need to install a network plugin, which you can confirm by describing a node.

    $ kubectl describe no ip-10-100-11-231
    ...
    Conditions:
      Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----             ------  -----------------                 ------------------                ------                       -------
      MemoryPressure   False   Fri, 31 May 2019 22:50:36 +0000   Fri, 31 May 2019 22:46:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure     False   Fri, 31 May 2019 22:50:36 +0000   Fri, 31 May 2019 22:46:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure      False   Fri, 31 May 2019 22:50:36 +0000   Fri, 31 May 2019 22:46:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready            False   Fri, 31 May 2019 22:50:36 +0000   Fri, 31 May 2019 22:46:39 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Addresses:
    ...
    

    We will be using Calico as our network plugin, and the install instruction is right below.

    $ kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.extensions/calico-node created
    serviceaccount/calico-node created
    deployment.extensions/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    

    To our surprise, none of the calico pods we would expect are running in kube-system. Let's look at the calico-node daemonset:

    $ kubectl describe daemonset calico-node  -n kube-system
    Name:           calico-node
    Selector:       k8s-app=calico-node
    Node-Selector:  beta.kubernetes.io/os=linux
    Labels:         k8s-app=calico-node
    ...
    Events:
      Type     Reason        Age                  From                  Message
      ----     ------        ----                 ----                  -------
      Warning  FailedCreate  20s (x15 over 102s)  daemonset-controller  Error creating: pods "calico-node-" is forbidden: no providers available to validate pod request
    

    OK, it says "no providers available", which means you do not have any PodSecurityPolicy defined that can validate the daemonset's pods. It is a very confusing error message. Now, we will apply the PSP below.

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: calico-psp
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
    spec:
      privileged: false
      allowPrivilegeEscalation: true
      allowedCapabilities:
      - '*'
      volumes:
      - '*'
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      hostIPC: true
      hostPID: true
      runAsUser:
        rule: 'RunAsAny'
      seLinux:
        rule: 'RunAsAny'
      supplementalGroups:
        rule: 'RunAsAny'
      fsGroup:
        rule: 'RunAsAny'
    
    $ kubectl apply -f calico-psp.yaml 
    podsecuritypolicy.policy/calico-psp created
    $ kubectl get psp
    NAME             PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
    calico-psp   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
    

    Everything should be good now, right? Well, if you describe the daemonset, you will still see the same message as above; you need to delete and re-apply the Calico manifest. After this, let's go ahead and describe our daemonset again.

    $ kubectl describe daemonset calico-node  -n kube-system
    Name:           calico-node
    Selector:       k8s-app=calico-node
    Node-Selector:  beta.kubernetes.io/os=linux
    Labels:         k8s-app=calico-node
    ...
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/cni/networks
        HostPathType:  
    Events:
      Type     Reason        Age                From                  Message
      ----     ------        ----               ----                  -------
      Warning  FailedCreate  4s (x12 over 14s)  daemonset-controller  Error creating: pods "calico-node-" is forbidden: unable to validate against any pod security policy: []
    

    This error means we now have a PSP, but not one our daemonset is allowed to use for validation. The calico-node service account is not authorized to use our PSP, so we need to modify the calico-node ClusterRole and add the rule below granting use of this specific PSP. Yep, you need to delete and re-apply again.

    # Include a clusterrole for the calico-node DaemonSet,
    # and bind it to the calico-node serviceaccount.
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: calico-node
    rules:
      - apiGroups: ["extensions"]
        resources:
           - podsecuritypolicies
        resourceNames:
           - calico-psp
        verbs:
           - use
      # The CNI plugin needs to get pods, nodes, and namespaces.
      - apiGroups: [""]
        resources:
          - pods
          - nodes
          - namespaces
        verbs:
          - get
    ---
    ...
    

    After the ClusterRole has been modified, we run into another error:

    ...
       host-local-net-dir:
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/cni/networks
        HostPathType:  
    Events:
      Type     Reason        Age               From                  Message
      ----     ------        ----              ----                  -------
      Warning  FailedCreate  3s (x11 over 9s)  daemonset-controller  Error creating: pods "calico-node-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
    ...
    

    Our calico pods are forbidden because the PSP they are using forbids privileged containers, and Calico needs a privileged container to handle networking and network policy. Now, let's go ahead and fix that by updating our PSP:

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: calico-psp
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
    spec:
      privileged: true #update this to true
      allowPrivilegeEscalation: true
    ...
    

    Once the update has been applied, the calico pods will be created and our nodes will become healthy.

    ...
    Events:
      Type     Reason            Age                     From                  Message
      ----     ------            ----                    ----                  -------
      Warning  FailedCreate      3m10s (x16 over 5m54s)  daemonset-controller  Error creating: pods "calico-node-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
      Normal   SuccessfulCreate  26s                     daemonset-controller  Created pod: calico-node-hwb2b
      Normal   SuccessfulCreate  26s                     daemonset-controller  Created pod: calico-node-gtrm2
    
    $ kubectl get no
    NAME               STATUS   ROLES    AGE     VERSION
    ip-10-100-11-199   Ready    <none>   7m2s    v1.13.1
    ip-10-100-11-231   Ready    master   7m33s   v1.13.1
    

    Yayy!!! Our nodes are ready.

  • Let's Talk About Terraform 0.12

    Terraform 0.12 was recently released, and you can check out the details in more depth on the Terraform blog. The focus of this post will be the newly released features and how to use them. You can find the code used in this GitHub repo. Below is the content of the EC2 main.tf.

    module "test_ec2" {
      source = "./ec2_module"
      // I am passing the subnet object instead of the id
      subnet = aws_subnet.private.0
      instance_tags = [
        {
          Key   = "Name"
          Value = "Test"
        },
        {
          Key   = "Environment"
          Value = "prd"
        }
      ]
    }
    

    Here is the content of the EC2 module:

    data "aws_ami" "ubuntu" {
      most_recent = true
    
      filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      owners = ["099720109477"] # Canonical
    }
    resource "aws_instance" "web" {
      ami           = data.aws_ami.ubuntu.id
      instance_type = "t2.micro"
      subnet_id  = var.subnet.id
      user_data  = templatefile("${path.module}/userdata.tpl", { instance_tags = var.instance_tags })
      tags = {
         for tag in var.instance_tags:
           tag.Key => tag.Value
      }
    }
    
    variable "instance_type" {
         default = "t2.micro"
    }
    
    variable "instance_tags" {
         type = list
    }
    
    variable "subnet" {
        type = object({
            id = string
        })
    }
    

    For expressions

    From my perspective, this is one of the best features of 0.12: it lets you iterate over a list or map and do whatever you want with each item. Above, you can see that we generated the instance tags from the list of maps passed in by the user.

      tags = {
         for tag in var.instance_tags:
           tag.Key => tag.Value
      }
    

    First-class expressions

    You can see from above that we did not have to wrap variables or object values in interpolation syntax; they are used directly as first-class expressions. This saves some ink and allows for more complex usage in Terraform.

    ...
      ami           = data.aws_ami.ubuntu.id
      instance_type = "t2.micro"
      subnet_id  = var.subnet.id
    ...
    

    Rich Value Types

    My impression of this feature so far is that we can have user-defined types, and existing types are supported as first-class types without quotes. We defined a type that expects an object with an id attribute; if a non-matching object is passed in, it will be rejected, which is pretty cool.

    variable "instance_tags" {
         type = list
    }
    
    variable "subnet" {
        type = object({
            id = string
        })
    }
    

    New Template Syntax

    Terraform 0.12 also gives you the capability to loop inside templates such as user data. For our example, here is the user data template:

    #!/bin/bash
    
    %{ for tag in instance_tags~}
    echo ${tag.Key}=${tag.Value}
    %{ endfor ~}
    

    We were able to use a for loop to go over the tags and render them into the instance user data, which is lovely, without having to resort to hacks.

    One other feature not discussed in detail here is the dynamic block, which you can check out on the Terraform blog; a quick sketch is shown below.
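
    As a quick, illustrative sketch (var.vpc_id and var.ingress_ports are hypothetical variables, not part of the repo above), a dynamic block generates one nested block per element of a collection:

    resource "aws_security_group" "example" {
      name   = "example"
      vpc_id = var.vpc_id

      # One ingress block is generated per element of var.ingress_ports.
      dynamic "ingress" {
        for_each = var.ingress_ports
        content {
          from_port   = ingress.value
          to_port     = ingress.value
          protocol    = "tcp"
          cidr_blocks = ["10.0.0.0/16"]
        }
      }
    }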

  • Logging With AWS Kubernetes EKS Cluster

    Logs

    EKS is the managed Kubernetes offering from AWS that saves you the stress of managing your own control plane, with the twist that you hand over some control of what goes on inside that control plane. Control plane logging was not available when the service went GA but was added recently. Here are the kinds of logs it provides:

    • API server component logs: You know that component of your cluster that validates requests, serves the REST API endpoint and so on? These are the logs from the API server, which are very useful when trying to diagnose things like why your pods are not being created, admission controller issues, etc.
      E0523 03:27:22.258958 1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
      
    • Audit Logs: People make changes in your cluster and you want to know who, what and when. These logs give you the ability to do just that.
      {
        "kind": "Event",
        "apiVersion": "audit.k8s.io/v1beta1",
        "metadata": { "creationTimestamp": "2019-05-23T02:08:34Z" },
        "level": "Request",
        "timestamp": "2019-05-23T02:08:34Z",
        "auditID": "84662c40-8d4f-4d3e-99b2-0d4005e44375",
        "stage": "ResponseComplete",
        "requestURI": "/api/v1/namespaces/default/services/kubernetes",
        "verb": "get",
        "user": {
          "username": "system:apiserver",
          "uid": "2d8ad7ed-25ed-4f37-a2f0-416d2af705e9",
          "groups": ["system:masters"]
        },
        "sourceIPs": ["::1"],
        "userAgent": "kube-apiserver/v1.12.6 (linux/amd64) kubernetes/d69f1bf",
        "objectRef": {
          "resource": "services",
          "namespace": "default",
          "name": "kubernetes",
          "apiVersion": "v1"
        },
        "responseStatus": { "metadata": {}, "code": 200 },
        "requestReceivedTimestamp": "2019-05-23T02:08:34.498973Z",
        "stageTimestamp": "2019-05-23T02:08:34.501446Z",
        "annotations": {
          "authorization.k8s.io/decision": "allow",
          "authorization.k8s.io/reason": ""
        }
      }

    • Authenticator Logs: EKS uses a component called aws-iam-authenticator to, you guessed it, authenticate against the EKS cluster using AWS credentials and roles. These logs contain events from those authentication activities.
      time="2019-05-16T22:19:48Z" level=info msg="Using assumed role for EC2 API" roleARN="arn:aws:iam::523447765480:role/idaas-kubernetes-cluster-idauto-dev-masters-role"

    • Controller manager: For those familiar with Kubernetes objects such as Deployments, ReplicaSets, etc.: these are managed by controllers that ship with the Kubernetes controller manager. To see what these controllers are doing under the hood, you need these logs.
      E0523 02:07:55.486872 1 horizontal.go:212] failed to compute desired number of replicas based on listed metrics for Deployment/routing/rapididentity-default-backend: failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
      
    • Scheduler: This component of the control plane does what its name says: it places pods on the right nodes after factoring in a number of constraints and the resources available. To see how this component is making its decisions, check these logs.
    

    Enabling Logs

    You can easily enable the logs in your EKS cluster console, and AWS updates your cluster to ship those logs to CloudWatch. The corresponding CloudWatch log group will be displayed in your console. For those using Terraform to provision their cluster, you can simply pass in the types of logs you want enabled and also create the log group they ship to (a sketch of that log group follows the snippet below).

    resource "aws_eks_cluster" "my_cluster" {
      depends_on = ["aws_cloudwatch_log_group.eks_log_group"]
      enabled_cluster_log_types = ["api", "audit"]
      name                      = "${var.cluster_name}"
      # ... other configuration ...
    }
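
    For reference, here is a minimal sketch of the log group the cluster depends on. EKS delivers control plane logs to a log group named /aws/eks/<cluster-name>/cluster, so the name has to follow that pattern; the retention value below is just an example.

    resource "aws_cloudwatch_log_group" "eks_log_group" {
      # EKS ships control plane logs to /aws/eks/<cluster-name>/cluster
      name              = "/aws/eks/${var.cluster_name}/cluster"
      retention_in_days = 7
    }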
    
  • Setting Up Jenkins As Code

    OK, so our goal here is to deploy Jenkins with the click of a button, with our jobs configured and all. Our secret sauce for this will be the Jenkins Configuration as Code plugin (JCasC), which allows you to define your Jenkins setup in a YAML file or folder. The problem is that we want to use JCasC to configure Jenkins, but we need the JCasC plugin installed ahead of time to be able to do that for us. Thankfully, there is a solution: we will use Jenkins' built-in process to install plugins.

    Install plugins

    workflow-aggregator:latest
    blueocean:latest
    pipeline-maven:latest
    configuration-as-code-support:latest
    job-dsl:latest
    

    For those installing Jenkins on Kubernetes, you will need to add the plugin list to your Helm values file instead. Now, let's crank things up:

    #plugins.txt
    workflow-aggregator:2.6
    blueocean:1.16.0
    pipeline-maven:3.6.11
    configuration-as-code-support:1.14
    job-dsl:1.74
    workflow-job:2.32
    credentials-binding:1.18
    git:3.10.0
    

    Build and Configure

    jenkins:
      systemMessage: "I did this using Jenkins Configuration as Code Plugin \n\n"
    tool:
      git:
        installations:
        - home: "git"
          name: "Default"
      maven:
        installations:
        - name: "Maven 3"
          properties:
          - installSource:
              installers:
                - maven:
                    id: "3.5.4"
    jobs:
      - script: >
          pipelineJob('pipeline') {
              definition {
                  cpsScm {
                      scriptPath 'Jenkinsfile'
                      scm {
                        git {
                            remote { url 'https://github.com/mkrzyzanowski/blog-001.git' }
                            branch '*/docker-for-mac'
                            extensions {}
                        }
                      }
                  }
              }
          }
    

    These are the plugins that we are trying to install, as well as how we want our Jenkins set up. Here is the Dockerfile that takes care of installing the plugins for us.

    #Dockerfile
    FROM jenkins/jenkins:lts
    COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
    RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
    

    Once the image has been built, we need a way to let JCasC know the location of our configuration file (named jenkins.yaml in most cases). There are two options:

    • Copy the jenkins.yaml file to /var/jenkins_home/. JCasC looks for this file there by default.
    • Use the CASC_JENKINS_CONFIG environment variable to point to the configuration location, which can be any of these:
      • A file path (/my/path/jenkins.yaml)
      • A folder path (/my/path/jenkins_casc_configs/)
      • A configuration file URL (https://example.com/git/jenkins.yaml)

    For this example, I will mount the current directory, which contains jenkins.yaml, to /var/jenkins_home with Docker:

    $ docker run --name jenkins -d -p 8081:8080 -v $(pwd):/var/jenkins_home my_jenkins_image
    Running from: /usr/share/jenkins/jenkins.war
    webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
    May 08, 2019 12:00:19 AM org.eclipse.jetty.util.log.Log initialized
    INFO: Logging initialized @612ms to org.eclipse.jetty.util.log.JavaUtilLog
    May 08, 2019 12:00:19 AM winstone.Logger logInternal
    INFO: Beginning extraction from war file
    May 08, 2019 12:00:40 AM org.eclipse.jetty.server.handler.ContextHandler setContextPath
    WARNING: Empty contextPath
    May 08, 2019 12:00:40 AM org.eclipse.jetty.server.Server doStart
    INFO: jetty-9.4.z-SNAPSHOT; built: 2018-08-30T13:59:14.071Z; git: 27208684755d94a92186989f695db2d7b21ebc51; jvm 1.8.0_212-8u212-b01-1~deb9u1-b01
    May 08, 2019 12:00:47 AM org.eclipse.jetty.webapp.StandardDescriptorProcessor visitServlet
    INFO: NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
    May 08, 2019 12:00:47 AM org.eclipse.jetty.server.session.DefaultSessionIdManager doStart
    INFO: DefaultSessionIdManager workerName=node0
    May 08, 2019 12:00:47 AM org.eclipse.jetty.server.session.DefaultSessionIdManager doStart
    INFO: No SessionScavenger set, using defaults
    May 08, 2019 12:00:47 AM org.eclipse.jetty.server.session.HouseKeeper startScavenging
    INFO: node0 Scavenging every 660000ms
    Jenkins home directory: /var/jenkins_home found at: EnvVars.masterEnvVars.get("JENKINS_HOME")
    May 08, 2019 12:00:50 AM org.eclipse.jetty.server.handler.ContextHandler doStart
    INFO: Started w.@a50b09c{Jenkins v2.164.2,/,file:///var/jenkins_home/war/,AVAILABLE}{/var/jenkins_home/war}
    May 08, 2019 12:00:50 AM org.eclipse.jetty.server.AbstractConnector doStart
    INFO: Started ServerConnector@5a38588f{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
    May 08, 2019 12:00:50 AM org.eclipse.jetty.server.Server doStart
    INFO: Started @31513ms
    May 08, 2019 12:00:50 AM winstone.Logger logInternal
    INFO: Winstone Servlet Engine v4.0 running: controlPort=disabled
    May 08, 2019 12:00:53 AM jenkins.InitReactorRunner$1 onAttained
    INFO: Started initialization
    May 08, 2019 12:02:20 AM hudson.ClassicPluginStrategy createClassJarFromWebInfClasses
    WARNING: Created /var/jenkins_home/plugins/job-dsl/WEB-INF/lib/classes.jar; update plugin to a version created with a newer harness
    May 08, 2019 12:02:36 AM jenkins.InitReactorRunner$1 onAttained
    INFO: Listed all plugins
    May 08, 2019 12:02:58 AM jenkins.InitReactorRunner$1 onAttained
    INFO: Prepared all plugins
    May 08, 2019 12:02:58 AM jenkins.InitReactorRunner$1 onAttained
    INFO: Started all plugins
    May 08, 2019 12:03:09 AM jenkins.InitReactorRunner$1 onAttained
    INFO: Augmented all extensions
    May 08, 2019 12:03:10 AM io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator tryConstructor
    INFO: Setting class hudson.plugins.git.GitTool.name = Default
    May 08, 2019 12:03:10 AM io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator tryConstructor
    INFO: Setting class hudson.plugins.git.GitTool.home = git
    May 08, 2019 12:03:10 AM io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator tryConstructor
    INFO: Setting class hudson.tasks.Maven$MavenInstallation.name = Maven 3
    May 08, 2019 12:03:10 AM io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator tryConstructor
    INFO: Setting class hudson.tasks.Maven$MavenInstaller.id = 3.5.4
    May 08, 2019 12:03:10 AM io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator tryConstructor
    INFO: Setting class hudson.tools.InstallSourceProperty.installers = [{maven={}}]
    May 08, 2019 12:03:10 AM io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator tryConstructor
    INFO: Setting class hudson.tasks.Maven$MavenInstallation.properties = [{installSource={}}]
    May 08, 2019 12:03:11 AM io.jenkins.plugins.casc.Attribute setValue
    INFO: Setting hudson.model.Hudson@4fbfd7e4.systemMessage = I did this using Jenkins Configuration as Code Plugin 
    
    Processing provided DSL script
    May 08, 2019 12:03:15 AM javaposse.jobdsl.plugin.JenkinsJobManagement createOrUpdateConfig
    INFO: createOrUpdateConfig for pipeline
    May 08, 2019 12:03:16 AM io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator tryConstructor
    INFO: Setting class hudson.plugins.git.GitTool.name = Default
    May 08, 2019 12:03:16 AM io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator tryConstructor
    INFO: Setting class hudson.plugins.git.GitTool.home = git
    May 08, 2019 12:03:16 AM io.jenkins.plugins.casc.Attribute setValue
    INFO: Setting hudson.plugins.git.GitTool$DescriptorImpl@7d18607f.installations = [GitTool[Default]]
    ....
    
    

    Here is the screenshot of our newly configured Jenkins.

    Jenkins setup

    Happy Automation!!!