Sunday, February 23, 2020

k8s lingo

k8s basic objects

  1. pods:  encapsulates an app’s container(s), storage, IP, and options
  2. service:  abstract way to expose an app running on a set of Pods as a network service (the pods could come from a deployment or whatever). as part of the 'spec' there is a 'selector' which targets pods by label (see the example Service yaml after this list).
    1. clusterIP:  only available intra cluster
    2. nodePort:  Exposes the Service on each Node’s IP at a static port
      (implies clusterIP too)
    3. loadBalancer:  Exposes the Service externally using a cloud provider’s load balancer.
      (implies nodePort+clusterIP too)
    4. externalName:  Maps the Service to the contents of the externalName field (returns that CNAME instead of the clusterIP of the service).
  3. volume:    offers some persistence  across restarts and sharing across containers in a pod. lots  of volume type choices.
  4. namespace:  scopes names/resources into virtual clusters on top of the same physical cluster.
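
For the service 'selector' mentioned above, a minimal sketch of what the manifest looks like (the app/port names here are made up, not from a real cluster):

apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: ClusterIP          # or NodePort / LoadBalancer
  selector:
    app: my-app            # routes to any pod labeled app=my-app
  ports:
  - port: 80               # port the service exposes
    targetPort: 8080       # port the container actually listens on
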
k8s higher-level  objects
  1. deployment:  kinda covers  pods+replicasets.   like 'what to run and  how many'
  2. daemonSet:  ensures that all (or some) Nodes run a copy of a Pod.  think logging/metrics daemons per node.
  3. statefulSet:  for managing stateful apps.  provides guarantees about the ordering and uniqueness of these Pods.  sticky identity per pod.
  4. replicaSet:  maintain a stable set of replica Pods running at any given time.  you may never need this....its kinda baked into a Deployment.
  5. job:  creates one or more Pods and ensures that a specified number of them successfully terminate.
  6. cronJob:  creates Jobs on a time-based schedule.

k8s Control Plane =  k8s Master + kubelet processes


Node -  is a worker machine. NOT inherently created by k8s!
Services on a node include [container runtime, kubelet, kube-proxy]


kubectl get nodes
kubectl describe node minikube
(Addresses (internal/external  ip),  Conditions(Resource Pressures), Capacity+Available,  Info(versions) )
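
if you only want a couple of those fields, jsonpath works too (node name 'minikube' assumed):
kubectl get node minikube -o jsonpath='{.status.capacity}'
kubectl get node minikube -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'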


k8s master runs a bunch of controllers via the kube-controller-manager (they watch the API server) ex: [nodeController, deploymentController, jobController, etc, etc]

minikube adventure: deployments 2 ways

Doing a deployment directly on the command line first,
then a deployment using a yaml.

https://kubernetes.io/docs/setup/learning-environment/minikube/
(and also  https://www.bmc.com/blogs/kubernetes-services/)


make deployment
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10

➜  ~ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   83d 
➜  ~ kubectl  get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
hello-minikube   1/1     1            1           7m1s 
➜  ~ kubectl  get pods       
NAME                              READY   STATUS    RESTARTS   AGE
hello-minikube-797f975945-44hlh   1/1     Running   0          7m10s


expose as service
4 types of services:  

  1. ClusterIP:  exposes the service on a cluster-internal IP.  You can reach the service only from within the cluster.
  2. NodePort:  exposes the service on each node’s IP at a static port. A ClusterIP service is created automatically, and the NodePort service will route to it. 
  3. LoadBalancer:  exposes the service externally using the load balancer of your cloud provider. The external load balancer routes to your NodePort and ClusterIP services, which are created automatically.
  4. ExternalName:  maps the service to the contents of the externalName field. It does this by returning a value for the CNAME record.

kubectl expose deployment hello-minikube --type=NodePort --port=8080

alternatively coulda done something  like:
kubectl expose deployment hello-world --type=ClusterIP --name=example-service
service "example-service" exposed
kubectl port-forward service/example-service 8080:8080 
minikube service hello-minikube --url

great. then i deleted that  service+deployment.
then made a yaml:
cat hello_minikube.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-minikube-deployment
  labels:
    app: hello-minikube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-minikube
  template:
    metadata:
      labels:
        app: hello-minikube
    spec:
      containers:
      - name: hello-minikube
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - containerPort: 80
then  apply yaml:
kubectl apply  -f hello_minikube.yaml 
deployment.apps/hello-minikube-deployment created
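
presumably the same expose + url dance from before works against this yaml-created deployment too (not re-run here; the service name defaults to the deployment name):
kubectl expose deployment hello-minikube-deployment --type=NodePort --port=8080
minikube service hello-minikube-deployment --url
curl $(minikube service hello-minikube-deployment --url)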





minikube, kubectl installation

Installation based off this page:  https://kubernetes.io/docs/tasks/tools/install-minikube/

minikube starts a VM then leverages kubeadm to set up k8s.

install kubectl (brew install kubectl)
install minikube  (brew install minikube)

FAIL:
minikube start --vm-driver=virtualbox
🎉  minikube 1.7.3 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.7.3
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'
🙄  minikube v1.5.2 on Darwin 10.12.6
💥  The existing "minikube" VM that was created using the "hyperkit" driver, and is incompatible with the "virtualbox" driver.
👉  To proceed, either:
      1) Delete the existing VM using: 'minikube delete'
      or
      2) Restart with the existing driver: 'minikube start --vm-driver=hyperkit'
💣  Exiting due to driver incompatibility


so choosing a different vm driver........

SEMI FAIL:

minikube start --vm-driver=hyperkit
😄  minikube v1.5.2 on Darwin 10.12.6
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Starting existing hyperkit VM for "minikube" ...
Waiting for the host to be provisioned ...
🐳  Preparing Kubernetes v1.16.2 on Docker '18.09.9' ...
🔄  Relaunching Kubernetes using kubeadm ...
Waiting for: apiserver
🏄  Done! kubectl is now configured to use "minikube"
⚠️  /usr/local/bin/kubectl is version 1.14.8, and is incompatible with Kubernetes 1.16.2. You will need to update /usr/local/bin/kubectl or use 'minikube kubectl' to connect with this cluster


so updating kubectl (installation)
Updating kubectl so it's compatible with the k8s just installed by minikube:
Warning: You are using macOS 10.12.
We (and Apple) do not provide support for this old version.
You will encounter build failures with some formulae.
Please create pull requests instead of asking for help on Homebrew's GitHub,
Discourse, Twitter or IRC. You are responsible for resolving any issues you
experience while you are running this old version.
==> Installing dependencies for kubernetes-cli: go
==> Installing kubernetes-cli dependency: go
==> Downloading https://dl.google.com/go/go1.13.8.src.tar.gz
######################################################################## 100.0%
==> Downloading https://storage.googleapis.com/golang/go1.7.darwin-amd64.tar.gz
Already downloaded: /Users/shawn/Library/Caches/Homebrew/downloads/ad0901a23a51bac69b65f20bbc8e3fe998bc87a3be91d0859ef27bd1fe537709--go1.7.darwin-amd64.tar.gz
==> ./make.bash --no-clean
==> /usr/local/Cellar/go/1.13.8/bin/go install -race std
==> Cloning https://go.googlesource.com/tools.git
Updating /Users/shawn/Library/Caches/Homebrew/go--gotools--git
==> Checking out branch release-branch.go1.13
Already on 'release-branch.go1.13'
Your branch is up to date with 'origin/release-branch.go1.13'.
HEAD is now at 65e3620a internal/telemetry: add the ability to flush telemetry data
==> go build
🍺  /usr/local/Cellar/go/1.13.8: 9,275 files, 414.1MB, built in 7 minutes 11 seconds
==> Installing kubernetes-cli
==> Cloning https://github.com/kubernetes/kubernetes.git
Updating /Users/shawn/Library/Caches/Homebrew/kubernetes-cli--git
From https://github.com/kubernetes/kubernetes
 * [new tag]               v1.17.3    -> v1.17.3
==> Checking out tag v1.17.3
Previous HEAD position was 59603c6e50... Merge pull request #87334 from justaugustus/cl-117-bump-tag
HEAD is now at 06ad960bfd... Release commit for Kubernetes v1.17.3
HEAD is now at 06ad960bfd Release commit for Kubernetes v1.17.3
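
after the upgrade, a quick sanity check that the client and server versions line up:
kubectl version --short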








minikube status
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured




Saturday, February 22, 2020

Docker - entrypoint.sh and alternatives and a bit of gosu & exec


Entrypoint scripts can be used to bootstrap a container and then hand off control to a long-running process.   (maybe using exec+gosu)
https://success.docker.com/article/use-a-script-to-initialize-stateful-container-data

(the exec Bash command can be used so that the final running application becomes the container's PID 1. This allows the application to receive any Unix signals sent to the container. exec replaces the shell with the given program, executing it in place rather than as a new process.)

(The core use case for gosu is to step down from root to a non-privileged user during container startup (specifically in the ENTRYPOINT, usually). there are lighter gosu alternatives, e.g. su-exec.)
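
a minimal sketch of that entrypoint pattern (assumes the image already has an 'appuser' account and gosu installed):

#!/bin/sh
set -e

# one-time bootstrap work as root, e.g. fix ownership on a mounted volume
chown -R appuser:appuser /data

# drop to the unprivileged user and replace this shell with the real app,
# so the app becomes PID 1 and receives SIGTERM etc. directly
exec gosu appuser "$@"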



An alternative:
These guys execute "run-parts" on a startup folder.
That way containers derived from a base container can just dump other startup scripts in that folder.
https://www.camptocamp.com/en/actualite/flexible-docker-entrypoints-scripts/
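
roughly, that flavor of entrypoint looks like this (the /docker-entrypoint.d folder name is just an assumption; derived images drop extra scripts into it):

#!/bin/sh
set -e

# run the executable scripts in the startup folder (run-parts goes in lexical order)
run-parts /docker-entrypoint.d

# then hand off to the container's CMD
exec "$@"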


Chamber & Vault - Opensource tools for Aws Secrets & Creds

Chamber (of  Secrets)
https://github.com/segmentio/chamber
Chamber is a tool for managing secrets.
Currently it does so by storing secrets in SSM Parameter Store, an AWS service for storing secrets.
(read, write, delete, list, populate-enviro-vars-from-secrets-and-run-program).
Clearly you need to be authenticated to work with SSM-PS and they recommend using AwsVault.
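
typical chamber usage looks something like this (service/key names made up):
chamber write myservice db_password supersecret
chamber list myservice
chamber exec myservice -- ./start-server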


AWS-Vault
https://github.com/99designs/aws-vault
A vault for securely storing and accessing AWS credentials in development environments.
AwsVault uses Amazon's STS service to generate temporary credentials via the GetSessionToken or AssumeRole API calls.
These expire in a short period of time, so the risk of leaking credentials is reduced.
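
e.g. (profile/service names made up):
aws-vault add myprofile
aws-vault exec myprofile -- aws s3 ls
aws-vault exec myprofile -- chamber exec myservice -- ./start-server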


SSM Parameter Store !=  AWS Secrets Manager
SSM Parameter Store has optional encryption  using  KMS.
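
the KMS-encrypted flavor is just a SecureString parameter, e.g. (names made up):
aws ssm put-parameter --name /myservice/db_password --value 'supersecret' --type SecureString
aws ssm get-parameter --name /myservice/db_password --with-decryption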

Monday, January 21, 2019

Terraform and Automation


  1. https://medium.com/terrahub/introducing-terrahub-io-devops-hub-for-terraform-b7856f96d665
  2. https://learn.hashicorp.com/terraform/development/running-terraform-in-automation#automated-workflow-overview  (core loop sketched below)
  3. https://github.com/robmorgan/terraform-rolling-deploys
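
the gist of link 2 (terraform in automation) is a non-interactive plan/apply loop, roughly:
terraform init -input=false
terraform plan -out=tfplan -input=false
terraform apply -input=false tfplan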




Thursday, December 27, 2018

Spinnaker vs Terraform

Essentially, use Terraform to set up your infrastructure and use Spinnaker to manage your deployments. You CAN use Terraform to accomplish Spinnaker tasks but it would require a lot of glue-code that is difficult to copy for other tasks. We see a lot of companies attempting to extend Terraform to do Spinnaker’s tasks when they can simply use Spinnaker.
Where Spinnaker Excels Over Terraform:
  • AMI Creation
  • Deployment strategies out of the box:
    • Highlander
    • Blue/Green (also called Red/Black)
    • Custom
    • Canary

Where Terraform Excels Over Spinnaker:

  • Non-deployment related resource creation and management
    • VPCs
    • Route 53 entries
    • Subnets
    • S3 buckets
    • SQS
    • SNS
    • Etc.
  • One time creation of resources
  • Large scale modification of resources
  • Lambda
  • Defining and managing your infrastructure as code
Source:
https://blog.armory.io/comparing-terraform-and-spinnaker/