1. What is Kubernetes?
Kubernetes is an open-source system for deploying and managing containerized applications across multiple hosts. It provides the basic mechanisms for deployment, maintenance, and scaling of applications.
2. What are the benefits of using Kubernetes?
- Simplifying application deployment
Because Kubernetes exposes all its worker nodes as a single deployment platform, application developers can start deploying applications on their own and don’t need to know anything about the servers that make up the cluster.
- Achieving better utilization of hardware
By using containers and not tying the app down to a specific node in your cluster, you’re allowing the app to freely move around the cluster at any time, so the different app components running on the cluster can be mixed and matched to be packed tightly onto the cluster nodes. This ensures the nodes’ hardware resources are utilized as well as possible.
- Health checking and self-healing
Kubernetes monitors your app components and the nodes they run on, and automatically reschedules them to other nodes in the event of a node failure. This frees the ops team from having to migrate app components manually and lets them focus immediately on fixing the node itself and returning it to the pool of available hardware resources. If your infrastructure has enough spare resources to allow normal system operation even without the failed node, the ops team doesn’t even need to react to the failure immediately, such as at 3 a.m. They can sleep tight and deal with the failed node during regular work hours.
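As a minimal sketch of how health checking is declared, a pod can carry a liveness probe; when the probe fails, the kubelet restarts the container. All names, images, and paths below are hypothetical:

```yaml
# Hypothetical pod manifest: the kubelet restarts the container
# whenever the HTTP liveness probe fails repeatedly.
apiVersion: v1
kind: Pod
metadata:
  name: web-app                       # hypothetical name
spec:
  containers:
  - name: web
    image: example.com/web-app:1.0    # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz                # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 15         # wait before the first probe
      periodSeconds: 10               # probe every 10 seconds
```

Node-level failures are handled separately: controllers reschedule the pods of a failed node onto healthy ones.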
- Automatic scaling
Using Kubernetes to manage your deployed applications also means the ops team doesn’t need to constantly monitor the load of individual applications to react to sudden load spikes. As previously mentioned, Kubernetes can be told to monitor the resources used by each application and to keep adjusting the number of running instances of each application.
If Kubernetes is running on cloud infrastructure, where adding additional nodes is as easy as requesting them through the cloud provider’s API, Kubernetes can even automatically scale the whole cluster size up or down based on the needs of the deployed applications.
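The per-application scaling described above can be expressed with a HorizontalPodAutoscaler. This is a hedged sketch; the Deployment name and thresholds are hypothetical:

```yaml
# Hypothetical HorizontalPodAutoscaler: keeps average CPU usage
# around 50% by scaling the target Deployment between 2 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app            # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```

Cluster-level autoscaling (adding or removing nodes) is a separate mechanism provided by the cloud integration.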
- Simplifying application development
Because apps run in the same environment both during development and in production, bugs are discovered sooner. We all agree that the sooner you discover a bug, the easier it is to fix, and fixing it requires less work. It’s the developers who do the fixing, so this means less work for them.
3. What components make up a Kubernetes cluster?
A cluster is split into the Control Plane (master) and the worker nodes. The Control Plane consists of the API server, the Scheduler, the Controller Manager, and etcd. Each worker node runs the Kubelet, the kube-proxy, and a container runtime.
4. What is etcd?
etcd is a reliable, distributed data store that persistently stores the cluster configuration.
5. What is the role of the Scheduler and the Controller Manager in a Kubernetes cluster?
The Scheduler schedules your apps, assigning a worker node to each deployable component of your application. The Controller Manager performs cluster-level functions, such as replicating components, keeping track of worker nodes, and handling node failures.
6. What is the role of the API server in a Kubernetes cluster?
The Kubernetes API Server is the central component that you and the other Control Plane components communicate with.
7. What Kubernetes components need to be running on worker nodes?
The Kubelet and kube-proxy should be running on every worker node. The Kubelet manages the container runtime, which can be any supported container solution (e.g., Docker, rkt, etc.).
8. What is kubectl, and why do we need it?
kubectl is the client through which we communicate with the API server component of the Kubernetes master. It provides a command-line interface (CLI) for running commands against a Kubernetes cluster.
9. What do you understand by a pod?
A pod is the smallest deployable unit in a Kubernetes cluster. It is a group of one or more containers (such as Docker containers) with shared storage and network, and a specification for how to run the containers.
10. Why do we need pods when we can run containers without them?
We can run containers individually, but consider the case where two containers need to run together in the same Linux namespaces, so that they are not fully isolated from each other. A pod is a group of one or more tightly related containers that always run together on the same worker node and in the same Linux namespace(s). Containers in the same pod therefore share the same hostname, IP address, and so on.
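A multi-container pod of this kind can be sketched as follows; the names and images are hypothetical, and the key point is that both containers share the pod’s network namespace:

```yaml
# Hypothetical pod with two tightly coupled containers. Both share
# the pod's network namespace, so the sidecar can reach the app
# over localhost without any extra networking.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar                # hypothetical name
spec:
  containers:
  - name: app
    image: example.com/app:1.0          # hypothetical image
    ports:
    - containerPort: 8080
  - name: log-shipper                   # hypothetical sidecar container
    image: example.com/log-shipper:1.0  # hypothetical image
    # talks to the app at localhost:8080, since they share one IP
```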
11. Can a pod containing two containers run them on two different worker nodes?
No. Containers in the same pod always run on the same worker node.
12. What type of networking is configured between pods?
Pods communicate over a flat, NAT-less network: each pod gets its own IP address and can reach every other pod directly by that address.
13. How do you check how many pods are running, and on which nodes they are running?
The following command lists the pods along with the node each one is running on:
# kubectl get po -o wide
14. What are orphan pods?
Pods generally run under a controller (such as a ReplicaSet or Deployment) that takes care of them: when a pod is deleted, the controller recreates it, so pods are under continuous supervision. Pods created individually, with no controller supervising them, are considered orphan pods.
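A supervised (non-orphan) pod can be sketched with a Deployment; the controller recreates any of its pods that die or are deleted. The name, label, and image below are hypothetical:

```yaml
# Hypothetical Deployment: the controller keeps 3 replicas of the pod
# template running and recreates any that are deleted, so these pods
# are never orphans.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                         # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app                      # hypothetical label
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: example.com/web-app:1.0  # hypothetical image
```

By contrast, a pod created directly with `kind: Pod` has no controller watching it and is not recreated if it disappears.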
15. What are namespaces in Kubernetes?
Namespaces allow you to split complex systems with numerous components into smaller, distinct groups. They can also be used for separating resources in a multi-tenant environment, splitting resources into production, development, and QA environments, or in any other way you may need.
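As a minimal sketch of the environment-splitting use case, a Namespace is itself a Kubernetes object, and resources are placed into it via their `namespace` field. The namespace and pod names here are hypothetical:

```yaml
# Hypothetical Namespace for a QA environment, plus a pod deployed
# into it. Resources in different namespaces are grouped separately.
apiVersion: v1
kind: Namespace
metadata:
  name: qa                            # hypothetical environment namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: web-app                       # hypothetical name
  namespace: qa                       # places the pod in the qa namespace
spec:
  containers:
  - name: web
    image: example.com/web-app:1.0    # hypothetical image
```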