My Spring Internship at Red Hat and Things I Have Learned

Hey all! In this post I'm going to share my experience of interning at Red Hat so far. My internship started on 4th January 2021, so it has been around two months as I write this.
Firstly, I'll talk briefly about the interview process. It started for me in October, when I came to know about the opening and contacted the recruiter on LinkedIn. About a week later, I was asked to appear for an online round consisting of 5 aptitude questions and 3 coding questions. Within that week itself, I got the results and was called for a technical interview the next week. In this round, they focused on all the technical stuff, including my projects, problem-solving, and concepts related to operating systems, computer networks, etc. The week after that, I was called for another round. This one was more of a managerial round, with two interviewers, both very experienced. At the beginning of November, I came to know that I had been accepted into the intern program. Overall, I would rate the process as medium in terms of difficulty.
The internship started on the 4th of January, and I was made part of the Developer Engineering team. One thing that I really love about Red Hat is that they don't rush interns through the internship and devote enough time to our training. The training at Red Hat unfolds in two phases. The first one is common to the entire team and covers things like project management, open source, and an introduction to some technical topics like JS, Golang, etc. This phase goes on for about a month (January, in my case).
After that, based on our interests and past work, we are assigned to projects, some ongoing and some new, and then we go through training specific to that project. This again goes on for about a month. During both phases, we are taught something and then given small assignments to apply those concepts then and there, which of course leads to a much better understanding.
Those two months have now passed for me with a lot of learning, and I have been assigned to the DevSecOps team. There are five of us interns on that team. Our project mainly deals with building security into application delivery. Let's talk about it in detail some other day :)
During the project-specific training, we mainly focused on 4 things: containerization, Kubernetes, OpenShift, and Tekton pipelines. For now, I'll discuss two of these in brief and only at a conceptual level: containerization and Kubernetes. Let's take this discussion forward as a story.
As software products grew more complex, deploying and maintaining them became difficult and cumbersome. It was hard to arrange and maintain the required number of machines for a large project, and there were portability issues when different teams used different versions of components. So we had to come up with something that could do the job with fewer resources. People started with virtual machines, which are nothing but emulations of an actual computer: a host OS runs on the hardware, a number of guest OSes run on top of it, and a hypervisor sits between the two and is responsible for allocating resources to the guest OSes (all of this on the same physical machine). But this too turned out to be heavyweight and expensive in terms of efficiency; anything running on a guest OS had to keep going through the hypervisor for even the smallest bit of resources.

So people started thinking about something intrinsic to our existing operating systems that could be lighter than virtual machines. If we could somehow isolate a subsystem within our existing system, our job would be done! People started experimenting on Linux, and soon they could create an isolated system using the Linux file system by making a directory behave as if it were the root directory (chroot). All the important contents of a Linux system, like bash, were copied into that directory, and tada, a "system" was created. But... how is this thing even remotely isolated? It's not, yet, and that's where namespaces step in. Linux namespaces let us specify the territory of a component in a Linux system: what it can see of processes, mounts, the network, and so on. So we define (or restrict) the area of the system we created within our larger system, and that is how it gets isolated. But wait, what if one such system consumes all the available resources? Thankfully Linux has this thing called cgroups, which takes care of how far our created systems can tap into the general pool of resources.

All of this put together gives us what we know as containers. And if we observe, it was only possible because we had those features of Linux at our disposal, so I (and, yeah, everyone else) think we should call them "Linux Containers".
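If you want to play with these ideas yourself, here is a minimal Go sketch (Go being one of the languages we touched during training) that starts a shell inside new UTS, PID, and mount namespaces using clone flags from the Linux syscall package. The hostname and command are just placeholders I picked, it only works on Linux and needs root, and it is nowhere near a real container runtime; it only illustrates the namespace part of the story:

```go
// namespaces.go - a toy "container": run a shell in its own UTS, PID and
// mount namespaces. Linux only, needs root. Illustrative sketch, not a runtime.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: namespaces run")
		return
	}
	switch os.Args[1] {
	case "run":
		parent()
	case "child":
		child()
	}
}

func parent() {
	// Re-exec ourselves as "child" so the setup below happens inside the new namespaces.
	cmd := exec.Command("/proc/self/exe", "child")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// Ask the kernel for new UTS (hostname), PID and mount namespaces.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		fmt.Println("error:", err)
	}
}

func child() {
	// Changing the hostname here does not touch the host: we own our own UTS namespace.
	_ = syscall.Sethostname([]byte("toy-container"))
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	_ = cmd.Run()
}
```

Running it with `sudo go run namespaces.go run` and typing `hostname` in the shell shows the new name. To see the fresh PID namespace through `ps` you would also need to mount a new /proc, and to cap resources you would attach the process to a cgroup and chroot it into its own filesystem; I have left all of that out to keep the sketch short.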
Now if you tell developers that they have to do all of this by hand every time they want to build even the smallest thing (and believe me, it's harder than it sounds ;)), they will probably say "thanks, but no thanks". So a lot of people in this field started working on something that could do this whole process by itself and hand you an absolutely ready container. They succeeded, and now we see many flavours of that "something"; a very famous one is Docker. Another well-known one is Podman, built by Red Hat itself. They mainly differ in their architecture: Docker uses a client-server model with a daemon, whereas Podman is daemonless. But Docker is by far the most widely used. We simply write Dockerfiles, build images from them, and then run containers from those images. Let's talk about the layering of images and the copy-on-write aspect of container images in some other blog.
Now the question is, "do you expect me to write and run these Dockerfiles by hand for every container I want to run, every single time, even if there are a thousand of them?" Well, no. We have something called Docker Compose. But even Docker Compose is fairly limited; one of the major problems is that nothing watches your containers and reacts if something goes wrong. That is where Kubernetes comes into the picture. It is an orchestration tool; you can mostly see it as a system of APIs that continuously monitors your cluster and makes changes as required. Almost every problem we can think of while dealing with multiple containers is taken care of by our friend Kubernetes, and almost everything is under your control and can be configured using the well-known YAML files.
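To make that "system of APIs" a little more concrete, here is a minimal Go sketch using the client-go library to ask Kubernetes for three replicas of an nginx container. The deployment name, labels, image, and kubeconfig path are illustrative assumptions on my part; the point is that you hand the API server a desired state and Kubernetes keeps working to make reality match it, which is exactly the watching that plain Docker Compose lacks:

```go
// deploy.go - create a Deployment with three nginx replicas via client-go.
// Names, labels and the kubeconfig path are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the cluster connection details from ~/.kube/config.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	replicas := int32(3)
	labels := map[string]string{"app": "hello"}

	// Describe the desired state: 3 pods, each running one nginx container.
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "hello-deploy"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "nginx:1.21",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}

	// Hand the desired state to the API server; Kubernetes does the rest.
	_, err = clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment created")
}
```

This is morally the same as the usual `kubectl apply -f deployment.yaml`: the YAML file is just another way of writing the same desired state, and the control loops inside Kubernetes then keep reconciling the cluster towards it, restarting or rescheduling containers when something goes wrong.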
I have already gone beyond the expected length of this blog, so next time let's pick up from here and dive deep into Kubernetes. I'll try to crack it open (not literally) and we'll peek at what's inside ;). We'll also look at some of its downsides, though there aren't many.
I'll also try to cover other things like OpenShift, Tekton, etc. as I move through this journey (but that's not a promise as of now).
I am an ever-learning creature and may have gone wrong, both syntactically and semantically, at times in this article, so your feedback, corrections, and even concerns are most welcome and appreciated. Also, this is the very first blog I have ever written; it is just my small effort to blabber about things I think I know :). Feel free to get in touch. I didn't include any links or readings, but there is a lot you can learn about these topics, and everything is just a search away. So go ahead, dig in, and then maybe we can discuss it in the comments. Hope to see you reading my next blog, if I write one ;).