As a software engineering student, it can be daunting to even begin thinking about how large companies build and maintain software. Going from building simple apps with no users to an enterprise setting where multiple teams work in tandem on one app, new features ship constantly, and millions of users expect no downtime can be unfathomable to novice engineers (myself included).

In light of this, I set out to break down one of the most common software architecture styles that I encountered when researching how large companies build and maintain software.

At some point in our lives, we’ve all come across the idea that computers “speak” binary — a cryptic wall of 0s and 1s that somehow builds up to everything we see and do in our digital lives. Even as you read these words, the device you are using is somehow manipulating 0s and 1s to make it possible.
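To make that idea a little more concrete, here is a tiny Python sketch that peeks at the 0s and 1s behind a single character. The choice of the letter "A" is just an illustration; any character works the same way.

```python
# Every character on screen is ultimately stored as a number,
# and every number as a pattern of bits (0s and 1s).
char = "A"
code = ord(char)             # the character's code point: 65
bits = format(code, "08b")   # the same number written as 8 bits
print(char, code, bits)      # A 65 01000001
```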

As a software engineering student, and more generally as a curious person, I wanted to bridge the gap between the polished interfaces that I interact with on a daily basis and the underlying mechanisms at work to enable them. This is my attempt at creating a broad overview of how the various elements we encounter as we navigate operating systems, apps, games, etc. …

As an artist transitioning into computer science, nothing struck fear into my heart like seeing mathematical notation.

When it came to understanding the runtime of algorithms and how it scales with input size (time complexity), I was overwhelmed by the mathematical notation, coefficients, graphs, proofs, and concepts that seemed far more complex than they actually are. I thought I'd need to solve equations to effectively find the runtime of my functions. Thankfully, I discovered that for the most part the runtime of algorithms can be figured out intuitively by following a small set of rules.

First, a small crash course on time complexity. It goes by many names, which can get confusing, especially once you realize that there are multiple notations for it. Generally, when looking for the time complexity of an algorithm, we are looking for the upper bound (the worst case) of the growth rate of its runtime, so for our purposes, and in most cases, big O and time complexity can be used interchangeably. …
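As a rough sketch of that intuition, here are two Python functions (my own illustrative examples, not from any particular library) whose big O you can read straight off the structure of the code, no equations required:

```python
def find_max(items):
    """Looks at every element exactly once, so the work grows
    linearly with the input size: O(n)."""
    largest = items[0]
    for item in items:       # this loop runs n times for n items
        if item > largest:
            largest = item
    return largest

def first_item(items):
    """Does one step no matter how large the input is: O(1)."""
    return items[0]
```

The rule of thumb at work: a single pass over the input is O(n), while a fixed number of steps that ignores the input size is O(1).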

When I started to learn about software engineering, few things stumped me as much as recursion. I was able to grasp loops, data structures, and algorithms, but a succinct four-line recursive function left my brain short-circuiting.

Recursive functions are functions that call themselves, repeating the logic they encapsulate until (hopefully) an end condition, or base case, is met. Since a recursive function calls itself and repeats its own logic, without a base case it would theoretically repeat forever; in practice, you'll get a stack overflow once the function has been called too many times.
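Here is a minimal sketch of that shape using the classic factorial example in Python. Note how the base case is the very first thing checked:

```python
def factorial(n):
    # Base case: stops the recursion. Without this check, the calls
    # would pile up until Python raises RecursionError (its version
    # of a stack overflow).
    if n <= 1:
        return 1
    # Recursive case: the function calls itself with a smaller input,
    # moving one step closer to the base case each time.
    return n * factorial(n - 1)
```

For example, `factorial(5)` unfolds into `5 * 4 * 3 * 2 * 1`, bottoming out at the base case.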

The part that confused me the most about recursive functions was that I didn't understand what a recursive function was doing when it called itself. It just seemed to call itself once, and suddenly problems that would normally need loops and iteration were solved in one line. Recursion was a cryptic puzzle to me; software engineers tend to call functions that use it "elegant" because of their simplicity, but that was lost on me. Compared to the intuitive, explicit logic of a loop, all of the call-stack logic that makes recursion work is implicit rather than stated in the function itself, so it just seemed like some weird magic happened behind the scenes to make my functions work. …
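One way to demystify that hidden call stack is to log every call and return. This is a toy Python tracer I'm using purely for illustration; the indentation mirrors how deep the stack is at each moment:

```python
def trace_countdown(n, depth=0, log=None):
    """Count down recursively while recording each call and return,
    making the normally invisible call stack visible."""
    if log is None:
        log = []
    log.append(f"{'  ' * depth}enter countdown({n})")   # frame pushed
    if n > 0:
        trace_countdown(n - 1, depth + 1, log)          # deeper frame
    log.append(f"{'  ' * depth}leave countdown({n})")   # frame popped
    return log

for line in trace_countdown(2):
    print(line)
```

The output shows the frames stacking up as each call waits on the next, then unwinding in reverse order once the base case is hit: every `enter` eventually gets a matching `leave`.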
