# How to learn to solve coding interview questions using Leetcode (Part II)

# Introduction

This is Part II (see Part I) of learning how to solve coding interview questions. In this article series I am mainly sharing the notes I took over the years while preparing for coding interviews. These are not top-secret notes on how to pass an interview (I don't think such notes exist, apart from preparing), but rather a guide to help you approach your preparation with the right mentality.

# Dynamic Programming

There is really a lot to say here, but you can find so much material out there about dynamic programming, so I'll keep things practical and focus on how to solve some exercises.

I really liked this article because it tries to find patterns in how to approach a DP problem: it also clusters exercises so that you can practice each pattern. I also liked this other Medium article: use it to organise your knowledge about DP.

When approaching one-dimensional or multi-dimensional problems, usually try to use an array of length *n + 1*, with the first element initialised to 0. The idea is that *dp[i]* is the best value (or the best result) you could get if the problem stopped at length *i*, and *dp[i]* is always obtained from the previously computed results. The concept of *best* is really tied to dynamic programming: *best* means whatever guarantees that the problem would not be solvable any better if it stopped at position *i*.

*For example*: find whether a string like *“leetcodeleetleet”* can be decomposed using the dictionary *[“leet”, “code”]*.
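
A minimal bottom-up sketch of this first example in Python (the function name is mine): *dp[i]* tells whether the prefix of length *i* can be decomposed.

```python
def can_decompose(s, words):
    """dp[i] is True when the prefix s[:i] can be split into dictionary words."""
    dp = [False] * (len(s) + 1)
    dp[0] = True  # base case: the empty prefix is always decomposable
    for i in range(1, len(s) + 1):
        for w in words:
            # the prefix of length i ends with w, and the rest was decomposable
            if i >= len(w) and dp[i - len(w)] and s[i - len(w):i] == w:
                dp[i] = True
                break
    return dp[len(s)]
```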

Or, *for example*: find the cheapest combination of tickets, given that you can buy a 1-day, 7-day or 30-day ticket and that you know in advance (as an array) which days you will travel.
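
A possible Python sketch of the ticket problem (names and signature are my own): *dp[d]* is the cheapest way to cover all travel up to day *d*.

```python
def min_ticket_cost(travel_days, costs):
    """costs = (1-day price, 7-day price, 30-day price).
    dp[d] is the minimum spend covering every travel day up to day d."""
    days = set(travel_days)
    last = max(travel_days)
    dp = [0] * (last + 1)
    for d in range(1, last + 1):
        if d not in days:
            dp[d] = dp[d - 1]  # no travel today: nothing new to pay
        else:
            dp[d] = min(dp[d - 1] + costs[0],          # 1-day ticket
                        dp[max(0, d - 7)] + costs[1],  # 7-day ticket
                        dp[max(0, d - 30)] + costs[2]) # 30-day ticket
    return dp[last]
```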

**Always try to start with the easiest subproblem (bottom-up approach).** Usually this requires two nested for loops, *O(n²)*, and therefore two indexes i and j.

The solution at index i is usually the best that you get by moving the index j between 0 and i.

Sometimes you need a two- or three-dimensional array, and in harder cases you need to go up to O(n³). Other times the elements of the array could actually be maps.

*For example:* you need to find the longest arithmetic subsequence (the one where all consecutive differences are the same). In this case, for every element you compute the difference with each of the previous elements: if that difference has already appeared, you extend the subsequence associated with it. You can either keep a single map (**note**: in that case you have to handle specially the arithmetic subsequence where all numbers are the same) or keep a DP array where every cell is itself a map. This and that are related examples.
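
Here is how the array-of-maps variant might look in Python (a sketch, not taken from any particular solution):

```python
def longest_arith_seq(nums):
    """dp[i] maps a difference d to the length of the longest arithmetic
    subsequence ending at index i with common difference d."""
    dp = [dict() for _ in nums]
    best = 1 if nums else 0
    for i in range(len(nums)):
        for j in range(i):
            d = nums[i] - nums[j]
            # extend the sequence ending at j with diff d, or start a new pair
            dp[i][d] = dp[j].get(d, 1) + 1
            best = max(best, dp[i][d])
    return best
```

Note that the all-equal case (difference 0) falls out naturally here, because each cell has its own map.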

This is another example of breaking the problem into subproblems and applying the same routine.

When a problem contains a matrix, check whether there is a pattern where you aggregate rows in all possible contiguous ways: this takes O(n²) combinations (for example: row 0 alone; then row 1 alone and row 1 with row 0; then row 2 alone, row 2 with row 1, and row 2 with row 1 and row 0; …). This reduces the problem from a matrix to an array.

A lesson I learned is that sometimes you model the problem so that you need one for loop over the possible interval sizes, and another for loop over the length of your input that tries all the possible intervals of that size…

Typical 2D matrices look like:

- One dimension represents the length of String S, the other represents the different type of values it can take
- One represents the length of String S, the other represents the start and end interval
- Both dimensions represent the length of string S

Another tip is to try to build the DP either from the start (from left) or from the end (from right).

## Paint Houses (exercise)

Use dynamic programming: the subproblem here is to find the best choice between two adjacent houses, and then move on to the next house, comparing it with the previously obtained best result.

The first house has no constraints, so you leave all the possibilities open. **Important**: you save the best and *second-best* minimum.

At the second house, for every colour you add its cost to the previous best minimum OR (if the best used that same colour) to the second best. Then you compute the new best and second best and proceed with the third house.

The result will be the minimum of the array of *k* colours after the last round.
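
Putting the steps above together, a possible Python sketch (assuming at least two colours; names are mine) is:

```python
def min_paint_cost(costs):
    """costs[i][c] = cost of painting house i with colour c; adjacent houses
    must use different colours. Tracking the best and second-best totals of
    the previous house keeps each step at O(k)."""
    prev = costs[0][:]  # first house: every colour stays open
    for row in costs[1:]:
        order = sorted(range(len(prev)), key=lambda c: prev[c])
        best_c, second_c = order[0], order[1]
        curr = []
        for c, cost in enumerate(row):
            # take the previous best, unless it used this same colour
            base = prev[best_c] if c != best_c else prev[second_c]
            curr.append(cost + base)
        prev = curr
    return min(prev)
```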

Alternatively, you can use recursion with memoization.

## Towers of Hanoi

You have 3 stacks, and the trick is to start from the base problems and repeat them, flipping the roles of the three towers.

- Moving one disk from T1 to T3 is trivial
- Moving two disks from T1 to T3 is trivial
- Moving three disks is equivalent to:
    - Move two disks from T1 to T2 (it is trivial)
    - Move the remaining disk from T1 to T3
    - Move two disks from T2 to T3 (it is trivial)
- And so on…
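
The role-flipping above can be sketched recursively in Python (function and variable names are mine):

```python
def hanoi(n, source, spare, target, moves):
    """Move n disks from source to target, recording each move as a pair."""
    if n == 0:
        return
    hanoi(n - 1, source, target, spare, moves)  # clear the top n-1 disks onto the spare
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, source, target, moves)  # bring the n-1 disks back on top

moves = []
hanoi(3, "T1", "T2", "T3", moves)  # 2**3 - 1 = 7 moves in total
```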

## Best time to sell/buy

If you are free to buy and sell anytime you want (as long as you have at most one active transaction at a time), the problem is actually very easy. You keep summing every difference where arr[t+1] > arr[t] and you restart the moment the price drops: the idea is that repeatedly buying and selling simulates a single transaction over an interval [t, t+k] such that selling before or after t+k would lose profit.
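
The greedy idea fits in a couple of lines of Python (the function name is mine):

```python
def max_profit(prices):
    """Sum every positive day-to-day difference: equivalent to buying at each
    local minimum and selling at the next local maximum."""
    return sum(max(prices[t + 1] - prices[t], 0) for t in range(len(prices) - 1))
```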

A bit more formally, you can still use a DP strategy by keeping a bi-dimensional array (the 2nd dimension has size 2: one entry tracks the maximum profit as if you were holding a stock today, and one as if you were not) → this is still O(n), but it is more machinery than this problem really needs.

## Regular Expression Matching

Using the characters * and ? to respectively match any sequence of characters (even an empty one) and match a single char.

The strategy can be either with recursion + memoization or DP.

The recursion is easy to understand: use two pointers, one per string. If the characters match or *pattern[i] == ?*, move both ahead; if they don’t match, return false. If *pattern[i] == **, you have the option of moving both ahead, moving only the pattern ahead, or moving only the target ahead. You can use memoization to cache the results of the substring subproblems.

For DP, as usual, it is probably a bit trickier: you need a matrix and the following mindset: *compute whether the pattern and the target match up to indexes i and j*.

That is why you need a matrix of size *(n + 1) × (m + 1)*, where n is the length of the pattern and m the length of the target (the extra row and column model the *empty string* cases).

**You really need the extra length to model the empty string** → it is essential for the algorithm to succeed → in DP the base case is always needed.
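
A possible Python sketch of the DP table (names are mine; I use the (n + 1) × (m + 1) matrix with the empty-string row and column):

```python
def is_match(target, pattern):
    """dp[i][j] is True when pattern[:i] matches target[:j].
    '*' matches any sequence (even empty), '?' matches exactly one char."""
    n, m = len(pattern), len(target)
    dp = [[False] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = True  # base case: empty pattern matches empty target
    for i in range(1, n + 1):
        # a leading run of '*' can still match the empty target
        dp[i][0] = dp[i - 1][0] and pattern[i - 1] == "*"
        for j in range(1, m + 1):
            if pattern[i - 1] == "*":
                # '*' matches nothing (dp[i-1][j]) or one more char (dp[i][j-1])
                dp[i][j] = dp[i - 1][j] or dp[i][j - 1]
            elif pattern[i - 1] == "?" or pattern[i - 1] == target[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
    return dp[n][m]
```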

## Trapping Rain

Problem here.

There are different strategies. The most intuitive is dynamic programming: you precompute the highest bar to the left and the highest bar to the right of each position; then for every position you “look” left and right, take the minimum of the two heights, and see how much water you can store on that cell.
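
A Python sketch of this DP strategy (names are mine):

```python
def trap_dp(height):
    """Precompute the tallest bar to the left and to the right of every cell;
    the water above cell i is min(left_max, right_max) - height[i]."""
    n = len(height)
    if n == 0:
        return 0
    left = [0] * n   # left[i] = tallest bar in height[0..i]
    right = [0] * n  # right[i] = tallest bar in height[i..n-1]
    left[0] = height[0]
    for i in range(1, n):
        left[i] = max(left[i - 1], height[i])
    right[n - 1] = height[n - 1]
    for i in range(n - 2, -1, -1):
        right[i] = max(right[i + 1], height[i])
    return sum(min(left[i], right[i]) - height[i] for i in range(n))
```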

Another strategy (a bit trickier) uses a **stack** (storing indexes): you keep pushing onto the stack every time the new cell has a bar lower than the one at the top, without doing anything else. When you find a cell with a higher bar, you know that the one at the top of the stack can certainly hold water, but you don’t know exactly how much (you know the right boundary, i.e. the current bar, but you are not 100% sure about the highest bar on the left). The trick is to think in terms of *horizontal rectangles*: in a while loop you pop the element from the stack and get its height, then you multiply *the height difference with the new stack.peek() bar* by *the distance from the current position to the new top*. That is how you get the total by summing up *horizontal rectangles*. This is just a different perspective on the same problem.

Another solution that saves space is the **two-pointer** approach, based on the intuition that if the right side has a higher bar than the left side of a cell *i*, then cell *i*’s contribution to the trapped water depends only on the left side… and vice versa (the contribution is 0 if the current cell has a higher bar than the left maximum, in which case it becomes *the new* left-side maximum). This process repeats from both ends, always advancing the side with the lower maximum.
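
And a Python sketch of the two-pointer variant (again, names are mine):

```python
def trap_two_pointers(height):
    """Advance from the side with the lower bar: that side's water level is
    bounded by its own running maximum, so it can be settled immediately."""
    left, right = 0, len(height) - 1
    left_max = right_max = water = 0
    while left < right:
        if height[left] < height[right]:
            left_max = max(left_max, height[left])
            water += left_max - height[left]  # 0 when this bar is the new max
            left += 1
        else:
            right_max = max(right_max, height[right])
            water += right_max - height[right]
            right -= 1
    return water
```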

# Recursion and backtracking

Recursion together with backtracking is another huge topic. Once again, you can find a lot of theory out there about what it means and why it is so important in computer science.

Bear in mind that recursion has some limitations in real scenarios: because of the way it is implemented (a function that calls itself, hence a new stack frame allocated per call), stack-overflow issues can easily arise.

When thinking about recursion, try to define the recursive formula first; that will help in creating the algorithm.

**Recursion and backtracking usually work together.**

A good strategy is to pass the “partial result” down to the subproblem so that, in the end, the final subproblem (the empty one) can reach a conclusion and, if needed, update the list of valid results. Alternatively, recursion can work in the opposite direction, **returning a value** so the bigger problem decides what to do.
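
As an illustration of the “pass the partial result down” strategy, here is a generic subsets generator in Python (a sketch; names are mine):

```python
def subsets(nums):
    """Pass the partial result down; record a copy of it at every node."""
    results = []

    def backtrack(start, partial):
        results.append(partial[:])       # every partial path is a valid subset
        for i in range(start, len(nums)):
            partial.append(nums[i])      # choose
            backtrack(i + 1, partial)    # explore
            partial.pop()                # un-choose (backtrack)

    backtrack(0, [])
    return results
```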

I will just focus on sample questions that may need it.

## Find k groups of digits that sum to the same number?

How can you group array elements into sub-arrays such that their sum is the same?

In this case position does not matter, so trying a *cumulative sum approach* does not help. This is a case where you should try all the possible combinations (read up on what a **power set** is).
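
A possible backtracking sketch in Python (names and the bucket-pruning details are mine):

```python
def can_partition_k(nums, k):
    """Try to fill k buckets, each up to sum(nums)/k, with a depth-first
    search over the possible placements, backtracking on failure."""
    total = sum(nums)
    if k <= 0 or total % k:
        return False
    target = total // k
    nums.sort(reverse=True)  # place big numbers first to fail (and prune) early
    buckets = [0] * k

    def dfs(i):
        if i == len(nums):
            return True  # every number placed, all buckets are exactly full
        seen = set()
        for b in range(k):
            if buckets[b] + nums[i] <= target and buckets[b] not in seen:
                seen.add(buckets[b])  # skip buckets with identical loads
                buckets[b] += nums[i]
                if dfs(i + 1):
                    return True
                buckets[b] -= nums[i]  # backtrack
        return False

    return dfs(0)
```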

**Depth-first search**

Depth-first search is the most common way (together with breadth-first search) to explore a graph.

Recursion and depth-first search usually come together, since the most common (if not the only practically used) way to implement depth-first search is recursion.

## Recursive Multiply

You can see it as the sum of n1 repeated n2 times

**OR** you can see it as

*n1 * n2 = ((n1 >> 1) * n2) + ((n1 >> 1) * n2), plus an extra n2 if n1 % 2 == 1*

You can apply this recursively because n1 gets smaller and smaller until you reach the base case (either 1 or 0) → you should make n1 the smaller of the two numbers.
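
A Python sketch of the halving recursion (assuming non-negative integers; names are mine):

```python
def multiply(n1, n2):
    """n1 * n2 = ((n1 >> 1) * n2) + ((n1 >> 1) * n2), plus n2 if n1 is odd.
    Assumes non-negative integers."""
    if n1 > n2:
        n1, n2 = n2, n1  # recurse on the smaller factor
    if n1 == 0:
        return 0
    if n1 == 1:
        return n2
    half = multiply(n1 >> 1, n2)
    return half + half + (n2 if n1 % 2 == 1 else 0)
```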

## Find all possibilities to add **binary** operators to get a number

Example here.

When you have to find all the possible combinations between a set of digits and a set of operators, the problem tends to be recursive, but its complexity can vary based on the requirements.

The **easy** proposal is: every number is a single digit and you have + and - only.

The **medium** proposal is: every number is a single digit and you have +, - and *

The **hard** proposal is: a number can be composed by aggregating digits, and you have +, - and *

The easy version is solved by calling the recursive function with the first value already set (you have no choice there…). In the recursion, you either add value[i] or subtract value[i] and call the same function for i+1 with the updated runningSum. When all values have been used, you check whether the sum equals the target. Otherwise, you **backtrack**.

The medium version is like the easy one, but you need to take operator precedence into account. Because there are no parentheses, if you have just computed 1 + 4 and at the next step you decide to apply * 2, the expression should really be 1 + (4 * 2); if the algorithm is written naively, it computes (1 + 4) * 2 instead. To handle this, we call the recursive function while also passing the last operand we used (in this case +4; if we had subtracted, it would be -4). That way, when we have to multiply, we first revert the *lastOperand* and then reapply it multiplied by the current value (e.g.: 8 - 4 → runningSum = 4, lastOperand = -4; then * 2 → (4 - (-4)) + ((-4) * 2) = 0).

The hard problem builds on top of the medium one, with the addition that forming a multi-digit number is treated like an operator. The trick is to pass the recursive function a variable called currentOperand, initially set to 0: in the recursive function you always build currentOperand * 10 + currentDigit, so the first digit is elegantly covered too.
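
A possible Python sketch of the hard variant (names are mine; instead of a currentOperand accumulator, I build multi-digit numbers by trying every chunk length, which is an equivalent formulation):

```python
def add_operators(digits, target):
    """Insert '+', '-' or '*' (or nothing: multi-digit numbers) between
    digits so that the expression evaluates to target."""
    results = []

    def backtrack(pos, expr, running, last_operand):
        if pos == len(digits):
            if running == target:
                results.append(expr)
            return
        for end in range(pos + 1, len(digits) + 1):
            chunk = digits[pos:end]
            if len(chunk) > 1 and chunk[0] == "0":
                break  # no numbers with leading zeros
            val = int(chunk)
            if pos == 0:
                backtrack(end, chunk, val, val)  # first operand: no operator
            else:
                backtrack(end, expr + "+" + chunk, running + val, val)
                backtrack(end, expr + "-" + chunk, running - val, -val)
                # revert the last operand, then reapply it multiplied
                backtrack(end, expr + "*" + chunk,
                          running - last_operand + last_operand * val,
                          last_operand * val)

    backtrack(0, "", 0, 0)
    return results
```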

## Permutation

Use a binary mask, or an array of booleans, to indicate which elements have already been used, and backtrack on it.

To avoid duplicates in the permutations, you can sort the array first and then:

- If *curr_num == prev_num* and *prev_num* has not been used in the current permutation trial, then exploring this path will result in a repeated permutation, because an earlier branch already went through the same case.

To build the solution:

- You either pass the partially built solution to the subproblem and let it append its part, **OR** you take the finished solution of your subproblem and append your data to it. (The thing to pay attention to here is that in the terminating case you must not return null or a list with 0 elements, because otherwise there would be no list to append things to… instead you return a list containing one empty solution.)
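
A Python sketch of deduplicated permutations (names are mine; the partial result is passed down, as in the first option above):

```python
def permute_unique(nums):
    """Skip a value when an equal predecessor is unused in this path: that
    branch is already explored starting from the predecessor."""
    nums = sorted(nums)  # sorting puts duplicates next to each other
    used = [False] * len(nums)
    results = []

    def backtrack(partial):
        if len(partial) == len(nums):
            results.append(partial[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            if i > 0 and nums[i] == nums[i - 1] and not used[i - 1]:
                continue  # duplicate branch: skip it
            used[i] = True
            partial.append(nums[i])
            backtrack(partial)
            partial.pop()   # backtrack
            used[i] = False

    backtrack([])
    return results
```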

# Memoization

**Memoization** comes in handy when you know that the answer to a subproblem DOES NOT depend on how you arrived at the current position. This is important, and typically it requires using a matrix as a memo. **Memoization is like dynamic programming**; the only difference is the *top-down* vs *bottom-up* approach.
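
A tiny Python illustration (the problem choice is mine): counting monotone paths in a grid. The answer for a cell does not depend on the path taken to reach it, so caching is safe.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the cache plays the role of the memo matrix
def unique_paths(rows, cols):
    """Number of right/down paths across a rows × cols grid; the result for
    (rows, cols) is path-independent, so it is computed at most once."""
    if rows == 1 or cols == 1:
        return 1  # base case: a single row or column has exactly one path
    return unique_paths(rows - 1, cols) + unique_paths(rows, cols - 1)
```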