ChaosDB Vulnerability

This is commentary on work done by the Wiz Research Team published here. You should read that article carefully before continuing. I was motivated to write this post because the incident provides a great example of how the theory of security best practices in software development relates to the ground reality of how attackers infiltrate systems.

Trust Minimization

Bug #1 was the entry point to the attack: users were permitted to execute arbitrary C# code as root on Jupyter notebooks. This was probably a configuration error, judging by the fact that users otherwise executed code as cosmosuser. There are two key takeaways from this point:

(1) Configuration errors are common in practice. Despite our best efforts to have zero errors, we can expect that in a sufficiently complex system, some errors will always creep in. Security best practices therefore have to operate in the context of a reality where errors permeate the system and parts of the system can break at any point. It is not enough to try to prevent errors; it is vital that we detect problems quickly and design the system in a way that limits the blast radius when things break.

Zero Trust is a term that refers to the extreme version of the same idea. In the security realm, trust is a bad thing: it means we expect the object of our trust to operate without error, which deviates from the reality we see around us (note that intentions don’t matter). We can avoid taking an ideological stance on this subject by accepting that if we want to build a robust system, we need to ensure that parts of the system don’t trust each other unless reasonably necessary.

(2) Systems that allow execution of arbitrary code most likely allow execution of arbitrary actions. This might seem like a tautological statement, but it has a deeper meaning. On one hand, software developers might see arbitrary code execution as the epitome of loose coupling, a desirable property of a system: parts of the system don’t need to know about each other, and can evolve mostly independently. On the other hand, no one really wants to allow arbitrary code to run, because that would mean allowing the system to be taken over (for example, by modifying operating system files).

In reality, we have a common objective of allowing the user of the system to perform specific authorized actions while limiting all others, though it’s not always clear ahead of time what actions need to be authorized. Our choices are (a) to identify what actions need to be authorized and allow just those, or (b) to identify what actions should be considered unauthorized and disallow just those. When people talk about supporting arbitrary code execution, they are, in fact, choosing (b) over (a) at some level within the virtual machine stack.

The trouble with approach (b) is that every level is riddled with errors and escape hatches, and it is rather difficult to plug all the holes, or even be aware of them in the first place. It may be best to avoid designing systems of this kind.
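To make the contrast concrete, here is a minimal Haskell sketch (the action names here are hypothetical, chosen purely for illustration): an allowlist rejects anything it hasn’t anticipated, while a denylist silently admits it.

```haskell
import qualified Data.Set as Set

-- Hypothetical action names, for illustration only.
allowed, denied :: Set.Set String
allowed = Set.fromList ["readDocument", "writeDocument"]
denied  = Set.fromList ["modifyOsFiles"]

-- Option (a): permit only actions known to be safe.
authorizeAllowlist :: String -> Bool
authorizeAllowlist action = action `Set.member` allowed

-- Option (b): forbid only actions known to be dangerous.
authorizeDenylist :: String -> Bool
authorizeDenylist action = not (action `Set.member` denied)

main :: IO ()
main = do
  -- An action no one anticipated slips past the denylist...
  print (authorizeDenylist "spawnShell")   -- True
  -- ...but is rejected by the allowlist.
  print (authorizeAllowlist "spawnShell")  -- False
```

The denylist fails open for every hole we didn’t know about, which is exactly the weakness of approach (b).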

Blast Radius

Bug #2 allowed the root user to bypass firewall rules and gain access to forbidden network destinations. The problem with the configuration was that these firewall rules were set up on the Jupyter notebook container itself. As the article points out, a better alternative would have been to enforce these rules outside the Jupyter notebook container. This example demonstrates the importance of designing the system with a keen understanding of blast radius. A typical design process might start by asking:

Q1. Is network access restricted?
A1. Yes, via iptables rules.
Q2. Can a user bypass the iptables rules?
A2. No, they have to be root for that.

In practice, good security design needs to consider additional questions. For instance:

Q3. If the user becomes root, can we still limit the damage?
A3. Yes, we can implement the control outside the container.

It is often helpful to think of these controls as defenses. Defenses are designed to be robust, but they are not perfect, and can fail under clever and sustained attacks. When defenses fail, the system is of course weakened, but it shouldn’t fail catastrophically. In parallel, failure of defenses should be monitored, and quick action should be taken to fortify the system.

Least Privilege Principle

Later on, the article points out another example of unexpectedly large blast radius:

“…we expected to get two keys: a private key and a public key used to encrypt and decrypt the protected settings. […] In reality, we got back 25 keys.”

Could the impact have been reduced? For one thing, it isn’t clear if the service truly needed to vend all 25 keys to all clusters. It may also have been judicious to create distinct secrets for distinct purposes, and provide access to only the ones that the particular sub-system needed to do its job correctly. It’s easy to get lazy and assume that there is an ‘administrator’ with global access and super-powers, but this is a recipe for disaster in a world where mistakes are inevitable.

Security Through Obscurity

The researchers were able to query a ‘certificates’ endpoint to fetch the secrets needed to intercept and gain access to hundreds of customer accounts. One aspect of this attack I’d call out is how easy it was for them to disassemble the WindowsAzureGuestAgent.exe file, and discover the certificate package format. This is something that software developers need to keep in mind as they develop robust software: the security of the system should never be contingent on attackers’ ignorance of system behavior or other knowledge (besides cryptographic secrets).

That’s all for today, folks! 🖖

From Fixed Points to Recursion

Recursion refers to self-referential code. Most people are familiar with recursion in the form of names that are used before their values are fully computed. The classic Fibonacci function can be used to illustrate this. As you can see, the definition of fib references itself.

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)
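Applied to the first few inputs, this definition yields the familiar sequence (the block repeats the definition so it runs on its own):

```haskell
fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (map fib [0 .. 11])  -- [0,1,1,2,3,5,8,13,21,34,55,89]
```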

The fix function in Haskell calculates the least fixed point of the function provided as an argument, if it exists. From numerical analysis, you may recall a fixed point as a value at which the output remains unaltered no matter how many times you apply the function to it. In other words, the following statement is true for all f for which the fix function can be computed:

f (fix f) = fix f

To use the fix function, we first need to redefine fib as a function that can be supplied to it. We define a new variable f that represents the Fibonacci function, moving fib over to the right side.

let f = \fib -> \n -> if n < 2 then n else fib (n - 1) + fib (n - 2)

In plain English, you could read this as: given the Fibonacci function and a number, we can calculate the value as (a) the number itself if it is less than 2, or (b) the sum of the Fibonacci function applied to the previous two numbers respectively.

Notice that the definition above no longer uses recursion; it simply accepts fib and n as arguments, and calculates the result.
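We can spot-check the fixed-point law f (fix f) = fix f against this functional. Function equality itself isn’t decidable, so this sketch merely samples a handful of inputs:

```haskell
import Data.Function (fix)

-- Non-recursive Fibonacci functional: the recursive call is
-- received as the argument fib rather than by self-reference.
f :: (Int -> Int) -> (Int -> Int)
f fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

main :: IO ()
main =
  -- Sample f (fix f) and fix f at a few points; they agree.
  print (all (\n -> f (fix f) n == fix f n) [0 .. 10])  -- True
```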

We now find the least fixed point for the Fibonacci function, and apply it to the desired input. For instance, to find the 11th Fibonacci number, we write:

import Data.Function (fix)
fix f 11

…and voilà! It prints the result 89.

If you open up the source code of fix, this is how it is defined:

fix :: (a -> a) -> a
fix f = let x = f x in x

Again, in plain English, replace x with the supplied function f applied to (f applied to (f applied to (…))), then return x. You would think this would go into an infinite loop — and it does, if the function doesn’t converge — but it actually works! One of the advantages of a language with non-strict evaluation semantics like Haskell is the ability to work effectively with infinite regress.
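As a small illustration of non-strict evaluation, fix can also build infinite data structures, from which we take only a finite prefix:

```haskell
import Data.Function (fix)

-- fix (1 :) is the infinite list 1 : 1 : 1 : ..., the least fixed
-- point of the function that prepends a 1. Laziness lets us take a
-- finite prefix without evaluating the whole (infinite) structure.
ones :: [Int]
ones = fix (1 :)

main :: IO ()
main = print (take 5 ones)  -- [1,1,1,1,1]
```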

Link to GitHub

That’s all for today, folks! 🖖

First Principles

Over the years, with inspiration from many sources, I have come up with a number of “first principles” or beliefs that guide me on a daily basis. I would like to share these with you here.

Update 2021-11-13: Grouped the principles by a call-to-action, and softened the language to eliminate absolutes.

Follow Your Dreams

1. Every day of life merits planning and purpose. Plan goals for every year, week, and day; work backwards from your goals.

2. Perfection is an enemy; it is driven by others’ perception of you. Embrace chaos; always operate in ‘draft’ mode.

Experience Life In Color

1. Life is about adding value, but it is also about adventure. Do things that scare you; don’t give other people the power to assign value to your work.

2. Emotions are messy, but they also add a lot to life. Let down your guard sooner; get out of your head and say what’s going on in your mind.

Don’t Get Stuck

1. Prioritizing means intentionally dropping the less important stuff. Identify things to drop and shut them down, even if it seems hard to let go.

2. Resistance to action is triggered by fear of failure. Avoid fear of failure by breaking down large goals into smaller, manageable chunks.

3. Even the smallest chunk of progress keeps the momentum going. Start now to record and track your progress; don’t set the bar unrealistically high.

Do Stuff & Learn

1. Ideas are useless until they are experimentally tested. Build systems; record experimental results in your journal, learn from what you create.

2. Minimalism is your friend. Opt for simple, bare tools to get the job done; eschew sophisticated features.

3. Making mistakes presents opportunities to learn from them. Don’t stick to the known paths, explore new avenues, break stuff to see what happens.