# What Is Dimensional Analysis?

Mathematics is famously concerned with numbers. However, most of the physical quantities that we encounter in everyday experience cannot be expressed as numbers alone.

For example, distance can only be written as a number once we have fixed a "measuring stick" to serve as a unit. If we draw a right triangle on paper and express its sides as fractions of our measuring stick, the numbers we obtain will verify the famous relation $a^2 + b^2 = c^2.$ Apparently, this is an equation of numbers. But, as geometers, we would really like to speak of a relationship between lengths. After all, no measuring stick is needed to speak of right triangles, or even to prove Pythagoras's theorem. What is the mathematical formalism for physical quantities like length?

In standard notation, the algebra of physical quantities is generated by certain free variables, such as $\m$ and $\kg,$ called "units." This algebra is also equipped with a grading-like operation $[-]$ that attributes dimensions—products of powers of base units—to certain "dimensionful quantities." We then assert that physically meaningful expressions will be dimensionful quantities and that meaningful equations will have consistent dimensions.

However, this handful of practical rules leaves me uncomfortable. The Buckingham $\pi$ theorem, which tells us that physical laws can be expressed as relationships between dimensionless quantities, doesn't follow from the vague principle that "dimensions should agree." It is also unclear how to rigorously justify new rules for computing dimensions, like the identity $\left[ \int_a^b f(x) \, dx \right] \stackrel{??}{=} [f(x)][x]$ for integration. We know that dimensions exist and have adumbrated their behavior, but we have not yet put our finger on what dimensions are or why we care about them!

In this post, we'll see how dimensional analysis arises naturally from the principle of scale invariance. Once we understand dimensioned quantities as objects that transform in simple ways under group actions, vague principles turn into precise mathematical statements. Our simple idea will also have some less familiar consequences; we'll see how Fourier coefficients can be assigned dimensions with complex powers, and reminisce on how a more elaborate sort of "dimensional analysis" arises in differential geometry.

## The Principle of Scale-Invariance

As we noted a moment ago, the numbers we assign to physical quantities are a function of our reference quantities, or "measuring sticks." So, let us consider a group $G = (\R^+)^n$ whose action transforms numerical measurements under a change of our measuring sticks. In our triangle example, $G$ would simply be the group $\R^+$ of positive numbers with multiplication, an element $\lambda \in G$ would correspond to scaling our measuring stick by the proportion $\lambda$, and its action would send our vector $(a, b, c)$ of side length measurements to $(\lambda^{-1} a, \lambda^{-1} b, \lambda^{-1} c).$
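Since this action is completely concrete, it's trivial to check by machine that the right-triangle relation survives any change of stick. Here is a minimal sketch (the code and names are my own, not standard notation):

```python
import math

# Hypothetical measurements of a 3-4-5 right triangle, expressed as
# multiples of one measuring stick.
def rescale(measurements, lam):
    """The action of lam in G = R^+: a stick lam times longer gives
    measurements lam times smaller."""
    return tuple(x / lam for x in measurements)

for lam in (0.5, 1.0, 7.3):
    a, b, c = rescale((3.0, 4.0, 5.0), lam)
    # The relation a^2 + b^2 = c^2 holds in every system of measurements.
    assert math.isclose(a**2 + b**2, c**2)
```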

Now we make an observation: when expressed in terms of measurements, physical relationships are invariant under the action of $G$. Informally, this is true so long as our measuring sticks can be altered independently from the phenomenon we're observing. This is the case in our geometry example because the notion of a right triangle is independent from any notion of length. However, it wouldn't hold if we used different measuring sticks to measure different lengths, because in that case scaling one but not the other would affect their ratio—something that geometry "cares about." We will call the assumption of invariance under a scaling group $G$ the principle of scale invariance.

We can easily extend the action $\sigma$ of $G$ to act on the algebra of functions of measurements. Now, given a function $f,$ there may exist a map $\lambda \colon G \to \R^+$ so that, for all $g \in G,$ $\sigma_g(f) = \lambda_g f.$ Such a map $\lambda,$ which is necessarily unique and a Lie group homomorphism if $f$ is non-zero and continuous, is called the dimension of $f$ and denoted $[f].$ Conversely, we define a dimension to be any Lie group homomorphism from $G$ to $\R^+.$ When $f$ admits a dimension we will call it a dimensioned quantity.

We know from the theory of Lie groups that "dimensions" can be identified with the dual space of the Lie algebra of $G.$ In the language of representation theory of Lie groups, a dimensioned quantity is an element of a weight space for a representation of $G,$ and its dimension is the corresponding weight. More informally, a dimensioned quantity is a number equipped with a multiplicative action of a scaling group. This is my suggested formalism for a "physical quantity."

In practice, we will denote dimensions with capital letters like $L$ and use multiplicative notation to operate on them. For example, we may say that $[f] = L,$ in which case it will follow that $[f^2] = L^2.$ The calculus of dimensional analysis is now on firm footing, and all the usual properties of the dimensional grading $[-]$ follow from our definition. For example, if $f$ and $g$ are dimensioned, then so is their product and $[f g] = [f][g].$

Let's take a moment to examine the general condition for a function of dimensionful quantities to have a certain dimension. Suppose a transformation $H(f_1, f_2, \dots, f_n)$ of $n$ quantities by a function $H \colon \R^n \to \R$ has dimension $L_0$ and that each $f_i$ has dimension $L_i.$ If we choose a certain one-parameter subgroup $\gamma(t)$ of $G,$ then there exist constants $c_i$ so that $L_i(\gamma(t)) = e^{c_i t}.$ (Here we are writing the group element $\gamma(t)$ as a parameter rather than as a subscript to the dimension $L_i$.) Our composition having dimension $L_0$ means that, for all $t,$ $e^{c_0 t} H(f_1, f_2, \dots, f_n) = H(e^{c_1 t} f_1, e^{c_2 t} f_2, \dots, e^{c_n t} f_n).$ Differentiating in $t$ at $t = 0$ lets us express this as an equivalent differential equation $c_0 H = c_1 f_1 \frac{\partial H}{\partial f_1} + \dots + c_n f_n \frac{\partial H}{\partial f_n},$ understood to hold over the image of $(f_1, \dots, f_n).$ The vectors of coefficients $(c_0, \dots, c_n)$ we would obtain by running $\gamma$ over other one-parameter subgroups form a subspace, so we only need to check a finite system of equations of this form—one for each "measuring stick."

Of course, in practice we generally combine dimensionful quantities by summing products of powers, in which case we need only remember that $[f^\alpha] = [f]^\alpha.$

As one final example of the rules of dimensional analysis, consider the problem of assigning a dimension to the integral $\int_a^b f(x) \, dx.$ Suppose that $f$ and $x$ are dimensioned.
By saying this, we are implicitly asserting that $f$ is scale-invariant, meaning that $f([x]_g x) = [f]_g f(x)$ for all $g \in G.$ Now, we know from integral calculus that a change of variable will give $\int_a^b f(x) \, dx = [x]_g^{-1} \int_{[x]_g a}^{[x]_g b} f([x]_g ^{-1} x) \, dx = [x]_g^{-1} [f]^{-1}_g \int_{[x]_g a}^{[x]_g b} f(x) \, dx.$ In other words, when we scale the bounds by $[x]_g,$ the value of our integral scales by $[x]_g [f]_g.$ So, we can extend the action of $G$ to act on the bounds and the value of our integral in such a way that integration becomes a scale-invariant operation. The dimensions associated with our action will be $[a] = [b] = [x], \quad \left[ \int_a^b f(x) \, dx \right] = [f][x],$ as we naively expected. Note, however, that we did not exactly "compute" these dimensions; we found an invariance of the relationship between the integral and its bounds of integration and expressed this in shorthand by the equations of dimensions above.
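The scaling behavior of the integral is also easy to confirm numerically. In this sketch (my own, using a crude midpoint rule), $f(x) = x^2$ satisfies $f(\lambda x) = \lambda^2 f(x),$ so scaling the bounds by $\lambda$ should multiply the integral by $\lambda^3$:

```python
# Numerical sketch: if f is scale-invariant with f(lam * x) = mu * f(x),
# then scaling the bounds of integration by lam should multiply the
# integral by lam * mu -- the statement [integral] = [x][f].
def riemann(f, a, b, n=200_000):
    """Crude midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x**2   # here f(lam * x) = lam**2 * f(x), so mu = lam**2
a, b, lam = 1.0, 2.0, 3.0
mu = lam**2

lhs = riemann(f, lam * a, lam * b)  # integral with bounds scaled by lam
rhs = lam * mu * riemann(f, a, b)   # [x][f] times the original integral
assert abs(lhs - rhs) / abs(rhs) < 1e-6
```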

But what, after all, is the use of tracking dimensions? Earlier, we mentioned the folk wisdom that physical expressions will have dimensions and that physical equations will be dimensionally consistent. Why is this?

The most basic observation we can make is that, if a dimensioned quantity $f$ does not have unit dimension $[f] = 1,$ then it will scale over the orbits of $G$ and therefore cannot equal a non-zero constant over a scale-invariant relationship. For example, this excludes an equation like $a^2 + b^2 = c$ in our triangle example, since the quantity $(a^2 + b^2)/c$ has non-unit dimension $L.$ (Remember that this means it will scale inversely to the length of our measuring stick, and so the equation above will be true for one measuring stick but not true for another.) In general, two dimensioned quantities must "agree in dimensions" to be equal (to a non-zero value) over a scale-invariant relationship. So, dimensional analysis can be used as a way to "type-check" equations.
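This "type-checking" is exactly what units libraries automate. As a toy sketch (entirely my own design, not a real library), a quantity can carry an exponent for each base dimension; multiplication adds exponents, while addition insists that they match:

```python
# Toy dimensional "type-checker": a quantity carries a dict of exponents
# for each base dimension. Multiplying quantities adds exponents; adding
# quantities demands identical exponents, like the rule [f] = [g].
class Quantity:
    def __init__(self, value, dims):
        self.value, self.dims = value, dict(dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for d, p in other.dims.items():
            dims[d] = dims.get(d, 0) + p
        return Quantity(self.value * other.value,
                        {d: p for d, p in dims.items() if p != 0})

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

a = Quantity(3.0, {"L": 1})
b = Quantity(4.0, {"L": 1})
c = a * a + b * b            # fine: both terms have dimension L^2
try:
    a * a + b                # L^2 vs L: rejected, like a^2 + b^2 = c
except TypeError as e:
    print("rejected:", e)
```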

However, it's certainly not true that the two sides of an equation must be dimensioned quantities for the equation to be physically meaningful. After all, the equation $\exp(a^2 + b^2) = \exp(c^2)$ is also scale-invariant and indeed equivalent to our previous expression, even though neither side is a dimensionful quantity. On the other hand, the relationship $\sin(a^2 + b^2) = \sin(c^2)$ is not scale-invariant everywhere. This shows that the simple procedure of dimensional type-checking is not enough to test for scale invariance in general, although in practice it almost always is. Dimensional analysis is a tool in the service of scale invariance and not the other way around!

In fact, I believe that the most fundamental use for scale invariance has nothing to do with "type-checking," but instead to do with geometric symmetry. In general, if the orbits of a Lie group $G$ are $k$-dimensional in an $n$-dimensional ambient space, then a $G$-invariant relationship may be expressed—at least locally—as a relationship on the $(n - k)$ dimensional manifold of orbits of $G.$ In the special case that we have a coordinate system of $n$ dimensioned quantities, it is possible to generate $(n - k)$ dimensionless quantities, expressed as products of powers of our coordinates, that jointly index the orbits of $G.$ These expressions are customarily denoted as $\pi_1, \pi_2, \dots,$ and the statement we have made is known as the Buckingham $\pi$ theorem. From our current standpoint, it's just an application of the rank-nullity theorem of linear algebra.

For me, the fundamental idea behind the Buckingham $\pi$ theorem is not the technique of dimensional analysis, but the geometric picture of a $G$-invariant submanifold. Dimensional analysis is merely a tool that arises to investigate the algebra of $G$-invariant functions—and the larger algebra of functions that are $G$-invariant "up to uniform scaling," which we called the dimensioned quantities—when our group of invariant transformations is represented in a particularly simple way.

## Dimensions with Complex Powers

Indeed, as representation theorists, we would say that the scaling of $n$ measuring sticks acting on vectors of physical measurements is a semisimple representation of the group $(\R^+)^n.$ More specifically, it is a direct sum of one-dimensional representations of $(\R^+)^n,$ corresponding to choices of dimensions for each physical measurement in our problem. One simple generalization of this situation would be to allow a two-dimensional irreducible subrepresentation. A pair of measurements acted on by such a subrepresentation could reasonably be described as a complex number whose dimension involves a complex number in the exponent.

Consider, for example, the relationship between the coordinates of a vector and its length. We could measure these quantities using an oriented measuring stick. We'll record the length $c$ in the usual way, but measure the vector as two real numbers $a$ and $b$ so that $a + i b$ gives the complex-valued proportion between our vector and our oriented stick. We could reasonably describe the dimensions of our measurements as $[c] = L, \quad [a + ib] = L R^i$ where $L$ and $R$ are the two canonical projections of $(\R^+)^2,$ viewing $L$ as the parameter that rescales our stick and $R$ as the parameter that rotates it. Keep in mind that this is not a vague appeal to some abstruse intuition; we are implicitly defining the action of a group on our space of measurements and claiming merely that its orbits give different measurements we could have obtained from the same vector if a different oriented measuring stick had been chosen. Our use of the complex exponential would have been a conceptual leap for Pythagoras, but the properties we need are pretty tame—just formalizations of simple observations about how rotation works.

For this new scaling group, $a$ and $b$ themselves are not dimensioned quantities. This is because the variation of $a$ as we turn our measuring stick depends on both $a$ and $b.$ However, conjugating $a + ib$ gives a third dimensioned quantity $[\overline{(a + i b)}] = \overline{L R^i} = L R^{-i}.$ We can now "cancel out" the strange $R^i$ dimension, obtaining $[(a + ib)(a - ib)] = [a^2 + b^2] = L^2.$ We conclude that $\frac{a^2 + b^2}{c^2}$ is dimensionless. If $c$ is a function of $a$ and $b,$ then scale-invariance implies the Pythagorean theorem up to a constant: $c \propto \sqrt{a^2 + b^2}.$

An action whose irreducible subrepresentations rotate and scale the complex plane also appears naturally in the problem of measuring a periodic, one-dimensional signal. When we write such a measurement down as a function $f(t) \colon [0, 2 \pi] \to \R,$ we are making two arbitrary choices: a scale for the signal value, and a choice of phase. Let's write $L$ for the scaling of the signal and $T$ for the translation of the phase, and define an action that takes a periodic function $f$ to $\sigma_{L,T}(f)(t) = L f(t + \ln(T)).$ The "dimensionful functions" in this situation are exactly the elements of the usual Fourier basis, equipped with dimensions $[e^{i k t}] = L T^{ik}.$ Indeed, it seems appropriate that lengths are to geometers as sinusoidal oscillations are to signal engineers! Dually, we find that Fourier coefficients are also dimensionful quantities—that is, phase shifting and scaling a function will rotate and scale its Fourier coefficients.
Explicitly, \begin{align*} \sigma(c_k) & = \frac{1}{2 \pi} \int_0^{2 \pi} \sigma(f)(t) e^{-i k t} \, dt \\ & = \frac{1}{2 \pi} \int_0^{2 \pi} Lf(t + \ln(T)) e^{-i k t} \, dt \\ & = \frac{L}{2 \pi} \int_0^{2 \pi} f(t) e^{-i k (t - \ln(T))} \, dt \\ & = L T^{ik} c_k, \end{align*} and so $[c_k] = L T^{ik}.$

Now, what can dimensional analysis tell us about an identity like $f(t) = \sum_{k \in \Z} c_k e^{i k t}?$ A minute ago we said that $[e^{ikt}] = L T^{i k}.$ However, the functions $e^{i k t}$ here are not affected by an adjustment to our "measuring sticks" for $f,$ so we will treat them as dimensionless. Mathematically, we will be asking if the two sides of the equation above transform identically under scalings and phase shifts of $f.$

If we restrict ourselves to the scaling component of our action, we find that each side of this equation indeed has dimension $L.$ Invariance under the whole action is harder to study, since in general $f$ will not be a dimensioned quantity under the action of phase shifting. We can avoid this issue by checking the identity at $f(t) = e^{i k t},$ where $[f] = L T^{ik}.$ In this case, the right-hand side also becomes something of dimension $L T^{ik}$ since all coefficients except $c_k$ vanish. On the other hand, plowing ahead and differentiating the action of phase shifting in the general case would give $f'(t) = \sum_{k \in \Z} ik c_k e^{ikt}.$ This is consistent with what we would obtain by differentiating each term in the sum, so we might say that the "dimensions check out."
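Since the rule $[c_k] = L T^{ik}$ is ultimately a statement about how numbers transform, we can also test it numerically. In this sketch (my own, using a discrete approximation of the Fourier coefficient that is exact for trigonometric polynomials), scaling a signal by $L$ and phase shifting it by $\ln(T)$ multiplies $c_k$ by $L e^{ik \ln T}$:

```python
import cmath, math

# Check: scaling a periodic signal by L and phase shifting it by ln(T)
# should multiply its k-th Fourier coefficient by L * T^{ik},
# i.e. by L * exp(i * k * ln(T)).
N = 256
ts = [2 * math.pi * j / N for j in range(N)]

def coeff(samples, k):
    """Discrete approximation of c_k = (1/2pi) * integral f(t) e^{-ikt} dt."""
    return sum(v * cmath.exp(-1j * k * t) for v, t in zip(samples, ts)) / N

f = lambda t: 2.0 * math.cos(3 * t) + 0.5 * math.sin(5 * t)
L, T = 1.7, 2.2                       # scale by L, phase shift by ln(T)
g = lambda t: L * f(t + math.log(T))  # the transformed signal sigma_{L,T}(f)

for k in (3, 5):
    expected = L * cmath.exp(1j * k * math.log(T)) * coeff([f(t) for t in ts], k)
    observed = coeff([g(t) for t in ts], k)
    assert abs(observed - expected) < 1e-9
```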

Dimensional analysis is easier to use when considering quantities that are invariant under phase shift. For example, checking dimensions reminds us to include the conjugate in the expression for the inner product $\frac{1}{2 \pi}\int_0^{2 \pi} f g = \sum_{k \in \Z} f_k \overline{g_k},$ since each term in the sum ought to have dimension $L^2.$ In general, keeping track of the "translational dimensions" $[c_k] = T^{ik}$ will tell us whether a certain function of Fourier coefficients is invariant under phase shifts of the original function. For example, we find that products of the form $c_i c_j c_k$ are invariant in this sense exactly when $i + j + k = 0.$ This is not difficult to see by other means, but the notion of translational dimension for Fourier coefficients upgrades our remark to something particularly obvious.
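A quick numerical check (again my own) of that last remark: under a phase shift, each factor $c_k$ picks up $e^{ik \ln T},$ so a triple product picks up $e^{i(i+j+k)\ln T}$ and is invariant exactly when the indices sum to zero:

```python
import cmath, math, random

# A phase shift multiplies c_k by T^{ik} = exp(i*k*ln(T)), so the product
# c_i * c_j * c_k picks up exp(i*(i + j + k)*ln(T)) -- invariant exactly
# when i + j + k = 0.
random.seed(0)  # arbitrary but reproducible "Fourier coefficients"
c = {k: complex(random.random(), random.random()) for k in range(-5, 6)}
s = math.log(1.9)  # ln(T) for an arbitrary choice of T

def shifted(k):
    """The coefficient c_k after the phase shift."""
    return cmath.exp(1j * k * s) * c[k]

# 2 + 3 - 5 = 0: the triple product is unchanged by the shift.
assert abs(shifted(2) * shifted(3) * shifted(-5) - c[2] * c[3] * c[-5]) < 1e-12

# 2 + 3 - 4 = 1: the product picks up a factor exp(i*s) != 1.
mult = (shifted(2) * shifted(3) * shifted(-4)) / (c[2] * c[3] * c[-4])
assert abs(mult - 1) > 0.1
```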

## Dimensions in the Wild

Back when I started learning differential geometry, I thought that the distinction between different types of objects—vector fields and $1$-forms, for example—felt a little like dimensional analysis. In a given coordinate system, vector fields and $1$-forms can both be written as functions of the type $\R^n \to \R^n,$ but in differential geometry we are not "allowed" to convert one into the other without using additional structure of the manifold, like a symplectic form or Riemannian metric. Why? Because any such identification would not be invariant under a change of coordinates, and manifolds do not have "distinguished" coordinate systems. To some extent this is possible to see from standard dimensional analysis; if we consider uniform rescalings of our coordinate system, the coordinates of vector fields and the coordinates of covector fields will have inverse dimensions, say $L$ and $L^{-1}.$

This remark is more than mathematical pedantry. For example, when optimizing a loss function via gradient descent, it's extremely important to keep in mind that the vector-valued gradient is sensitive to the relative scalings of the different parameters we're optimizing. In other words, the gradient depends both on the loss function and on the particular way it's being parameterized.
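Here's a small numerical illustration of that warning (my own sketch, with a toy quadratic loss): re-expressing one parameter in different units rescales the corresponding gradient component inversely, so a gradient descent step genuinely depends on the parameterization.

```python
# Re-expressing one parameter in different units rescales the matching
# gradient component *inversely*, so a gradient step depends on how the
# loss is parameterized.
def grad(f, x, y, h=1e-6):
    """Central-difference gradient of f at (x, y)."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

loss = lambda x, y: (x - 1) ** 2 + (y - 1) ** 2

# The same loss with the first parameter measured in "centiunits": u = 100 * x.
loss_cm = lambda u, y: loss(u / 100, y)

g = grad(loss, 2.0, 2.0)          # approximately (2, 2)
g_cm = grad(loss_cm, 200.0, 2.0)  # first component shrinks 100-fold
assert abs(g_cm[0] - g[0] / 100) < 1e-4
assert abs(g_cm[1] - g[1]) < 1e-4
```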

Just as I argued that "physical quantities" ought to be formalized as numbers equipped with a multiplicative action of a scaling group, differential geometers have decided that a "neighborhood of a manifold" should be a neighborhood of $\R^n$ equipped with the action of a (local) group of diffeomorphisms. Instead of "scale invariant," differential geometers use the word "natural." (It's no coincidence that this brings to mind the natural transformations of category theory…) The constraint of naturality leads us to invent special notations for covariant and contravariant tensors when working in coordinates, because keeping track of how things transform helps us make sure that our end result—a formula for some natural operation like a Lie derivative—is in fact natural. Conceptually, this is very similar to our story for dimensional analysis of physical quantities.

In general, suppose we want to talk about a kind of thing $X$ that can be written down in a certain way $Y$ after we have made some "irrelevant" choices, which are rechosen by the action of a group-like thing $Z.$ Then naturally it makes sense to keep track of the $Z$-invariant properties of $Y$-things, since these are exactly the properties that admit well-defined values on $X$-things. On the other hand, many $Y$-properties (like the actual value of a length, or the coordinates of a vector field) may not be exactly invariant under $Z,$ but at least transform in a predictable way. Then maybe we can gainfully define a calculus of "dimensions" that describe these predictable transformations and use them to "type-check" the construction of objects that we hope are really $Z$-invariant (like the Pythagorean theorem or the Liouville $1$-form). The simpler our construction ends up being modulo an invariant action, the more information our calculus of dimensions will give us. I'll keep my eyes open for other examples like this!

Anyway, I can't finish this post without mentioning how normal, everyday dimensional analysis is so unreasonably effective on physical problems. Suppose that we've forgotten how the angular frequency of a weight on a spring scales with the mass of the weight. We have three dimensions in principle, which we will denote $T$ for time, $M$ for mass, and $L$ for length. Our physical variables are \begin{align*} & [\text{mass of weight}] = M, \\ & [\text{spring constant}] = \frac{\text{force}}{L} = M T^{-2}, \\ & [\text{angular frequency}] = T^{-1}. \end{align*} (Note that, although we are distinguishing dimensions like $M$ from base units like $\kg$ in this post, it's quite unproblematic in practice to refer to dimensions by their base units.) We find that $L$ does not act on our measurements at all, so the scaling group for our problem has a $2$-dimensional action. We conclude that our relationship can be expressed in terms of one dimensionless quantity. Solving a little linear system in our heads, we find that $\frac{\text{mass of weight}}{\text{spring constant}} \text{angular frequency}^2$ is dimensionless. Fixing the spring constant and assuming a functional relationship between angular frequency and mass of weight gives $\text{angular frequency} \propto \frac{1}{\sqrt{\text{mass of weight}}}.$ Magic!
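The "little linear system" here is exactly the nullspace computation behind the Buckingham $\pi$ theorem. A minimal sketch (my own encoding of the dimension matrix, writing $\omega$ for the angular frequency):

```python
# Columns of the dimension matrix are the exponent vectors of
# (mass, spring constant, angular frequency) in the base dimensions M and T;
# a dimensionless product m^x * k^y * omega^z is a nullspace vector (x, y, z).
M_row = [1, 1, 0]    # powers of M in m, k, omega
T_row = [0, -2, -1]  # powers of T in m, k, omega

def in_nullspace(v):
    return (sum(a * b for a, b in zip(M_row, v)) == 0 and
            sum(a * b for a, b in zip(T_row, v)) == 0)

# The combination m * k^{-1} * omega^2 found above is dimensionless:
assert in_nullspace([1, -1, 2])
# The two rows are independent (a nonzero 2x2 minor), so the matrix has
# rank 2 and, by rank-nullity, the space of dimensionless combinations
# of our three variables is 1-dimensional.
assert M_row[0] * T_row[1] - M_row[1] * T_row[0] != 0
```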

For more on the topic of dimensional analysis, there is a longer and generally more complete blog post by Terry Tao at terrytao.wordpress.com. The answer that I was delighted to discover in my own post is what Tao calls the "parametric approach." Among other things, he also goes into dimensional analysis for inequalities and dimensions arising from discrete invariances. However, he does not appear to deal with dimensions with complex powers.