Monday, February 29, 2016

Building representations

\(\newcommand{\rar}{\rightarrow} \newcommand{\mb}{\mathbb} \newcommand{\mf}{\mathfrak}\) Given a set \(S\) and representations of \(S\) on various vector spaces, we can make more representations on other vector spaces in a few ways.

Suppose we have two representations \((V,\rho:S\rar End(V))\) and \((W,\sigma:S\rar End(W))\). We say that a linear map \(\phi:V\rar W\) is an intertwiner from \((V,\rho)\) to \((W,\sigma)\) if for all \(s\in S\), we have that
$$\sigma(s) \circ \phi = \phi \circ \rho(s)$$ In other words, first moving from \(V\) to \(W\) and then acting by \(s\) is the same as first acting by \(s\) on \(V\) and then moving to \(W\).
Intertwiners serve as the appropriate notion of map between representations, since they carry the actions from one representation to the other. You might also see the term \(S\)-equivariant map for the corresponding notion for \(S\)-sets.
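The intertwiner condition is easy to check concretely. Here is a minimal sketch with a two-element group: the matrices and the map \(\phi\) are illustrative choices, not anything canonical.

```python
import numpy as np

# Representations of the two-element group {e, s} (with s^2 = e):
# rho acts on R^2 by swapping coordinates; sigma acts trivially on R^1.
rho = {"e": np.eye(2), "s": np.array([[0.0, 1.0], [1.0, 0.0]])}
sigma = {"e": np.eye(1), "s": np.eye(1)}

# phi: R^2 -> R^1 sums the coordinates (a sample linear map).
phi = np.array([[1.0, 1.0]])

# The intertwiner condition: sigma(s) . phi == phi . rho(s) for every s.
for s in rho:
    assert np.allclose(sigma[s] @ phi, phi @ rho[s])
```

Summing the coordinates commutes with swapping them, which is exactly why this particular \(\phi\) intertwines the two actions.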
Now suppose that \(\phi\) is not just a linear map, but an isomorphism. Then \(V\) and \(W\) are isomorphic as vector spaces, and we can rewrite the intertwiner equation as
$$\sigma(s) = \phi \circ \rho(s) \circ \phi^{-1}$$ which looks a lot like the change-of-basis formula. Since changing basis is really just viewing things from a different perspective, we don't consider it to change anything important. Similarly, if two representations admit an intertwiner that is an isomorphism, we say the representations are equivalent, differing only in labeling details.
In general we don't distinguish between equivalent representations. So for instance, if we ask for all of the representations of some set, we don't actually mean all representations, but rather for all equivalence classes of representations, or often one representation from each equivalence class.
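Conjugation by any invertible matrix produces an equivalent representation, with the change-of-basis matrix itself serving as the invertible intertwiner. A small sketch (the matrix \(P\) here is an arbitrary illustrative choice):

```python
import numpy as np

# rho is the coordinate-swap representation of {e, s} on R^2.
rho = {"e": np.eye(2), "s": np.array([[0.0, 1.0], [1.0, 0.0]])}

# Any invertible P (a change of basis) gives sigma(s) = P rho(s) P^{-1}.
P = np.array([[2.0, 1.0], [1.0, 1.0]])  # det = 1, so invertible
sigma = {s: P @ m @ np.linalg.inv(P) for s, m in rho.items()}

# sigma is again a representation (s^2 = e is respected) ...
assert np.allclose(sigma["s"] @ sigma["s"], sigma["e"])

# ... and P itself is an invertible intertwiner from rho to sigma.
for s in rho:
    assert np.allclose(sigma[s] @ P, P @ rho[s])
```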

Now that we've talked about what it means for two representations to be equivalent, let's see how to get inequivalent representations.

Suppose we have a representation \((V,\rho)\), and suppose that for some subspace \(W\subset V\), for all \(w\in W\) and for all \(s\in S\), \(\rho(s)w \in W\). We thus say that \(W\) is a submodule of \(V\), or that \((W,\rho|_W)\) is a subrepresentation of \((V,\rho)\).
Given a module and a submodule, we can form the quotient module, \(V/W\), whose elements are, as described in the post on quotients, equivalence classes of elements of \(V\). We note that we can form an action of \(S\) on \(V/W\), because if \(v\) is equivalent to \(v'\), then \(v-v' \in W\), and so \(\rho(s)v - \rho(s)v' = \rho(s)(v-v') \in W\), and thus \(\rho(s)v\) is equivalent to \(\rho(s)v'\).
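The well-definedness argument above can be seen numerically. As a sketch, take \(V=\mb R^2\) acted on by the upper-unitriangular \(2\times 2\) matrices, with \(W\) the span of the first basis vector (an invariant subspace); the quotient \(V/W\) is then identified with the second coordinate.

```python
import numpy as np

# Upper-unitriangular 2x2 matrices act on V = R^2;
# W = span(e1) is invariant under this action.
def g(a):
    return np.array([[1.0, a], [0.0, 1.0]])

# Identify V/W with the second coordinate: pi(v) = v[1].
def pi(v):
    return v[1]

# The action descends to V/W: if pi(v) == pi(v'), i.e. v - v' is in W,
# then pi(g v) == pi(g v') as well.
v, v_prime = np.array([0.0, 2.0]), np.array([5.0, 2.0])  # same class mod W
for a in [1.0, -3.0, 0.25]:
    assert np.isclose(pi(g(a) @ v), pi(g(a) @ v_prime))
```

In this case the quotient action is trivial: every \(g(a)\) fixes the second coordinate.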
We say that \(V\) is a simple module if it has no proper nonzero submodules (i.e. no submodules other than all of \(V\) and the \(0\) vector space), or equivalently no proper nonzero quotient modules; alternatively, we say that \((V,\rho)\) is an irreducible representation.

Now suppose that we have two representations \((V,\rho)\) and \((W,\sigma)\). We can put these together in a few different ways.

Look at the direct sum of \(V\) and \(W\), i.e. \(V \oplus W\) whose elements can be written as \((v, w)\) for \(v \in V\) and \(w \in W\), with the usual addition of \((v,w)+(v',w') = (v+v',w+w')\) and scalar multiplication \(c(v,w) = (cv,cw)\). The corresponding representation is given by
\((V\oplus W, \rho \oplus \sigma)\) where the map \(\rho \oplus \sigma\) acts as
$$(\rho \oplus \sigma)(s)((v,w)) = (\rho(s)v,\sigma(s)w).$$ Translating all of this into matrix form gives that \((\rho\oplus\sigma)(s)\) is a block-diagonal matrix with one block being \(\rho(s)\) and the other being \(\sigma(s)\).
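The block-diagonal description can be sketched directly; the particular matrices below are arbitrary samples standing in for \(\rho(s)\) and \(\sigma(s)\).

```python
import numpy as np

def direct_sum(A, B):
    """Block-diagonal matrix with blocks A and B."""
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

rho_s = np.array([[0.0, -1.0], [1.0, 0.0]])   # sample rho(s): rotation on R^2
sigma_s = np.array([[1.0]])                    # sample sigma(s): trivial on R^1

M = direct_sum(rho_s, sigma_s)

# (rho + sigma)(s)(v, w) = (rho(s)v, sigma(s)w):
v, w = np.array([1.0, 2.0]), np.array([3.0])
assert np.allclose(M @ np.concatenate([v, w]),
                   np.concatenate([rho_s @ v, sigma_s @ w]))
```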
We say that a representation \((U,\pi)\) is decomposable if it is equivalent to \((V\oplus W,\rho\oplus \sigma)\) for some pair of representations \((V,\rho)\) and \((W,\sigma)\) where both \(V\) and \(W\) are not the 0 vector space. A representation that isn't equivalent to such a direct sum is thus indecomposable. This is distinct from being simple: a submodule \(W\) always gives a quotient \(V/W\), but \(V/W\) need not correspond to a complementary submodule inside \(V\) (though it sometimes does), so a module can fail to be simple while still being indecomposable.
We say that a representation is fully decomposable if it is the direct sum of simple representations. There are some criteria for when we should expect representations to be fully decomposable. One case is if \(S\) is a finite group; another case is if \(S\) is a finite-dimensional simple Lie algebra or Lie group and \(V\) is finite-dimensional. These are the cases I'm mostly concerned with.
Going a bit further, we can also ask how to determine what simple representations a given representation decomposes or reduces into.
Example: Consider the group \(Uni_n(\mb R)\) of upper-triangular \(n\times n\) real matrices with 1s down the diagonal, acting on \(\mb R^n\) in the usual fashion. The vector \((1,0,0,\ldots)\) is sent to itself by everything in this group, and so it spans an invariant 1-dimensional subspace \(V\). However, there is no \((n-1)\)-dimensional invariant subspace complementary to \(V\): for any \(u\) outside of \(V\), there is a group element \(g\) such that \(gu - u\) is a nonzero element of \(V\), so any invariant subspace containing \(u\) must also contain \(V\) and hence cannot be complementary to it. So while \(\mb R^n\) is not simple as a \(Uni_n(\mb R)\)-module, as it has a proper invariant subspace, it is indecomposable, since we can't find a complementary invariant subspace.
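A sketch of this example for \(n=3\), parametrizing group elements by their strictly-upper entries:

```python
import numpy as np

# Uni_3(R): upper-triangular 3x3 matrices with 1s on the diagonal.
def uni3(a, b, c):
    return np.array([[1.0, a,   b],
                     [0.0, 1.0, c],
                     [0.0, 0.0, 1.0]])

e1 = np.array([1.0, 0.0, 0.0])

# e1 is fixed by every group element, so V = span(e1) is invariant.
for g in [uni3(1, 2, 3), uni3(-1, 0, 5), uni3(0.5, 0.5, 0.5)]:
    assert np.allclose(g @ e1, e1)

# But no invariant complement exists: for u outside V with u_2 != 0,
# the element g = I + E_{12} moves u by a nonzero multiple of e1, so
# any invariant subspace containing u also contains e1.
u = np.array([0.0, 1.0, 1.0])          # a vector outside V
g = uni3(1, 0, 0)                      # I + E_{12}
assert np.allclose(g @ u - u, e1)      # g u - u = u_2 * e1, a nonzero vector in V
```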

We've also talked about tensor products before, when we talked about coalgebra actions. Recall that for a coalgebra \(C\) and representations \((V,\rho)\) and \((W,\sigma)\), \(C\) acts on \(V\otimes W\) via
$$(\rho\hat\otimes \sigma)(c) = (\rho\otimes \sigma)\Delta(c).$$ We can of course extend this to Hopf algebras. Notably, without some sort of comultiplication, a set has no natural way to act on tensor products.
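For a group element \(g\) the comultiplication is \(\Delta(g)=g\otimes g\), so the tensor-product action is just the Kronecker product of the two matrices. A sketch with sample matrices:

```python
import numpy as np

# Sample actions of a single group element g:
rho_g = np.array([[0.0, -1.0], [1.0, 0.0]])   # g acting on V = R^2 by rotation
sigma_g = np.array([[2.0]])                    # g acting on W = R^1 by scaling

# Since Delta(g) = g (x) g, the action on V (x) W is the Kronecker product.
tensor_action = np.kron(rho_g, sigma_g)

# On a pure tensor v (x) w this is (rho(g)v) (x) (sigma(g)w):
v, w = np.array([1.0, 3.0]), np.array([1.0])
assert np.allclose(tensor_action @ np.kron(v, w),
                   np.kron(rho_g @ v, sigma_g @ w))
```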

Example:
Consider the group \(SO(3)\) acting on \(\mb R^3\) via the usual rotation action. We can examine its action on \(\mb R^3 \otimes \mb R^3\), which as noted above fully decomposes into simple modules. Firstly we note that \(SO(3)\) preserves the symmetric and antisymmetric subspaces of \(\mb R^3 \otimes \mb R^3\), by which we mean the subspaces spanned by elements of the form \(a \otimes b + b \otimes a\) and \(a\otimes b - b \otimes a\) respectively. The antisymmetric subspace is 3-dimensional, and indeed the resulting representation is equivalent to the original representation on \(\mb R^3\).
In the case of the symmetric subspace, if we write the standard basis elements of \(\mb R^3\) as \(a,b,\) and \(c\), we get that the element \(a \otimes a + b \otimes b + c\otimes c\) gets sent to itself by any element of \(SO(3)\), so the vector space spanned by that element is invariant, giving us the trivial representation. So the 6-dimensional symmetric subspace decomposes into a copy of the trivial representation and a 5-dimensional representation, which turns out to also be simple.
Hence as an \(SO(3)\)-module, \(\mb R^3 \otimes \mb R^3\) decomposes into a 1-dimensional, a 3-dimensional, and a 5-dimensional simple module.
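This decomposition can be sketched numerically by identifying \(\mb R^3 \otimes \mb R^3\) with \(3\times 3\) matrices \(X\), on which a rotation \(R\) acts by \(X \mapsto RXR^T\); with NumPy's row-major flattening this is `np.kron(R, R)` applied to `X.flatten()`.

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about z-axis

act = np.kron(R, R)  # the action of R on R^3 (x) R^3

# The element a(x)a + b(x)b + c(x)c corresponds to the identity matrix,
# and is fixed since R I R^T = I: the trivial 1-dimensional piece.
assert np.allclose(act @ np.eye(3).flatten(), np.eye(3).flatten())

# Antisymmetric matrices stay antisymmetric: the 3-dimensional piece ...
A = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
A_out = (act @ A.flatten()).reshape(3, 3)
assert np.allclose(A_out, -A_out.T)

# ... and traceless symmetric matrices stay traceless symmetric:
# the 5-dimensional piece.
S = np.diag([1.0, 1.0, -2.0])
S_out = (act @ S.flatten()).reshape(3, 3)
assert np.allclose(S_out, S_out.T) and np.isclose(np.trace(S_out), 0.0)
```

The symmetric trace part, the antisymmetric part, and the traceless symmetric part account for \(1+3+5 = 9\) dimensions, matching \(\dim(\mb R^3\otimes\mb R^3)\).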

Example:
Suppose that we have a cocommutative Hopf algebra \(H\), for instance a group algebra or the universal enveloping algebra of a Lie algebra. Suppose that \(H\) has a module \(V\). Then we can build actions of \(H\) on tensor powers of \(V\), and thus on the entire tensor algebra \(T(V) = \bigotimes^* V\).
Now consider the ideal \(I_S = \left\langle xy - yx\right\rangle\) in the tensor algebra, remembering that we don't use \(\otimes\) to indicate multiplication inside the tensor algebra itself. \(I_S\) is invariant under the action of \(H\) since \(H\) is cocommutative, so we can form the quotient module, \(Sym(V) = T(V)/I_S\), which is the symmetric algebra on \(V\), i.e. the space generated by products of basis vectors of \(V\) where the order of multiplication doesn't matter.
Similarly, given the ideal \(I_A = \left\langle xy + yx \right\rangle\), we can form the quotient module \(Alt(V) = T(V)/I_A\), the alternating or exterior algebra on \(V\) where swapping two vectors in a product gives you a minus sign.
Another example is when \(V\) has a Lie bracket, and thus we can form the ideal \(I_L = \left\langle xy - yx - [x,y]\right\rangle\), which gives us the quotient module \(U(V) = T(V)/I_L\), i.e. the universal enveloping algebra.
Because \(H\) has a coalgebra structure, in all of these cases it preserves the algebraic structures of the quotient modules. How these various algebras decompose into simple or at least simpler \(H\)-modules is of quite some interest to representation theorists. Depending on where \(V\) comes from and the relationship of \(H\) and \(V\), these examples also give interesting results in combinatorics, geometry and theoretical physics.
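As a first step toward such decompositions, one can at least count dimensions of the graded pieces. A sketch of the standard counts, with \(\dim V = n\):

```python
from math import comb

# dim Sym^k(V) = C(n+k-1, k): multisets of k basis vectors of V.
def sym_dim(n, k):
    return comb(n + k - 1, k)

# dim Alt^k(V) = C(n, k): k-element subsets of a basis of V.
def alt_dim(n, k):
    return comb(n, k)

# For n = 3, k = 2 these match the SO(3) example above:
# the degree-2 part of T(V) has dimension 9 = 6 + 3.
assert sym_dim(3, 2) == 6
assert alt_dim(3, 2) == 3
assert sym_dim(3, 2) + alt_dim(3, 2) == 3 * 3
```

(For \(k > 2\) the symmetric and alternating pieces no longer exhaust the degree-\(k\) part of \(T(V)\); the remaining summands carry other symmetry types.)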
