So our multiplication map sends pairs of vectors to vectors: \(m: A \times A \rightarrow A\). Furthermore we ask that it be bilinear, i.e. if we fix a vector \(u\), the maps \(l_u: v \mapsto m(u, v)\) and \(r_u: v \mapsto m(v, u)\) are both linear maps.
We still end up talking about tensor products, however, because tensor products give us the most general form of bilinear multiplication. In particular, because of the bilinearity, we can in fact view \(m\) not as a map from \(A\times A\) to \(A\) upon which we need to impose bilinearity, but as a map from \(A \otimes A\) to \(A\) which is linear, remembering that \(A\otimes A\) is itself a vector space.
If we have our basis vectors \(e_i\) of \(A\), we can consider products of basis vectors. In particular, for \(e_i\) and \(e_j\), their product can be written as a linear combination of basis vectors, i.e.
$$e_ie_j = m(e_i \otimes e_j) = m_{ij}^k e_k$$ We call the numbers \(m_{ij}^k\) the structure constants of the algebra with respect to the given basis.
As noted in the previous posts, we have familiar algebras like Lie algebras, where the multiplication is the bracket. Other examples include the complex numbers, which can be viewed as a 2-dimensional real vector space with basis elements \(e_1 = 1\) and \(e_2 = i\). There are 8 structure constants with respect to this basis, half of them 0; the nonzero ones are easy to figure out.
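If it helps to see this concretely, here is a small sketch in Python with NumPy (purely an illustration, with names made up for this post) that stores the \(m_{ij}^k\) of the complex numbers in a \(2\times2\times2\) array and multiplies two complex numbers through the constants:

```python
import numpy as np

# Structure constants of C as a 2-dimensional real algebra,
# with basis e_1 = 1, e_2 = i.  m[i, j, k] = m_{ij}^k (0-indexed).
m = np.zeros((2, 2, 2))
m[0, 0, 0] = 1   # 1 * 1 = 1
m[0, 1, 1] = 1   # 1 * i = i
m[1, 0, 1] = 1   # i * 1 = i
m[1, 1, 0] = -1  # i * i = -1

def multiply(u, v):
    """Multiply u = u^i e_i and v = v^j e_j via (uv)^k = m_{ij}^k u^i v^j."""
    return np.einsum('ijk,i,j->k', m, u, v)

# (1 + 2i)(3 + 4i) = -5 + 10i
print(multiply(np.array([1.0, 2.0]), np.array([3.0, 4.0])))  # [-5. 10.]
```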
There are actually algebras built from vector spaces \(V\) where the tensor product is the multiplication operation. Recall the sum of two vector spaces \(V\oplus W\), each element of which can be written in exactly one way as the sum of an element of \(V\) and an element of \(W\). We define the tensor algebra of \(V\) as \(\bigotimes^* V = \mathbb{k} \oplus V \oplus (V\otimes V) \oplus (V\otimes V \otimes V) \oplus \ldots\); to keep things nice we assume that each element of \(\bigotimes^* V\) is a finite sum, with at most one term from each of the summands \(\mathbb{k}, V, V\otimes V, \ldots\), so that we don't have to worry about convergence issues.
Now the multiplication here is not a map from \(V\otimes V\) to \(V\), but rather from \((\bigotimes^* V) \otimes (\bigotimes^* V)\) to \(\bigotimes^* V\). So when we write the product of two elements \(u\) and \(v\) of \(\bigotimes^* V\), we won't write \(u \otimes v\); we'll just use concatenation and write \(uv\) to denote multiplication inside the tensor algebra.
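For instance, with \(c \in \mathbb{k}\) and \(v_1, v_2, w \in V\), one sample product in \(\bigotimes^* V\) looks like $$(c + v_1\otimes v_2)w = cw + v_1\otimes v_2\otimes w$$ with one term landing in the \(V\) piece and the other in the \(V\otimes V\otimes V\) piece.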
We often ask that our algebras \(A\) obey certain conditions to make them nicer. The ones we will look at are associativity, commutativity, and being unital.
Associativity tells us that we don't need so many parentheses: \(a(bc) = (ab)c\). In terms of our map \(m\), we have \(m(a \otimes m(b \otimes c)) = m(m(a\otimes b)\otimes c)\). We can rewrite this in two ways which will be useful later. The first uses tensor indices:
$$m_{il}^h m_{jk}^l = m_{ij}^l m_{lk}^h$$ The other is in terms of the maps but without any vectors:
$$m\circ(m\otimes id) = m\circ(id\otimes m)$$ You can check that both of these are equivalent to associativity.
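If you want to check the index form numerically, here's a quick sketch (Python/NumPy again, re-declaring the complex-number constants from above so it stands alone):

```python
import numpy as np

# Structure constants of C over R: basis e_1 = 1, e_2 = i.
m = np.zeros((2, 2, 2))
m[0, 0, 0], m[0, 1, 1], m[1, 0, 1], m[1, 1, 0] = 1, 1, 1, -1

# Associativity in indices: m_{il}^h m_{jk}^l = m_{ij}^l m_{lk}^h.
lhs = np.einsum('ilh,jkl->ijkh', m, m)  # coefficients of a(bc)
rhs = np.einsum('ijl,lkh->ijkh', m, m)  # coefficients of (ab)c
print(np.allclose(lhs, rhs))  # True: the complex numbers are associative
```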
For commutativity, we would say that \(ab = ba\). In terms of our map \(m\), we have \(m(a \otimes b) = m(b \otimes a)\). In tensor indices,
$$m_{ij}^k = m_{ji}^k$$ Compare to the antisymmetry/anticommutativity of the Lie bracket, expressed as \(u_{ij}^k = -u_{ji}^k\).
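(A concrete antisymmetric example, for comparison: \(\mathbb{R}^3\) with the cross product as the bracket has structure constants \(u_{ij}^k = \epsilon_{ijk}\), the Levi-Civita symbol, which flips sign when \(i\) and \(j\) are swapped.)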
In vector-less maps, we need to introduce an operation \(\sigma_{VW}\), which takes an element of \(V\otimes W\) and returns an element of \(W \otimes V\) by sending \(v \otimes w\) to \(w \otimes v\). This operation will come in handy later. At the moment, we want \(\sigma_{AA}\), to express commutativity as
$$m = m \circ \sigma_{AA}$$ Finally we have being unital. This means picking out a particular vector to act as the multiplicative identity, the way \(1\) does for scalars. For future purposes I'm going to make this a map as well, \(\eta: \mathbb{k} \rightarrow A\), and we impose the following rule:
$$m(\eta(c) \otimes v) = m(v\otimes \eta(c)) = cv$$ Here the vector \(\eta(1)\) is our multiplicative identity in \(A\).
We can read the unit as a vector, \(\eta(1) = \eta^i e_i\), so that
$$m_{ij}^k \eta^i = m_{ji}^k \eta^i = I_j^k$$ where \(I_j^k\) is 1 if \(j = k\) and 0 otherwise, giving the identity map on vectors.
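For the complex numbers, for instance, the unit is \(\eta(1) = e_1\), so \(\eta^1 = 1\) and \(\eta^2 = 0\), and contracting with the structure constants from before gives $$m_{1j}^k = I_j^k: \quad m_{11}^1 = 1,\ m_{12}^2 = 1,\ m_{11}^2 = m_{12}^1 = 0.$$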
If we want to write this out as maps without vectors, we need to remember that for a scalar \(c\) we defined \(c \otimes v\) to be \(cv\), so that there's an obvious isomorphism between \(\mathbb{k}\otimes A\) and \(A\), which we'll call \(lm\) for left (scalar) multiplication, and similarly a map \(rm\) from \(A \otimes \mathbb{k}\) to \(A\). Then we get that
$$m \circ (\eta \otimes id) = lm$$ $$m \circ (id \otimes \eta) = rm$$
Examples!
A field \(\mathbb{k}\) is a 1-dimensional vector space over itself, with the obvious multiplication and the unit map being the identity.
The complex numbers are associative, commutative, and unital as an algebra over \(\mathbb{R}\). Lie algebras are generally not commutative or unital, and when they're not commutative they're generally not associative either: the Jacobi identity can be rearranged as \([a,[b,c]] - [[a,b],c] = [b,[a,c]]\), which describes exactly how far from associative a given Lie algebra is in terms of how far from commutative it is.
The set of \(n\times n\) matrices with entries in \(\mathbb{k}\) form an algebra, with the usual addition, scalar multiplication, and multiplication operations. This algebra is associative, since matrix multiplication is associative, and unital, since we have the matrix \(I\) to serve as \(\eta(1)\), but not commutative if \(n > 1\).
From an associative-but-not-commutative algebra \(A\) we can get a commutative-but-not-associative algebra called a Jordan algebra, \(J\), which has the same elements as \(A\), and hence the same vector space structure, but the multiplication looks different:
$$m_J(u \otimes v) = a(m(u\otimes v) + m(v\otimes u))$$ where the value of \(a\) depends on who you ask; sometimes it's 1, sometimes it's \(1/2\). Whether \(J\) is unital depends on whether \(A\) is unital, and also whether we can divide by \(2\) in \(\mathbb{k}\) (for the kinds of \(\mathbb{k}\) we're considering here, namely the real and complex numbers, we can always divide by 2). There are actually Jordan algebras that we can't get in this fashion; those are called exceptional Jordan algebras. The ones we can get this way are called special Jordan algebras. It says something about the people involved that all Jordan algebras are either special or exceptional.
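Here's a small numerical sketch (Python/NumPy, with \(a = 1/2\) and \(A\) the \(2\times 2\) real matrices, chosen just for illustration) showing that the Jordan product is commutative by construction but fails to be associative on a particular triple:

```python
import numpy as np

def jordan(u, v):
    """Special Jordan product with a = 1/2: u o v = (uv + vu)/2."""
    return (u @ v + v @ u) / 2

x = np.array([[0.0, 1.0], [0.0, 0.0]])
y = np.array([[0.0, 0.0], [1.0, 0.0]])
z = np.array([[1.0, 0.0], [0.0, -1.0]])

print(np.allclose(jordan(x, y), jordan(y, x)))                        # True: commutative
print(np.allclose(jordan(x, jordan(y, z)), jordan(jordan(x, y), z)))  # False: not associative
```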
Consider a set \(S\), and consider the set \(Fun(S)\) of all functions from \(S\) to \(\mathbb{k}\). We're not going to put any restrictions on these functions yet, so a function just assigns to each element of \(S\) a number in \(\mathbb{k}\). Given two functions \(f\) and \(g\) and any number \(c \in \mathbb{k}\), we can form the functions \(f + g\), \(cf\) and \(f\cdot g\) by defining what they do on elements of \(S\):
$$(f + g)(s) = f(s) + g(s), \quad (cf)(s) = cf(s), \quad (f\cdot g)(s) = f(s)g(s)$$ The first two rules tell us that we have a vector space; the last rule tells us that we have an associative, commutative algebra. We have a multiplicative identity and thus a unit map that takes a number \(c\) and spits out the constant function \(\hat c\), where \(\hat c(s) = c\) for all \(s \in S\).
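As a toy illustration (Python, with a made-up three-element set), here's \(Fun(S)\) with functions stored as dictionaries of values:

```python
# S = {'a', 'b', 'c'}; a function S -> k is just a dict of values.
f = {'a': 1.0, 'b': 2.0, 'c': 3.0}
g = {'a': 5.0, 'b': 0.0, 'c': -1.0}

add   = {s: f[s] + g[s] for s in f}   # (f + g)(s) = f(s) + g(s)
scale = {s: 4.0 * f[s] for s in f}    # (cf)(s)    = c f(s)
prod  = {s: f[s] * g[s] for s in f}   # (f . g)(s) = f(s) g(s)
unit  = {s: 7.0 for s in f}           # eta(7) = the constant function 7-hat

print(prod)  # {'a': 5.0, 'b': 0.0, 'c': -3.0}
```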
If \(S\) has some structure or properties, we could restrict our functions via those structures or properties, like demanding only continuous functions or smooth functions or polynomial functions, etc.
For the last class of examples, take a finite group, \(G\), and make a vector space \(\mathbb{k}G\) which we define to have a basis vector for each element of \(G\). In other words, for each \(g \in G\) we have a basis element \(\hat g\), and these are all linearly independent. So our elements are linear combinations:
$$\sum_{g \in G} c_g \hat g$$ We define our multiplication in \(\mathbb{k}G\) on basis vectors as just group composition, \(\hat g \hat h = \widehat{gh}\), extended bilinearly to all of \(\mathbb{k}G\). This multiplication is associative, since group composition is associative, and unital, since groups have to have identity elements. It's commutative if and only if \(G\) is commutative.
We call this construction the group algebra for \(G\).
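As a minimal sketch (Python, with \(G = \mathbb{Z}/3\) chosen just for illustration), an element of \(\mathbb{k}G\) can be stored as a dictionary from group elements to coefficients, and the multiplication implemented directly from the rule above:

```python
from collections import defaultdict

n = 3  # G = Z/3: elements 0, 1, 2; composition = addition mod 3

def multiply(u, v):
    """(sum_g c_g g-hat)(sum_h d_h h-hat) = sum_{g,h} c_g d_h (gh)-hat."""
    out = defaultdict(float)
    for g, c in u.items():
        for h, d in v.items():
            out[(g + h) % n] += c * d
    return dict(out)

u = {0: 1.0, 1: 2.0}   # 0-hat + 2 * 1-hat
v = {1: 3.0, 2: -1.0}  # 3 * 1-hat - 2-hat
print(multiply(u, v))  # {1: 3.0, 2: 5.0, 0: -2.0}
```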
We could do this construction with infinite groups as well, but we're not going to be using those as examples.
At this point you might be wondering why I insisted on writing everything in terms of both tensor indices and maps without vectors. The reason is that next time we're going to dualize everything, swapping our upper and lower indices and flipping all our arrows around to talk about coalgebras.