Duality, polynomial representations and plethysms

Wildon's Weblog 2019-08-20

The purpose of this post is to collect some standard results on duality and symmetric and exterior powers for representations of general and special linear groups. As a warm-up, here is a small true/false quiz with some, I think, rather unintuitive answers. Let K be the algebraic closure of \mathbb{F}_2 and let E be the natural representation of \mathrm{GL}_2(K).

  1. \mathrm{Sym}^2 E is an irreducible representation of \mathrm{GL}_2(K).
  2. The restriction of E to \mathrm{GL}_2(\mathbb{F}_4) is self-dual.
  3. E is self-dual.
  4. The restriction of E to \mathrm{SL}_2(\mathbb{F}_4) is self-dual.
  5. The restriction of \mathrm{Sym}^2 E to \mathrm{SL}_2(\mathbb{F}_4) is self-dual.
  6. The restriction of \mathrm{Sym}^2 E to \mathrm{GL}_2(\mathbb{F}_2) is self-dual, reducible and semisimple.

Answers, all of which should be clear by the end of this post: 1. False, 2. False, 3. False (but true if you interpreted duality as contragredient duality for algebraic groups), 4. True, 5. False, 6. True.

For deeper results and more of the general theory from coalgebras and algebraic groups, see Green, Polynomial representations of \mathrm{GL}_n, Springer Lecture Notes in Mathematics 830. These notes of mine might also be of interest.

Background

Let K be an infinite field. Fix d \in \mathbb{N}. Let x_{ij} for 1 \le i,j \le d be d^2 commuting indeterminates and let K[x_{ij}] be the polynomial ring they generate. Given f \in K[x_{ij}] and a d \times d matrix X, let f(X) denote f evaluated at the d^2 matrix coefficients X_{ij} for 1 \le i,j \le d. A representation \rho : \mathrm{GL}_d(K) \rightarrow \mathrm{GL}_D(K) of the general linear group is said to be polynomial of degree r if there exist polynomials f_{IJ} \in K[x_{ij}] for 1 \le I, J \le D, each homogeneous of degree r, such that \rho(X)_{IJ} = f_{IJ}(X) for all matrices X.

The natural left \mathrm{GL}_d(K)-module E is a polynomial representation of \mathrm{GL}_d(K) of dimension d and degree 1, in which f_{IJ} = x_{IJ} for 1 \le I, J \le d. The symmetric power \mathrm{Sym}^r E (defined as a quotient of E^{\otimes r}) is a polynomial representation of degree r. For example if d = 2 then \mathrm{Sym}^2 E = \langle e_1^2, e_2^2, e_1e_2 \rangle is the polynomial representation of degree 2 in which

\left( \begin{matrix} \alpha & \beta \\ \gamma & \delta \end{matrix} \right) \mapsto \left( \begin{matrix} \alpha^2 & \beta^2 & \alpha \beta \\ \gamma^2 & \delta^2 & \gamma \delta \\ 2\alpha \gamma & 2\beta \delta &  \alpha\delta + \beta\gamma  \\ \end{matrix} \right).

As a check on notational conventions, observe that the first column of the matrix on the right-hand side records the image

(\alpha e_1 + \gamma e_2)^2 = \alpha^2 e_1^2 + \gamma^2 e_2^2 + 2\alpha\gamma e_1e_2

of e_1^2. Hence f_{11} = x_{11}^2, f_{21} = x_{21}^2 and f_{31} = 2x_{11}x_{21}. Note that since K is infinite, the degree of the matrix coefficients on the right-hand side is unambiguously two. This fails over finite fields: in fact, by polynomial interpolation, any function f : \mathrm{GL}_d(\mathbb{F}_q) \rightarrow \mathbb{F}_q is a polynomial, of degree at most (q-1)d^2. Another difference between the infinite algebraic group \mathrm{GL}_d(K) and the finite group \mathrm{GL}_d(\mathbb{F}_q) arises when we define duality.
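As a further check on the conventions, here is a small numerical sketch of my own (the helper names rho_sym2 and mul are illustrative, not from the post): since \mathrm{Sym}^2 is a representation, the displayed 3 \times 3 matrix formula must be multiplicative, and this can be tested on random integer matrices.

```python
# Sanity check (sketch): the displayed 3x3 matrix formula for Sym^2 E
# should satisfy rho(X) rho(Y) = rho(XY), since Sym^2 is a representation.
import random

def rho_sym2(X):
    # Basis e1^2, e2^2, e1e2; each column records the image of a basis
    # vector, exactly as in the displayed matrix.
    (a, b), (c, d) = X
    return [[a*a, b*b, a*b],
            [c*c, d*d, c*d],
            [2*a*c, 2*b*d, a*d + b*c]]

def mul(A, B):
    # Matrix product for integer matrices given as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

random.seed(0)
for _ in range(20):
    X = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    Y = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    assert mul(rho_sym2(X), rho_sym2(Y)) == rho_sym2(mul(X, Y))
```

This is evidence for the bookkeeping, not a proof: the identity is polynomial in the matrix entries, so checking it on integer matrices is enough for the formulas displayed here.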

Duality and symmetric powers

Mimicking the definition from finite groups, we would define the dual of a representation \rho : \mathrm{GL}_d(K) \rightarrow \mathrm{GL}(V) to be the vector space dual V^\star with the action

\rho^\star(X) \theta = [v \mapsto \theta(\rho(X^{-1}) v)].

Let v_1, \ldots, v_D be a basis for V, with dual basis v_1^\star, \ldots, v_D^\star. The calculation

(\rho^\star(X) v_j^\star)v_i = v_j^\star(\rho(X^{-1})v_i) = \rho(X^{-1})_{ji}

shows that \rho^\star(X) v_j^\star = \sum_i \rho(X^{-1})_{ji} v_i^\star, and so (carefully remembering the convention for matrices acting on column vectors), we have \rho^\star(X) = \rho(X^{-1})^t. The inverse and transpose combine to make this a well-defined representation, but except when V is a direct sum of trivial representations, each of degree 0, it is not polynomial.

Example 1. In the basis e_1^2, e_2^2, e_1e_2 of \mathrm{Sym}^2 E, we have

\begin{aligned} \rho_{(\mathrm{Sym}^2 E)^\star}\left( \begin{matrix} \alpha & \beta \\ \gamma & \delta \end{matrix} \right) &= \left( \rho_{\mathrm{Sym}^2 E} \left( \frac{1}{\Delta} \left( \begin{matrix}  \delta & -\beta \\ -\gamma & \alpha \end{matrix} \right) \right) \right)^t \\ &= \frac{1}{\Delta^2} \left( \begin{matrix} \delta^2  & \beta^2 & - \beta\delta\\ \gamma^2 & \alpha^2 & -\alpha\gamma \\ -2\gamma \delta &  -2\alpha\beta & \alpha \delta + \beta \gamma  \end{matrix} \right)^t \\ &= \frac{1}{\Delta^2} \left( \begin{matrix} \delta^2 & \gamma^2 & -2\gamma\delta \\ \beta^2 & \alpha^2 & -2\alpha\beta \\ -\beta\delta & -\alpha\gamma & \alpha \delta + \beta \gamma \\  \end{matrix} \right) \end{aligned}

where \Delta = \alpha \delta - \beta\gamma denotes the determinant. The top-left coefficient is \delta^2 / \Delta^2, and this is not a polynomial in \alpha, \beta, \gamma, \delta. We shall see later that when this representation is restricted to the special linear group \mathrm{SL}_2(K), duality is much better behaved.

Because conventional duality takes us out of the category of polynomial representations, a different definition of duality is used. We define the contragredient dual of a polynomial representation V of \mathrm{GL}_d(K) to be the vector space dual V^\star with the action

\rho^\circ(X) \theta = [v \mapsto \theta(\rho(X^t)v)]

for X \in \mathrm{GL}_d(K) and \theta \in V^\star. A very similar calculation to the above shows that \rho^\circ(X) = \rho(X^t)^t. Note that typically, the two transposes do not cancel. For instance,

\begin{aligned} \rho_{(\mathrm{Sym}^2 E)^\circ}\left( \begin{matrix} \alpha & \beta \\ \gamma & \delta \end{matrix} \right) &= \left( \rho_{\mathrm{Sym}^2 E} \left( \begin{matrix}  \alpha & \gamma \\ \beta & \delta \end{matrix} \right) \right)^t \\ &= \left( \begin{matrix} \alpha^2 & \gamma^2 & \alpha\gamma \\ \beta^2 & \delta^2 & \beta\delta \\  2\alpha\beta &  2\gamma\delta & \alpha \delta + \beta \gamma \end{matrix} \right)^t \\ &=  \left( \begin{matrix} \alpha^2 & \beta^2 & 2\alpha\beta \\  \gamma^2 & \delta^2 & 2\gamma\delta \\ \alpha\gamma & \beta\delta &  \alpha \delta + \beta \gamma \end{matrix} \right)\end{aligned}

is the matrix for \mathrm{Sym}^2 E but with \alpha\beta and \gamma\delta doubled and 2\alpha\gamma and 2\beta\delta halved.
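Although the two transposes do not cancel entrywise, they do compose correctly: each transpose reverses the order of a product once, so X \mapsto \rho(X^t)^t is again a homomorphism. A quick numerical sketch of this (my own; rho_sym2 is the Sym^2 E matrix displayed earlier):

```python
# Sketch: X -> rho(X^t)^t is a homomorphism, because transposing the
# argument and transposing the result each reverse products once and the
# two reversals cancel. Helper names are my own.
import random

def rho_sym2(X):
    (a, b), (c, d) = X
    return [[a*a, b*b, a*b],
            [c*c, d*d, c*d],
            [2*a*c, 2*b*d, a*d + b*c]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def rho_contra(X):
    # The contragredient dual: rho^o(X) = rho(X^t)^t.
    return transpose(rho_sym2(transpose(X)))

random.seed(1)
for _ in range(20):
    X = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    Y = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    assert mul(rho_contra(X), rho_contra(Y)) == rho_contra(mul(X, Y))
```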

To make this construction more explicit, we shall prove that (\mathrm{Sym}^2 E)^\circ \cong \mathrm{Sym}_2E, where

\mathrm{Sym}_2E = \langle e_1 \otimes e_1, e_2 \otimes e_2, e_1 \otimes e_2 + e_2 \otimes e_1 \rangle

is the symmetric square, but now constructed as a submodule of E \otimes E. (The formal definition is below.) Indeed, with respect to the indicated basis of \mathrm{Sym}_2E, the action is

\rho_{\mathrm{Sym}_2E} \left( \begin{matrix} \alpha & \beta \\ \gamma & \delta \end{matrix} \right) = \left( \begin{matrix} \alpha^2 & \beta^2 & 2\alpha\beta  \\ \gamma^2 &  \delta^2 & 2\gamma\delta \\  \alpha\gamma & \beta \delta & \alpha \delta + \beta \gamma \end{matrix} \right)

exactly as for \rho_{(\mathrm{Sym}^2 E)^\circ}. For instance, the first column is now given by the image of e_1 \otimes e_1, namely

\begin{aligned} (\alpha e_1 + \gamma e_2) \otimes (\alpha e_1 + \gamma e_2) = & \alpha^2 (e_1 \otimes e_1) + \gamma^2 (e_2 \otimes e_2) \\ &\qquad + \alpha\gamma (e_1 \otimes e_2 + e_2 \otimes e_1). \end{aligned}

If \mathrm{char} K \not= 2 then \mathrm{Sym}^2 E \cong \mathrm{Sym}_2 E, via the isomorphism defined by

\begin{aligned} e_1^2 &\mapsto e_1 \otimes e_1 \\  e_2^2 &\mapsto e_2 \otimes e_2, \\  e_1e_2 &\mapsto \frac{1}{2}(e_1 \otimes e_2 + e_2 \otimes e_1).   \end{aligned}

(Correspondingly, changing the basis of \mathrm{Sym}^2 E to e_1^2,e_2^2, 2e_1e_2 doubles the third column and halves the third row of a matrix \rho_{\mathrm{Sym}^2 E}(X), giving the matrix \rho_{\mathrm{Sym}_2 E}(X).) But if \mathrm{char} K = 2 then the representations are non-isomorphic. Indeed, from the matrices above, it’s clear that \mathrm{Sym}^2 E has \langle e_1^2, e_2^2 \rangle as a subrepresentation, and not hard to see that in fact this is its unique non-trivial proper subrepresentation. The quotient is the one-dimensional determinant representation. (Some calculation is required: the warning example for \mathrm{GL}_2(\mathbb{F}_2) in Example 11 below shows that uniqueness fails if K is replaced with \mathbb{F}_2.) Similarly, \mathrm{Sym}_2 E has \langle e_1 \otimes e_2 + e_2 \otimes e_1 \rangle as its unique non-trivial proper subrepresentation. Thus, as expected from duality, the head of \mathrm{Sym}^2 E is isomorphic to the socle of \mathrm{Sym}_2 E, both being the determinant representation.
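The characteristic 2 claim is visible directly in the matrix entries: the bottom-row entries 2\alpha\gamma and 2\beta\delta vanish, and the remaining bottom-right entry is \alpha\delta + \beta\gamma = \alpha\delta - \beta\gamma, the determinant. A sketch of my own in Python, reducing the integer matrix mod 2 (checked over \mathbb{F}_2 for concreteness; the mod-2 shape of the formula is the same over any field of characteristic 2):

```python
# Sketch: reducing the Sym^2 E matrix mod 2, the bottom row becomes
# (0, 0, det X), so <e1^2, e2^2> is invariant and the quotient is the
# determinant representation. Checked over all invertible X in GL_2(F_2).
from itertools import product

def rho_sym2(X):
    (a, b), (c, d) = X
    return [[a*a, b*b, a*b],
            [c*c, d*d, c*d],
            [2*a*c, 2*b*d, a*d + b*c]]

for a, b, c, d in product(range(2), repeat=4):
    det = (a*d - b*c) % 2
    if det == 0:
        continue  # not invertible over F_2
    M = [[x % 2 for x in row] for row in rho_sym2([[a, b], [c, d]])]
    # The subspace <e1^2, e2^2> (first two coordinates) is invariant and
    # e1e2 maps to det * e1e2 modulo that subspace.
    assert M[2][0] == 0 and M[2][1] == 0 and M[2][2] == det
```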

More generally, \mathrm{Sym}^r E has L^{(r)}(E) as a subrepresentation, where L^{(r)}(E) = \langle v^r : v \in E \rangle; since \mathrm{GL}_d(K) acts transitively on the set \bigl\{v^r : v \in E\backslash \{0\} \bigr\}, L^{(r)}(E) is irreducible. If \mathrm{char} K = p > 0 and p divides a binomial coefficient \binom{r}{k} with 0 < k < r, then L^{(r)}(E) is a proper subrepresentation of \mathrm{Sym}^r E. For example, if \mathrm{char} K = 2 and \dim E = 2 then, expanding (a v_1 + b v_2)^5 and noting that \binom{5}{2} and \binom{5}{3} are even while \binom{5}{1} and \binom{5}{4} are odd, L^{(5)}(E) is spanned by v_1^5, v_1^4 v_2, v_1 v_2^4 and v_2^5. (This indicates the general proof.)
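The parity pattern of the binomial coefficients behind this spanning set takes one line to check (stdlib only; the name odd is mine):

```python
# Which terms of (a v1 + b v2)^5 survive in characteristic 2? Exactly the
# terms whose binomial coefficient is odd: k = 0, 1, 4, 5, giving the
# spanning set v1^5, v1^4 v2, v1 v2^4, v2^5 of L^{(5)}(E).
import math

odd = [k for k in range(6) if math.comb(5, k) % 2 == 1]
assert odd == [0, 1, 4, 5]
```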

Returning to contragredient duality, it is a good exercise (performed in Proposition 5 below) to show that, generalizing our running example, (\mathrm{Sym}^r E)^\circ \cong \mathrm{Sym}_r E. For this it is important to be clear on the definition of \mathrm{Sym}_r E. We define it using the right action of the symmetric group S_r by place permutation on E^{\otimes r}. This is specified on basis elements by

(e_{i_1} \otimes \cdots \otimes e_{i_r}) \cdot \sigma = e_{i_{1\sigma^{-1}}} \otimes \cdots \otimes e_{i_{r\sigma^{-1}}}.

(The inverse is correct for a right action of the symmetric group where permutations are composed by multiplying from left to right: it ensures that the basis element e_{i_j} in position j on the left-hand side appears in position j\sigma on the right-hand side.)

Definition 2. Let I(d,r) = \{1,\ldots, d\}^r. We call the elements of I(d,r) multi-indices. Given \mathbf{i} \in I(d,r) we define

e_\mathbf{i} = e_{i_1} \otimes \cdots \otimes e_{i_r} \in E^{\otimes r}

and

v_\mathbf{i} = \sum_{\sigma} e_\mathbf{i} \cdot \sigma

where the sum is over a set of coset representatives for the stabiliser of e_\mathbf{i} in the place permutation action. Then

\mathrm{Sym}_r E = \langle v_\mathbf{i} : \mathbf{i} \in I(d,r), i_1 \le \ldots \le i_r \rangle.

For instance if r=3 and d = 2 then

v_{(1,2,2)} = e_1 \otimes e_2 \otimes e_2 + e_2 \otimes e_1 \otimes e_2 + e_2 \otimes e_2 \otimes e_1.

Suitable coset representatives for the stabiliser \langle (2,3)\rangle of e_1 \otimes e_2 \otimes e_2 in S_3 are \mathrm{id}, (1,2) and (1,3).
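These conventions are easy to get wrong, so here is a concrete sketch of my own (the helper place_perm is illustrative): it checks that the orbit of e_1 \otimes e_2 \otimes e_2 under place permutation consists of the three distinct rearrangements appearing in v_{(1,2,2)}, and that the stabiliser has order 2.

```python
# Sketch of the place-permutation conventions: word . sigma puts the
# letter in position j into position j sigma, and the orbit sum over
# coset representatives is the sum of the distinct rearrangements.
from itertools import permutations

def place_perm(word, sigma):
    # sigma is a tuple giving the images of positions 0, ..., r-1.
    out = [None] * len(word)
    for j, letter in enumerate(word):
        out[sigma[j]] = letter
    return tuple(out)

# The orbit of e_1 x e_2 x e_2 gives exactly the three summands of
# v_{(1,2,2)} displayed above.
orbit = {place_perm((1, 2, 2), s) for s in permutations(range(3))}
assert orbit == {(1, 2, 2), (2, 1, 2), (2, 2, 1)}

# The stabiliser permutes the two positions holding e_2, so it has order
# 2 and there are 3!/2 = 3 coset representatives, one per summand.
stab = [s for s in permutations(range(3)) if place_perm((1, 2, 2), s) == (1, 2, 2)]
assert len(stab) == 2
```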

Example 3. As an example of the multi-index notation, note that if X \in \mathrm{GL}_d(K) and \mathbf{j} \in I(d,r) then

\begin{aligned} \rho_{E^{\otimes r}}(X) e_\mathbf{j} &= \sum_{i_1, \ldots, i_r \in \{1,\ldots, d\}} X_{i_1j_1} \ldots X_{i_rj_r} e_{i_1} \otimes \cdots \otimes e_{i_r} \\ &=\sum_{\mathbf{i} \in I(d,r)} X_{i_1j_1} \ldots X_{i_rj_r} e_\mathbf{i}. \end{aligned}

Summing over coset representatives in Definition 2 ensures that each summand in v_\mathbf{i} is a different rearrangement of e_\mathbf{i} and so, no matter what the characteristic, \mathrm{Sym}_r E is the subspace of E^{\otimes r} of fixed points for the place permutation action of S_r. By this observation and the following lemma, whose proof is left as an exercise, \mathrm{Sym}_r E is a subrepresentation of E^{\otimes r}.

Lemma 4. The right action of S_r on E^{\otimes r} commutes with the left action of \mathrm{GL}_d(K). \Box

Proposition 5. (\mathrm{Sym}^r E)^\circ \cong \mathrm{Sym}_r E.

Proof. Fix X \in \mathrm{GL}_d(K) and \mathbf{j} \in I(d,r). By the remark about rearrangements above, v_\mathbf{j} = \sum_{\mathbf{j}'} e_\mathbf{j'}, where, as a standing convention, the sum is over all distinct rearrangements \mathbf{j}' of \mathbf{j}. By Example 3 we have

\rho_{E^{\otimes r}}(X) e_\mathbf{j'} = \sum_{\mathbf{i} \in I(d,r)} X_{i_1j'_1} \ldots X_{i_rj'_r} e_\mathbf{i}.

Therefore

\rho_{\mathrm{Sym}_r E}(X) v_\mathbf{j} = \sum_{\mathbf{i}\in I(d,r)} \sum_{\mathbf{j}'} X_{i_1j'_1} \ldots X_{i_rj'_r} e_\mathbf{i}.

The right-hand side is known to be in \mathrm{Sym}_r E. Looking at the coefficient of e_\mathbf{i} where i_1 \le \ldots \le i_r we see that the coefficient of v_\mathbf{i} in the right hand side is

\sum_{\mathbf{j}'} X_{i_1j'_1} \ldots X_{i_rj'_r}.

We now turn to (\mathrm{Sym}^r E)^\circ. Given \mathbf{i} \in I(d,r) define

w_{\mathbf{i}} = e_{i_1} \ldots e_{i_r} \in \mathrm{Sym}^r E.

Observe that the w_\mathbf{i} for i_1 \le \ldots \le i_r are a basis of \mathrm{Sym}^r E. Let w_\mathbf{i}^\star denote the corresponding element of the dual basis of (\mathrm{Sym}^r E)^\circ. We have

\rho_{(\mathrm{Sym}^r E)^\circ}(X) w_\mathbf{j}^\star = [w_\mathbf{i} \mapsto w_\mathbf{j}^\star \bigl( \rho_{\mathrm{Sym}^r E}(X^t) w_\mathbf{i}\bigr)].

Since \rho_{\mathrm{Sym}^r E}(X^t) w_\mathbf{i} = \sum_\mathbf{k} X^t_{k_1i_1} \ldots X^t_{k_ri_r} w_\mathbf{k}, the coefficient of w_\mathbf{i}^\star in the image of w_\mathbf{j}^\star is

\sum_{\mathbf{j}'} X^t_{j'_1i_1} \ldots X^t_{j'_ri_r}.

Noting the transpose in X^t, this agrees with the coefficient of v_\mathbf{i} in the image of v_\mathbf{j}. Therefore the map v_\mathbf{k} \mapsto w_\mathbf{k}^\star defines an isomorphism of representations \mathrm{Sym}_r E \cong (\mathrm{Sym}^r E)^\circ. \Box
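The bookkeeping in this proof can be machine-checked for small cases. Here is a sketch of my own for d = 2 and r = 3 (all helper names are illustrative): it builds the matrix of X on \mathrm{Sym}_r E and the matrix of X on \mathrm{Sym}^r E directly from the definitions, and verifies that the first equals the transpose of the second evaluated at X^t.

```python
# Numerical check of Proposition 5 for d = 2, r = 3: the matrix of X on
# Sym_r E (submodule of the tensor power) equals the transpose of the
# matrix of X^t on Sym^r E (quotient of the tensor power).
from itertools import product
import random

d, r = 2, 3
basis = [i for i in product(range(d), repeat=r) if list(i) == sorted(i)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def sym_lower(X):
    # Column j: apply X x X x X to v_j = sum of the distinct
    # rearrangements of e_j, then read off the coefficient of e_i for
    # each weakly increasing i (= the coefficient of v_i).
    cols = []
    for j in basis:
        image = {}
        for jp in {p for p in product(range(d), repeat=r) if sorted(p) == sorted(j)}:
            for i in product(range(d), repeat=r):
                w = 1
                for a in range(r):
                    w *= X[i[a]][jp[a]]
                image[i] = image.get(i, 0) + w
        cols.append([image.get(i, 0) for i in basis])
    return transpose(cols)

def sym_upper(X):
    # Column j: expand the product of the images of e_{j_1}, ..., e_{j_r}
    # in the quotient Sym^r E, collecting monomials by sorting indices.
    M = [[0] * len(basis) for _ in basis]
    for cj, j in enumerate(basis):
        for i in product(range(d), repeat=r):
            w = 1
            for a in range(r):
                w *= X[i[a]][j[a]]
            M[basis.index(tuple(sorted(i)))][cj] += w
    return M

random.seed(2)
for _ in range(10):
    X = [[random.randint(-5, 5) for _ in range(d)] for _ in range(d)]
    assert sym_lower(X) == transpose(sym_upper(transpose(X)))
```

Since both sides are polynomial in the entries of X, checking on integer matrices tests exactly the identity of matrix coefficients established in the proof.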

The proposition is an interesting example of a ‘self-generalizing’ result. To be entirely honest, some human intervention is required, but it is essentially a matter of book-keeping.

Corollary 6. Let V be a representation of \mathrm{GL}_d(K). Then

(\mathrm{Sym}^r V)^\circ \cong \mathrm{Sym}_r V^\circ.

Proof. Let D = \mathrm{dim} V. By Proposition 5, taking E = V, there is a D \times D matrix P such that

P^{-1} \bigl(\rho_{\mathrm{Sym}^r V}(Y^t)\bigr)^t P = \rho_{\mathrm{Sym}_r(V)}(Y)

for all matrices Y \in \mathrm{GL}_D(K). Hence, setting Y = \rho_V(X^t)^t, we have

P^{-1} \bigl(\rho_{\mathrm{Sym}^r V}(\rho_V(X^t))\bigr)^t P = \rho_{\mathrm{Sym}_r(V)}\bigl(\rho_V(X^t)^t\bigr)

for all matrices X \in \mathrm{GL}_d(K). In the representation (\mathrm{Sym}^r V)^\circ the matrix representing X \in \mathrm{GL}_d(K) is, by definition, \bigl(\rho_{\mathrm{Sym}^r V}(\rho_V(X^t))\bigr)^t, while in the representation \mathrm{Sym}_r V^\circ, the matrix representing X \in \mathrm{GL}_d(K) is, again by definition, \rho_{\mathrm{Sym}_r V}\bigl(\rho_V(X^t)^t\bigr). By the previous displayed equation, these representing matrices are conjugate by the fixed matrix P. \Box

Exterior powers

Perhaps surprisingly, copying the construction of \mathrm{Sym}_r E for exterior powers does not give a new representation. Let d \ge r. We first find the matrices for \bigwedge^r E.

Lemma 7. Let X \in \mathrm{GL}_d(K). Let Y = \rho_{\wedge^r E}(X) be the matrix representing X in its action on \bigwedge^r E, with respect to the basis of all e_{i_1} \wedge \cdots \wedge e_{i_r} where i_1 < \ldots < i_r. Then

Y_{\mathbf{i} \mathbf{j}} = \sum_{\sigma \in S_r} \mathrm{sgn}(\sigma) X_{i_{1\sigma} j_1} \ldots X_{i_{r\sigma} j_r}.

Proof. We have

\begin{aligned} \rho_{\wedge^r E}(X)& (e_{j_1} \wedge \cdots \wedge e_{j_r}) \\ & =\sum_{\mathbf{i} \in I(d,r)} X_{i_1j_1} \cdots X_{i_rj_r} (e_{i_1} \wedge \cdots \wedge e_{i_r}) \\ & =\sum_{i_1 < \ldots < i_r} \sum_{\sigma \in S_r} \mathrm{sgn}(\sigma) X_{i_{1\sigma}j_1} \ldots X_{i_{r\sigma}j_r} (e_{i_1} \wedge \cdots \wedge e_{i_r}). \end{aligned}

as required. \Box

Given \mathbf{i} \in I(d,r), let

u_\mathbf{i} = \sum_{\sigma \in S_r} \mathrm{sgn}(\sigma) e_\mathbf{i} \cdot \sigma.

Observe that u_\mathbf{i} = 0 if the multi-index \mathbf{i} has two equal entries. (This holds even if \mathrm{char} K = 2.) Let \bigwedge_r E be the subspace of E^{\otimes r} with basis all u_\mathbf{i} for \mathbf{i} \in I(d,r) such that i_1 < \ldots < i_r. It follows from Lemma 4 that \bigwedge_r E is a subrepresentation of E^{\otimes r}. Let X \in \mathrm{GL}_d(K). By Example 3,

\rho_{\wedge_r E}(X) u_\mathbf{j} = \sum_{\mathbf{i} \in I(d,r)} X_{i_1j_1} \ldots X_{i_rj_r} \sum_{\sigma \in S_r} \mathrm{sgn}(\sigma) e_\mathbf{i} \cdot \sigma.

The right-hand side is known to be in \bigwedge_r E. The coefficient of e_\mathbf{i} where i_1 < \ldots < i_r in the right-hand side is

\sum_{\sigma \in S_r} \mathrm{sgn}(\sigma) X_{i_{1\sigma} j_1} \ldots X_{i_{r\sigma}j_r}.

Comparing with Lemma 7, we see that the matrices are the same. Therefore \bigwedge^r E \cong \bigwedge_r E.
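The signed-sum formula of Lemma 7 can also be tested numerically: the matrix of r \times r minors must be multiplicative in X, which is the Cauchy–Binet identity in disguise. A sketch of my own for d = 3, r = 2 (helper names mine):

```python
# Sketch check of Lemma 7 for d = 3, r = 2: the matrix of 2x2 minors is
# multiplicative, i.e. wedge2(X) wedge2(Y) = wedge2(XY) (Cauchy-Binet).
from itertools import combinations
import random

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def wedge2(X):
    # Basis z_{i1} ^ z_{i2} with i1 < i2; the entries are the 2x2 minors
    # Y_{(i1,i2)(j1,j2)} = X_{i1 j1} X_{i2 j2} - X_{i2 j1} X_{i1 j2}.
    idx = list(combinations(range(len(X)), 2))
    return [[X[i1][j1] * X[i2][j2] - X[i2][j1] * X[i1][j2]
             for (j1, j2) in idx] for (i1, i2) in idx]

random.seed(3)
for _ in range(10):
    X = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
    Y = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
    assert mul(wedge2(X), wedge2(Y)) == wedge2(mul(X, Y))
```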

Similarly one can show that \bigwedge^r E \cong (\bigwedge^r E)^\circ. For a more conceptual proof of this, one could argue that since \bigwedge^r E is generated by the highest weight vector e_1 \wedge \cdots \wedge e_r, of weight (1,\ldots,1,0,\ldots,0), it is irreducible; over any field the irreducible polynomial representations of \mathrm{GL}_d(K) are self-dual with respect to contragredient duality.

Special linear groups

We saw above that the contragredient duality used for representations of the infinite algebraic group \mathrm{GL}_d(K) does not, in general, agree with duality as defined for representations of finite groups. There is an interesting exception to this which occurs when d = 2 and we restrict to the action of \mathrm{SL}_2(K). Write \cong_{\mathrm{SL}_2(K)} for an isomorphism that holds after this restriction. Here it is in our running example.

Example 8. Let X = \left( \begin{matrix} \alpha & \beta \\ \gamma &\delta \end{matrix}\right). We saw in Example 1 that

\rho_{(\mathrm{Sym}^2 E)^\star} (X) = \frac{1}{\Delta^2} \left( \begin{matrix} \delta^2 & \gamma^2 & -2\gamma\delta \\ \beta^2 & \alpha^2 & -2\alpha\beta \\ -\beta\delta & -\alpha\gamma & \alpha \delta + \beta \gamma   \end{matrix} \right).

If X \in \mathrm{SL}_2(K) then \Delta = 1. Changing the basis of (\mathrm{Sym}^2 E)^\star by swapping the first and second basis vectors and negating the third we get an isomorphic representation \rho' in which

\rho'(X) = \left( \begin{matrix} \alpha^2 & \beta^2 & 2\alpha\beta \\ \gamma^2 & \delta^2 &  2\gamma\delta\\ \alpha\gamma & \beta\delta & \alpha \delta + \beta \gamma   \end{matrix} \right).

This agrees with the matrix representing X in its action on (\mathrm{Sym}^2 E)^\circ \cong (\mathrm{Sym}_2 E). Therefore (\mathrm{Sym}^2 E)^\star \cong_{\mathrm{SL}_2(K)} (\mathrm{Sym}^2 E)^\circ.

More generally we have the following proposition.

Proposition 9. Let V be a representation of \mathrm{GL}_2(K). Then V^\star \cong_{\mathrm{SL}_2(K)} V^\circ.

Proof. Let J = \left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right). By the following calculation

\begin{aligned} J \left(\begin{matrix} \alpha & \beta \\ \gamma & \delta \end{matrix} \right)^{-1} J^{-1} &= \left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix}\right) \left( \begin{matrix} \delta & -\beta \\ -\gamma & \alpha \end{matrix} \right) \left( \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right) \\ &= \left( \begin{matrix} -\gamma  & \alpha \\  -\delta & \beta \end{matrix} \right) \left( \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right) \\ &= \left( \begin{matrix} \alpha & \gamma \\ \beta & \delta \end{matrix}\right), \end{aligned}

for any matrix X \in \mathrm{SL}_2(K), the matrices X^{-1} and X^t are conjugate by the fixed matrix J. Hence so are \rho_V(X^{-1}) and \rho_{V}(X^t), and since transposition preserves conjugacy, so are \rho_V(X^{-1})^t and \rho_V(X^t)^t. The proposition follows. \Box
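The key identity J X^{-1} J^{-1} = X^t for determinant-one X is easy to spot-check (my own sketch; for an integer matrix of determinant 1 the inverse is again integral):

```python
# Check the identity J X^{-1} J^{-1} = X^t for X in SL_2: here X is an
# integer matrix of determinant 1, so X^{-1} = (d, -b; -c, a) exactly.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

J = [[0, 1], [-1, 0]]
Jinv = [[0, -1], [1, 0]]

def inv_sl2(X):
    (a, b), (c, d) = X
    assert a * d - b * c == 1  # only valid in SL_2
    return [[d, -b], [-c, a]]

for X in ([[2, 3], [3, 5]], [[1, 4], [0, 1]], [[5, 2], [2, 1]]):
    assert mul(mul(J, inv_sl2(X)), Jinv) == transpose(X)
```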

Plethysms

If K has characteristic zero then it is known that

\mathrm{Sym}^2 \mathrm{Sym}^n E \cong_{\mathrm{SL}_2(K)} \bigwedge^2 \mathrm{Sym}^{n+1} E.

Example 10. We shall prove that when n = 1, this isomorphism holds for arbitrary K, provided that one side is replaced with its contragredient dual. By Lemma 7, given a matrix X \in \mathrm{GL}_3(K) acting on a basis z_1, z_2, z_3, the matrix \bigwedge^2 X representing X in its action on \langle z_1 \wedge z_2, z_1 \wedge z_3, z_2 \wedge z_3 \rangle with respect to the indicated basis is Y where

Y_{(i_1,i_2)(j_1,j_2)} = X_{i_1j_1}X_{i_2j_2}-X_{i_2j_1}X_{i_1j_2}.

Applying this result when X acts on \mathrm{Sym}^2 E with respect to the basis e_1^2, e_2^2, e_1e_2, and setting \Delta = (\alpha \delta - \beta \gamma) and \Gamma = \alpha\delta  + \beta\gamma, we find that

\begin{aligned} \rho_{\wedge^2 \mathrm{Sym}^2 E}&\left( \begin{matrix} \alpha & \beta \\ \gamma & \delta \end{matrix}\right) \\  &= \left( \begin{matrix} \alpha^2 \delta^2 - \beta^2\gamma^2 & \alpha^2 \gamma \delta - \gamma^2 \alpha\beta & \beta^2\gamma\delta - \delta^2 \alpha \beta \\ 2\alpha^2 \beta\delta - 2\alpha\gamma \beta^2 & \alpha^2 \Gamma - 2\alpha^2\gamma \beta & \beta^2 \Gamma - 2\beta^2\delta \alpha \\ 2\gamma^2 \beta\delta - 2\alpha\gamma \delta^2 & \gamma^2 \Gamma  - 2\alpha\gamma^2\delta & \delta^2 \Gamma - 2\beta\delta^2 \gamma \end{matrix} \right) \\ &= \left( \begin{matrix} \Delta (\alpha \delta + \beta \gamma) & \Delta \alpha\gamma & -\Delta \beta\delta \\ 2 \Delta \alpha \beta & \Delta \alpha^2 & -\Delta \beta^2 \\ -2\Delta \gamma\delta & -\Delta\gamma^2 &  \Delta \delta^2 \end{matrix} \right). \end{aligned}

Now use that \Delta = 1 for matrices in \mathrm{SL}_2(K) and change basis by negating the third basis element to get the conjugate matrix on the left below

\left(\begin{matrix} \alpha \delta + \beta \gamma & \alpha\gamma & \beta\delta \\ 2  \alpha \beta &  \alpha^2 & \beta^2 \\ 2 \gamma\delta & \gamma^2 &  \delta^2 \end{matrix} \right) \sim \left(\begin{matrix} \alpha^2 & \beta^2 & 2  \alpha \beta  \\  \gamma^2 &  \delta^2 & 2 \gamma\delta \\ \alpha\gamma & \beta\delta & \alpha \delta + \beta \gamma \end{matrix} \right).

The matrix on the right is its conjugate by a permutation of the basis. We saw above in the example of contragredient duality that

\rho_{(\mathrm{Sym}^2 E)^\circ}\left( \begin{matrix} \alpha & \beta \\ \gamma & \delta \end{matrix} \right) = \left( \begin{matrix} \alpha^2 & \beta^2 & 2\alpha\beta \\  \gamma^2 & \delta^2 & 2\gamma\delta \\ \alpha\gamma & \beta\delta &  \alpha \delta + \beta \gamma \end{matrix} \right).

The matrices agree and so (\mathrm{Sym}^2 E)^\circ \cong_{\mathrm{SL}_2(K)} \bigwedge^2 \mathrm{Sym}^2 E. This isomorphism can be recast in various ways, for example, \mathrm{Sym}_2 E \cong_{\mathrm{SL}_2(K)} \bigwedge^2 \mathrm{Sym}^2 E or \mathrm{Sym}^2 E \cong_{\mathrm{SL}_2(K)} \bigwedge^2 \mathrm{Sym}_2 E.
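The whole example can be verified mechanically: for a determinant-one X, the matrix of \bigwedge^2 \mathrm{Sym}^2 E is conjugate to the contragredient-dual matrix by a signed permutation of the basis. The sketch below (all names mine) searches for the conjugating signed permutation rather than hard-coding the one found above.

```python
# Sketch verification of the plethysm example: for an integer X of
# determinant 1, wedge2(rho_sym2(X)) is conjugate to rho_sym2(X^t)^t by
# a signed permutation matrix, matching the basis changes in the example.
from itertools import combinations, permutations, product

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def rho_sym2(X):
    (a, b), (c, d) = X
    return [[a*a, b*b, a*b],
            [c*c, d*d, c*d],
            [2*a*c, 2*b*d, a*d + b*c]]

def wedge2(X):
    idx = list(combinations(range(len(X)), 2))
    return [[X[i1][j1] * X[i2][j2] - X[i2][j1] * X[i1][j2]
             for (j1, j2) in idx] for (i1, i2) in idx]

def signed_perm_conjugating(A, B):
    # Search for a signed permutation P with P^{-1} A P = B; signed
    # permutation matrices are orthogonal, so P^{-1} = P^t.
    n = len(A)
    for perm in permutations(range(n)):
        for signs in product((1, -1), repeat=n):
            P = [[signs[j] if perm[j] == i else 0 for j in range(n)]
                 for i in range(n)]
            if mul(mul(transpose(P), A), P) == B:
                return P
    return None

X = [[1, 1], [1, 2]]                   # determinant 1
A = wedge2(rho_sym2(X))                # X acting on wedge^2 Sym^2 E
B = transpose(rho_sym2(transpose(X)))  # X acting on (Sym^2 E)^o
assert signed_perm_conjugating(A, B) is not None
```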

Finite groups

Let \mathrm{char} K = p. By Proposition 9 any isomorphism of \mathrm{SL}_2(K)-representations restricts to an isomorphism of \mathrm{SL}_2(\mathbb{F}_{p^m})-representations, provided we systematically replace contragredient duality \circ with conventional duality \star. For instance, Example 10 implies that

(\mathrm{Sym}^2 E)^\star \cong_{\mathrm{SL}_2(\mathbb{F}_{p^m})} \bigwedge^2 \mathrm{Sym}^2 E.

Here is a warning example showing that some features of the infinite case are not preserved by restriction.

Example 11. Let E = \mathbb{F}_2^2. We have

\mathrm{GL}_2(\mathbb{F}_2) = \mathrm{SL}_2(\mathbb{F}_2) = \Bigl\langle \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right), \left( \begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix} \right) \Bigr\rangle

and, in the usual basis e_1^2, e_2^2, e_1e_2 of \mathrm{Sym}^2 E,

\begin{aligned}\rho_{\mathrm{Sym^2} E} \left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right) &= \left( \begin{matrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 &  1 \end{matrix} \right) \\ \rho_{\mathrm{Sym^2} E} \left( \begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix} \right) &= \left( \begin{matrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{matrix} \right) .\end{aligned}

Observe that each row of both 3 \times 3 matrices sums to 1; equivalently, e_1^2 + e_2^2 + e_1e_2 is fixed by both generators. Therefore in the alternative basis e_1^2, e_2^2, e_1^2+e_2^2 + e_1e_2 we have

\begin{aligned}\rho'_{\mathrm{Sym^2} E} \left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right) &= \left( \begin{matrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 &  1 \end{matrix} \right) \\ \rho'_{\mathrm{Sym^2} E} \left( \begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix} \right) &= \left( \begin{matrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{matrix} \right) .\end{aligned}

Hence \mathrm{Sym}^2 E \cong \mathbb{F}_2 \oplus E is reducible and semisimple, as claimed in the answer to the final quiz question.
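The change of basis can be checked by direct computation mod 2: the matrix C whose columns are e_1^2, e_2^2, e_1^2+e_2^2+e_1e_2 is an involution mod 2, and conjugating by it block-diagonalises both generators. A sketch (names mine):

```python
# Check Example 11 over F_2: conjugating rho_sym2 by the basis change
# C = (e1^2, e2^2, e1^2+e2^2+e1e2) block-diagonalises both generators of
# GL_2(F_2), exhibiting Sym^2 E as E plus a trivial summand.
def mul(A, B):
    # Matrix product reduced mod 2.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def rho_sym2(X):
    (a, b), (c, d) = X
    return [[a*a, b*b, a*b],
            [c*c, d*d, c*d],
            [2*a*c, 2*b*d, a*d + b*c]]

# Columns of C are the new basis vectors in the old coordinates; C is
# its own inverse modulo 2.
C = [[1, 0, 1], [0, 1, 1], [0, 0, 1]]
assert mul(C, C) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

for X in ([[0, 1], [1, 0]], [[1, 1], [1, 0]]):
    N = mul(C, mul(rho_sym2(X), C))  # = C^{-1} rho(X) C mod 2
    # Block diagonal: a trivial summand plus a 2x2 block, and over F_2
    # squaring entries is the identity, so the 2x2 block is X itself.
    assert N[0][2] == N[1][2] == N[2][0] == N[2][1] == 0
    assert N[2][2] == 1
    assert [row[:2] for row in N[:2]] == X
```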

To end, here is a problem combining ordinary and contragredient duality, to which I do not immediately have a complete solution.

Problem 12. Let G be a finite subgroup of \mathrm{GL}_d(K) that is closed under transposition and let \rho : G \rightarrow \mathrm{GL}(V) be a representation of G. Let V^\circ be the representation defined by \rho^\circ(X) = \rho(X^t)^t. What is the relationship between V, V^\circ and V^\star?

For instance, if K is a subfield of the real numbers then there is an orthogonal form on K^d invariant under G. (This is part of the standard proof of Maschke’s Theorem.) Hence in this case we may assume that G is a subgroup of the orthogonal group, and so X^t = X^{-1} for all X \in G. It follows that

\rho^\circ(X) = \rho(X^t)^t = \rho(X^{-1})^t = \rho^\star(X)

for all X \in G, and so V \cong V^\circ \cong V^\star. The example \mathrm{Sym}^2 E, where E is the natural representation of \mathrm{SL}_2(\mathbb{F}_{2^m}) and m \ge 2, has V \not\cong V^\circ \cong V^\star. It is also possible to have V \cong V^\circ \not\cong V^\star. For example, this is clearly the case if G is a finite subgroup of \mathrm{GL}_1(\mathbb{C}) not contained in \{ \pm 1\} and V is the natural 1-dimensional representation of G.