Lecture 17 (23.11.2015)

We started with a brief review of the graphical calculus introduced last time; see these slides.

I next demonstrated the solution to Exercise 4.1 from last lecture: Let $I$ be an index set, and $E_{ab}$ the elementary matrices in $M_I$, i.e. $(E_{ab})^i_j=\delta^{ia}\delta_{jb}$ for all $a,b,i,j\in I$.

Then $$(E_{rs}E_{ij})^a_b = \sum_c (E_{rs})^a_c(E_{ij})^c_b=\sum_c \delta^{ra}\delta_{sc}\delta^{ic}\delta_{jb}.$$ In this sum, each term with $c\neq s$ is zero because of the factor $\delta_{sc}$. We may thus drop the sum and $\delta_{sc}$, and put $c=s$ everywhere. Thus

$$(E_{rs}E_{ij})^a_b = \delta^{ra}\delta^{is}\delta_{jb}=\delta_{is}\cdot(E_{rj})^a_b.$$ As $a,b$ were arbitrary, this shows $$E_{rs}E_{ij}=\delta_{is}\,E_{rj}.$$ We will frequently use this identity in the following.
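As a small aside (my illustration, not part of the lecture itself), this identity is easy to check numerically. Here is a minimal NumPy sketch, assuming the finite index set $I=\{0,\dots,n-1\}$; the helper `E` is a hypothetical name for the elementary matrices.

```python
# Minimal NumPy check of E_rs E_ij = delta_is E_rj for I = {0, ..., n-1}.
import numpy as np

n = 4  # size of the index set I; any small n will do

def E(a, b):
    """Elementary matrix E_ab: a 1 in row a, column b, zeros elsewhere."""
    m = np.zeros((n, n))
    m[a, b] = 1.0
    return m

for r in range(n):
    for s in range(n):
        for i in range(n):
            for j in range(n):
                lhs = E(r, s) @ E(i, j)
                rhs = (1.0 if s == i else 0.0) * E(r, j)
                assert np.array_equal(lhs, rhs)
```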

Next we consider the matrix $U:=\sum_{i,j}E_{ij}\otimes E_{ij}$, and compute its matrix elements. Since this matrix lies in $M_{I^2}$, we need two upper and two lower indices.

$$(\sum_{i,j}E_{ij}\otimes E_{ij})^{ab}_{cd}=\sum_{i,j}(E_{ij})^a_c(E_{ij})^b_d=\sum_{i,j}\delta^{ai}\delta_{jc}\delta^{bi}\delta_{jd}=\delta^{ab}\delta_{cd}$$

Thus $U^{ab}_{cd}=\delta^{ab}\delta_{cd}$.
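Again as an illustration (same assumptions as in the sketch above), one can build $U$ with `np.kron`, reshape the resulting $n^2\times n^2$ matrix into a four-index array, and compare its entries with $\delta^{ab}\delta_{cd}$:

```python
# Build U = sum_{i,j} E_ij (x) E_ij and check U^{ab}_{cd} = delta^{ab} delta_{cd}.
import numpy as np

n = 4

def E(a, b):
    m = np.zeros((n, n))
    m[a, b] = 1.0
    return m

U = sum(np.kron(E(i, j), E(i, j)) for i in range(n) for j in range(n))

# Reshape so that U4[a, b, c, d] = U^{ab}_{cd} (row index (a,b), column index (c,d)).
U4 = U.reshape(n, n, n, n)
for a in range(n):
    for b in range(n):
        for c in range(n):
            for d in range(n):
                assert U4[a, b, c, d] == (a == b) * (c == d)
```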

Finally, we want to show that $(U\otimes 1)(1\otimes U)(U\otimes 1)=(U\otimes 1)$ holds. To this end, we write down the left-hand side, making use of the representation $U=\sum_{i,j}E_{ij}\otimes E_{ij}$ from above and the rules for matrix/tensor products.

$$\begin{align*}(U\otimes 1)(1\otimes U)(U\otimes 1)&=\left(\sum_{i,j}E_{ij}\otimes E_{ij}\otimes 1\right)\left(1\otimes\sum_{a,b}E_{ab}\otimes E_{ab}\right)\left(\sum_{c,d}E_{cd}\otimes E_{cd}\otimes 1\right)\\&=\sum_{i,j,a,b,c,d}E_{ij}E_{cd}\otimes E_{ij}E_{ab}E_{cd}\otimes E_{ab}\\&=\sum_{i,j,a,b,c,d}\delta_{jc}E_{id}\otimes \delta_{ja}E_{ib}E_{cd}\otimes E_{ab}\\&=\sum_{i,j,b,d}E_{id}\otimes E_{ib}E_{jd}\otimes E_{jb}\\&=\sum_{i,j,b,d}E_{id}\otimes \delta_{bj}E_{id}\otimes E_{jb}\\&=\sum_{i,j,d}E_{id}\otimes E_{id}\otimes E_{jj}\\&=\left(\sum_{i,d}E_{id}\otimes E_{id}\right)\otimes \sum_j E_{jj}\\&=U\otimes 1\end{align*}$$

In the third step of this calculation, we have used the relation $E_{ab}E_{cd}=\delta_{bc}E_{ad}$ which we derived in the first part. In step 4, we simplified the sums over Kronecker $\delta$'s as in our earlier calculation. In steps 5 and 6, we again used $E_{ab}E_{cd}=\delta_{bc}E_{ad}$ and carried out the sum over one index. In step 7, we used the linearity of the tensor product in both factors. The first tensor factor is then seen to be $U$ (see part two of this exercise), and the second factor is $\sum_j E_{jj}=1$, as this matrix has a 1 at each diagonal entry (one for each $j$) and zeros everywhere else.
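This identity, too, can be confirmed numerically. A minimal sketch in the same illustrative NumPy setup as above (small index set, `E` as before):

```python
# Check (U (x) 1)(1 (x) U)(U (x) 1) = U (x) 1 numerically for a small index set.
import numpy as np

n = 3

def E(a, b):
    m = np.zeros((n, n))
    m[a, b] = 1.0
    return m

U = sum(np.kron(E(i, j), E(i, j)) for i in range(n) for j in range(n))
one = np.eye(n)

lhs = np.kron(U, one) @ np.kron(one, U) @ np.kron(U, one)
assert np.allclose(lhs, np.kron(U, one))
```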

This concludes the solution of Exercise 4.1, and our general discussion of graphical notation for matrices/tensors.

We now want to come back to knots/links with our new techniques. Before we can do that, we need to add one more element into our notation. To explain it, consider the rotated pictures
RX1
In the first picture, the two lines on top clearly correspond to the two upper indices of $R$, and the lines at the bottom to its two lower indices. The second picture can perhaps still be read in the same way, but at least in the third one it is no longer clear what is top and what is bottom.

To have diagrams that can be unambiguously read even when rotated (or deformed with some type 0 move), we will therefore put arrows (an orientation) on the index lines. Our convention is that

  • The lines connect to two opposite sides of the box.
  • On one side, all lines are incoming (arrows point towards the box), and on the other side, all lines are outgoing (arrows point away from the box).
  • The incoming lines correspond to upper indices, the outgoing lines to lower indices.
  • Looking at the box from such an angle that the incoming arrows enter the top side, the order of the indices is from left to right.

With these conventions, it is always clear which line corresponds to which index position, as in
RX2

This remains true even if we do not rotate the letter “$R$” along with the box, as we did here for additional clarity.

The idea of the partition function, to be introduced subsequently, is to read an (oriented) knot diagram in terms of the graphical calculus for matrices. This cannot be done right away, because a knot diagram does not contain the boxes representing matrices/tensors. We first need to “translate” the diagram, which is done by the following two rules:

  1. Take an oriented link diagram, and replace all crossings by boxes (tensors with two upper and two lower lines/indices), but keep all the lines as they were.
  2. For a positive crossing, replace with a tensor $R$, and for a negative crossing, replace with a tensor $\bar R$.

$\bar R$ is the usual notation here; the overline does not refer to complex conjugation, it simply denotes another tensor, independent of $R$.

Let us look at an example, the diagram $D$
RX3

For the translation, it is often helpful to write down many arrows indicating the orientation, so that one sees clearly which lines are incoming/outgoing at which crossing:
RX4

We see in particular that there are three crossings, i.e. we need three tensors. The leftmost crossing is positive, whereas the other two are negative. Thus we have to translate the above diagram to
RX5

This can now be read in terms of our graphical calculus for matrices. We attach indices to the lines:
RX6

We have not yet said to which index set these indices should belong, but we take them all from the same set $I$. As all lines are internal, all these indices are summed over. Carefully checking which index goes into which position (top/bottom, left/right), we see that the above diagram represents the number $$Z(D):=\sum_{a,b,c,d,e,f} R^{fa}_{db}\overline{R}^{db}_{ec}\overline{R}^{ec}_{fa}.$$ Since this diagram was particularly simple, the corresponding sum can be simplified: looking at the positions of the indices, we recognize two matrix products (with the index pairs $(f,a)$, $(d,b)$, $(e,c)$ playing the role of single indices), namely $Z(D)=\sum_{f,a}(R\overline{R}^2)^{fa}_{fa}$.

If we define the trace of a tensor $T$ with equally many ($n$, say) upper and lower indices to be the number ${\rm Tr}(T):=\sum_{i_1,\ldots,i_n}T^{i_1\ldots i_n}_{i_1\ldots i_n}$, then we have $$Z(D)={\rm Tr}(R\overline{R}^2).$$
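As an aside (again my illustration, not part of the lecture), both expressions for $Z(D)$ are easy to evaluate numerically, e.g. with `np.einsum`, whose index strings mirror our upper/lower index notation. The tensors below have random entries, since the point is only that the two expressions agree:

```python
# Z(D) from the explicit index sum vs. the trace formula Tr(R Rbar^2).
import numpy as np

n = 3
rng = np.random.default_rng(0)
# Convention: R[a, b, c, d] stands for R^{ab}_{cd} (upper indices first).
R = rng.standard_normal((n, n, n, n))
Rbar = rng.standard_normal((n, n, n, n))

# The sum over a, ..., f: einsum sums every index appearing twice.
Z_sum = np.einsum('fadb,dbec,ecfa->', R, Rbar, Rbar)

# The same number as a trace, viewing each tensor as an n^2 x n^2 matrix
# with row index (f, a) and column index (d, b).
Rm, Rbarm = R.reshape(n * n, n * n), Rbar.reshape(n * n, n * n)
Z_tr = np.trace(Rm @ Rbarm @ Rbarm)

assert np.isclose(Z_sum, Z_tr)
```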

In this manner, any link diagram $D$ can be translated into graphical matrix notation and then interpreted as a number, which we call $Z(D)$ (a number, i.e. a tensor without any upper or lower indices, because a link diagram contains no lines with open ends). Clearly $Z(D)$ depends on the choice of index set $I$ and the choice of tensors $R,\overline{R}\in M_{I^2}$.

Let us look at two more examples to get familiar with this procedure. First a positive Hopf link $H$:
RX7
which we translate to
RX8
and read off $$Z(H)=\sum_{a,b,c,d}R^{ab}_{cd}R^{cd}_{ab}=\sum_{a,b}(R^2)^{ab}_{ab}={\rm Tr}(R^2).$$
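The same illustrative check as before works here (random tensor, hypothetical index conventions as in the earlier sketch):

```python
# Z(H): explicit sum vs. Tr(R^2), for a random tensor R in M_{I^2}.
import numpy as np

n = 3
R = np.random.default_rng(1).standard_normal((n, n, n, n))  # R[a, b, c, d] = R^{ab}_{cd}

Z_sum = np.einsum('abcd,cdab->', R, R)
Rm = R.reshape(n * n, n * n)
assert np.isclose(Z_sum, np.trace(Rm @ Rm))
```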

Let me mention that such a short formulation in terms of a trace of a power of $R$ or $\bar R$ is not possible in all cases; one might end up with sums like $\sum_{a,b,c,d,e,f}R^{ab}_{cd}R^{ec}_{ea}R^{df}_{bf}$, where the index positions do not simplify to matrix products and traces.

Second, let us look at a diagram $U$ of the unknot:
RX9
How do we read this in terms of our graphical matrix notation? Remember that we have abbreviated the matrix elements of the identity matrix by a line without a box (see last lecture). Hence we may translate this diagram to
RX10
and find $$Z(U)=\sum_i \delta^i_i=\sum_i 1=|I|,$$ the number of elements in $I$.
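In the same toy NumPy setup, this is just the trace of the identity matrix:

```python
# Z(U) for the unknot: the closed line is Tr(1) = |I|.
import numpy as np

n = 5  # |I|
assert np.trace(np.eye(n)) == n
```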