\subsection{Decomposition Theorem}
Let $\C[\SL]$ be the coordinate ring of the variety $\SL$.

The following theorem is a consequence of the ``unitary trick'', the Peter-Weyl Theorem, and the fact that the set of
matrix coefficients of $\SL$ is exactly its coordinate ring. See \cite{LP} for a detailed proof.

\begin{theorem}[Decomposition]\label{decomposition}
There is an $\SL$-module isomorphism $$\C[\SL]\approx\sum_{k\in \N}V_k^*\otimes V_k\approx \sum_{k\in \N}\End(V_k).$$
\end{theorem}

The isomorphism is given by defining $$\Upsilon :\sum_{n\ge0} V_n^*\otimes V_n\longrightarrow \C [\SL]$$ by linear extension of
the mapping
$${\sf n}^*_{n-k}\otimes {\sf n}_{n-l}\mapsto {\sf n}^*_{n-k}(\xb\cdot {\sf n}_{n-l}),$$ where
$\xb=\imx{x}$ is a generic matrix.

In particular,
\begin{align}\label{eq:tensormatrixcontraction}
{\sf n}^*_{n-k}(\xb\cdot {\sf n}_{n-l})
= \tikz[trivalent,heighttwo]{
\draw(0,0)node[small vector]{${\sf n}_{n-l}$}
to node[small matrix]{$\mathbf{X}$}node[pos=.75,rightlabel]{$n$}(0,2)node[small vector]{${\sf n}_{n-k}$};}
&=  {\sf n}^*_{n-k}\left((x_{11} e_1+x_{21} e_2)^{n-l}(x_{12} e_1 + x_{22} e_2)^l\right)\nonumber\\
&=\sum_{\substack{i+j=k\\0 \le i \le n-l \\ 0 \le j \le l}}
\tbinom{n}{k}^{-1}\tbinom{n-l}{i}\tbinom{l}{j}x_{11}^{n-l-i}x_{12}^{l-j}x_{21}^ix_{22}^j.
\end{align}
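This formula is easy to check symbolically. The sketch below (Python with sympy; the helper names are ours) realizes ${\sf n}_{n-l}$ as the monomial $e_1^{n-l}e_2^{l}$, expands $\xb\cdot{\sf n}_{n-l}$ as in the first line of the display, reads off the coefficient of $e_1^{n-k}e_2^{k}$ scaled by $\tbinom{n}{k}^{-1}$, and compares with the closed-form sum:

```python
import sympy as sp
from math import comb

x11, x12, x21, x22, u, v = sp.symbols('x11 x12 x21 x22 u v')

def matrix_coeff(n, k, l):
    # expand (x11 e1 + x21 e2)^(n-l) (x12 e1 + x22 e2)^l with e1, e2 as u, v,
    # then read off the coefficient of u^(n-k) v^k, scaled by binom(n,k)^(-1)
    poly = sp.expand((x11*u + x21*v)**(n - l) * (x12*u + x22*v)**l)
    return sp.Rational(1, comb(n, k)) * poly.coeff(u, n - k).coeff(v, k)

def closed_form(n, k, l):
    # right-hand side of the displayed sum, with j = k - i
    return sum(sp.Rational(comb(n - l, i) * comb(l, k - i), comb(n, k))
               * x11**(n - l - i) * x12**(l - k + i) * x21**i * x22**(k - i)
               for i in range(max(0, k - l), min(n - l, k) + 1))

n = 3
assert all(sp.simplify(matrix_coeff(n, k, l) - closed_form(n, k, l)) == 0
           for k in range(n + 1) for l in range(n + 1))
```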

\subsection{Applying the Decomposition}

Applying Theorem \ref{decomposition} to each factor gives

\begin{eqnarray*}
\C[\SL^{\times r}] & \approx& \C[\SL]^{\otimes r}\\
& \approx& \bigotimes_{1\leq k\leq r}\left(\sum_{i_k\in \N}V^*_{i_k}\otimes V_{i_k}\right)\\
& \approx& \sum_{(i_1,...,i_r)\in\N^r} V_{i_1}^*\otimes V_{i_1}\otimes \cdots \otimes V_{i_r}^* \otimes V_{i_r}\\
& \approx& \sum_{(i_1,...,i_r)\in\N^r} V_{i_1}^*\otimes \cdots \otimes V^*_{i_r}\otimes V_{i_1}\otimes \cdots \otimes V_{i_r} \ .\\
\end{eqnarray*}

We note that, as is stated in \cite{Pet06}, the isomorphism above is determined by the following association:  $(e^*_{i_1}\otimes e^*_{i_2}\otimes \cdots \otimes e^*_{i_r} )\otimes (e_{j_1}\otimes e_{j_2}\otimes \cdots \otimes e_{j_r} )$ maps to the polynomial function
$$(\xb_1, \xb_2,...,\xb_r) \mapsto e^*_{i_1}(\xb_1\cdot e_{j_1})e^*_{i_2}(\xb_2\cdot e_{j_2})\cdots e^*_{i_r} (\xb_r\cdot e_{j_r} ).$$  We will call this ``tensorial contraction.''

Our principal interest is with the {\it invariant} polynomial functions that arise in this fashion.  To determine these polynomials we will need a notion of ``admissibility.''

We say $(\{V_{i_1},...,V_{i_r}\}, V_x)$ is an admissible pair if and only if $V_x\hookrightarrow V_{i_1}\otimes\cdots \otimes V_{i_r}$ is a $G$-module injection.  In this case there exists a $G$-module $W$ so $V_{i_1}\otimes\cdots \otimes V_{i_r}\approx V_x\oplus W$ (as $G$-modules). The existence of an injection corresponds to a way to connect a single $x$ strand to the $i_1,i_2,\ldots,i_r$ strands in an admissible way:
$$\tikz[heighttwo]{ \node[cloud,fill=red!20,draw=red!20!gray,inner sep=3pt](middle)at(0,1){??} edge[trivalent]node[rightlabel,pos=1]{x}(0,-.5) edge[trivalent,bend left=10]node[rightlabel,pos=1]{i_1}(-1.2,2.5) edge[trivalent,bend left=10]node[rightlabel,pos=1]{i_2}(-.5,2.5) edge[trivalent,draw=none]node[basiclabel]{\cdots}(.35,2.5) edge[trivalent,bend right=10]node[rightlabel,pos=1]{i_r}(1,2.5); }$$
Several injections are possible, but in this paper we focus on the \emph{left-associative} injection
$$\label{eq:leftassocdiagram} \tikz[heightthree,scale=1.5]{ \draw[trivalent](0,0)node[rightlabel]{x}to(0,1)to[bend left]node[leftlabel]{m_{r-1}}(-.5,1.5) (0,1)to[bend right](1.5,3.5)node[rightlabel]{i_r} (-1,2)to[bend left]node[leftlabel]{m_2}(-1.5,2.5)to[bend left]node[leftlabel]{m_1}(-2,3) to[bend left](-2.3,3.5)node[leftlabel]{i_1} (-2,3)to[bend right](-1.75,3.5)node[rightlabel]{i_2} (-1.5,2.5)to[bend right](-1,3.5)node[rightlabel]{i_3}; \draw[dotdotdot,bend right](-.7,3)to(.7,2.5); }$$
This diagram is admissible only if the triple of labels at each vertex is admissible, meaning $m_1\in\iadm{i_1,i_2}$ and, for $l>1$, $m_l\in\iadm{m_{l-1},i_{l+1}}$.

From Proposition \ref{clebshgordan}, the pair $(\{V_{i_1},V_{i_2}\},V_x)$ is admissible if and only
if $x=i_1+i_2-2j_1$ for some $0\leq j_1\leq \mathrm{min}(i_1,i_2)$.

Now consider $\{V_{i_1},V_{i_2},V_{i_3}\}$ and $V_x$.  Using the above example and Proposition \ref{clebshgordan} a second time we have $$V_x\hookrightarrow (V_{i_1}\otimes V_{i_2}) \otimes V_{i_3}\approx \sum V_{i_1+i_2-2j_1}\otimes V_{i_3}\approx \sum V_{i_1+i_2+i_3-2(j_1+j_2)}$$ where $0\leq j_1\leq \mathrm{min}(i_1,i_2)$ and $0\leq j_2\leq \mathrm{min}(i_1+i_2-2j_1,i_3)$.  Therefore, $(\{V_{i_1},V_{i_2},V_{i_3}\},V_x)$ is an admissible pair if and only if $x=i_1+i_2+i_3-2(j_1+j_2)$ for some $j_1,j_2$ satisfying $0\leq j_1\leq \mathrm{min}(i_1,i_2)$ and $0\leq j_2\leq \mathrm{min}(i_1+i_2-2j_1,i_3)$.

Generalizing these examples by iteratively applying the Clebsch-Gordan formula to decompose $V_{i_1}\otimes\cdots\otimes V_{i_r}$, we arrive at the following notation and definition.

Let $\vec{i}=(i_1,i_2,...,i_r)\in \N^r$, and let $|\vec{i}|=i_1+\cdots + i_r$.
\begin{definition}
We say that $\vec{j}=(j_1,...,j_{r-1})\in \N^{r-1}$ is $\vec{i}$-admissible $($and denote it by $\vec{j}\in \iadm{\vec{i}})$ if and only if for all $1\leq l \leq r-1$ we have
$$0\leq j_l \leq \mathrm{min}(i_1+\cdots + i_l -2(j_1 +\cdots + j_{l-1}),i_{l+1}).$$
\end{definition}
Note that this is precisely the condition $m_1\in\iadm{i_1,i_2}, \: m_l\in\iadm{m_{l-1},i_{l+1}}$ given earlier, with $m_l=i_1+\cdots+i_{l+1}-2(j_1+\cdots+j_l)$.
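The definition translates directly into an enumeration procedure. The sketch below (Python; the helper name is ours) lists all $\vec{j}\in\iadm{\vec{i}}$ and, for $\vec{i}=(1,1)$, recovers the Clebsch-Gordan decomposition $V_1\otimes V_1\approx V_2\oplus V_0$:

```python
def admissible(i):
    """All vec-j = (j_1, ..., j_{r-1}) with
    0 <= j_l <= min(i_1 + ... + i_l - 2(j_1 + ... + j_{l-1}), i_{l+1})."""
    out = [()]
    for l in range(1, len(i)):           # l = 1, ..., r-1
        new = []
        for j in out:
            m = sum(i[:l]) - 2 * sum(j)  # intermediate strand label m_{l-1}
            new.extend(j + (jl,) for jl in range(min(m, i[l]) + 1))
        out = new
    return out

# V_1 (x) V_1 ~ V_2 (+) V_0: the two admissible j give x = |i| - 2|j| in {2, 0}
assert sorted(2 - 2 * sum(j) for j in admissible((1, 1))) == [0, 2]
```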

We then use the Clebsch-Gordan formula iteratively, together with Theorem \ref{decomposition}, to conclude:
\begin{eqnarray*}
\C[\SL^{\times r}] & \approx& \C[\SL]^{\otimes r}\\
& \approx& \sum_{(i_1,...,i_r)\in\N^r} V_{i_1}^*\otimes \cdots \otimes V^*_{i_r}\otimes V_{i_1}\otimes \cdots \otimes V_{i_r}\\
& \approx& \sum_{\vec{i}\in\N^r} \left( \sum_{\vec{j}\in \iadm{\vec{i}}} V_{\left(|\vec{i}|-2|\vec{j}|\right)}^*\right)\otimes \left(\sum_{\vec{k}\in \iadm{\vec{i}}} V_{\left(|\vec{i}|-2|\vec{k}|\right)}\right)\\
& \approx& \sum_{\vec{i}\in\N^r} \sum_{\vec{j},\vec{k} \in \iadm{\vec{i}}} V_{\left(|\vec{i}|-2|\vec{j}|\right)}^*\otimes V_{\left(|\vec{i}|-2|\vec{k}|\right)}\ .\\
\end{eqnarray*}
Since the above maps are $\SL$-equivariant,
\begin{equation*}
\C[\X_r]=\C[\SL^{\times r}]^{\SL} \approx \sum_{\vec{i}\in\N^r} \sum_{\vec{j},\vec{k} \in \iadm{\vec{i}}} \left(V_{\left(|\vec{i}|-2|\vec{j}|\right)}^*\otimes
V_{\left(|\vec{i}|-2|\vec{k}|\right)}\right)^{\SL}.
\end{equation*}

By Schur's Lemma,
$$\mathrm{dim}_{\C}\left(V_{\left(|\vec{i}|-2|\vec{j}|\right)}^*\otimes V_{\left(|\vec{i}|-2|\vec{k}|\right)}\right)^{\SL} = \left\{ \begin{array}{ll} 1 & \textrm{if } |\vec{j}|=|\vec{k}|\\ 0 & \textrm{if } |\vec{j}|\not=|\vec{k}|.\end{array}\right.$$
Therefore,

$$\C[\X_r]=\C[\SL^{\times r}]^{\SL} \approx \sum_{\vec{i}\in\N^r} \sum_{\substack{ \vec{j},\vec{k} \in \iadm{\vec{i}} \\ |\vec{k}|=|\vec{j}| }} \End \left(V_{\left(|\vec{i}|-2|\vec{j}|\right)}\right)^{\SL}.$$

\begin{definition}
Given the above isomorphism, for each triple $\vec{i},\vec{j},\vec{k}$ such that $\vec{i}\in\N^r$, $\vec{j},\vec{k}\in \iadm{\vec{i}}$, and $|\vec{j}|=|\vec{k}|$,  there exists a class function $\chh\vi\vj\vk\in\C[\X_r]$ which corresponds to a generating homothety $($unique up to scalar$)$ in $\mathrm{End}(V_{\left(|\vec{i}|-2|\vec{j}|\right)})^{\SL}$. We refer to the functions $\chh\vi\vj\vk$ as \emph{central functions}.
\end{definition}

Denote by $\cspan\chh\vi\vj\vk \subset \C[\X_r]$ the linear span over $\C$ of $\chh\vi\vj\vk$.

In these terms,
$$\C[\X_r]\approx \sum_{\vec{i}\in\N^r} \sum_{\substack{\vec{j},\vec{k} \in \iadm{\vec{i}} \\ |\vec{k}|=|\vec{j}|}} \cspan\chh\vi\vj\vk.$$

Thus, the central functions $\chh\vi\vj\vk$ form an additive basis for the ring of regular functions on $\X_r$.  However, the multiplicative structure in terms of this basis is very complicated and not at all obvious.

We note that $\vi$ has $r$ entries while $\vj$ and $\vk$ each have $r-1$; together with the single relation $|\vj|=|\vk|$, each central function therefore depends on exactly $3r-3$ free indices, the Krull dimension
of the variety.
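The bookkeeping above can be cross-checked by character computations: the number of $\vec{j}\in\iadm{\vec{i}}$ with $|\vec{i}|-2|\vec{j}|=x$ should equal the multiplicity of $V_x$ in $V_{i_1}\otimes\cdots\otimes V_{i_r}$. A sketch (Python; the helper names are ours) comparing the two counts:

```python
from collections import Counter

def tensor_multiplicities(i):
    """Multiplicities of V_x in V_{i_1} (x) ... (x) V_{i_r}, computed from weights."""
    w = Counter({0: 1})                      # weight multiplicities of the product so far
    for n in i:                              # tensoring with V_n shifts weights by -n, -n+2, ..., n
        new = Counter()
        for a, ca in w.items():
            for b in range(-n, n + 1, 2):
                new[a + b] += ca
        w = new
    mult = {}
    while any(v > 0 for v in w.values()):    # peel off irreducibles from the top weight down
        top = max(a for a, v in w.items() if v > 0)
        mult[top] = w[top]
        for b in range(-top, top + 1, 2):
            w[b] -= mult[top]
    return mult

# for i = (2, 3, 4), count admissible (j_1, j_2) grouped by x = |i| - 2|j|
i1, i2, i3 = 2, 3, 4
adm = Counter()
for j1 in range(min(i1, i2) + 1):
    for j2 in range(min(i1 + i2 - 2 * j1, i3) + 1):
        adm[i1 + i2 + i3 - 2 * (j1 + j2)] += 1
assert dict(adm) == tensor_multiplicities((i1, i2, i3))
```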

Let the Clebsch-Gordan injection be denoted by $$\iota^{\sss \vk}_{\sss \vi}:V_{\left(|\vec{i}|-2|\vec{k}|\right)}\hookrightarrow V_{i_1}\otimes\cdots \otimes V_{i_r}.$$
Also, let $\{{\sf c}^*_s\}$ be a basis for $V^*_{\left(|\vec{i}|-2|\vec{j}|\right)}$ and let $\{{\sf d}_t\}$ be a basis for $V_{\left(|\vec{i}|-2|\vec{k}|\right)}$ (assuming $|\vj|=|\vk|$).

In these terms, define $$\mathcal{M}_{\vi}^{\vj,\vk}=\bigg(\iota^{\sss \vj}_{\sss \vi}({\sf c}_s^*)\Big((\xb_1,...,\xb_r)\cdot\iota^{\sss \vk}_{\sss \vi}({\sf d}_t)\Big)\bigg)_{\!\!st}.$$  Then $\mathcal{M}_{\vi}^{\vj,\vk}$ is a $\left(|\vec{i}|-2|\vec{k}|+1\right) \times \left(|\vec{i}|-2|\vec{k}|+1\right)$ matrix whose $(s,t)$ entry is given by the expression above.

It follows that

$$\chh\vi\vj\vk(\xb_1,...,\xb_r)=\Tr{\mathcal{M}_{\vi}^{\vj,\vk}}.$$

Since these injections are obtained by iteratively applying the injections from the rank 2 case (that is, by decomposing the tensor product two factors at a time), our computation of the rank 2 injections in \cite{LP} determines all such injections in general (up to a choice of associativity).

With this in mind, these functions take natural diagrammatic form. Beginning with \eqref{eq:leftassocdiagram} and its vertical reflection (providing the decomposition of the dual), tensorial contraction corresponds to gluing copies of the matrix variables $\xb_l$ in between the two diagrams. Taking the trace corresponds to adding a closing loop to the diagram. The resulting diagram is
$$\label{eq:rankrdiagram} \chh\vi\vj\vk(\xb_1,\ldots,\xb_r) \equiv \chi_{\vi,\vec m,\vec p} \equiv \tikz[scale=1.2]{ \draw[trivalent] (0,0)to[bend left=80]node[small matrix]{X_1}(0,1)node[leftlabel,pos=.8]{i_1} (0,0)to[bend right=80]node[small matrix]{X_2}(0,1)node[rightlabel,pos=.8]{i_2} (0,0)to[bend right=20](.5,-.2)node[bottomlabel,pos=.5]{m_1} to[bend right=80]node[small matrix]{X_3}(.5,1.2)node[rightlabel,pos=.8]{i_3} to[bend right=20](0,1)node[toplabel,pos=.5]{p_1} (.5,-.2)to[bend right=20](1,-.4)node[bottomlabel,pos=.5]{m_2} (1,1.4)to[bend right=20](.5,1.2)node[toplabel,pos=.5]{p_2}; \draw[draw=none](1.25,0)--(2.25,-.2)node[pos=.2]{.}node[pos=.5]{.}node[pos=.8]{.}; \draw[draw=none](1.25,1)--(2.25,1.2)node[pos=.2]{.}node[pos=.5]{.}node[pos=.8]{.}; \draw[trivalent,shift={(.5,0)}] (1.5,-.6)to[bend right=20](2,-.8)node[bottomlabel,pos=.4]{m_{r-2}} to[bend right=80]node[small matrix]{X_r}(2,1.8)node[leftlabel,pos=.8]{i_r} to[bend right=20](1.5,1.6)node[toplabel,pos=.6]{p_{r-2}} (2,-.8)to[bend right=20](2.5,-1) to[bend right=80](2.5,2)node[rightlabel,pos=.4]{m_{r-1}} to[bend right=20](2,1.8); }$$
where $m_l=i_1+\cdots+i_{l+1}-2(j_1+\cdots+j_l)$ and $p_l=i_1+\cdots+i_{l+1}-2(k_1+\cdots+k_l)$. In these terms, the requirement $|\vj|=|\vk|$ becomes $m_{r-1}=p_{r-1}$.
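The relabeling between $(\vj,\vk)$ and the diagram labels $(\vec m,\vec p)$ is elementary but easy to get wrong; a small sketch (Python, with a hypothetical helper name and an arbitrary admissible example) makes the equivalence $|\vj|=|\vk|\Leftrightarrow m_{r-1}=p_{r-1}$ concrete:

```python
def strand_labels(i, j):
    # m_l = i_1 + ... + i_{l+1} - 2(j_1 + ... + j_l), for l = 1, ..., r-1
    return [sum(i[:l + 1]) - 2 * sum(j[:l]) for l in range(1, len(i))]

# an arbitrary i with two admissible multi-indices j, k (checked against the definition)
i, j, k = (2, 3, 4, 1), (1, 2, 0), (0, 2, 1)
m, p = strand_labels(i, j), strand_labels(i, k)
# the final strand labels are m_{r-1} = |i| - 2|j| and p_{r-1} = |i| - 2|k|
assert m[-1] == sum(i) - 2 * sum(j) and p[-1] == sum(i) - 2 * sum(k)
# so |j| = |k| is equivalent to m_{r-1} = p_{r-1}
assert (sum(j) == sum(k)) == (m[-1] == p[-1])
```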

\subsection{Example $r=1$}
The diagram is a single loop:
$$\chi_c= \tikz[trivalent]{ \draw(0,.5)circle(.5); \node[basiclabel]at(.5,1){c}; \node[small matrix]at(-.5,.5){\mathbf{X}}; }$$

The trivial representation $V_0$ gives $\chxx_0=1$. The standard representation $V_1$ has diagonal matrix coefficients $x_{11}$ and $x_{22}$, hence
$$\chxx_1= x_{11}+x_{22}=\Tr{\xb}.$$

The remaining functions may be computed directly, or by using the following product formula:
$$\label{eq:rank1product} \chxx_a \chxx_b = \sum_{c\in \lceil a,b\rfloor} \chxx_c$$
Explicitly, the particular case $b=1$ is (for $a\ge1$)
$$\label{eq:rank1recurrence} \chxx_a \chxx_1 = \chxx_{a+1} + \chxx_{a-1},$$
from which the recurrence $\chxx_{a+1} = \Tr{\xb}\, \chxx_a - \chxx_{a-1}$ can be derived. These polynomials, shown in Table \ref{t:rank1centralfunctions}, are closely related to the Chebyshev polynomials of the second kind.
\begin{table}
\begin{align*}
\chi_0 &= 1 \\
\chi_1 &= x \\
\chi_2 &= x^2-1 \\
\chi_3 &= x^3-2x \\
\chi_4 &= x^4-3x^2+1 \\
\chi_5 &= x^5-4x^3+3x.
\end{align*}
\caption{Rank 1 Central Functions.}\label{t:rank1centralfunctions}
\end{table}
Note that, in this basis, the multiplicative structure differs from that of the monomial basis of $\C[\Tr{\xb}]$.
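For $r=1$ (where $\vj$ and $\vk$ are empty) the matrix $\mathcal M$ is simply the action of $\xb$ on $V_c$, so $\chxx_c(\xb)$ is the trace of $\xb$ acting on $\mathrm{Sym}^c(\C^2)$. The sketch below (Python with sympy; the test matrix is an arbitrary determinant-one choice) cross-checks this against the recurrence and Table \ref{t:rank1centralfunctions}:

```python
import sympy as sp

x = sp.symbols('x')

def chi(n):
    # chi_0 = 1, chi_1 = x, chi_{a+1} = x*chi_a - chi_{a-1}
    prev, cur = sp.Integer(1), x
    for _ in range(n):
        prev, cur = cur, sp.expand(x*cur - prev)
    return prev

def trace_sym(n, X):
    # trace of X on Sym^n(C^2): X acts on monomials u^(n-a) v^a via
    # e1 -> x11 e1 + x21 e2, e2 -> x12 e1 + x22 e2
    u, v = sp.symbols('u v')
    (x11, x12), (x21, x22) = X
    total = sp.Integer(0)
    for a in range(n + 1):
        img = sp.expand((x11*u + x21*v)**(n - a) * (x12*u + x22*v)**a)
        total += img.coeff(u, n - a).coeff(v, a)   # diagonal entry of Sym^n X
    return total

X = ((2, 1), (1, 1))   # arbitrary integer matrix with det = 1, trace = 3
assert [chi(n).subs(x, 3) for n in range(6)] == [trace_sym(n, X) for n in range(6)]
```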

\subsection{Example $r=2$}
The diagram is:
$$\ch{c}{a}{b} = \tikz[trivalent,every node/.style={basiclabel}]{ \draw(0,.5)circle(.4)(0,.1)arc(-145:145:.7); \node[small matrix]at(-.4,.5){\mathbf{X}_1}; \node[small matrix]at(.4,.5){\mathbf{X}_2}; \node at(-.4,1){a};\node at(.5,.95){b};\node at(1.2,1.1){c}; }$$

Recall the decomposition $$\C[\SL\times \SL]^{\SL}\approx\sum_{\substack{a,b\in\N\\ \Adm abc}}\cspan{\ch{c}{a}{b}},$$ where $\ch{c}{a}{b}$ corresponds to the image of
$$\sum_{k=0}^c{\sf c}_k({\sf c}_k)^T\mapsto\sum_{k=0}^c\tbinom{c}{k}\bs{\sf c}{k}{k}$$
under the injection $V^*_c \otimes V_c\hookrightarrow V^*_a\otimes V^*_b \otimes V_a\otimes V_b$.

This inclusion is determined by the Clebsch-Gordan injection $\iota:V_c\hookrightarrow V_a\otimes V_b.$ Hence, an explicit formula for $\iota$ provides a means to compute $\ch cab$ directly.

Since the general injections are determined by the rank 2 injections,  we now review their construction.

A few simple examples will motivate the construction of $\iota$.

For $k=1,2$, let $\xb_k=(x_{ij}^k)$ be $2\times2$ generic matrices, and let
\begin{align*}
x&=\Tr{\xb_1}=x^1_{11}+x^1_{22},\\
y&=\Tr{\xb_2}=x^2_{11}+x^2_{22},\\
z&=\Tr{\xb_1 \xb_2^{-1}}=(x^1_{11}x^2_{22}+x^1_{22}x^2_{11})-(x^1_{12}x^2_{21}+x^1_{21}x^2_{12}).
\end{align*}

The map $\cup:V_0\hookrightarrow V_1\otimes V_1$ given by
$${\sf c}_0\mapsto{\sf a}_0\otimes {\sf b}_1-{\sf a}_1\otimes {\sf b}_0$$ is invariant.

More generally, the injection $V_0 \hookrightarrow V_a\otimes V_a$ is given by
\begin{equation*}
\cup^a:{\sf c}_0\longmapsto\sum_{m=0}^{a}(-1)^m\tbinom{a}{m}{\sf a}_{a-m}\otimes{\sf b}_m.
\end{equation*}

Hence, $\ch000=1$ and $\ch011$ may be computed by:
\begin{align*}
\ch011 & \mapsto \bs{\sf c}{0}{0}\\
& \mapsto ({\sf a}^*_0\otimes {\sf b}_1^*-{\sf a}_1^*\otimes {\sf b}^*_0)\otimes({\sf a}_0\otimes {\sf b}_1-{\sf a}_1\otimes {\sf b}_0)\\
& \mapsto (\bs{\sf a}{0}{0})\otimes(\bs{\sf b}{1})-(\bs{\sf a}{0})\otimes (\bs{\sf b}{0}{1}) \\
& \hspace{.5in} -(\bs{\sf a}{0}{1})\otimes (\bs{\sf b}{0})+(\bs{\sf a}{1})\otimes (\bs{\sf b}{0}{0})\\
& \mapsto x^1_{11}\otimes x^2_{22}-x^1_{12}\otimes x^2_{21}-x^1_{21}\otimes x^2_{12}+x^1_{22}\otimes x^2_{11}\\
& \mapsto (x^1_{11}x^2_{22}+x^1_{22}x^2_{11})-(x^1_{12}x^2_{21}+x^1_{21}x^2_{12})=z.
\end{align*}
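The endpoint of this computation is easy to confirm symbolically; the sketch below (Python with sympy) uses the fact that for $\xb_2\in\SL$ the inverse is the adjugate:

```python
import sympy as sp

a11, a12, a21, a22, b11, b12, b21, b22 = sp.symbols('a11 a12 a21 a22 b11 b12 b21 b22')
X1 = sp.Matrix([[a11, a12], [a21, a22]])
X2inv = sp.Matrix([[b22, -b12], [-b21, b11]])   # inverse of X2 when det(X2) = 1
z = sp.expand((X1 * X2inv).trace())
assert z == sp.expand((a11*b22 + a22*b11) - (a12*b21 + a21*b12))
```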

The representation $V_c$ may be identified with a submodule of $\vprod c$ via the equivariant maps
$$\xymatrix{V_c\ar@/^1pc/[r]^-{\sss\sf Sym} & \vprod{c}\ar@/^1pc/[l]^-{\sss\sf Proj}}$$
where ${\sf Proj}\circ{\sf Sym}={\rm id}$.

Thus, when $c=a+b$, $\iota$ is given by the commutative diagram
$$\xymatrix{ \vprod{c}\ar@{=}[r]\ar@{}[dr]|-{\text{\Large \circlearrowright}} & \vprod a\otimes\vprod b\ar[d]^{\sss\sf Proj\otimes Proj}\\ V_c\ar[r]_-\iota\ar[u]^{\sss\sf Sym} & V_a\otimes V_b.}$$

In particular,
\begin{equation*}
\tbinom{c}{k}{\sf c}_k\overset{\iota}{\longmapsto}\sum_{\substack{0\le i \le a\\ 0 \leq j \leq b \\
i+j=k}}\tbinom{a}{i}{\sf a}_i\otimes \tbinom{b}{j}{\sf b}_j.
\end{equation*}
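One consistency check on this formula: applying ${\sf Proj}$ to the right-hand side multiplies the monomials back together, so compatibility with ${\sf Proj}\circ{\sf Sym}={\rm id}$ amounts (roughly speaking) to the Vandermonde identity $\binom{a+b}{k}=\sum_{i+j=k}\binom{a}{i}\binom{b}{j}$, verified below for small ranges (Python):

```python
from math import comb

# Vandermonde's identity: binom(a+b, k) = sum over i+j=k of binom(a,i) * binom(b,j)
# (math.comb returns 0 when the lower index exceeds the upper, so out-of-range j vanish)
for a in range(6):
    for b in range(6):
        for k in range(a + b + 1):
            assert comb(a + b, k) == sum(comb(a, i) * comb(b, k - i) for i in range(k + 1))
```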

For example, consider $\ch110$. In this case, ${\sf c}_0 \mapsto {\sf a}_0\otimes {\sf b}_0$ and
${\sf c}_1 \mapsto {\sf a}_1\otimes {\sf b}_0$.

Hence, {\small
\begin{align*}
\ch110&\mapsto\bs{\sf c}00+\bs{\sf c}11\mapsto(\bs{\sf a}{0}{0})\otimes(\bs{\sf b}00)+(\bs{\sf a}11)\otimes(\bs{\sf b}00)\\
&\mapsto x^1_{11}\otimes 1+x^1_{22}\otimes 1\mapsto x^1_{11}+x^1_{22}=x.
\end{align*}}

A similar computation shows that $\ch101\mapsto y$.

Let $\alpha=\tfrac{1}{2}(b+c-a)$, $\beta=\tfrac{1}{2}(a-b+c)$, and $\gamma=\tfrac{1}{2}(a+b-c)$.  The general form of $\iota$ is determined by combining these cases in the
following diagram:
$$\xymatrix{ V_c\ar[r]^-\iota\ar[d]_-\iota\ar@{}[dr]|-{\text{\Large \circlearrowright}} & V_{\beta}\otimes V_{\alpha}\ar[d]^{{\rm id}\otimes{\cup^\gamma}\otimes{\rm id}}\\ V_a\otimes V_b & V_{\beta}\otimes V_\gamma\otimes V_\gamma\otimes V_{\alpha}\ar[l].}$$

It follows that the mapping $\iota:V_c\to V_a\otimes V_b$ is explicitly given by:
\begin{align*}
\tbinom{c}{k}{\sf c}_k
&\longmapsto\sum_{\substack{0\le i\le\beta\\0\le j\le\alpha\\0\le m\le\gamma\\i+j=k}}%
\tbinom{\beta}{i}{\sf a}_i
\otimes\left[(-1)^m\tbinom{\gamma}{m}{\sf a}_{\gamma-m}\otimes{\sf b}_m\right]
\otimes\tbinom{\alpha}{j}{\sf b}_j\\ %
&\longmapsto\sum_{\substack{0\le i\le\beta\\0\le j\le\alpha\\0\le m\le\gamma\\i+j=k}}%
(-1)^m\tbinom{\beta}{i}\tbinom{\alpha}{j}\tbinom{\gamma}{m}{\sf a}_{i+\gamma-m}\otimes{\sf b}_{j+m}.%
\end{align*}
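As a sanity check, this formula specializes to the earlier cases: for $c=a+b$ (so $\gamma=0$) it reduces to the formula above, and for $c=0$, $a=b$ it recovers $\cup^a$. A sketch (Python; the helper \texttt{iota\_coeffs} is a name of our choosing):

```python
from math import comb

def iota_coeffs(a, b, c, k):
    # coefficient of a_{i+gamma-m} (x) b_{j+m} in iota(binom(c,k) c_k),
    # per the displayed formula; assumes (a, b, c) admissible, so a+b+c is even
    alpha, beta, gamma = (b + c - a) // 2, (a - b + c) // 2, (a + b - c) // 2
    out = {}
    for i in range(beta + 1):
        j = k - i
        if not 0 <= j <= alpha:
            continue
        for m in range(gamma + 1):
            key = (i + gamma - m, j + m)
            out[key] = out.get(key, 0) + (-1)**m * comb(beta, i) * comb(alpha, j) * comb(gamma, m)
    return out

# c = 0, a = b: recovers the cup map, c_0 -> sum (-1)^m binom(a,m) a_{a-m} (x) b_m
assert iota_coeffs(2, 2, 0, 0) == {(2 - m, m): (-1)**m * comb(2, m) for m in range(3)}
# c = a + b: recovers binom(c,k) c_k -> sum binom(a,i) a_i (x) binom(b,j) b_j
assert iota_coeffs(1, 2, 3, 1) == {(i, 1 - i): comb(1, i) * comb(2, 1 - i) for i in range(2)}
```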

In \cite{LP} it is shown
\begin{theorem}\label{t:ranktworecurrencex}
Provided $a>1$ and $c>1$, we can write
\begin{multline*}
\ch cab=x\cdot\ch {c-1}{a-1}{b}-\tfrac{(a+b-c)^2}{4a(a-1)}\ch c{a-2}b%
-\tfrac{(-a+b+c)^2}{4c(c-1)}\ch {c-2}ab\\
-\tfrac{(a+b+c)^2(a-b+c-2)^2}{16a(a-1)c(c-1)}\ch{c-2}{a-2}b.
\end{multline*}
The relation still holds for $a=1$ or $c=1$, provided we exclude the terms with $a-1$ or $c-1$ in
the denominator.
\end{theorem}

Also, note that formulae for multiplication by $y$ and $z$ may be obtained by applying the following symmetry relation.

Suppose a central function is expressed as a polynomial $P$ in the variables $x=\Tr{\xb_1}$, $y=\Tr{\xb_2}$, and
$z=\Tr{\xb_1\xb_2^{-1}}$, so that $P_{\sss a,b,c}(y,x,z)=\ch cab(\xb_1,\xb_2)$ for some admissible triple
$(a,b,c)$.

\begin{theorem}For any permutation $\sigma$,
$$P_{\sss \sigma(a,b,c)}(y,x,z)=P_{\sss a,b,c}(\sigma^{-1}(y,x,z)).$$
\end{theorem}

Using this symmetry and the above recursion, the ring structure is completely determined. As stated in the introduction, for $r>2$, this ring structure is not known and for $r=1,2$ it was worked out in \cite{LP}.

Section \ref{s:rankthree} explores the $r=3$ case using computations made with {\it Mathematica}.  We use both the tensorial contraction method discussed above (which reflects our definition of central functions), and a purely combinatorial method that uses spin network techniques. The next few sections lay the groundwork for the combinatorial method, which comes from the representation of the central functions as spin networks.

\begin{comment}
For any reductive complex algebraic group $G$ acting on a variety $V$, there is a $G$-module surjection $($not a ring homomorphism!$)$ $\C[V]\to \C[V]^G$.  In general, this is given by $$f(v)\mapsto \frac{1}{\mathrm{vol}(K)}\int_K f(k^{-1}\cdot v)\mathrm{d}k,$$ where $K$ is a choice of maximal compact subgroup of $G$ and $\mathrm{vol}(K)=\int_K\mathrm{d}k.$  Since all maximal compact subgroups are conjugate this is well defined, and since $G$ is reductive any $K$-invariant polynomial extends to a $G$-invariant polynomial.  This mapping is called the Reynolds operator and is essentially unique.  Unfortunately it is not discrete, since it is given by an integral.  It is a $G$-module mapping despite its limitation of not being a ring homomorphism.  Since our mapping, via spin networks, from $\C[\R_r]^G\to \sum \C\chi_{\vi}^{\vj,\vk}$ is a $G$-module isomorphism, we get a $G$-module surjection $\C[\R_r]\to \C[\R_r]^G$ which maps any regular function $f$ to a sum of spin networks and whose image is exactly the image of the Reynolds operator.  In other words, we have a ``discretization'' of the Reynolds operator, making the ``infinite sum'' a finite sum.  It would be interesting to graphically formulate how a given regular function is realized in this fashion without performing any integration.
\end{comment}

