\subsection{Tensorial Algorithm}
We now heuristically describe an effective algorithm to compute central functions for ranks 1, 2, and 3 using a tensorial description and contraction method.  This algorithm has been successfully implemented in {\it Mathematica}, and a semi-documented ``notebook'' is available to the reader (at the first author's website).

Let $t:\X_r\to \C^{N_r}$ be the inclusion given by specifying $N_r$ minimal generators for $\C[\R_r]^G$, and let $\R_r\to \X_r$ be the projection dual to the inclusion $\C[\R_r]^G\subset \C[\R_r]$. Denote by $\overline{[\rho]}$ the equivalence class of $\rho\in \R_r$, where $\rho\sim \rho'$ if and only if $\overline{G\rho}\cap\overline{G\rho'}\not=\emptyset$; that is, their orbit closures intersect non-trivially. In the case $r=3$ this mapping is given by
$$\overline{[\rho]}\mapsto \left(\Tr{\rho(\xt_1)},\Tr{\rho(\xt_2)},\Tr{\rho(\xt_3)},\Tr{\rho(\xt_1\xt_2)},\Tr{\rho(\xt_1\xt_3)},\Tr{\rho(\xt_2\xt_3)},\Tr{\rho(\xt_1\xt_2\xt_3)}\right).$$
In \cite{G9} an explicit global slice to this projection is constructed.  Using this slice and an iterative application of the work in \cite{LP} we compute rank 3 central functions ``tensorially'' and describe this algorithm below.

To ease the notation let $t_k=\Tr{\xb_k}=\Tr{\rho(\xt_k)}$, $t_{ij}=\Tr{\xb_i\xb_j}=\Tr{\rho(\xt_i\xt_j)}$, and $t_{ijk}=\Tr{\xb_i\xb_j\xb_k}=\Tr{\rho(\xt_i\xt_j\xt_k)}$.

The projection $\pi:\X_3\to \C^6$ given by $\overline{[\rho]}\mapsto (t_1,t_2,t_3,t_{12},t_{13},t_{23})$ is a branched double cover.  In particular, there exist polynomials $P,Q\in\C[t_1,t_2,t_3,t_{12},t_{13},t_{23}]$ so that $$\C[\X_3]\approx \C[t_1,t_2,t_3,t_{12},t_{13},t_{23}][t_{123}]/(t_{123}^2-Pt_{123}+Q).$$  The two roots of the irreducible generator of the ideal are related by the formula:

$$t_{132}= -t_1 t_2 t_3+t_{12} t_3+t_2
   t_{13}+t_1 t_{23}-t_{123}.$$

This provides the formula for $P$, since $P=t_{123}+t_{132}$; the formula for $Q$ comes from the product
$$t_{123} t_{132} = (t_{1}^2
+ t_{2}^2+ t_{3}^2) + (t_{12}^2 + t_{23}^2 + t_{13}^2) -
(t_{1}t_{2}t_{12} + t_{2}t_{3}t_{23} + t_{3}t_{1}t_{13}) + t_{12}t_{23}t_{13} - 4.$$
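As a quick sanity check, both identities can be verified numerically.  The following plain Python sketch (the integer $\SL$ matrices are arbitrary examples of ours, not taken from the text) checks that $t_{123}+t_{132}=P$ and $t_{123}\,t_{132}=Q$:

```python
# Numerical check of the degree-2 relation satisfied by t_123.
# A, B, C are arbitrary integer matrices of determinant 1.

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(X):
    return X[0][0] + X[1][1]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
C = [[2, 1], [1, 1]]  # det = 1

t1, t2, t3 = tr(A), tr(B), tr(C)
t12, t13, t23 = tr(mul(A, B)), tr(mul(A, C)), tr(mul(B, C))
t123, t132 = tr(mul(mul(A, B), C)), tr(mul(mul(A, C), B))

# P = t123 + t132, read off from the formula for t_132 above
P = t12 * t3 + t2 * t13 + t1 * t23 - t1 * t2 * t3
# Q = t123 * t132, the displayed Fricke identity
Q = (t1**2 + t2**2 + t3**2) + (t12**2 + t23**2 + t13**2) \
    - (t1 * t2 * t12 + t2 * t3 * t23 + t3 * t1 * t13) + t12 * t23 * t13 - 4

assert t123 + t132 == P
assert t123 * t132 == Q
```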

To construct a slice we must construct a triple of matrices $(\xb_1,\xb_2,\xb_3)$ for every 7-tuple in the image of $t$.

Let $\xb_1=\left(\begin{smallmatrix}
 t_1 & -1 \\
 1 & 0
\end{smallmatrix}\right)$ and $\xb_2=\left(\begin{smallmatrix}
 0 & w \\
 -\frac{1}{w} & t_2
\end{smallmatrix}\right)$, where $w+1/w=t_{12}$.  Then letting $$\xb_3(s)=\left(\begin{smallmatrix}
 s \left(\frac{1}{w}-w\right)+t_3 & s \left(w t_1-t_2\right)+\frac{w \left(w\left(t_{13}-t_1 t_3\right)+t_{23}\right)}{w^2-1} \\[3pt]
 s \left(\frac{t_1}{w}-t_2\right)+\frac{-t_1 t_3+t_{13}+w t_{23}}{w^2-1} & s
\end{smallmatrix}\right),$$ where $s$ is a solution to the equation $\mathrm{det}(\xb_3(s))-1=0$, gives the desired slice (see \cite{G9}).
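The first two slice matrices can be checked directly.  Below is a minimal Python sketch (helper names and the sample values are ours); we pick a rational $w$ and read off $t_{12}=w+1/w$ to avoid a square root, and omit the third matrix, which requires solving $\det(\xb_3(s))=1$ for $s$:

```python
from fractions import Fraction

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(X):
    return X[0][0] + X[1][1]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

# choose w rationally; then t12 = w + 1/w comes for free
w = Fraction(2)
t1, t2 = Fraction(3), Fraction(5)
t12 = w + 1 / w

x1 = [[t1, Fraction(-1)], [Fraction(1), Fraction(0)]]
x2 = [[Fraction(0), w], [Fraction(-1) / w, t2]]

# the slice matrices hit the prescribed traces and lie in SL(2)
assert det(x1) == 1 and det(x2) == 1
assert tr(x1) == t1 and tr(x2) == t2
assert tr(mul(x1, x2)) == t12
```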

\subsubsection{Step 1}
Define tensor products and duality scalar products.  The scalar product satisfies

$${\sf n}^*_{n-k}({\sf n}_{n-l})=\frac{(n-k)!k!}{n!}\delta_{kl}=\raisebox{2pt}{$\delta_{kl}$}\!\Big/\!\raisebox{-2pt}{$\tbinom{n}{k}$},$$ for ${\sf n}_{n-k}\in V_n$.

\subsubsection{Step 2}
Define an algorithm which determines the form of a linear representation of an element in $\SL$ on the symmetric power $V_n$.

This comes from the expansion
\begin{align*}
{\sf n}^*_{n-k}(\xb\cdot {\sf n}_{n-l})
&=  {\sf n}^*_{n-k}\left((x_{11} e_1+x_{21} e_2)^{n-l}(x_{12} e_1 + x_{22} e_2)^l\right)\\
&=\frac{1}{\tbinom{n}{k}}\sum_{\substack{i+j=k\\0 \le i \le n-l \\ 0 \le j \le l}}
\tbinom{n-l}{i}\tbinom{l}{j}\,x_{11}^{\,n-l-i}\,x_{21}^{\,i}\,x_{12}^{\,l-j}\,x_{22}^{\,j}.
\end{align*}

For instance, if $g=\left(\begin{smallmatrix}
 a & b \\
 c & d
\end{smallmatrix}\right)$, then the induced action on $V_2$ is given by $\left(\begin{smallmatrix}
 a^2 & a b & b^2 \\
 a c & \frac{1}{2} (b c+a d) & b d \\
 c^2 & c d & d^2
\end{smallmatrix}\right)$, and on $V_3$ by
$$\left(\begin{matrix}
 a^3 & a^2 b & a b^2 & b^3 \\
 a^2 c & \frac{1}{3} \left(d a^2+2 b c a\right) & \frac{1}{3} \left(c b^2+2 a d b\right) & b^2 d \\
 a c^2 & \frac{1}{3} \left(b c^2+2 a d c\right) & \frac{1}{3} \left(a d^2+2 b c d\right) & b d^2 \\
 c^3 & c^2 d & c d^2 & d^3
\end{matrix}\right).$$

Upon contracting tensors, we will need to know the polynomial matrix coefficients of the action on a given $V_n$.  This routine will allow us to read off such matrix coefficients, since the pairing for ${\sf n}^*_{n-k}(\xb\cdot {\sf n}_{n-l})$ just becomes the $n-k+1,n-l+1$ entry of the symmetrized matrix.
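This routine is a direct transcription of the pairing formula above; the helper name \texttt{sym\_power} is ours.  Note that the entries carry the dual-pairing factor $1/\tbinom{n}{k}$, so these are the ``symmetrized'' matrices of the text, not bare representation matrices:

```python
from fractions import Fraction
from math import comb

def sym_power(g, n):
    """(n+1)x(n+1) symmetrized matrix of g acting on V_n.

    Entry (k, l) is n*_{n-k}(g . n_{n-l}): the coefficient of
    e1^(n-k) e2^k in (g11 e1 + g21 e2)^(n-l) (g12 e1 + g22 e2)^l,
    divided by binom(n, k)  (the dual-pairing factor).
    """
    (a, b), (c, d) = g
    M = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    for l in range(n + 1):
        for k in range(n + 1):
            s = Fraction(0)
            for i in range(min(k, n - l) + 1):
                j = k - i
                if 0 <= j <= l:
                    s += (comb(n - l, i) * comb(l, j)
                          * a**(n - l - i) * c**i * b**(l - j) * d**j)
            M[k][l] = s / comb(n, k)
    return M

# reproduces the displayed V_2 matrix, e.g. middle entry (bc + ad)/2
g = [[2, 3], [5, 7]]
assert sym_power(g, 2)[1][1] == Fraction(5 * 3 + 2 * 7, 2)
```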

\subsubsection{Step 3}
Given a triple $(a,b,c)$, determine all admissible 6-tuples, and mark them by an enumeration of the multiplicities.  We iteratively decompose tensor triples $V_a\otimes V_b\otimes V_c$ using a left associative iteration algorithm.  In other words, we decompose $(V_a\otimes V_b)\otimes V_c$, using the decomposition $V_a\otimes V_b\approx \sum V_{a+b-2k}$.  Then for each allowed value of $0\leq k\leq \min(a,b)$, we decompose $V_{a+b-2k}\otimes V_c$ in the same fashion.  Indeed, $(a,b,a+b-2k)$ is admissible if and only if  $|a-b| \leq  a+b-2k \leq  a+b$.

For instance $V_d$ injects into $V_3\otimes V_2\otimes V_2$ if and only if $d\in \{7, 5, 3, 5, 3, 1, 3, 1\}$.  The fact that some numbers are repeated tells us the multiplicity.  For instance, $V_1$ occurs with multiplicity 2.  In more detail, we have

\begin{align*}
(V_3\otimes V_2)\otimes V_2&\approx (V_5\oplus V_3 \oplus V_1)\otimes V_2 \approx V_5\otimes V_2\oplus V_3\otimes V_2 \oplus V_1\otimes V_2\\
&\approx V_7\oplus V_5\oplus V_3 \oplus V_5\oplus V_3\oplus V_1 \oplus V_3 \oplus V_1.
\end{align*}
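The left-associative decomposition takes only a few lines of Python (function names are ours, not from the notebook):

```python
from collections import Counter

def tensor_decompose(a, b):
    """Clebsch-Gordan: V_a (x) V_b = sum of V_{a+b-2k}, 0 <= k <= min(a,b)."""
    return [a + b - 2 * k for k in range(min(a, b) + 1)]

def left_assoc_decompose(dims):
    """Iteratively decompose V_{d1} (x) ... (x) V_{dr}, grouping to the left."""
    summands = [dims[0]]
    for d in dims[1:]:
        summands = [m for s in summands for m in tensor_decompose(s, d)]
    return summands

# matches the worked example: V_3 (x) V_2 (x) V_2
assert left_assoc_decompose([3, 2, 2]) == [7, 5, 3, 5, 3, 1, 3, 1]
assert Counter(left_assoc_decompose([3, 2, 2]))[1] == 2  # V_1 has multiplicity 2
```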

\subsubsection{Step 4}
Define the injection of basic elements.  This is done using the formula for the mapping $\iota:V_c\to V_a\otimes V_b$, where $\alpha=\tfrac{b+c-a}{2}$, $\beta=\tfrac{a+c-b}{2}$, and $\gamma=\tfrac{a+b-c}{2}$:
\begin{align*}
  \tbinom{c}{k}{\sf c}_k
    &\longmapsto\sum_{\substack{0\le i\le\beta\\0\le j\le\alpha\\0\le m\le\gamma\\i+j=k}}%
      \tbinom{\beta}{i}{\sf a}_i
        \otimes\left[(-1)^m\tbinom{\gamma}{m}{\sf a}_{\gamma-m}\otimes{\sf b}_m\right]
        \otimes\tbinom{\alpha}{j}{\sf b}_j\\ %
    &\longmapsto\sum_{\substack{0\le i\le\beta\\0\le j\le\alpha\\0\le m\le\gamma\\i+j=k}}%
      (-1)^m\tbinom{\beta}{i}\tbinom{\alpha}{j}\tbinom{\gamma}{m}{\sf a}_{i+\gamma-m}\otimes{\sf b}_{j+m},%
\end{align*}
iterating for each summand.

Also since we are using left associations, i.e. grouping $V_a$ and $V_b$ in $V_a\otimes V_b\otimes V_c$, we first inject $V_d\hookrightarrow V_{a+b-2k}\otimes V_c$ and then inject $V_{a+b-2k}\otimes V_c\hookrightarrow V_a\otimes V_b\otimes V_c$ using the injection $V_{a+b-2k}\hookrightarrow V_a\otimes V_b$ tensored with the identity.

Likewise, there is an analogous formula for the dual injection $V_d^*\hookrightarrow V_a^*\otimes V_b^*\otimes V_c^*$.
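The displayed coefficient formula can be transcribed directly; the sketch below (the helper name \texttt{inject} is ours) records the image of $\tbinom{c}{k}{\sf c}_k$ as a dictionary of coefficients on the basis ${\sf a}_p\otimes{\sf b}_q$, assuming $\alpha=\tfrac{b+c-a}{2}$, $\beta=\tfrac{a+c-b}{2}$, $\gamma=\tfrac{a+b-c}{2}$:

```python
from math import comb

def inject(a, b, c, k):
    """Image of binom(c,k) * c_k under iota: V_c -> V_a (x) V_b,
    returned as {(p, q): coefficient} on the basis a_p (x) b_q."""
    alpha, beta, gamma = (b + c - a) // 2, (a + c - b) // 2, (a + b - c) // 2
    out = {}
    for i in range(beta + 1):
        j = k - i
        if not 0 <= j <= alpha:
            continue
        for m in range(gamma + 1):
            key = (i + gamma - m, j + m)
            coeff = (-1)**m * comb(beta, i) * comb(alpha, j) * comb(gamma, m)
            out[key] = out.get(key, 0) + coeff
    return {key: v for key, v in out.items() if v != 0}

# c = 0 in V_1 (x) V_1: the image of c_0 is the invariant a_1 (x) b_0 - a_0 (x) b_1
assert inject(1, 1, 0, 0) == {(1, 0): 1, (0, 1): -1}
```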

\subsubsection{Step 5}

The central function begins in $\mathrm{End}(V_d)^{\SL}$ and then is mapped to $(V^*_d \otimes V_d)^{\SL}$ by
$$\sum_{k=0}^d{\sf d}_k({\sf d}_k)^T\mapsto\sum_{k=0}^d\tbinom{d}{k}\bs{\sf d}{k}{k}.$$
The central function is then determined by the composite injections $\mathrm{End}(V_d)^{\SL}\approx (V^*_d \otimes V_d)^{\SL} \hookrightarrow (V^*_a\otimes V^*_b \otimes V^*_c\otimes V_a\otimes V_b\otimes V_c)^{\SL}$.  Once we realize the explicit form of the injections $V_d\hookrightarrow V_a\otimes V_b\otimes V_c$, we can write the central function $\sum_{k=0}^d{\sf d}_k({\sf d}_k)^T$ as a central tensor in terms of $V_a\otimes V_b\otimes V_c$.  We note that the coefficients are a bit delicate here:  first, one chooses to include $\tbinom{d}{k}$ on only one of the vectors ${\sf d}_k$ or ${\sf d}^*_k$, but not both, for each summand of the central function.  Second, one must make sure to include a $1/\tbinom{d}{k}$ when including each factor of each summand (both the vector and its dual) into a tensor $V_a\otimes V_b$.  We observe that the rank 2 coefficients cancel, but for rank 3 they generally do not, since we are iterating injections.  Lastly, we have already included the dual pairing binomial coefficients in our expression of the symmetrization of a generic matrix, so we do not need to include any further binomial coefficients (there are a lot included already!).

\subsubsection{Step 6}

Once we have the central function as a linear combination of tensors, we must ``contract the tensor'' using the mapping which associates to a linear combination of tensors (of the type we have been considering) a polynomial function in the coordinate ring $\C[\R_r]$.  Recall that this mapping is determined by sending
$(e^*_{i_1}\otimes e^*_{i_2}\otimes \cdots \otimes e^*_{i_r} )\otimes (e_{j_1}\otimes e_{j_2}\otimes \cdots \otimes e_{j_r} )$ to the polynomial function
$$(\xb_1, \xb_2,...,\xb_r) \mapsto e^*_{i_1}(\xb_1\cdot e_{j_1})e^*_{i_2}(\xb_2\cdot e_{j_2})\cdots e^*_{i_r} (\xb_r\cdot e_{j_r} ).$$  Once a generic matrix $\xb_i$ is mapped to an automorphism of $V_d$, call it $\mathrm{Sym}_d(\xb_i)$, the factor $e^*_{i_1}(\xb_1\cdot e_{j_1})$ is just the $(i_1+1, j_1+1)$ entry of $\mathrm{Sym}_d(\xb_1)$.
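A tiny illustration of the contraction, with all three factors in $V_1$ so that $\mathrm{Sym}_1(\xb)=\xb$ and each factor is just a matrix entry (helper names and the sample tensor are ours).  The tensor $\sum_{i_1,i_2,i_3}(e^*_{i_1}\otimes e^*_{i_2}\otimes e^*_{i_3})\otimes(e_{i_1}\otimes e_{i_2}\otimes e_{i_3})$ contracts to $\tr(\xb_1)\tr(\xb_2)\tr(\xb_3)$:

```python
def contract(tensor, mats):
    """tensor: {((i1..ir), (j1..jr)): coeff}; mats: list of 2x2 matrices.
    Each dual/vector index pair contributes the (i+1, j+1) matrix entry."""
    total = 0
    for (istar, jvec), coeff in tensor.items():
        term = coeff
        for x, p, q in zip(mats, istar, jvec):
            term *= x[p][q]
        total += term
    return total

def tr(X):
    return X[0][0] + X[1][1]

# identity pairing on V_1 (x) V_1 (x) V_1
tensor = {((i1, i2, i3), (i1, i2, i3)): 1
          for i1 in range(2) for i2 in range(2) for i3 in range(2)}

A, B, C = [[1, 1], [0, 1]], [[1, 0], [1, 1]], [[2, 1], [1, 1]]
assert contract(tensor, [A, B, C]) == tr(A) * tr(B) * tr(C)
```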

\subsubsection{Step 7}

Lastly, using the Goldman slice (see \cite{G9}) $$\X_3=(\SL^{\times 3})\aq \SL \hookrightarrow \R_3=\SL^{\times 3}$$ to the categorical projection $\R_3\to\X_3$, we express the invariant polynomials (which are polynomials in the generic matrix entries) from the tensorial contraction in terms of the seven invariants: $$\{\Tr{\xb_1},\Tr{\xb_2},\Tr{\xb_3},\Tr{\xb_1\xb_2},\Tr{\xb_1\xb_3},\Tr{\xb_2\xb_3},\Tr{\xb_1\xb_2\xb_3}\}.$$

It is only this last step that does not generalize to $r\geq 4$, since we do not have a slice.  For $r\geq 4$, we can still contract the tensor and then use a Gr\"obner basis routine to algorithmically express the polynomials in terms of traces (minimally, in fact, since minimal generators are known for $r\geq 4$).

Using the spin network calculus, another algorithm was written that uses only combinatorics rather than tensorial contractions.  This makes the computation significantly faster.

\subsection{Combinatorial Algorithm}\label{ss:rank3combinatorial}
Trace diagrams allow a more combinatorial approach to computing the central functions. The key point is that any central function can be reduced in terms of simpler central functions. As discussed in section \ref{simplerecurrence-section}, a recurrence formula exists for each loop in the diagrammatic depiction of the central function. These recurrences are generalizations of those computed, for the rank two case, in \cite{LP}. Such recurrences can be used to construct an algorithm for computing any central function, or in general any spin network labeled with matrices.

In this section, we show how this process works in the rank three case. Recall that the rank three diagrams have the form:
\begin{center}
\begin{tikzpicture}
    \draw (0,0)to[bend left=80]node[small matrix]{$\xb_1$}(0,1)node[leftlabel,pos=.8]{$a$}
          (0,0)to[bend right=80]node[small matrix]{$\xb_2$}(0,1)node[rightlabel,pos=.8]{$b$}
          (0,0)to[bend right=20](.5,-.2)node[bottomlabel,pos=.5]{$e$}
          to[bend right=80]node[small matrix]{$\xb_3$}(.5,1.2)node[rightlabel,pos=.75]{$c$}
          to[bend right=20](0,1)node[toplabel,pos=.5]{$f$}
          (.5,-.2)to[bend right=20](1,-.4)
          to[bend right=80](1,1.4)node[rightlabel,pos=.75]{$d$}
          to[bend right=20](.5,1.2);
\end{tikzpicture}
\end{center}
We will describe a process for reducing any such diagram to a sum of diagrams of lesser rank, where the rank is defined to be $a+b+c$. The algorithm terminates with the rank 0 base case $\chi_{0,0,0,0,0,0}=1$. The process will use eight different recurrence formulas, corresponding to the following trace variables:
\begin{multline*}
    \{\tr(\xb_1), \tr(\xb_2), \tr(\xb_3), \tr(\xb_1 \xb_2^{-1}), \tr(\xb_1 \xb_3^{-1}), \tr(\xb_2 \xb_3^{-1}),\\ \tr(\xb_1 \xb_2^{-1} \xb_3), \tr(\xb_1 \xb_3 \xb_2^{-1})\}.
\end{multline*}
The final two recurrences, which correspond to nonsimple loops in the diagram, require particular attention.

\subsubsection{Algorithm Overview}
As mentioned in section \ref{simplerecurrence-section}, the recurrence corresponding to a loop with $n$ edges contains a maximum of $2^n$ terms. For that reason, it is more efficient to begin the computation with the shortest loops in the figure. The algorithm implemented in Mathematica begins with the following steps:
\begin{enumerate}
    \item If $\mathfrak{e}_e(a,b)>0$ and $\mathfrak{e}_f(a,b)>0$, reduce along the $(a,b)$ loop, corresponding to $\tr(\xb_1 \xb_2^{-1})$;
    \item otherwise, if $\mathfrak{e}_e(c,d)>0$ and $\mathfrak{e}_f(c,d)>0$, reduce along the $(c,d)$ loop, corresponding to $\tr(\xb_3)$;
    \item otherwise, if $\mathfrak e_b(a,e)>0$, $\mathfrak e_b(a,f)>0$, $\mathfrak e_c(d,e)>0$, and $\mathfrak e_c(d,f)>0$, reduce along the $(a,e,d,f)$ loop, corresponding to $\tr(\xb_1)$;
    \item otherwise, if $\mathfrak e_a(b,e)>0$, $\mathfrak e_a(b,f)>0$, $\mathfrak e_c(d,e)>0$, and $\mathfrak e_c(d,f)>0$, reduce along the $(b,e,d,f)$ loop, corresponding to $\tr(\xb_2)$;
    \item otherwise, if $\mathfrak e_a(b,e)>0$, $\mathfrak e_a(b,f)>0$, $\mathfrak e_d(c,e)>0$, and $\mathfrak e_d(c,f)>0$, reduce along the $(b,e,c,f)$ loop, corresponding to $\tr(\xb_2 \xb_3^{-1})$;
    \item otherwise, if $\mathfrak e_b(a,e)>0$, $\mathfrak e_b(a,f)>0$, $\mathfrak e_d(c,e)>0$, and $\mathfrak e_d(c,f)>0$, reduce along the $(a,e,c,f)$ loop, corresponding to $\tr(\xb_1 \xb_3^{-1})$;
    \item otherwise, if $e=0$, reduce along the nonsimple $(a,b,f,c,d,f)$ loop, corresponding to $\tr(\xb_1 \xb_2^{-1} \xb_3)$;
    \item otherwise, if $f=0$, reduce along the nonsimple $(a,b,e,c,d,e)$ loop, corresponding to $\tr(\xb_1 \xb_3 \xb_2^{-1})$.
\end{enumerate}

    Any admissible $\chi_{a,b,c,d,e,f}$ can be reduced via one of the above cases.
    Consider the following set of edge types:
    \begin{equation}\label{eq:testedges}
        \{\mathfrak e_a(b,e), \mathfrak e_b(a,e), \mathfrak e_a(b,f), \mathfrak e_b(a,f)\}.
    \end{equation}
    If any two of these are zero, then the central function is reducible. To see this, suppose without loss of generality that $\mathfrak e_a(b,e)=0$. If $\mathfrak e_b(a,e)=0$ then $e=\mathfrak e_a(b,e)+\mathfrak e_b(a,e)=0$. If $\mathfrak e_b(a,f)=0$, then $a=b+e$ and $b=a+f$, so that $e=0$ and $f=0$. Finally, if $\mathfrak e_a(b,f)=0$ then $a=b+e=b+f$ and so $\mathfrak e_e(a,b)=b=\mathfrak e_f(a,b)$. So either $b>0$ and reduction by the $(a,b)$ loop is possible, or $b=0$. In this case, the diagram is reducible by either $\tr(\xb_1)$, $\tr(\xb_1 \xb_3^{-1})$, or $\tr(\xb_3)$.

    The same logic can be applied to the set of edges
    \begin{equation}\label{eq:testedges2}
        \{\mathfrak e_c(d,e), \mathfrak e_d(c,e), \mathfrak e_c(d,f), \mathfrak e_d(c,f)\}.
    \end{equation}
    But note that if at most one of \eqref{eq:testedges} and at most one of \eqref{eq:testedges2} is zero, then one of the four reductions corresponding to a loop of length four is possible.
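The case selection above is easy to mechanize.  The sketch below (function names are ours) assumes the strand-count convention $\mathfrak e_x(y,z)=\tfrac{y+z-x}{2}$, which matches the identities used in the argument above; for the two nonsimple cases it only reports which barbell case applies:

```python
def strands(x, y, z):
    # assumed convention: strands joining edges y and z at the vertex (x, y, z)
    return (y + z - x) // 2

def select_reduction(a, b, c, d, e, f):
    """Pick the first applicable reduction (cases 1-8 of the overview)."""
    if strands(e, a, b) > 0 and strands(f, a, b) > 0:
        return "tr(x1 x2^-1)"                      # (a,b) loop
    if strands(e, c, d) > 0 and strands(f, c, d) > 0:
        return "tr(x3)"                            # (c,d) loop
    if min(strands(b, a, e), strands(b, a, f), strands(c, d, e), strands(c, d, f)) > 0:
        return "tr(x1)"                            # (a,e,d,f) loop
    if min(strands(a, b, e), strands(a, b, f), strands(c, d, e), strands(c, d, f)) > 0:
        return "tr(x2)"                            # (b,e,d,f) loop
    if min(strands(a, b, e), strands(a, b, f), strands(d, c, e), strands(d, c, f)) > 0:
        return "tr(x2 x3^-1)"                      # (b,e,c,f) loop
    if min(strands(b, a, e), strands(b, a, f), strands(d, c, e), strands(d, c, f)) > 0:
        return "tr(x1 x3^-1)"                      # (a,e,c,f) loop
    if e == 0:
        return "nonsimple loop (e=0)"              # barbell case
    if f == 0:
        return "nonsimple loop (f=0)"              # barbell case
    return "irreducible?"  # should not happen for admissible tuples

assert select_reduction(2, 2, 2, 2, 2, 2) == "tr(x1 x2^-1)"
```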

\subsubsection{Reduction along Simple Loops}
The simple loop reductions may have up to 4 terms in the $\tr(\xb_3)$ and $\tr(\xb_1 \xb_2^{-1})$ cases, and up to 16 terms in the other four cases. While this is a large number of terms, writing out the recurrence is a straightforward process. Each is an immediate consequence of Theorem \ref{t:simplerecurrence}. For example:
\begin{align*}
    \tr(\xb_1\xb_2^{-1})\,\chi_{a,b,c,d,e,f} ={}& \chi_{a+1,b+1,c,d,e,f}\\
         &+ \mathfrak s_e(a,b)\mathfrak s_f(a,b)\frac{\mathfrak e_a(b,e) \mathfrak e_a(b,f)}{b(b+1)} \chi_{a+1,b-1,c,d,e,f}\\
         &+ \mathfrak s_e(a,b)\mathfrak s_f(a,b)\frac{\mathfrak e_b(a,e)\mathfrak e_b(a,f)}{a(a+1)} \chi_{a-1,b+1,c,d,e,f}\\
         &+ \frac{\mathfrak e_e(a,b)\mathfrak e_f(a,b)(\mathfrak e(a,b,e)+1)(\mathfrak e(a,b,f)+1)}{a(a+1)b(b+1)} \chi_{a-1,b-1,c,d,e,f}.
\end{align*}
The remainder of the recurrences may be found in the Mathematica notebook.

\subsubsection{Reduction along Non-Simple Loops}
In the final cases, either $e=0$ or $f=0$, and the central function is topologically equivalent to a barbell. Proposition \ref{prop:barbellrecurrence} could be adapted to this case, but we prefer to show directly how the rank three central functions are related to the barbell functions $\tilde\chi_{a,c}^b$. When $f=0$, the diagram is
$$\chi_{a,b,c,d,e,0} \;=\;
\vcenter{\hbox{\begin{tikzpicture}
    \draw (0,0)to[bend left=80]node[small matrix]{$\xb_1$}(0,1)node[leftlabel,pos=.8]{$a$}
          (0,0)to[bend right=80]node[small matrix]{$\xb_2$}(0,1)node[rightlabel,pos=.8]{$b$}
          (0,0)to[bend right=20](.5,-.2)node[bottomlabel,pos=.5]{$e$}
          to[bend right=80]node[small matrix]{$\xb_3$}(.5,1.2)node[rightlabel,pos=.75]{$c$}
          (.5,-.2)to[bend right=20](1,-.4)
          to[bend right=80](1,1.4)node[rightlabel,pos=.75]{$d$}
          to[bend right=20](.5,1.2);
\end{tikzpicture}}}
\;=\; (-1)^{\frac e2}\, \tilde{\chi}_{c,a}^e(\xb_3,\xb_2^{-1}\xb_1).$$
The sign $(-1)^{\frac e2}$ arises from Proposition \ref{p:spinnetsignstrong} since there are $\frac e2$ kinks in the diagram above and none in the corresponding barbell. So $\chi_{a,b,c,d,e,0}$ may be obtained from Table \ref{t:barbell} using the substitutions
\begin{align*}
        x &\mapsto \mathrm{tr}(\xb_3) = t_3,\\
        y &\mapsto \mathrm{tr}(\xb_2^{-1}\xb_1) = t_1t_2-t_{12},\\
        z &\mapsto \mathrm{tr}(\xb_3\xb_2^{-1}\xb_1) = t_{123}+t_1t_2t_3-t_{12}t_3-t_1t_{23}.
\end{align*}

Similarly, when $e=0$, the diagram is
$$\chi_{a,b,c,d,0,f} \;=\;
\vcenter{\hbox{\begin{tikzpicture}
    \draw (0,0)to[bend left=80]node[small matrix]{$\xb_1$}(0,1)node[leftlabel,pos=.8]{$a$}
          (0,0)to[bend right=80]node[small matrix]{$\xb_2$}(0,1)node[rightlabel,pos=.8]{$b$}
          (.5,-.2)to[bend right=80]node[small matrix]{$\xb_3$}(.5,1.2)node[rightlabel,pos=.75]{$c$}
          to[bend right=20](0,1)node[toplabel,pos=.5]{$f$}
          (.5,-.2)to[bend right=20](1,-.4)
          to[bend right=80](1,1.4)node[rightlabel,pos=.75]{$d$}
          to[bend right=20](.5,1.2);
\end{tikzpicture}}}
\;=\; (-1)^{\frac f2}\, \tilde{\chi}_{a,c}^f(\xb_1\xb_2^{-1},\xb_3).$$
The proper substitutions in this case are
\begin{align*}
        x & \mapsto\mathrm{tr}(\xb_1\xb_2^{-1}) = t_1t_2-t_{12}, \\
        y & \mapsto\mathrm{tr}(\xb_3)=t_3, \\
        z & \mapsto\mathrm{tr}(\xb_3\xb_1\xb_2^{-1})=t_{13}t_2-t_{123}.
\end{align*}
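The trace identities behind these substitutions are easy to spot-check numerically; the sketch below uses arbitrary integer $\SL$ matrices of ours (not from the text):

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(X):
    return X[0][0] + X[1][1]

def inv(X):
    # inverse of a determinant-1 matrix
    return [[X[1][1], -X[0][1]], [-X[1][0], X[0][0]]]

x1, x2, x3 = [[1, 1], [0, 1]], [[1, 0], [1, 1]], [[2, 1], [1, 1]]

t1, t2, t3 = tr(x1), tr(x2), tr(x3)
t12, t13, t23 = tr(mul(x1, x2)), tr(mul(x1, x3)), tr(mul(x2, x3))
t123 = tr(mul(mul(x1, x2), x3))

assert tr(mul(x1, inv(x2))) == t1 * t2 - t12
assert tr(mul(mul(x3, inv(x2)), x1)) == t123 + t1 * t2 * t3 - t12 * t3 - t1 * t23
assert tr(mul(mul(x3, x1), inv(x2))) == t13 * t2 - t123
```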

Recall that every rank 3 central function has 6 indices.  The first four come from the inclusion of a symmetric tensor $V_d \hookrightarrow V_a\otimes V_b \otimes V_c$.  Suppose $V_d$ occurs with multiplicity $m$.  Then for $1\leq i,j\leq m$, the central functions are indexed as $\chi_{a,b,c,d}^{i,j}$, with $i$ for $V_d$ and $j$ for $V^*_d$. Any 6-tuple arising in this fashion is called admissible. Note that this choice of indices differs from $\chi_{a,b,c,d,e,f}$. By letting $c=0$ we recover the rank 2 central functions in \cite{LP}, and by letting $b=c=0$ we recover the classical rank 1 central functions.

We call the set of all admissible 6-tuples satisfying $a+b+c=s$ the $s$-{\it stratum}. Table \ref{t:rank3strata} shows the first four strata of rank three central functions.

\begin{table}[ht]
\centering
0-stratum (1 function)\par\medskip
$\begin{array}{|c|}
\hline \chi_{0,0,0,0}^{1,1}=1 \\ \hline
\end{array}$\par\bigskip
1-stratum (3 functions)\par\medskip
$\begin{array}{|ccc|}
\hline \chi_{1,0,0,1}^{1,1}=t_1 & \chi_{0,1,0,1}^{1,1}=t_2 &\chi_{0,0,1,1}^{1,1}=t_3 \\ \hline
\end{array}$\par\bigskip
2-stratum (9 functions)\par\medskip
$\begin{array}{|ccc|}
\hline
\chi_{2,0,0,2}^{1,1}=t_1^2-1
    & \chi_{1,0,1,2}^{1,1}=\tfrac{t_1 t_3}{2}+\tfrac{t_{13}}{2}
        & \chi_{0,1,1,2}^{1,1}=\tfrac{t_2 t_3}{2}+\tfrac{t_{23}}{2}\\
\chi_{1,1,0,2}^{1,1}=\tfrac{t_1 t_2}{2}+\tfrac{t_{12}}{2}
    & \chi_{1,0,1,0}^{1,1}=t_1 t_3-t_{13}
        & \chi_{0,1,1,0}^{1,1}=t_2 t_3-t_{23}\\
\chi_{1,1,0,0}^{1,1}=t_1 t_2-t_{12}
    & \chi_{0,2,0,2}^{1,1}=t_2^2-1
        & \chi_{0,0,2,2}^{1,1}=t_3^2-1\\ \hline
\end{array}$\par\bigskip
3-stratum (20 functions)\par\medskip
$\begin{array}{|cc|}
\hline
\chi_{3,0,0,3}^{1,1}=t_1^3-2 t_1
    & \chi_{1,1,1,1}^{1,2}=-\frac{1}{2} t_1 t_2 t_3+\frac{t_{12} t_3}{2}+t_1 t_{23}-t_{123}\\
\chi_{2,1,0,3}^{1,1}=\frac{1}{3} t_2 t_1^2+\frac{2 t_{12}  t_1}{3}-\frac{2 t_2}{3}
    & \chi_{1,1,1,1}^{1,1}=\frac{3}{4} t_1 t_2 t_3+\frac{t_{12} t_3}{4}-\frac{t_2 t_{13}}{2}-\frac{t_1 t_{23}}{2}\\
\chi_{2,1,0,1}^{1,1}=t_2 t_1^2-t_{12} t_1-\frac{t_2}{2}
    & \chi_{1,0,2,3}^{1,1}=\frac{1}{3} t_1 t_3^2+\frac{2 t_{13} t_3}{3}-\frac{2 t_1}{3}\\
\chi_{2,0,1,3}^{1,1}=\frac{1}{3} t_3 t_1^2+\frac{2 t_{13} t_1}{3}-\frac{2 t_3}{3}
    & \chi_{1,0,2,1}^{1,1}=t_1 t_3^2-t_{13} t_3-\frac{t_1}{2}\\
\chi_{2,0,1,1}^{1,1}=t_3 t_1^2-t_{13} t_1-\frac{t_3}{2}
    & \chi_{0,3,0,3}^{1,1}=t_2^3-2 t_2\\
\chi_{1,2,0,3}^{1,1}=\frac{1}{3} t_1 t_2^2+\frac{2 t_{12} t_2}{3}-\frac{2 t_1}{3}
    & \chi_{0,2,1,3}^{1,1}=\frac{1}{3} t_3 t_2^2+\frac{2 t_{23} t_2}{3}-\frac{2 t_3}{3}\\
\chi_{1,2,0,1}^{1,1}=t_1 t_2^2-t_{12} t_2-\frac{t_1}{2}
    & \chi_{0,2,1,1}^{1,1}=t_3 t_2^2-t_{23} t_2-\frac{t_3}{2}\\
\chi_{1,1,1,3}^{1,1}=\frac{t_3 t_{12}}{3}+\frac{t_2 t_{13}}{3}+\frac{t_1 t_{23}}{3}
    & \chi_{0,1,2,3}^{1,1}=\frac{1}{3} t_2 t_3^2+\frac{2 t_{23} t_3}{3}-\frac{2 t_2}{3}\\
\chi_{1,1,1,1}^{2,2}=t_1 t_2 t_3-t_3 t_{12}
    & \chi_{0,1,2,1}^{1,1}=t_2 t_3^2-t_{23} t_3-\frac{t_2}{2}\\
\chi_{1,1,1,1}^{2,1}=\frac{1}{2} t_1 t_2 t_3-\frac{t_{12} t_3}{2}-t_2 t_{13}+t_{123}
    & \chi_{0,0,3,3}^{1,1}=t_3^3-2 t_3\\ \hline
\end{array}$
\caption{Strata of Rank 3 central functions.}\label{t:rank3strata}
\end{table}

A more complicated example is
\begin{align*}
\chi_{3,2,2,3}^{2,1} &= \frac{1}{30} t_2^2 t_1^3+\frac{4}{15} t_2^2 t_3^2 t_1^3-\frac{1}{5} t_3^2 t_1^3-\frac{1}{5} t_2 t_3 t_{23}t_1^3+\frac{2 t_1^3}{15}-\frac{2}{15} t_2t_3^2 t_{12} t_1^2\\
                      &-\frac{7}{10} t_2^2 t_3t_{13} t_1^2+\frac{7}{15} t_3 t_{13}t_1^2+\frac{1}{5} t_3 t_{12} t_{23}t_1^2+\frac{3}{10} t_2 t_{13} t_{23}t_1^2+\frac{2}{3} t_2 t_3 t_{123}t_1^2\\
                      &-\frac{1}{3} t_{23} t_{123}t_1^2-\frac{1}{6} t_2^2 t_1+\frac{2}{15}t_2^2 t_3^2 t_1+\frac{1}{15} t_3^2t_1-\frac{2}{15} t_3^2 t_{12}^2t_1-\frac{1}{30} t_{12}^2t_1\\
                      &+\frac{2}{5}t_2^2 t_{13}^2 t_1-\frac{4}{15} t_{13}^2t_1+\frac{7}{30} t_{23}^2 t_1-\frac{1}{3} t_2t_3 t_{12} t_{13} t_1-\frac{1}{2} t_2 t_3t_{23} t_1\\
                      &+\frac{1}{30} t_{12} t_{13} t_{23}t_1+\frac{1}{3} t_3 t_{12} t_{123}t_1-\frac{1}{2} t_2 t_{13} t_{123}t_1-\frac{t_1}{15}+\frac{4}{15} t_2 t_{12}t_{13}^2\\
                      &+\frac{4}{15} t_2 t_3^2t_{12}-\frac{t_2 t_{12}}{10}+\frac{1}{30} t_3t_{12}^2 t_{13}+\frac{2}{15} t_2^2 t_3t_{13}-\frac{t_3 t_{13}}{10}-\frac{7}{30} t_3t_{12}t_{23}\\
                      &+\frac{2}{15} t_2 t_{13}t_{23}-\frac{1}{3} t_2 t_3t_{123}-\frac{1}{6} t_{12} t_{13}t_{123}+\frac{t_{23} t_{123}}{6}.
\end{align*}

Using our tensorial algorithm we have computed all strata up to the 10-stratum, giving 2254 known examples.

%\begin{comment}The existence of a central function in the rank 2 case is equivalent to Shur's lemma.  In other words, given two corresponding admissible triples $(a,b,c_1)$ and $(a,b,c_2)$ there is a central function if and only if $c_1=c_2$.  However, this is not true anymore for rank 3 central functions because of multiplicity.  There is a shur-like lemma here however.  Take two admissible six tuples which correspond under the decomposition:  $(a,b,c,d,e_1,f_1)$ and $(a,b,c,d,e_2,f_2)$.  For a central function to exist $e_1=e_2$ and $f_1=f_2$ but $e_i\not=f_i$ in general, as one may think.  However, it is true that $e_i=f_i$ if and only if the central function evaluated at the identity for all three generic matrices is not zero.  Since we are evaluating at the identity, the spin networks become only the coefficients.  Hence the proof of this observation come from the coefficient computations above.\end{comment}

