Dear Martin,
Can I ask about the definitions of the involutions:
Yes, please ask, I'll answer if I know and time permits.
gradeInvolution == reversion? sign=(-1)^(d(d-1)/2) symbol= ~
Cliplus clirev == ? symbol=multiply by pseudoscalar? symbol=
Clifford conjugation: scalar part minus non-scalar part? symbol=
The involutions you give are again assuming a diagonal (orthonormal)
basis for the quadratic (bilinear) form. Since this is bad for applications,
it has to be avoided. Of course, if you know you are in a diagonal basis
you can use faster algorithms (like defining a cmulDiag).
Grade involution:
Any (Grassmann-) Clifford algebra is built up from a base space V (of dimension
dim V = n). You have two natural transformations on V which generalize
to the whole space W = /\V of dim W = 2^n.
a) The identity map (does nothing on W)
b) The map sending every vector v \in V to its additive inverse (negative)
   barV : V -> V :: v |--> -v
This will send any basis vector to its negative (regardless of the bilinear
form). So if your basis is like the grBasis := [Id, e1, e2, e1we2, e3, ...],
barW will send any element eiw..wej to (-1)^(number of basis
elements in eiw..wej) times eiw..wej.
c) If the basis is more general (like in the case with an antisymmetric part in
the bilinear form), you can always find a new basis (inhomogeneous in the old
generators) such that the involution does the trick for the new
basis elements.
Eg: for B := matrix [[1,q],[-q,1]] you would like to define a new
Grassmann basis
grBasWf = [Id, f1 (=e1), f2 (=e2), f1wf2 (=e1we2 - q*Id), f3 (=e3), ...] etc.
In this case you will see that the new basis is graded (only even or odd
elements appear), so the involution still works as expected.
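As a small illustration, here is a minimal sketch of the grade involution in Python, assuming basis blades are represented as tuples of generator indices (all names here are my own, not taken from any existing package):

```python
# Minimal sketch (my own naming, not from any package): a Grassmann basis
# blade is a tuple of generator indices, e.g. (1, 2) for e1we2.  Grade
# involution sends every generator ei to -ei, so a grade-k blade just
# picks up the sign (-1)^k.

def grade_involution(blade, coeff):
    """Coefficient of `blade` after grade involution."""
    k = len(blade)            # grade = number of generators in the blade
    return coeff * (-1) ** k

print(grade_involution((1, 2), 1))     # even blade e1we2: unchanged -> 1
print(grade_involution((1, 2, 3), 1))  # odd blade e1we2we3: negated -> -1
```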
What you technically do is the following:
* Take a vector space V
* build the free algebra over it, that is the tensor algebra TV; its product is
concatenation, and it is noncommutative
* You are only interested in antisymmetric tensors, hence factor out
all symmetric ones. That is, you identify all terms of the form
(v1 (x) ... (x) vi (x) vi (x) ... (x) vd) = 0
[One can check that this is a graded ideal I_gr and one can therefore factor
the tensor algebra: TV/I_gr = /\V.
(For the generators this means you impose ei^2 = 0. From that you conclude that
0 = (ei+ej)^(x)2 = ei^2 + ei (x) ej + ej (x) ei + ej^2
  = ei (x) ej + ej (x) ei
Calling the projected tensor product /\ you get
ei /\ ej = - ej /\ ei )]
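That antisymmetry can be turned into a tiny wedge-product routine; this is only a sketch under the assumption that blades are tuples of distinct generator indices (the names are mine):

```python
# Sketch of the wedge product on basis blades, assuming blades are tuples
# of distinct generator indices.  A repeated generator gives 0 (ei/\ei = 0);
# otherwise sort the concatenated index list and collect a sign of -1 per
# transposition, which is exactly ei/\ej = -ej/\ei.

def wedge(a, b):
    """Wedge two blades; return (sign, sorted_blade), or (0, ()) if zero."""
    idx = list(a) + list(b)
    if len(set(idx)) != len(idx):   # a generator repeats -> ei/\ei = 0
        return (0, ())
    sign = 1
    # bubble sort, flipping the sign for every swap (each swap is one
    # application of ei/\ej = -ej/\ei)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return (sign, tuple(idx))

print(wedge((1,), (2,)))  # -> (1, (1, 2))
print(wedge((2,), (1,)))  # -> (-1, (1, 2))
print(wedge((1,), (1,)))  # -> (0, ())
```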
Reversion:
This is quite different. The reversion needs to know about the Clifford
multiplication, so it is actually defined in the Clifford basis
cliBas := [Id, e(1), e(2), e(12):=e(1)*e(2), e(3), ... ]
and it reverses the order of the Clifford multiplication (which depends on
the quadratic (bilinear) form). Hence, with (eij) Grassmann and e(ij)
Clifford basis elements, you have
reversion e12 = reversion (e(12) - B(e1,e2)*Id) = e(21) - B(e1,e2)*Id
             = e2/\e1 + (B(e2,e1) - B(e1,e2))*Id
             = -e12 - 2F(e1,e2)*Id
where F(e1,e2) is the antisymmetric part of B; hence this difficulty will
_only_ appear when there is an antisymmetric part. Since not many people
have looked at that, it's rarely described in the literature, but see my
joint papers with Rafal Ablamowicz:
#
Mathematics of CLIFFORD - A Maple package for Clifford and Grassmann algebras
Rafal Ablamowicz, B. Fauser: Adv. in Appl. Clifford Alg. 15 No. 2, 2005:157-181,
arXiv:math-ph/0212032
#
Clifford and Grassmann Hopf algebras via the BIGEBRA package for Maple
Rafal Ablamowicz, B. Fauser: Comp. Physics Comm. 170, 2005:115-130
#
Both are available from the arXiv (or I can send you pdf's).
Technically the reversion comes from dualising the vector space V -> V*,
building the tensor algebra over the dualised vector space V*, and
identifying V* ~= V canonically. Then project onto antisymmetric tensors.
You see that the symmetric group acts on tensors by permuting the 'list'
of vectors a tensor is formed of. The reversion is the unique longest
permutation (usually called w), the one with the most inversions (in
reduced notation). So reversion is not just adding a sign: it really
reverses the list of all vectors (generators) and then reorders them
using the antisymmetry, collecting the minus signs.
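As a sketch of that "reverse and reorder" description (blades again as index tuples, names mine): reversing the k generators of a blade and sorting them back costs k(k-1)/2 transpositions, so on a blade the Grassmann reversion reduces to a sign.

```python
# Sketch (blades as index tuples, illustrative naming): reversing the k
# generators of a blade and re-sorting them with eiwej = -ejwei takes
# k(k-1)/2 transpositions, so reversion acts on a blade by the sign
# (-1)^(k(k-1)/2).

def grassmann_reversion(blade, coeff):
    k = len(blade)
    return coeff * (-1) ** (k * (k - 1) // 2)

for k in range(5):
    print(k, grassmann_reversion(tuple(range(k)), 1))
# signs by grade: 0 -> 1, 1 -> 1, 2 -> -1, 3 -> -1, 4 -> 1
```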
Conjugation is just the composition of reversion and grade involution:
conjugation x == reversion (gradeinvolution x)
             (== gradeinvolution (reversion x)) -- order does not matter
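A one-line check of this composition on basis blades (same illustrative blade-as-index-tuple representation): grade involution contributes (-1)^k and reversion contributes (-1)^(k(k-1)/2), so conjugation contributes (-1)^(k(k+1)/2).

```python
# Conjugation on a grade-k blade combines the grade involution sign (-1)^k
# with the reversion sign (-1)^(k(k-1)/2), giving (-1)^(k(k+1)/2).
# (Blades as index tuples; illustrative naming.)

def conjugation(blade, coeff):
    k = len(blade)
    return coeff * (-1) ** (k * (k + 1) // 2)

print([conjugation(tuple(range(k)), 1) for k in range(5)])
# -> [1, -1, -1, 1, 1]
```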
So the first things to implement are:
Grassmann wedge
Grassmann meet (note that the meet is a sort of wedge but with the roles
of the grades inverted)
Clifford product for a bilinear form (this is not hard; it is the same routine
which calculates the Clifford product for a symmetric bilinear form or an
arbitrary bilinear form)
I expect I had better ignore the Spinors for now although I am curious about
how they should be added on later.
Indeed, forget about them for now. I just added the spinor things to
make sure there are matrix analogs which show that the algebra
computes correctly. It is
just a way to convince yourself that everything works right. It was not meant
(yet) as a feature request. (Spinors come in handy when describing
linear complexes in projective space)
Up to now I have thought of spinors as an even subalgebra of Clifford
algebras?
There are many ways to look at spinors. Every way has its own benefit.
Indeed the weakest is to go for matrix representations and confuse
those with the abstract algebra. In the examples you see that all
three Clifford algebras Cl(2,0), Cl(1,1)
and Cl(0,2) can in the end be modelled by 2x2 matrices, but how do you
identify them if only the matrices are given?
The spinors in the examples are (left) ideal spinors, that is, they
are elements of a minimal left ideal of the Clifford algebra and
therefore themselves representable as Clifford numbers (as can clearly
be seen by inspecting the lists called spinorSpace). If represented in
the (same) spinor basis, these elements make up a matrix with a single
non-zero column; forgetting about the zero columns gives what you
usually see (but it gets more complicated if the ring is no longer the
reals; even complex conjugation is then a differently presented
operation).
Some of the matrix concepts are difficult to translate back into the
algebra, e.g. transposition if the bilinear form is not
diagonal/symmetric... this is a research
subject (and I hope there will soon be a paper coming out clarifying
this partly).
(rotation by 2*pi multiplies by -1) but
this does not seem to be the case here? Would spinors require a new type of
multiplication or a new domain?
If a spinor is represented _inside_ of the Clifford algebra, no new
multiplications are needed. That's the nice feature of unifying these
things. Interpreting spinors as rotations can be done by constructing
so-called operator spinors, well described e.g. in Pertti Lounesto's
book Clifford Algebras and Spinors. Also, this is not yet important
(and will not need new algebra).
What you need to implement at the moment is:
a) A Grassmann basis (a module over some (commutative) ring or field)
[Clifford has some undocumented features which allow a q-deformation here,
but I guess you should ignore that too]
b) the wedge and meet products, and some helper functions, like
extraction of the coefficient of the identity, the volume element etc.,
plus grade involution and Grassmann reversion (that is the above reversion
where all terms coming from the bilinear form are absent), so this is
just the sign
factor (or better (-1)^{number of inversions of the permutation which
reorders the term}; this would allow replacing -1 by a generic q).
a) and b) make up a GrassmannAlgebra (and with very little effort could be
turned into a Grassmann Hopf algebra by adding a wedge coproduct and
an antipode)
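The sign factor in b) can be sketched by counting inversions explicitly, which also shows where a generic q could replace -1 (illustrative code, assuming blades as sorted index tuples):

```python
# Illustrative sketch of the sign factor in b): count the inversions of
# the permutation that sorts the reversed index list, and raise a generic
# parameter q to that power (q = -1 gives back the ordinary sign).

def inversions(seq):
    """Number of pairs (i, j) with i < j and seq[i] > seq[j]."""
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq)) if seq[i] > seq[j])

def q_reversion_factor(blade, q):
    """Factor picked up when reordering the reversed blade."""
    return q ** inversions(tuple(reversed(blade)))

print(q_reversion_factor((1, 2, 3), -1))  # 3 inversions -> -1
```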
c) The Clifford product, based on a correctly implemented
leftContraction and rightContraction. This is done easily by a
recursion, as I tried to explain in a previous mail.
With x, y in V and u, v, w in W = /\V you have
i) x * y = lc(x,y) + x/\y and 1*1=1
ii) x * (u /\ v) = x /\ u /\ v + lc(x, u/\v)
= x/\u/\v + lc(x,u)/\v + gradeinvolution(u) /\ lc(x, v)
(same for (u/\v) * y using right contraction)
iii) Since every term u can recursively be decomposed as a Clifford product,
you can finally evaluate u * v by writing
u = w * x (where the grade of w is strictly less than that of u),
u * v = (w*x)*v = w*(x*v), and using the above recursion with 1*1 = 1
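Here is a sketch of this recursion in Python for the special case of a diagonal bilinear form (so all cross terms B(ei,ej), i /= j, vanish); multivectors are dicts mapping sorted index tuples to coefficients, and all names are mine, not from any existing package:

```python
# Sketch of the recursive Clifford product for a DIAGONAL bilinear form.
# Multivectors are dicts {blade: coefficient}, a blade being a sorted
# tuple of generator indices; B is the list of diagonal entries
# B[i] = B(ei, ei).  All names are illustrative.

def vec_clifford(i, blade, B):
    """e_i * blade = lc(e_i, blade) + e_i wedge blade, as {blade: coeff}."""
    if i in blade:
        # wedge part vanishes (ei/\ei = 0); the left contraction hits the
        # matching generator, with sign (-1)^pos from carrying e_i past
        # the earlier generators
        pos = blade.index(i)
        rest = blade[:pos] + blade[pos + 1:]
        return {rest: (-1) ** pos * B[i]}
    # contraction vanishes (off-diagonal B is zero); the wedge part
    # inserts i at its sorted position, one sign flip per transposition
    pos = sum(1 for j in blade if j < i)
    return {tuple(sorted(blade + (i,))): (-1) ** pos}

def clifford(u, v, B):
    """Clifford product of multivectors u, v."""
    out = {}
    for blade_u, cu in u.items():
        for blade_v, cv in v.items():
            # for a diagonal form a sorted blade IS the Clifford product
            # of its generators, so peel them off from the right:
            # (e_a * ... * e_z) * v = e_a * (... * (e_z * v))
            terms = {blade_v: 1}
            for i in reversed(blade_u):
                nxt = {}
                for b, c in terms.items():
                    for b2, c2 in vec_clifford(i, b, B).items():
                        nxt[b2] = nxt.get(b2, 0) + c * c2
                terms = nxt
            for b, c in terms.items():
                out[b] = out.get(b, 0) + cu * cv * c
    return {b: c for b, c in out.items() if c}

B = [1, 1]                                     # Cl(2,0)
print(clifford({(0,): 1}, {(0,): 1}, B))       # e1*e1 = Id  -> {(): 1}
print(clifford({(1,): 1}, {(0,): 1}, B))       # e2*e1 = -e1we2
print(clifford({(0, 1): 1}, {(0, 1): 1}, B))   # (e1we2)^2 = -Id
```

For a general (non-diagonal, possibly antisymmetric) form the recursion itself is unchanged; only the contraction would have to carry the full B instead of the diagonal shortcut used here.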
I am sorry to have too little time currently to sit down and just provide
the code; I have some other things on my table which are urgent.
Hope this helps, otherwise feel free to ask.
Ciao
BF.