
Thread: Evidence

  1. #1 Evidence 
    Forum Masters Degree thyristor's Avatar
    Join Date
    Feb 2008
    Location
    Sweden
    Posts
    542
    Do you have any favourite proof, one that is extra neat and short but still deep?


    373 13231-mbm-13231 373

  3. #2  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    I like this proof that sqrt(2) is irrational. Suppose otherwise. Then you can write sqrt(2) = x/y for some integers x, y. We may assume that x is the smallest positive integer such that you can write sqrt(2) = x/y for some y. Note that x/y > 1, and so x > y. Now draw a square with side length x, and draw two squares with side length y in opposing corners of the first square:

    [Figure: a square of side x with two overlapping squares of side y in opposite corners; D is the doubly covered central region and U is each of the two uncovered corner squares.]

    x<sup>2</sup> is clearly the area of the large square. But we may also calculate the area of the large square as:

    2y<sup>2</sup>-D+2U

    This is twice the area of either of the two smaller squares (2y<sup>2</sup>) minus the area that they both cover (D) plus the area that neither covers (2U). By definition of x and y, x<sup>2</sup> = 2y<sup>2</sup>. So we must have D = 2U. But U = (x-y)<sup>2</sup>, and D = (2y-x)<sup>2</sup>. Thus (2y-x)<sup>2</sup> = 2(x-y)<sup>2</sup>. Since x > y > 0, x-y is a positive integer. Note that 2 > x/y (since x/y = sqrt(2) < 2), so 2y > x, and thus 2y-x is a positive integer, and we have ((2y-x)/(x-y))<sup>2</sup> = 2, i.e. sqrt(2) = (2y-x)/(x-y). Since 2y < 2x, we have 2y-x < x. Thus we have found a representation of sqrt(2) with positive numerator smaller than x. This contradicts the choice of x. So sqrt(2) must be irrational.
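    The descent is easy to watch numerically (a sketch in plain Python; the function name is made up). If sqrt(2) = x/y exactly, then also sqrt(2) = (2y-x)/(x-y) with a strictly smaller numerator, which is the contradiction; applied to good rational approximations of sqrt(2), the same map keeps producing smaller ones.

    Code:
    from fractions import Fraction

    def descend(x, y):
        # One descent step from the proof: sqrt(2) = x/y forces sqrt(2) = (2y-x)/(x-y).
        return 2 * y - x, x - y

    # 239/169 is a continued-fraction convergent of sqrt(2).
    x, y = 239, 169
    for _ in range(4):
        print(Fraction(x, y), float(x) / y)   # numerators shrink: 239, 99, 41, 17
        x, y = descend(x, y)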



  4. #3  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    I like the application of the AM–GM inequality to the problem of finding the minimum surface area or the maximum volume of a cuboid. (That’s a solid with 6 rectangular faces such that adjacent faces are perpendicular to each other.) Suppose a cuboid has length, width and height l, w and h respectively. Its surface area is S = 2(lw+wh+hl) and its volume V = lwh.

    By the AM–GM inequality,

    (lw + wh + hl)/3 ≥ ((lw)(wh)(hl))<sup>1/3</sup> = ((lwh)<sup>2</sup>)<sup>1/3</sup>

    i.e.

    S/6 ≥ V<sup>2/3</sup>

    i.e.

    S ≥ 6V<sup>2/3</sup> (equivalently V ≤ (S/6)<sup>3/2</sup>)

    Hence we can find the minimum surface area for a given volume, or the maximum volume for a given surface area. Moreover, equality occurs if and only if lw = wh = hl, i.e. if and only if l = w = h.

    This is so much less messy than using calculus.
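    Here is a quick numeric sanity check of the bound (a sketch in plain Python; it assumes the inequality S ≥ 6V<sup>2/3</sup> reconstructed above): random cuboids always satisfy it, and a cube attains equality.

    Code:
    import random

    def surface_and_volume(l, w, h):
        return 2 * (l*w + w*h + h*l), l * w * h

    for _ in range(5):
        l, w, h = (random.uniform(0.1, 10) for _ in range(3))
        S, V = surface_and_volume(l, w, h)
        assert S >= 6 * V ** (2/3) - 1e-9     # the AM-GM bound holds
        print(round(S, 3), round(6 * V ** (2/3), 3))

    S, V = surface_and_volume(2, 2, 2)        # equality case: l = w = h
    print(S, 6 * V ** (2/3))                  # both 24.0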

  5. #4  
    Forum Masters Degree thyristor's Avatar
    Join Date
    Feb 2008
    Location
    Sweden
    Posts
    542
    Neat ones. I fancy the proof of the fundamental theorem of calculus.
    Unfortunately I can't write it here since I don't know how to write
    the integral symbol in a message.
    373 13231-mbm-13231 373

  6. #5  
    Forum Professor river_rat's Avatar
    Join Date
    Jun 2006
    Location
    South Africa
    Posts
    1,517
    The finite sums theorem: it's quite deep, but the proof is really neat.
    As is often the case with technical subjects we are presented with an unfortunate choice: an explanation that is accurate but incomprehensible, or comprehensible but wrong.

  7. #6  
    Forum Freshman
    Join Date
    May 2008
    Posts
    9
    The fact that you can solve problems in topology using methods from abstract algebra is IMHO one of the best things in pure mathematics.

  8. #7  
    Forum Masters Degree thyristor's Avatar
    Join Date
    Feb 2008
    Location
    Sweden
    Posts
    542
    How do I write an integration in my post?
    373 13231-mbm-13231 373

  9. #8  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by algebraic topology
    The fact that you can solve problems in topology using methods from abstract algebra is IMHO one of the best things in pure mathematics.
    And vice versa!

  10. #9  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by thyristor
    How do I write an integration in my post?
    Copy and paste ∫. To get upper and lower limits, use HTML sub- and superscript formatting (< sub > and < sup >). In general, refer to this post for math symbols:

    http://www.thescienceforum.com/More-...odes-8625t.php

    If you know LaTeX, this site offers free LaTeX hosting:

    http://math.b3co.com/

    Ignore the fact that it says the service is down, as I've posted a few images from the site recently.

  11. #10  
    Forum Masters Degree thyristor's Avatar
    Join Date
    Feb 2008
    Location
    Sweden
    Posts
    542
    Ok, so here comes the proof (in case anyone doesn't know it).

    Let f be continuous, and define
    F(x)=∫<sup>x</sup><sub>x<sub>0</sub></sub> f(t)dt
    Then:
    (F(x+Δx)-F(x))/Δx = (∫<sup>x+Δx</sup><sub>x<sub>0</sub></sub> f(t)dt - ∫<sup>x</sup><sub>x<sub>0</sub></sub> f(t)dt)/Δx = (∫<sup>x+Δx</sup><sub>x</sub> f(t)dt)/Δx
    According to the mean value theorem (for integrals) there is a point p between x and x+Δx such that f(p)Δx = ∫<sup>x+Δx</sup><sub>x</sub> f(t)dt
    Thus:
    (F(x+Δx)-F(x))/Δx = f(p)Δx/Δx = f(p)
    As Δx → 0 we have p → x, and since f is continuous, f(p) → f(x).
    Thus:
    F'(x)=f(x) (which completes the proof)
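    A quick numerical illustration of this (a sketch in plain Python; the helper F below is a left Riemann sum, and all names are made up): differentiate the numerically computed antiderivative and compare with f itself.

    Code:
    import math

    # F(x) = integral from 0 to x of f(t) dt, approximated by a left Riemann sum.
    def F(f, x, n=10000):
        dx = x / n
        return sum(f(i * dx) * dx for i in range(n))

    f = math.cos
    x, h = 1.2, 1e-4

    # The difference quotient of F should approximate f(x), as the proof shows.
    print((F(f, x + h) - F(f, x)) / h)   # ~0.3624
    print(f(x))                          # cos(1.2) = 0.3623...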
    373 13231-mbm-13231 373

  12. #11  
    Forum Masters Degree bit4bit's Avatar
    Join Date
    Jul 2007
    Posts
    621
    I like the derivation of the vector cross product.

    v = <a,b,c> = ai+bj+ck
    u = <d,e,f> = di+ej+fk

    v x u = (ai+bj+ck) x (di+ej+fk)

    = ad(ixi) + ae(ixj) + af(ixk) + bd(jxi) + be(jxj) + bf(jxk) + cd(kxi) + ce(kxj) + cf(kxk)

    ixi = jxj = kxk = 0,
    so,

    v x u = ae(ixj) + af(ixk) + bd(jxi) + bf(jxk) + cd(kxi) + ce(kxj)

    ixj = k, jxi = -k
    jxk = i, kxj = -i
    kxi = j, ixk = -j

    so,

    v x u = aek - afj - bdk + bfi + cdj - cei

    = (bf-ce)i + (cd-af)j + (ae-bd)k
    = (bf-ce)i - (af-cd)j + (ae-bd)k

    = i | b c | - j | a c | + k | a b |
        | e f |     | d f |     | d e |

    =
    | i j k |
    | a b c |
    | d e f |
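    The derived component formula is easy to check against a library implementation (a sketch; requires numpy, and the test vectors are arbitrary).

    Code:
    import numpy as np

    a, b, c = 1.0, 2.0, 3.0
    d, e, f = 4.0, 5.0, 6.0
    v = np.array([a, b, c])
    u = np.array([d, e, f])

    # The component formula derived above:
    by_formula = np.array([b*f - c*e, c*d - a*f, a*e - b*d])

    print(by_formula)        # [-3.  6. -3.]
    print(np.cross(v, u))    # same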

    Chance favours the prepared mind.

  13. #12  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    :?


    My >3D vector product definition leads to a definition of non-square determinants. A definition isn't always cheap if it must fit into already existing theorems and definitions.

    Here is the secret of non-square determinants. Let A be an nxm matrix with n<m; then you develop its determinant just like a square determinant until reaching the 2xp stage, p > 2.

    At this stage you write it as a sum of 2x2 matrices, one term for every combination of deleted columns (so that only 2 columns remain), marking the deleted columns in their correct place(s) by a symbol. Then you interchange columns until all empty columns are at the rightmost, and multiply the term by -1 if an odd number of columns were interchanged.

    That's it.
    It also matters what isn't there - Tao Te Ching interpreted.

  14. #13  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    So the determinant satisfies these properties:

    1. It's alternating in the columns of a matrix--i.e., if you swap two columns, the determinant changes by -1.

    2. It's linear in each column--i.e., fixing all of the other columns, the determinant gives a linear map in the one unfixed column.

    3. det(I) = 1

    These properties uniquely specify the determinant. If you try to come up with something like this for mxn matrices with m ≠ n, you'll find that... well, first, the 3rd condition doesn't make sense. But if you get rid of the 3rd condition, then the determinant is uniquely specified up to a scalar multiple on nxn matrices, and so you can ask whether the same holds for mxn, m ≠ n.

    The answer is no. When m > n, you'll always have at least two possible "determinants" which are not scalar multiples of each other. When m < n, there is necessarily a linear dependency amongst the columns of your matrix, and so anything satisfying 1 and 2 must be identically 0--in other words, there is not even a single candidate for determinant.
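    For concreteness, the three properties are easy to see numerically (a sketch; requires numpy, with numpy's det standing in for "the" determinant).

    Code:
    import numpy as np

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 10.]])

    # 1. Alternating: swapping two columns flips the sign.
    print(np.linalg.det(A[:, [1, 0, 2]]), -np.linalg.det(A))

    # 2. Linear in each column: scaling one column scales the determinant.
    C = A.copy()
    C[:, 0] *= 5
    print(np.linalg.det(C), 5 * np.linalg.det(A))

    # 3. Normalization: det(I) = 1.
    print(np.linalg.det(np.eye(3)))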

  15. #14  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Let me illustrate what I'm talking about.

    If v = [x,y]<sup>T</sup> is a column vector, then d(v) = x and d'(v) = y both give potential "determinants"--the alternating property is trivial, and the second property boils down to d being a linear map, since there's only one column. And clearly there does not exist a scalar so that d(v) = kd'(v) for all v.

    Now let u = [z,w] be a row vector, and suppose d satisfies 1 and 2. Clearly d([0,0]) = 0, so we may assume that u ≠ [0,0]. Suppose z ≠ 0, and consider d on the vector [z,z]. Swapping columns preserves this vector, so:

    d([z,z]) = -d([z,z])

    Thus d([z,z]) = 0. But d is linear in any one column if you fix the other, so that:

    d([z,w]) = d([z,(w/z)z]) = (w/z)d([z,z]) = 0

    So d is trivial. A similar argument works if w ≠ 0.

  16. #15  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    :?

    Yes, you get two determinants: one developed by column, D_C, and one developed by row, D_R.

    However, the basic properties (except D(A) = D(A transposed)) hold for some combination(s) of:

    D_R, D_C, n<m, m<n

    and never for neither D_R nor D_C.

    Of course you have D_R(A) = D_C(A transposed) and vice versa.

    Your condition 3 just generalises as an nxn identity matrix with zero last column vector(s). Condition 1 holds for D_R(A), A a row-longest matrix, under interchange of rows, and for D_C(A), A a column-longest matrix, under interchange of columns.

    I just need to check condition 2 and the following post.
    It also matters what isn't there - Tao Te Ching interpreted.

  17. #16  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    I just tell it like it is.

  18. #17  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    :?

    I think you are right about the linear dependence; however, for A a row-longest matrix with linearly independent rows, D_R(A) behaves just like the square determinant. D_C(B), for B a column-longest matrix, works as its dual.

    Did you get the generalisation of 3:

    10000
    01000
    00100

    D_R for this matrix is equal to 1.

    I will send someone the article if they help me to get it published.
    It also matters what isn't there - Tao Te Ching interpreted.

  19. #18  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    What is your determinant's use? What does it mean? The square determinant is useful and meaningful--for example:

    1. It tells you when a matrix is invertible.

    2. It tells you how volumes change under your matrix.

    3. It helps you find eigenvalues by calculating the characteristic polynomial.

    None of these really makes sense in the context of non-square matrices, so I don't know what you would use your non-square determinant for.

  20. #19  
    Forum Sophomore
    Join Date
    May 2008
    Posts
    121
    Quote Originally Posted by river_rat
    Finite sums theorem, its quite deep but the proof is really neat.
    Yeah that's my favourite too.

  21. #20  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    :?

    D_R(A) plays the same role as the square determinant for (right) inverses like:

    AB = I

    where A is nxm and a row-longest matrix, B is the inverse and is mxn, and I is nxn. I haven't checked D_C yet.

    I don't know what you mean by 2, and I'll see if I can generalise the eigenvalues.
    It also matters what isn't there - Tao Te Ching interpreted.

  22. #21  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by talanum1
    D_R(A) plays the same role as the square determinant for (right) inverses like:

    AB = I

    where A is nxm and a row-longest matrix, B is the inverse and is mxn, and I is nxn. I haven't checked D_C yet.
    So you mean that D_R(A) = 0 iff no right inverse exists? It's easy enough to check this by just checking the rank of A, e.g. by doing Gaussian elimination: a right inverse exists iff the rank of A is as big as it can be, i.e. iff the rank equals the number of rows (which requires A to have at least as many columns as rows).
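    Here is the rank criterion in action (a sketch; requires numpy, and the explicit right inverse B = A<sup>T</sup>(AA<sup>T</sup>)<sup>-1</sup> assumes full row rank).

    Code:
    import numpy as np

    A = np.array([[1., 0., 2.],
                  [0., 1., 1.]])         # 2x3, full row rank

    print(np.linalg.matrix_rank(A))      # 2 = number of rows

    B = A.T @ np.linalg.inv(A @ A.T)     # a right inverse
    print(np.round(A @ B, 10))           # the 2x2 identity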

    I don't know what you mean by 2
    Let A be an nxn matrix, and consider the parallelotope with one vertex at the origin and sides meeting at the origin given by the column vectors of the matrix. Then the volume of this parallelotope is the absolute value of the determinant of A.
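    For example (a sketch; requires numpy), in 2D the parallelotope is a parallelogram, and shearing it, i.e. adding a multiple of one column to another, changes its shape but not its area:

    Code:
    import numpy as np

    A = np.array([[2., 1.],
                  [0., 3.]])
    # Columns (2,0) and (1,3) span a parallelogram of base 2 and height 3.
    print(abs(np.linalg.det(A)))     # 6.0

    A[:, 1] += 7 * A[:, 0]           # shear: area is preserved
    print(abs(np.linalg.det(A)))     # still 6.0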

    I feel like this must fail at some point. I mean, first, this doesn't make sense, because if A has more rows than columns, the range of A is necessarily going to be a strict subspace of the codomain. So, really, the volume of anything in this space is 0. But you'll argue that, well, there's a natural notion of volume in this subspace, and this is what your determinant gives. I'm not so sure it does, but even if it does, then your determinant is necessarily not multiplicative: I can come up with a determinant 1 nxn matrix B with eigenvalues a < 1, b > 1, such that the range of A is contained in the a-eigenspace and the orthogonal complement of the range of A is contained in the b-eigenspace. So then D_R(A) and D_R(AB) are necessarily different, which is weird and, quite frankly, undesirable given that det(B) = 1.

    and i'll see if I can generalise the

    eigenvalues.
    If the domain and codomain are not equal, then what does it mean for a vector in one of these spaces to be a scalar multiple of the other?

    Talanum, you have to realize that someone must have thought of this all before you, and, furthermore, they must have realized that it doesn't really make sense, at least in the context you're trying to do things. Take a look at multilinear algebra. You'll be surprised to see that a lot of the questions you're asking have already been answered about as well as they can be.

  23. #22  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    :?

    1. It is "if D_R(A) = 0 then no right inverse exists", because you need to divide by this to get the inverse. However, D_R(A) may be zero even if the rows are linearly independent. The test for linear dependence goes like this: reduce D_R(A) until reaching the 2xp stage; if every term in this expression is zero, then the rows of A are linearly dependent.

    2. I'll look at that later.

    3. I don't know how to get the domain and codomain, nor how to get their spaces.

    4. Are you sure??? Why do you have so many questions then???
    It also matters what isn't there - Tao Te Ching interpreted.

  24. #23  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    :x

    You must think in terms of a different row/column generalisation! det(A) alone says something about either rows or columns; we are moving beyond that!

    As for the eigenvalues, there is a generalisation: use I =

    1100
    0110
    0011

    in (A - (lambda)I), for the 3x4 case. This choice of "I" comes from the fact that a 3x4 matrix is reducible to a double-column matrix analogous to the one above by Gauss-Jordan elimination. Use the linear dependence test as above.
    It also matters what isn't there - Tao Te Ching interpreted.

  25. #24  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by talanum1
    Are you sure??? Why do you have so many questions then???
    Okay, I am sure. People have asked and answered these questions before. There's a subject called multilinear algebra, one aspect of which deals with generalizations of determinants, i.e. alternating multilinear functionals. Take a look at this.

    I'm asking you questions because I'd rather have a discussion with you than just dismiss you outright. Asking you questions allows you to think of aspects of your discoveries that you haven't considered, allowing you to come to conclusions yourself instead of me just leading you to them.

    Please, don't be angry. There's no reason for it. A lot of math has been accomplished before you. You're bound to think of a million different mathematical things which have been done before. I'm just trying to make you aware of them. That's no reason to get mad at anything--neither me, the person who did the original work, the whole world, fate, nor yourself.

    Instead, look at how wonderful this is! If you have a mathematical thought and cannot flesh it out, check the literature--you might find the result you're looking for, or you might get some inspiration. You can string along more delicate ideas to achieve more interesting results by allowing yourself to consider that which has been done before. Remember:

    Quote Originally Posted by Isaac Newton
    If I have seen further, it is by standing on the shoulders of giants.

  26. #25  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    :?

    There is nothing like this in the entire category of determinants at Wikipedia.

    Multilinear algebra is about tensors, tensor spaces and determinants of dimension larger than two - but I'll continue looking.

    You specified det(AB) for a nonsquare operand. An nxn matrix B with det(B) = 1 has n eigenvalues (what about multiplicities)?

    Your volume problem is easily solved: if the matrix is 4x3 you have a shape defined by three edges in 4D; if it is 3x4 you have a shape defined by four edges in 3D.

    As for the precedence: even if there is a proof that the generalisation is impossible, it would probably require all of the properties to hold, since you cannot anticipate what properties to take as holding (half-holding, holding in a special case, etc.) without actually investigating the specific generalisation in detail. It is also unlikely that someone would guess the right system based on elegance: Euclidean space is easily picturable but failed to describe physical space.

    For the linear dependence I actually just proved that if the rows of A (A row-longest) are linearly dependent then D_R(A) = 0, and for the converse that, for 2xp matrices, if every term at the stage right after the 2xp stage is zero then the two rows are linearly dependent.

    I'm not sure about the eigenvalues.
    It also matters what isn't there - Tao Te Ching interpreted.

  27. #26  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by talanum1
    :?
    Would you stop beginning every single post like this? It's really annoying.

    Multilinear algebra is about tensors, tensor spaces and determinants of dimension larger than two - but I'll continue looking.
    The determinant can be viewed as an alternating tensor.

    As for the precedence: even if there is a proof that the generalisation is impossible, it would probably require all of the properties to hold, since you cannot anticipate what properties to take as holding (half-holding, holding in a special case, etc.) without actually investigating the specific generalisation in detail. It is also unlikely that someone would guess the right system based on elegance: Euclidean space is easily picturable but failed to describe physical space.
    What is the "right system"? That's what I've been asking all along! You need to come up with reasons why your determinant is "right", and what else does "right" mean than useful and meaningful?

    It looks to me like all you did was say, "Okay, let me take the cofactor expansion of determinant and push and shove it until it fits nonsquare matrices." You claim that elegance is not a good indicator that something is the "right" generalization. Why, then, is forcing an old construction onto something a good indicator that you're on the "right" path? It's the same game: in one case, you're being guided by properties; in the other, you're being guided by calculations.

    But, actually, I argue that elegance (in the sense you're using it) is a good indicator that something is right. Determinants are useful and meaningful because of their properties. So for a generalization to be useful and meaningful, and for it to be called a determinant, it should satisfy many of these properties or related properties.

    In any case, the burden of showing why your generalization is worthwhile is on you. The burden of studying the properties of the generalization is on you. The burden of connecting this to previous results is on you. Hence you have the responsibility to sit down, learn some advanced linear algebra, and poke around with your determinant to figure out its properties. (I mean, come on dude, it's clear you didn't do any of this given that you didn't even think to relate your determinant to linear dependence of rows or columns until I told you to!)

    I'm not sure about the eigenvalues.
    It looks like there is a decent generalization of eigenvalues for nonsquare matrices--singular values. Basically, you can find orthogonal bases of your domain and codomain so that each basis vector in the domain gets sent to a multiple of a basis vector in the codomain or 0, with no two domain basis vectors being sent to a nonzero multiple of the same codomain basis vector. Maybe your determinant can tell you something about this?
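    A small demonstration of singular values (a sketch; requires numpy): each right-singular vector of a nonsquare matrix is sent to a multiple of a left-singular vector, exactly as described.

    Code:
    import numpy as np

    A = np.random.rand(2, 5)            # an arbitrary nonsquare matrix
    U, s, Vt = np.linalg.svd(A)         # A = U @ diag(s) @ Vt

    print(s)                            # the singular values
    for i, sigma in enumerate(s):
        # The i-th right-singular vector maps to sigma_i times the
        # i-th left-singular vector.
        print(np.allclose(A @ Vt[i], sigma * U[:, i]))   # True, True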

  28. #27  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    I did not push and shove the definition of the square case so that it fits
    nonsquare matrices. It developed quite naturally out of my definition for the
    vector product of two vectors of more than 4 dimensions. I proved that the
    5D vector product is zero if the two vectors are linearly dependent. This
    definition then translates, just like the 3D case, into something computable
    by a determinant (just a non-square one). This definition does break with the
    accepted way of presentation (usual elegance) because you need the formula
    coupled with an extendable number triangle - which is why I pursued it, and
    also why I thought it unlikely to have been developed previously. It doesn't
    look like all notions of elegance coincide.

    I did investigate 7 properties I had; I thought the linear dependence was
    guaranteed by the sum-of-rows and constant-multiple-of-rows rules. It may have
    looked like I implied the work was finished, but I didn't intend that.

    It (they) do(es) satisfy many of the properties, just with a different row-column
    duality (replace "right" with "left", R with C, n<m with n>m, and "rows" with
    "columns" and vice versa).

    Tell me if this makes sense: if we have

    Ax = Lx = L I x

    which is identical to the generating relation of eigenvalues in the square case.
    Now we may just specify A to be mxn, x as nx1 and I as the multidiagonal
    mxn matrix with ones on the multidiagonals and zeros elsewhere. Then the
    logic of eigenvalues is identical to this (we must just be careful of left and
    right multiplication):

    (A - L I )x = 0

    where we would need to use a left inverse here in order to solve for x, so
    we need D_C, and A and I must be column-longest; or:

    xA = Lx = L xI

    x(A - L I ) = 0

    and take D_R of the matrix in brackets equal to zero (for A and I row-longest).
    The choice of I can be said to follow the logic of the square case in that I is
    in the shape (corresponding zero entries) of whatever state A can be reduced to
    by Gauss-Jordan elimination.

    As for D_R(AB) ≠ D_R(A)det(B): the Cauchy-Binet formula is an example of a
    generalisation (for AB square) where we need a sum on the right side:
    det(AB) = SUM det(A_S)det(B_S).
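    The Cauchy-Binet formula is easy to verify in the 2x3 case (a sketch; requires numpy, with S running over the 2-element column subsets):

    Code:
    import numpy as np
    from itertools import combinations

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.]])
    B = np.array([[7., 8.],
                  [9., 1.],
                  [2., 3.]])

    lhs = np.linalg.det(A @ B)
    rhs = sum(np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
              for S in combinations(range(3), 2))
    print(lhs, rhs)   # both ~90.0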

    Are the domain and codomain what I know as the row and column space? I'll hit
    the advanced books for the last question.
    It also matters what isn't there - Tao Te Ching interpreted.

  29. #28  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    If you have an mxn matrix A, the domain is R<sup>n</sup> and the codomain is R<sup>m</sup>. This is because A maps n-component vectors to m-component vectors. The column space is the range of A, the row space is the orthogonal complement of the null space (or kernel) of A.

    Generally, if you have a function f:X->Y, X is the domain, Y is the codomain, and f(X) is the range.

    ----------------------

    Okay, so what's your generalization of your vector cross product? And how does it relate to your determinant?

  30. #29  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    The vector product definition is in another post at this forum, dated about
    December 2007.

    The nonsquare determinant follows from the 5D vector product written out in
    components and then written as a sum of 2x2 determinants (in the correct order
    of matrices, with the correct signs).

    Then, by notational extension of the 3D vector product written as a
    determinant, we get (for uxv, where [n] is the n'th unit vector):

    [1] [2] [3] [4] [5]
    u_1 u_2 u_3 u_4 u_5
    v_1 v_2 v_3 v_4 v_5

    and the first component comes from:

    [1] 0 0 0 0
    u_1 u_2 u_3 u_4 u_5
    v_1 v_2 v_3 v_4 v_5

    Develop this by row 1 and you get a 2x4 determinant, which fits the other
    formula only if the determinant definition is as stated.

    The same works for the other components. The generalisation is then
    already determined.

    The 4D vector product is proven to be a Lie algebra (in my paper) and the nD
    one is conjectured to be one as well. It seems logical to assume the nD one
    must also be a Lie algebra because its computation method is identical (in
    logic) to the 4D case?

    Doesn't, for example,

    00000
    0a00b

    map the second row vector in R^5 to two vectors in R^2, with components
    (0,a) and (0,b)? Or is the second row vector really in R^2? Whereas

    11c11
    0a00b

    can be said to map the second row vector in R^5 to two vectors in R^2 with
    components (1,a) and (1,b), OR to map the first row in R^5 to four vectors
    in R^2? While in this case the second row cannot be said to be in R^2?
    It also matters what isn't there - Tao Te Ching interpreted.

  31. #30  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by talanum1
    The vector product definition is in another post at this forum, dated about December 2007.
    I found the post and, honestly, your description of your cross product is inscrutable. Part of it is the awful formatting of these forums. Do you have, say, a pdf version of your paper?

    The 4D vector product is proven to be a Lie algebra (in my paper) and the nD one is conjectured to be one as well. It seems logical to assume the nD one must also be a Lie algebra because its computation method is identical (in logic) to the 4D case?
    By "the vector product is a Lie algebra" you mean that the vector product makes R<sup>n</sup> into a Lie algebra, I'm assuming.

    Doesn't for example:

    00000
    0a00b

    map the second row vector in R^5 to two vectors in R^2 each with components (0,a) and (0,b)? Or is the second row vector really in R^2?
    Uh, a map sends a vector to exactly one vector. So what you said makes no sense. This matrix sends the second and fifth standard basis vectors in R<sup>5</sup> to [0,a] and [0,b], respectively, and sends the other standard basis vectors to [0,0] (these should all be column vectors, but it's easier to write them as rows). Then the (again, column) vector [c,d,e,f,g] gets mapped to, by linearity, [0,ad+bg].

    The rows of a 2x5 matrix are always in R<sup>5</sup>, the columns always in R<sup>2</sup>. The null space and row space are both inside R<sup>5</sup>, the column space inside R<sup>2</sup>.
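    The computation is easy to confirm (a sketch; requires numpy, with arbitrary values chosen for a, b and for the input vector):

    Code:
    import numpy as np

    a, b = 2.0, 3.0
    M = np.array([[0., 0., 0., 0., 0.],
                  [0., a,  0., 0., b ]])

    c, d, e, f, g = 1.0, 4.0, 5.0, 6.0, 7.0
    print(M @ np.array([c, d, e, f, g]))   # [0., a*d + b*g] = [0., 29.]
    print(M @ np.eye(5)[1])                # 2nd basis vector -> [0., a]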

  32. #31  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    I have it in RTF format (Microsoft Word or WordPad), or I can scan it. I'll need your email address to send an attachment.

    The paper is in better shape than the post.

    Is the basis of each classified Lie algebra the only possible one for each type?
    It also matters what isn't there - Tao Te Ching interpreted.

  33. #32  
    Forum Sophomore
    Join Date
    Jul 2007
    Location
    South Africa
    Posts
    196
    No one noticed that:

    Lx ≠ LIx

    in the case of I being column-longest; however, the eigenvalues can still be defined as coming from:

    Ax = LIx.

    I actually don't know if we can call them eigenvalues, since D_C(I) ≠ 1.

    We actually have a correspondence (equivalence) to the Cauchy-Binet formula:

    det AB = D_R A @ D_C B

    if A is 2x3 and B is 3x2, where the product "@" is defined similarly to the dot product (treating 2x2 determinants like numbers, with the correspondence given by the deleted column number in A being equal to the deleted row number in B, D_R/D_C taking precedence). That we need a special product should not be surprising, since D_R/D_C are generalisations of "det".

    I will see if this holds in the more general cases.
    It also matters what isn't there - Tao Te Ching interpreted.
