Talk:Covariance and contravariance of vectors

From Wikipedia, the free encyclopedia

Co-/contra- variance of 'vectors' (per se) or of their components[edit]

It is my understanding that a vector, per se (i.e. not its representation), is neither covariant nor contravariant. Instead, covariance or contravariance refers to a given representation of the vector, depending on the basis being used. For example, one can write the same vector in either manner, as v = v^i e_i = v_i e^i. I think this point (in addition to the variance topic as a whole) can be both subtle and confusing to people first learning these topics, and thus the terminology should be used very carefully, consistently, and rigorously. I tried to change some of the terminology in the article to say "vectors with covariant components" instead of "covariant vectors" (for example), but this has been reverted as inaccurate. So I wanted to open a discussion in case I am mistaken or others disagree. @JRSpriggs. Zhermes (talk) 16:45, 2 January 2020 (UTC)[reply]

I have several textbooks that describe covariance and contravariance of tensors. Most of them refer to the tensors themselves as having these properties; however, some of them indicate that these properties apply only to the components and that the tensors are invariant. I am inclined to agree with the latter, that is, that only the components change—not the tensors. I think we should point out in the article two things: 1. what we think is the correct parlance, and 2. the fact that some textbooks say otherwise. I can supply citations.—Anita5192 (talk) 17:54, 2 January 2020 (UTC)[reply]
The key point to understand is that the distinction between "covariant" and "contravariant" only makes sense for vector fields in the context of a preferred tangent bundle over an underlying manifold.
Otherwise, all the structures could just be called simply "vectors" and the choice of one basis or another would be completely arbitrary.
So, restricting ourselves to structures built from the tangent bundle, the vectors in the tangent bundle itself are called "contravariant", and the vectors in the dual or cotangent bundle are called "covariant". Tensor products of tangent and cotangent bundles may have both covariant and contravariant properties (depending on the index considered). JRSpriggs (talk) 02:13, 3 January 2020 (UTC)[reply]
I'm afraid that, in a strict sense, there are two distinct meanings in play here, and unless we distinguish them, confusion will ensue. One is a description of components, as mentioned above; the meaning used by JRSpriggs essentially means "is an element of the (co)tangent bundle". The former makes sense in the context of a vector space and its dual space, independently of whether these are associated with a manifold. It is unfortunate that texts often conflate a tensor with its components with respect to a basis. I expect that the interpretation of a "contravariant vector" as "an element of the tangent bundle" arises from exactly this conflation, and it should be avoided (there is a better term: a "vector field") in favour of restricting the term to describing the behaviour of components with respect to changes of basis. If in doubt about the confusion implicit in the mixed use of the terms, consider this conundrum: the set of components of a vector is contravariant (the components transform via the inverse of the transformation of the basis), whereas the basis (a tuple of vectors) is covariant (it transforms, by definition, as the basis does, in the first sense), yet we would describe the elements of the basis as being contravariant (in the sense of belonging to the tangent space), making them simultaneously "covariant" and "contravariant". Let's not build such confusion into the article; let's restrict the term to its (no doubt original) component sense. —Quondum 18:06, 31 March 2020 (UTC)[reply]

I had precisely the same question, that I opened in math stackexchange. I haven't got any satisfactory answer yet. I hope somebody will pick this thread and continue the discussion. https://math.stackexchange.com/questions/4297246/conceptual-difference-between-covariant-and-contravariant-tensors — Preceding unsigned comment added by 99.62.6.96 (talk) 05:24, 29 December 2021 (UTC)[reply]

Introduction for the layman[edit]

I would add a paragraph to the introduction that is easier to understand. Maybe something like this:

contravariant vectors

Let's say we have three basis vectors a, b and c, each of length 1 meter. Then d = (3, 2, 4) is a vector (≙ (3 m, 2 m, 4 m)). If we now double the length of every basis vector, so that |a| = |b| = |c| = 2 m, then d must be (1.5, 1, 2) in the new a, b, c basis (but d would still be (3 m, 2 m, 4 m)).

covariant vectors

Let's say f is a scalar function and the basis of the coordinate system is a, b and c, with |a| = |b| = |c| = 1 meter. Then suppose that ∇f = (2, 5, 9), so that the slope in the x-direction is 2 per meter. If we now double the length of each basis vector, so that |a| = |b| = |c| = 2 m, then ∇f becomes (4, 10, 18). The slope in the x-direction is unchanged: 4 per 2 m = 2 per meter.
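Both examples above can be checked numerically. This is a minimal NumPy sketch (the identity basis and the specific component values are just the numbers from the two examples):

```python
import numpy as np

# Contravariant example: basis vectors a, b, c, each 1 m long,
# taken here as the columns of the identity matrix.
basis = np.eye(3)
d_comp = np.array([3.0, 2.0, 4.0])   # components of d, i.e. (3 m, 2 m, 4 m)
d = basis @ d_comp                   # the geometric vector itself

# Double every basis vector: the components transform with the *inverse*
# of the basis change, so the vector d itself stays fixed.
new_basis = 2.0 * basis
d_comp_new = np.linalg.solve(new_basis, d)   # -> [1.5, 1.0, 2.0]

# Covariant example: gradient components transform *with* the basis.
grad = np.array([2.0, 5.0, 9.0])     # slopes per 1-m step
grad_new = grad @ new_basis          # slopes per 2-m step -> [4.0, 10.0, 18.0]
```

Note that `new_basis @ d_comp_new` reproduces the original vector d exactly, which is the sense in which the vector itself is invariant while its components are contravariant.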

CaffeineWitcher (talk) 12:59, 23 May 2020 (UTC)[reply]

Covariance of gradient[edit]

I am a bit confused : this article takes the gradient to be a prime example of a "covariant vector", but the Gradient article claims that it is a contravariant vector. Which is correct? (Sorry if this is the wrong place to ask) --93.25.93.82 (talk) 19:56, 7 July 2020 (UTC)[reply]

After thinking a little, I think I understand where my confusion comes from: the gradient of a function is covariant with respect to the input basis, but contravariant with respect to the output basis. Is this a valid interpretation? If it is, maybe this could be clarified in the lead of this article. --93.25.93.82 (talk) 20:08, 7 July 2020 (UTC)[reply]
I do not understand your second comment. The gradient article is wrong because it is choosing the primary meaning as the one which includes the metric (i.e. the dot product) rather than the one which is merely the derivative. JRSpriggs (talk) 07:31, 9 July 2020 (UTC)[reply]
I back my position up with the reference Gravitation (book) page 59. JRSpriggs (talk) 05:12, 15 July 2020 (UTC)[reply]
I meant that if the gradient is seen as a linear form, say for instance from R^n to R, and P (resp. Q) is an invertible matrix corresponding to a change of basis in R^n (resp. a change in R), then the gradient g will be expressed in the new bases as Q^(-1) g P (assuming of course that g was originally expressed as a row matrix in the old bases). — Preceding unsigned comment added by 93.25.93.82 (talk) 13:03, 10 July 2020 (UTC)[reply]
I find it 1) not likely that the author of this bit is right and everyone else who writes on the topic is wrong, and 2) pretty obvious that the gradient is covariant. If the temperature of a fluid drops by 1 degree per meter (a temperature gradient), then it's going to drop by ten degrees per ten meters, so if you change your unit of measure from one metre to 10 metres, the temperature gradient (change in degrees per unit of distance) likewise multiplies by 10. If the author of the "actually, gradient is contravariant" bit would care to plug that simple example into their formula and explain why they get a different result, perhaps this would be helpful. 203.13.3.93 (talk) 03:38, 29 January 2024 (UTC)[reply]
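The unit-change argument above can be checked numerically. A minimal sketch (the specific gradient and scale factor are just the numbers from the comment):

```python
import numpy as np

# Temperature drops 1 degree per metre along x.
grad_m = np.array([-1.0, 0.0, 0.0])       # degrees per metre

# Change the unit of length from 1 m to 10 m: the basis vectors are
# scaled by J = 10*I, and the covariant gradient components scale with J.
J = 10.0 * np.eye(3)
grad_10m = grad_m @ J                      # -> [-10.0, 0.0, 0.0]
```

The x-component becomes -10 degrees per 10-metre step, i.e. the same physical slope, which is exactly the covariant behaviour described above.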
There are two different conventions for covariant versus contravariant: one gives primacy to covectors (differential forms) and one gives primacy to vectors. The gradient of a scalar field is a covector field; if vectors are covariant then it is contravariant, and if vectors are contravariant then it is covariant. I don't recall which convention the "Texas Telephone Directory" (MTW) uses. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:15, 29 January 2024 (UTC)[reply]
I think a few comments here reflect confusion arising from the fact that the gradient of a function on R^3 can be interpreted equally well either as a covariant or a contravariant vector, because R^3 has a canonical Riemannian metric. On a more general space, the derivative of a function is covariant (i.e. a 1-form), and if there is a metric it can be converted, in a way depending on the particular metric, to a contravariant one (i.e. a tangent vector). In common usage the former is called the differential of the function and the latter is called the gradient. Gumshoe2 (talk) 15:55, 29 January 2024 (UTC)[reply]

Example of vector which is not contravariant is arguably not a vector[edit]

Currently: "On the other hand, for instance, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract vector, but this vector would not be contravariant, since a change in coordinates on the space does not change the box's length, width, and height: instead these are scalars." I think this is not helpful since there is no obvious meaning to the vector space of box [length, width, height]. What do vector scaling and vector addition correspond to? I suggest this example be removed. Intellec7 (talk) 05:19, 28 August 2020 (UTC)[reply]

OK. Go ahead and delete that sentence. JRSpriggs (talk) 14:47, 28 August 2020 (UTC)[reply]

Geometrical construction of coordinates[edit]

If you have a basis in a three-dimensional Euclidean space, you can construct the coordinates of a given point by drawing planes through the point parallel to each pair of basis vectors. Those planes intersect the coordinate axes, and the distance of each intersection point from the origin, divided by the length of the corresponding basis vector, gives you the contravariant components. But how do you construct the coordinates with respect to the dual basis? 2003:E7:2F3B:B0D8:11C:4108:841E:BA24 (talk) 07:00, 17 April 2021 (UTC)[reply]
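This is not a ruler-and-compass construction, but the dual-basis components can at least be computed algebraically. A sketch assuming the Euclidean dot product (the particular basis and point are arbitrary examples):

```python
import numpy as np

# Columns of E are a non-orthogonal basis e_1, e_2, e_3 of Euclidean R^3.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Dual basis e^1, e^2, e^3, defined by e^i . e_j = delta^i_j; with the
# Euclidean dot product its vectors are the columns of inv(E).T.
E_dual = np.linalg.inv(E).T

p = np.array([2.0, 3.0, 1.0])

x_contra = np.linalg.solve(E, p)   # components of p w.r.t. e_1, e_2, e_3
x_co = E.T @ p                     # components of p w.r.t. the dual basis,
                                   # obtained by dotting p with each e_i
```

Geometrically, dotting with the original basis vectors corresponds to orthogonal projection onto them (scaled by their lengths), and both `E @ x_contra` and `E_dual @ x_co` reconstruct the same point p.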

Mis-use of the word "scalar"[edit]

The word "scalar" is used in the page to mean something that multiplies a unit vector. But a scalar is supposed to be coordinate-free or gauge-invariant or invariant to changes of coordinates. The quantities that multiply unit vectors do not have this property, because the unit vectors change (and hence the components of the vectors change) as you change coordinates. --David W. Hogg (talk) 12:58, 16 May 2021 (UTC)[reply]

I tried to fix that. Let me know if there is still a problem. JRSpriggs (talk) 18:26, 16 May 2021 (UTC)[reply]
What about scalar densities (or, if you prefer to work from a concrete example, 'density' as in kg/m^3)? These are rank-zero tensors (or tensor fields), but nevertheless they are covariant. I can't recall the name, but one text I have read suggests that you can also have a 'scalar capacity', which can also be a field, and which is a contravariant scalar. — Preceding unsigned comment added by 203.13.3.89 (talk) 03:26, 29 January 2024 (UTC)[reply]

Vectors vs. Covectors[edit]

I have some "issues" with the fundamental jargon used in this article. I would like to help improve it; but I don't want to start an editing war. So, let me test the waters with a few comments:

Vectors and covectors are not the same thing. And they are both invariant under (proper, invertible) linear transformation. (Note: this is the passive/alias viewpoint of transformations). It is a semantic error to say that a vector is contravariant (or that a covector is covariant). The co/vectors, and tensors in general, are invariant under linear transformations. If the space has a metric, then one can "convert" a vector into a covector, and vice versa. But the existence of a metric is not obligatory and, in fact, confuses people into thinking that vectors and covectors are fungible. They are not. Vectors are linear approximations to the curves defined by the intersection of coordinate functions; covectors are linear functional approximations to the level (hyper)surfaces of a single coordinate function. The figure at the top of the article kind of hints at this, but then garbles the message by overlaying arrow quantities with level-surface quantities on the right-hand side. Too bad. Remove those blue arrows and you'd have a right proud representation of covectors in a cobasis.

Co/contra-variance is a property of the components and of the basis elements. For a vector v = v^i e_i, the components v^i are contravariant and the basis vectors e_i are covariant; for a covector w = w_i e^i, the components w_i are covariant and the basis covectors e^i are contravariant. In either case, when you contract the components with the basis—one of which is covariant and the other contravariant—then you get an invariant quantity, as required of a tensor.
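The invariance of that contraction can be illustrated numerically. A sketch (the change-of-basis matrix A and the component values are arbitrary assumptions):

```python
import numpy as np

E = np.eye(3)                        # original basis, as columns
v = np.array([1.0, 2.0, 3.0])        # contravariant components of a vector
w = np.array([4.0, 0.0, 1.0])        # covariant components of a covector

A = np.array([[2.0, 1.0, 0.0],       # an invertible change of basis
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

E_new = E @ A                        # basis vectors transform covariantly
v_new = np.linalg.solve(A, v)        # vector components: with inv(A)
w_new = w @ A                        # covector components: with A

geometric_vector = E_new @ v_new     # equals E @ v: the vector is invariant
pairing = w_new @ v_new              # equals w @ v: the contraction is invariant
```

The inverse factors cancel in both contractions, which is exactly the sense in which the vector and the pairing are invariant while their components are not.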

I won't try to define/defend here (yet;-) what co/contra-variant mean. (Spoiler alert: contravariant quantities transform as the Jacobian of a coordinate transformation; covariant quantities transform as the inverse Jacobian; this seems backwards to what I, for one, would expect from the concepts of co- and contra-; but it is what it is!)

To whomever has purview over this article: if these comments make sense and seem worth the trouble of editing the article, please let me know and I'll try to collaborate. On the other hand, if you think these are distinctions without a difference—or worse, misguided—then I'll stand down.--ScriboErgoSum (talk) 08:25, 15 November 2021 (UTC)[reply]

NPOV[edit]

The article describes covariance and contravariance in terms of coordinates and components, a perspective that is rather dated. The terms have a meaning independent of any choice of basis or coordinates, and the article should reflect that. There is a lot of variation in the literature, but essentially there are three styles:

  1. Define the tangent and cotangent bundles independently, then prove duality.
    1. Authors often define tangent vectors as equivalence classes of curves through a point.
    2. Authors often define cotangent vectors in the language of germs.
  2. Define the tangent bundle, define the cotangent space at a point as the dual of the tangent space at that point and then define the cotangent bundle.
  3. Define the cotangent bundle, define the tangent space at a point as the dual of the cotangent space at that point and then define the tangent bundle.

Tensors can then be defined as either tensor products or as multilinear maps. It is common to just classify tensors by the covariant and contravariant ranks, but if there is a (pseudo)metric involved then order matters because of raising and lowering of indexes. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 11:32, 1 March 2022 (UTC)[reply]

This article mostly describes tensors; what you are describing is tensor field. But taking your point, there should be some section for the tensor product language on this page. Even so, there is no need to choose one way over another. (I object to the idea that bases and coordinates are dated.) Gumshoe2 (talk) 07:26, 13 March 2022 (UTC)[reply]
There is a large amount of material in the Tensors article that does not belong there, but should rather be in Tensor fields. The distinction between covariant and contravariant is really only relevant for tensor fields on a manifold, but as long as it is in this article the text should reflect its character.
I didn't say that coordinates and bases are dated, but rather that defining, e.g., tensor, covariant, in terms of coordinates and bases rather than intrinsically is dated. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 12:51, 13 March 2022 (UTC)[reply]
I agree that the tensor product definition should be given here, but I think it should be given together with the present one. Why do you say that co(/ntra)variance is only relevant for tensor fields? There is of course a distinction between an element of a vector space and an element of the dual vector space, for instance. Gumshoe2 (talk) 17:37, 13 March 2022 (UTC)[reply]
Whoops! Yes, of course there is a difference between V and V* for an abstract vector space V, not just for tangent spaces. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 19:21, 13 March 2022 (UTC)[reply]

Inconsistency of note 1 with the main text[edit]

I am adding this (3/26/2022) without reading the below, because I have a comment on the definition section. We have f = (X1, ..., Xn), with each of the Xi being a basis vector. I was going to add a remark that this makes f a matrix, and indeed it is used as a matrix in the unnumbered equation v = f v[f]. I decided not to add that comment, as it contradicts Note 1. In the note preceding Eq. 1 it says "regarding f as a row vector whose entries are the elements of the basis". Maybe I am naive, but I find this pretty confusing, as I've always considered matrices and vectors to be different objects. If f is to be considered a row vector, but with each element a basis vector instead of just a number (as one usually thinks of vectors), this needs to be explained. If the note is incorrect, then it should be removed. — Preceding unsigned comment added by 76.113.29.12 (talkcontribs) 14:45, 26 March 2022 (UTC)[reply]

I removed the template. It is sometimes useful when dealing with matrices to consider a matrix as a column vector of row vectors, or a row vector of column vectors. In this sense, a vector is just a list of objects, even though its entries are not strictly from a field. I tend to agree with your level of scrutiny, which if applied would require that we not use the word vector (see here). The wording could be changed from vector to list, but I don't think that would be a helpful change, and it actually could be a little detrimental, so I am leaving it. 69.166.46.156 (talk) 19:35, 25 April 2022 (UTC)[reply]

Relative vectors?[edit]

Should the article discuss relative vectors, whose transformation includes a power of the transformation determinant as a factor? In more modern language, these are tensor products of vectors with tensor densities, or for orientable manifolds, liftings of line bundles. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:31, 13 November 2023 (UTC)[reply]
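A minimal numeric illustration of the extra determinant factor, using mass density as a weight-1 scalar density (the sign convention for the weight varies between texts, and the particular numbers are arbitrary):

```python
import numpy as np

rho = 1000.0                       # kg per cubic metre

# Change coordinates x -> x' = A x, halving each coordinate value
# (i.e. measuring lengths in 2 m units).
A = np.diag([0.5, 0.5, 0.5])

# A weight-1 scalar density picks up |det(dx/dx')| = 1/|det A|,
# so the same mass is reported per (2 m)^3 cell:
rho_new = rho / abs(np.linalg.det(A))   # -> 8000.0
```

An ordinary scalar would be unchanged by the same coordinate change; the determinant factor is precisely what distinguishes a density (and, with the opposite power, a "capacity") from an invariant scalar.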

This seems like a good place for it. Note that in tensor there is some content on tensor densities which uses notation compatible with this article. Tito Omburo (talk) 22:31, 15 March 2024 (UTC)[reply]