Dot product

In mathematics, the dot product, also known as the scalar product, is an operation that takes two vectors over the real numbers R and returns a real-valued scalar quantity. It is the standard inner product of Euclidean space. It contrasts with the cross product, which produces a vector as its result.

Definition

The dot product of two vectors a = [a1, a2, … , an] and b = [b1, b2, … , bn] is defined as:

\mathbf{a}\cdot \mathbf{b} = \sum_{i=1}^n a_ib_i = a_1b_1 + a_2b_2 + \cdots + a_nb_n

where Σ denotes summation and n is the dimension of the vectors.

For example, the dot product of two three-dimensional vectors <1, 3, −5> and <4, −2, −1> is

\begin{bmatrix}1&3&-5\end{bmatrix} \cdot \begin{bmatrix}4&-2&-1\end{bmatrix} = (1)(4) + (3)(-2) + (-5)(-1) = 3.
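
As a minimal illustration, the same computation can be carried out in Python (the NumPy library and the code itself are not part of the article; the vectors are the ones from the example above):

    import numpy as np

    # Dot product from the definition: sum of the products of corresponding components.
    a = [1, 3, -5]
    b = [4, -2, -1]

    dot = sum(x * y for x, y in zip(a, b))
    print(dot)            # 3
    print(np.dot(a, b))   # 3, the same value via NumPy's built-in routine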

For two complex column vectors the dot product is defined as

\mathbf{a}\cdot \mathbf{b} = \sum_{i=1}^n a_i\overline{b_i}

where

\overline{b_i}

is the complex conjugate of b_i. Note that in the complex case the dot product is not symmetric; instead,

 \mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}}
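
A minimal NumPy sketch of the complex case (the particular vectors are arbitrary illustrative values; note that numpy.vdot conjugates its first argument, so vdot(b, a) matches the convention used above):

    import numpy as np

    # Complex dot product: sum of a_i times the complex conjugate of b_i.
    a = np.array([1 + 2j, 3 - 1j])
    b = np.array([2 - 1j, 1 + 4j])

    manual = np.sum(a * np.conj(b))
    print(manual)
    print(np.vdot(b, a))   # same value: vdot conjugates its first argument
    print(np.vdot(a, b))   # the complex conjugate of the value above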

The dot product is typically applied to vectors expressed in an orthonormal basis. Its generalization to non-orthonormal bases is described below.

Conversion to matrix multiplication

Using matrix multiplication and treating the vectors as n×1 matrices (i.e. "column matrices" or "column vectors"), the dot product can also be written as:

\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^T \mathbf{b} \,

where a^T denotes the transpose of the matrix a; in this specific case, since a is a column matrix, its transpose is a "row matrix" or "row vector" (a 1×n matrix).

For instance, the dot product of the two above-mentioned three-dimensional vectors is equivalent to the product of a 1×3 matrix by a 3×1 matrix (which, by virtue of the matrix multiplication, results in a 1×1 matrix, i.e., a scalar):

\begin{bmatrix}
    1&3&-5
\end{bmatrix}\begin{bmatrix} 
    4\\-2\\-1
\end{bmatrix} = \begin{bmatrix}
    3
\end{bmatrix}.
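
A minimal NumPy sketch of this view, treating the vectors explicitly as 3×1 matrices (column vectors):

    import numpy as np

    # Dot product as a 1x3 times 3x1 matrix product, a^T b.
    a = np.array([[1.0], [3.0], [-5.0]])   # 3x1 column matrix
    b = np.array([[4.0], [-2.0], [-1.0]])

    result = a.T @ b                       # 1x1 matrix holding the scalar
    print(result)                          # [[3.]]
    print(result[0, 0])                    # 3.0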

Geometric interpretation

Figure: A · B = |A| |B| cos(θ); |A| cos(θ) is the scalar projection of A onto B.

In Euclidean geometry, the dot product, length, and angle are related. For a vector a, the dot product a · a is the square of the length of a, or

|\mathbf{a}| = \sqrt{\mathbf{a} \cdot \mathbf{a}}

where |a| denotes the length (magnitude) of a. More generally, if b is another vector

 \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| \, |\mathbf{b}| \cos \theta \,

where |a| and |b| denote the lengths of a and b, and θ is the angle between them.

Thus, given two vectors, the angle between them can be found by rearranging the above formula:

\theta =  \arccos \left( \frac {\bold{a}\cdot\bold{b}} {|\bold{a}||\bold{b}|}\right).
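
As a minimal illustration (Python/NumPy, using the example vectors from the definition above), the angle can be computed directly from this formula:

    import numpy as np

    # theta = arccos( (a . b) / (|a| |b|) )
    a = np.array([1.0, 3.0, -5.0])
    b = np.array([4.0, -2.0, -1.0])

    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    theta = np.arccos(cos_theta)
    print(np.degrees(theta))   # angle between a and b, in degrees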

As the cosine of 90° is zero, the dot product of two orthogonal vectors is always zero:

\text{If } \mathbf{a} \perp \mathbf{b} \text{, then } \mathbf{a} \cdot \mathbf{b} = 0.

Moreover, two non-zero vectors are orthogonal if and only if their dot product is zero. This property provides a simple method for testing orthogonality.

Sometimes these properties are also used for defining the dot product, especially in 2 and 3 dimensions; this definition is equivalent to the above one. For higher dimensions the formula can be used to define the concept of angle.

The geometric properties rely on the basis being orthonormal, i.e. composed of vectors perpendicular to each other and having unit length.

Scalar projection

If both a and b have length one (i.e. they are unit vectors), their dot product simply gives the cosine of the angle between them.

If only b is a unit vector, then the dot product a · b gives |a| cos(θ), i.e. the magnitude of the projection of a in the direction of b, with a minus sign if the direction is opposite. This is called the scalar projection of a onto b, or scalar component of a in the direction of b (see figure). This property of the dot product has several useful applications (for instance, see next section).

If neither a nor b is a unit vector, then the magnitude of the projection of a in the direction of b, for example, would be a · (b / |b|) as the unit vector in the direction of b is b / |b|.
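
A minimal sketch of this computation (Python/NumPy, with the same example vectors as above):

    import numpy as np

    # Scalar projection of a onto b: a . (b / |b|).
    a = np.array([1.0, 3.0, -5.0])
    b = np.array([4.0, -2.0, -1.0])

    b_hat = b / np.linalg.norm(b)     # unit vector in the direction of b
    scalar_proj = np.dot(a, b_hat)    # |a| cos(theta), with sign
    print(scalar_proj)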

Rotation

A rotation of the orthonormal basis in terms of which vector a is represented is obtained by multiplying a by a rotation matrix R. This matrix multiplication is just a compact representation of a sequence of dot products.

For instance, let

  • B1 = {x, y, z} and B2 = {u, v, w} be two different orthonormal bases of the same space R3, with B2 obtained by just rotating B1,
  • a1 = (ax, ay, az) represent vector a in terms of B1,
  • a2 = (au, av, aw) represent the same vector in terms of the rotated basis B2,
  • u1, v1, w1 be the rotated basis vectors u, v, w represented in terms of B1.

Then the rotation from B1 to B2 is performed as follows:

 \bold a_2 = \bold{Ra}_1 = 
\begin{bmatrix} u_x & u_y & u_z \\ v_x & v_y & v_z \\ w_x & w_y & w_z \end{bmatrix} 
\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} =
\begin{bmatrix} \bold u_1\cdot\bold a_1 \\ \bold v_1\cdot\bold a_1 \\ \bold w_1\cdot\bold a_1 \end{bmatrix} = \begin{bmatrix} a_u \\ a_v \\ a_w \end{bmatrix} .

Notice that the rotation matrix R is assembled by using the rotated basis vectors u1, v1, w1 as its rows, and these vectors are unit vectors. By definition, Ra1 consists of a sequence of dot products between each of the three rows of R and vector a1. Each of these dot products determines a scalar component of a in the direction of a rotated basis vector (see previous section).

If a1 is a row vector, rather than a column vector, then R must contain the rotated basis vectors in its columns, and must post-multiply a1:

 \bold a_2 = \bold a_1 \bold R = 
\begin{bmatrix} a_x & a_y & a_z \end{bmatrix}
\begin{bmatrix} u_x & v_x & w_x \\ u_y & v_y & w_y \\ u_z & v_z & w_z \end{bmatrix} =
\begin{bmatrix} \bold u_1\cdot\bold a_1 & \bold v_1\cdot\bold a_1 & \bold w_1\cdot\bold a_1 \end{bmatrix} = \begin{bmatrix} a_u & a_v & a_w \end{bmatrix} .
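
The following sketch illustrates the rotation-as-dot-products view with a concrete assumed rotation, a 90° turn about the z-axis; the basis vectors and the vector a are illustrative values only:

    import numpy as np

    # Rotated basis vectors u1, v1, w1, expressed in the old basis B1,
    # become the rows of the rotation matrix R.
    u1 = np.array([0.0, 1.0, 0.0])
    v1 = np.array([-1.0, 0.0, 0.0])
    w1 = np.array([0.0, 0.0, 1.0])
    R = np.vstack([u1, v1, w1])

    a1 = np.array([1.0, 3.0, -5.0])    # vector a in terms of B1
    a2 = R @ a1                        # components of a in the rotated basis B2
    print(a2)
    # Each component is one dot product:
    print(np.array([np.dot(u1, a1), np.dot(v1, a1), np.dot(w1, a1)]))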

The dot product in physics

In physics, magnitude is a scalar in the physical sense: a physical quantity independent of the coordinate system, expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense, given by the formula independently of the coordinate system. When the formula is evaluated in terms of coordinates, the entries are not just numbers but numbers times units. Therefore, although the formula relies on the basis being orthonormal, the result does not depend on scaling.

Example: mechanical work is the dot product of the force and displacement vectors, W = F · d.
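
A minimal numerical sketch of this example (the force and displacement values are purely illustrative):

    import numpy as np

    # Work as a dot product: W = F . d
    force = np.array([3.0, 4.0, 0.0])          # newtons
    displacement = np.array([2.0, 1.0, 0.0])   # metres

    work = np.dot(force, displacement)         # joules
    print(work)   # 10.0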

Properties

The following properties hold if a, b, and c are real vectors and r is a scalar.

The dot product is commutative:

 \mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a}.

The dot product is distributive over vector addition:

 \mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}.

The dot product is not associative, but (for column vectors a, b, and c) with the help of matrix multiplication one can derive:

 (\mathbf{a} \cdot \mathbf{b})  \mathbf{c} = (\mathbf{c} \mathbf{b}^T) \mathbf{a}.

The dot product is bilinear:

 \mathbf{a} \cdot (r\mathbf{b} +  \mathbf{c}) 
    = r(\mathbf{a} \cdot   \mathbf{b}) +(\mathbf{a} \cdot \mathbf{c}).

When multiplied by scalar values, the dot product satisfies:

 (c_1\mathbf{a}) \cdot (c_2\mathbf{b}) = (c_1c_2) (\mathbf{a} \cdot \mathbf{b})

(these last two properties follow from the first two).
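
These properties can be spot-checked numerically; the following sketch uses arbitrary vectors and scalars:

    import numpy as np

    a = np.array([1.0, 3.0, -5.0])
    b = np.array([4.0, -2.0, -1.0])
    c = np.array([0.5, 2.0, 3.0])
    r, c1, c2 = 2.0, 3.0, -1.5

    print(np.isclose(np.dot(a, b), np.dot(b, a)))                          # commutativity
    print(np.isclose(np.dot(a, b + c), np.dot(a, b) + np.dot(a, c)))       # distributivity
    print(np.isclose(np.dot(a, r*b + c), r*np.dot(a, b) + np.dot(a, c)))   # bilinearity
    print(np.isclose(np.dot(c1*a, c2*b), c1*c2*np.dot(a, b)))              # scalar multiplication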

Two non-zero vectors a and b are perpendicular if and only if a · b = 0.

Unlike multiplication of ordinary numbers, where if ab = ac then b always equals c unless a is zero, the dot product does not obey the cancellation law:

If a · b = a · c and a ≠ 0, then we can write a · (b − c) = 0 by the distributive law. From the result above, if a is perpendicular to (b − c), we can have (b − c) ≠ 0 and therefore b ≠ c.
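
A concrete counterexample (the vectors are arbitrary, chosen so that b − c is perpendicular to a):

    import numpy as np

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([2.0, 5.0, 0.0])
    c = np.array([2.0, -7.0, 3.0])     # differs from b only in directions orthogonal to a

    print(np.dot(a, b), np.dot(a, c))  # both equal 2.0, yet b != c
    print(np.array_equal(b, c))        # False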

Provided that the basis is orthonormal, the dot product is invariant under isometric changes of the basis: rotations, reflections, and combinations, keeping the origin fixed. The above mentioned geometric interpretation relies on this property. In other words, for an orthonormal space with any number of dimensions, the dot product is invariant under a coordinate transformation based on an orthogonal matrix. This corresponds to the following two conditions:

  • the new basis is again orthonormal (i.e., it is orthonormal expressed in the old one)
  • the new basis vectors have the same length as the old ones (i.e., unit length in terms of the old basis)

Derivative

If a and b are differentiable vector-valued functions, then the derivative of a · b is a' · b + a · b'.
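
This product rule can be checked numerically with a central difference; the vector-valued functions below are arbitrary illustrative choices:

    import numpy as np

    def a(t):
        return np.array([np.cos(t), np.sin(t), t])

    def b(t):
        return np.array([t**2, 1.0, np.exp(t)])

    def a_prime(t):
        return np.array([-np.sin(t), np.cos(t), 1.0])

    def b_prime(t):
        return np.array([2*t, 0.0, np.exp(t)])

    t, h = 0.7, 1e-6
    numeric  = (np.dot(a(t + h), b(t + h)) - np.dot(a(t - h), b(t - h))) / (2 * h)
    analytic = np.dot(a_prime(t), b(t)) + np.dot(a(t), b_prime(t))
    print(numeric, analytic)   # the two values agree closely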

Triple product expansion

The triple product expansion is a very useful identity (also known as Lagrange's formula) involving the dot and cross products. It is written as

\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})

which is easier to remember as “BAC minus CAB”, keeping in mind which vectors are dotted together. This formula is commonly used to simplify vector calculations in physics.
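
The identity is easy to spot-check numerically; the vectors below are arbitrary:

    import numpy as np

    # a x (b x c) = b (a . c) - c (a . b)
    a = np.array([1.0, 3.0, -5.0])
    b = np.array([4.0, -2.0, -1.0])
    c = np.array([0.5, 2.0, 3.0])

    lhs = np.cross(a, np.cross(b, c))
    rhs = b * np.dot(a, c) - c * np.dot(a, b)
    print(np.allclose(lhs, rhs))   # True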

Proof of the geometric interpretation

Note: This proof is shown for 3-dimensional vectors, but is readily extendable to n-dimensional vectors.

Consider a vector

 \mathbf{v} = v_1 \mathbf{i} + v_2 \mathbf{j} + v_3 \mathbf{k}. \,

Repeated application of the Pythagorean theorem yields for its length v

 v^2 = v_1^2 + v_2^2 + v_3^2. \,

But this is the same as

 \mathbf{v} \cdot \mathbf{v} = v_1^2 + v_2^2 + v_3^2, \,

so we conclude that taking the dot product of a vector v with itself yields the squared length of the vector.

Lemma 1
 \mathbf{v} \cdot \mathbf{v} = v^2. \,

Now consider two vectors a and b extending from the origin, separated by an angle θ. A third vector c may be defined as

 \mathbf{c} \ \stackrel{\mathrm{def}}{=}\  \mathbf{a} - \mathbf{b}. \,

creating a triangle with sides a, b, and c. According to the law of cosines, we have

 c^2 = a^2 + b^2 - 2 ab \cos \theta. \,

Substituting dot products for the squared lengths according to Lemma 1, we get


\mathbf{c} \cdot \mathbf{c} = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2 ab \cos\theta. \qquad (1)

But as c = a − b, we also have


\mathbf{c} \cdot \mathbf{c} = (\mathbf{a} - \mathbf{b}) \cdot (\mathbf{a} - \mathbf{b}), \,

which, according to the distributive law, expands to


\mathbf{c} \cdot \mathbf{c} = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2(\mathbf{a} \cdot \mathbf{b}). \qquad (2)

Merging the two cc equations, (1) and (2), we obtain


\mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2(\mathbf{a} \cdot \mathbf{b}) = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2 ab \cos\theta. \,

Subtracting a · a + b · b from both sides and dividing by −2 leaves

 \mathbf{a} \cdot \mathbf{b} = ab \cos\theta. \,

Q.E.D.

Generalization

The inner product generalizes the dot product to abstract vector spaces and is normally denoted by ⟨a, b⟩. By analogy with the geometric interpretation of the dot product, the norm ||a|| of a vector a in such an inner product space is defined as

\|\mathbf{a}\| = \sqrt{\langle\mathbf{a}\, , \mathbf{a}\rangle}

so that it generalizes length, and the angle θ between two vectors a and b is given by

 \cos{\theta} = \frac{\langle\mathbf{a}\, , \mathbf{b}\rangle}{\|\mathbf{a}\| \, \|\mathbf{b}\|}.

In particular, two vectors are considered orthogonal if their inner product is zero

 \langle\mathbf{a}\, , \mathbf{b}\rangle = 0.

The Frobenius inner product generalizes the dot product to matrices. It is defined as the sum of the products of the corresponding components of two matrices having the same size.
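
A minimal sketch of the Frobenius inner product (the matrices are arbitrary illustrative values):

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])

    frobenius = np.sum(A * B)            # elementwise products, then their sum
    print(frobenius)                     # 70.0
    print(np.dot(A.ravel(), B.ravel()))  # same value: the dot product of the flattened matrices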

Matrix representation

An inner product can be represented using a square matrix and matrix multiplication. For example, given two vectors

 
    \mathbf{a} = \begin{bmatrix} a_u \\ a_v \\ a_w \end{bmatrix}, \qquad
    \mathbf{b} = \begin{bmatrix} b_u \\ b_v \\ b_w \end{bmatrix}

with respect to the basis set S


    \mathrm{S} = \{ \mathbf{u}, \mathbf{v} ,\mathbf{w} \} = \left\{
    \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix},
    \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix},
    \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} \right\}

any inner product can be represented as follows:

 
   \langle \mathbf{a}\, , \mathbf{b} \rangle = \mathbf{a}^T \mathbf{M} \mathbf{b}

where M is a 3×3 matrix. Given the matrix of inner products through S, called CS, M can be calculated by solving the following system of equations.


    \mathbf{C}_S = 
        \begin{bmatrix} 
        \langle \mathbf{u,u} \rangle & \langle \mathbf{u,v} \rangle & \langle \mathbf{u,w} \rangle \\ 
        \langle \mathbf{v,u} \rangle & \langle \mathbf{v,v} \rangle & \langle \mathbf{v,w} \rangle \\ 
        \langle \mathbf{w,u} \rangle & \langle \mathbf{w,v} \rangle & \langle \mathbf{w,w} \rangle
        \end{bmatrix}
=
        \begin{bmatrix} 
        \mathbf{u}^T \mathbf{M} \mathbf{u} & \mathbf{u}^T \mathbf{M} \mathbf{v} & \mathbf{u}^T \mathbf{M} \mathbf{w} \\ 
        \mathbf{v}^T \mathbf{M} \mathbf{u} & \mathbf{v}^T \mathbf{M} \mathbf{v} & \mathbf{v}^T \mathbf{M} \mathbf{w} \\ 
        \mathbf{w}^T \mathbf{M} \mathbf{u} & \mathbf{w}^T \mathbf{M} \mathbf{v} & \mathbf{w}^T \mathbf{M} \mathbf{w}
        \end{bmatrix}

If the basis set S is composed of orthogonal unit vectors (orthonormal basis), then both CS and M reduce to the identity matrix 1, and the inner product can be represented as a simple product between a row matrix and a column matrix:

 
   \langle \mathbf{a}\, , \mathbf{b} \rangle =
   \mathbf{a}^T \mathbf{1} \mathbf{b} = \mathbf{a}^T \mathbf{b}.

Thus, if the basis is orthonormal, the square matrix M is not necessary and the inner product coincides with the dot product as defined above:


   \langle \mathbf{a}\, , \mathbf{b} \rangle  = \mathbf{a}^T \mathbf{b} = \mathbf{a} \cdot \mathbf{b}.

Example

Given a basis set


    \mathrm{S} = \{ \mathbf{u}, \mathbf{v} ,\mathbf{w} \} = \left\{
    \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
    \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix},
    \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right\}

and a matrix of the inner products through S


    \mathbf{C}_S = 
        \begin{bmatrix} 
        5 & 2 & 0 \\ 
        2 & 6 & 2 \\ 
        0 & 2 & 7
        \end{bmatrix}

we can set each element of CS equal to the inner product of two of the basis vectors as follows


    \mathbf{C}_S[i,j] = \langle \mathrm{S}[i],\mathrm{S}[j] \rangle

    \mathbf{C}_S[0,0] = 5 = \langle \mathbf{u,u} \rangle =
        \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} 
        \mathrm{M} 
        \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}


    \mathbf{C}_S[0,1] = 2 = \langle \mathbf{u,v} \rangle =
        \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} 
        \mathrm{M} 
        \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}


    \cdots

which gives nine equations and nine unknowns. Solving these equations yields


    \mathbf{M} = 
        \begin{bmatrix} 
        5 & -3 & -2 \\ 
        -3 & 7 & -2 \\ 
        -2 & -2 & 9
        \end{bmatrix}.
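
The nine equations can also be solved in one step: writing the basis vectors of S as the columns of a matrix S, the system above reads C_S = S^T M S, so M = (S^{-1})^T C_S S^{-1}. A minimal NumPy sketch of this computation:

    import numpy as np

    S = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])        # columns are u, v, w

    C_S = np.array([[5.0, 2.0, 0.0],
                    [2.0, 6.0, 2.0],
                    [0.0, 2.0, 7.0]])

    S_inv = np.linalg.inv(S)
    M = S_inv.T @ C_S @ S_inv
    print(np.round(M))
    # [[ 5. -3. -2.]
    #  [-3.  7. -2.]
    #  [-2. -2.  9.]]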
