[Figure: Geometric illustration of a proof of the product rule]
In calculus, the product rule (or Leibniz rule[1] or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as
$$(u\cdot v)' = u'\cdot v + u\cdot v'$$
or in Leibniz's notation as
$$\frac{d}{dx}(u\cdot v) = \frac{du}{dx}\cdot v + u\cdot\frac{dv}{dx}.$$
The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts.
Discovery
Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using differentials.[2] (However, J. M. Child, a translator of Leibniz's papers,[3] argues that it is due to Isaac Barrow.) Here is Leibniz's argument: Let u(x) and v(x) be two differentiable functions of x. Then the differential of uv is
$$\begin{aligned} d(u\cdot v) &= (u+du)\cdot(v+dv) - u\cdot v \\ &= u\cdot dv + v\cdot du + du\cdot dv. \end{aligned}$$
Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that
$$d(u\cdot v) = v\cdot du + u\cdot dv$$
and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain
$$\frac{d}{dx}(u\cdot v) = v\cdot\frac{du}{dx} + u\cdot\frac{dv}{dx}$$
which can also be written in Lagrange's notation as
$$(u\cdot v)' = v\cdot u' + u\cdot v'.$$
Examples
Suppose we want to differentiate
$$f(x) = x^2\sin(x).$$
By using the product rule, one gets the derivative
$$f'(x) = 2x\sin(x) + x^2\cos(x)$$
(since the derivative of $x^2$ is $2x$, and the derivative of the sine function is the cosine function).
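This computation can be reproduced with a computer algebra system; the following is a minimal sketch using the third-party sympy library (the variable names are our own):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 * sp.sin(x)

direct = sp.diff(f, x)                      # differentiate directly
by_rule = 2*x*sp.sin(x) + x**2*sp.cos(x)    # the product-rule answer from the text

assert sp.simplify(direct - by_rule) == 0   # the two expressions agree
```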
One special case of the product rule is the constant multiple rule, which states: if c is a number and f(x) is a differentiable function, then c·f(x) is also differentiable, and its derivative is
$$(cf)'(x) = c\cdot f'(x).$$
This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear.
The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable but only says what its derivative is if it is differentiable.)
Limit definition of derivative
Let h(x) = f(x)g(x) and suppose that f and g are each differentiable at x. We want to prove that h is differentiable at x and that its derivative, h′(x), is given by f′(x)g(x) + f(x)g′(x). To do this,
$$f(x)g(x+\Delta x) - f(x)g(x+\Delta x)$$
(which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used.
$$\begin{aligned}
h'(x) &= \lim_{\Delta x\to 0}\frac{h(x+\Delta x)-h(x)}{\Delta x} \\
&= \lim_{\Delta x\to 0}\frac{f(x+\Delta x)g(x+\Delta x)-f(x)g(x)}{\Delta x} \\
&= \lim_{\Delta x\to 0}\frac{f(x+\Delta x)g(x+\Delta x)-f(x)g(x+\Delta x)+f(x)g(x+\Delta x)-f(x)g(x)}{\Delta x} \\
&= \lim_{\Delta x\to 0}\frac{\big[f(x+\Delta x)-f(x)\big]\cdot g(x+\Delta x)+f(x)\cdot\big[g(x+\Delta x)-g(x)\big]}{\Delta x} \\
&= \lim_{\Delta x\to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}\cdot\underbrace{\lim_{\Delta x\to 0}g(x+\Delta x)}_{\text{see the note below}}+\lim_{\Delta x\to 0}f(x)\cdot\lim_{\Delta x\to 0}\frac{g(x+\Delta x)-g(x)}{\Delta x} \\
&= f'(x)g(x)+f(x)g'(x).
\end{aligned}$$
The fact that $\lim_{\Delta x\to 0}g(x+\Delta x) = g(x)$ follows from the fact that differentiable functions are continuous.
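The limit argument can also be illustrated numerically: for small Δx, the difference quotient of h = f·g approaches f′(x)g(x) + f(x)g′(x). A minimal sketch in Python, with an arbitrary choice of f, g, and test point (none of which come from the proof itself):

```python
import math

f, g = math.exp, math.sin        # f(x) = e^x, g(x) = sin x
x, dx = 1.0, 1e-6

# Difference quotient of the product h = f*g at x.
quotient = (f(x + dx) * g(x + dx) - f(x) * g(x)) / dx

# Value predicted by the product rule: f'(x)g(x) + f(x)g'(x),
# using f' = exp and g' = cos.
predicted = math.exp(x) * math.sin(x) + math.exp(x) * math.cos(x)

assert abs(quotient - predicted) < 1e-4
```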
Linear approximations
By definition, if $f,g:\mathbb{R}\to\mathbb{R}$ are differentiable at $x$, then we can write linear approximations:
$$f(x+h) = f(x) + f'(x)h + \varepsilon_1(h)$$
and
$$g(x+h) = g(x) + g'(x)h + \varepsilon_2(h),$$
where the error terms are small with respect to $h$: that is,
$$\lim_{h\to 0}\frac{\varepsilon_1(h)}{h} = \lim_{h\to 0}\frac{\varepsilon_2(h)}{h} = 0,$$
also written $\varepsilon_1,\varepsilon_2\sim o(h)$. Then:
$$\begin{aligned}
f(x+h)g(x+h)-f(x)g(x) &= (f(x)+f'(x)h+\varepsilon_1(h))(g(x)+g'(x)h+\varepsilon_2(h)) - f(x)g(x) \\
&= f(x)g(x) + f'(x)g(x)h + f(x)g'(x)h - f(x)g(x) + \text{error terms} \\
&= f'(x)g(x)h + f(x)g'(x)h + o(h).
\end{aligned}$$
The "error terms" consist of items such as
f
(
x
)
ε
2
(
h
)
,
f
′
(
x
)
g
′
(
x
)
h
2
{\displaystyle f(x)\varepsilon _{2}(h),f'(x)g'(x)h^{2}}
and
h
f
′
(
x
)
ε
1
(
h
)
{\displaystyle hf'(x)\varepsilon _{1}(h)}
which are easily seen to have magnitude
o
(
h
)
.
{\displaystyle o(h).}
Dividing by $h$ and taking the limit $h\to 0$ gives the result.
Quarter squares
This proof uses the chain rule and the quarter square function $q(x) = \tfrac{1}{4}x^2$ with derivative $q'(x) = \tfrac{1}{2}x$. We have:
$$uv = q(u+v) - q(u-v),$$
and differentiating both sides gives:
$$\begin{aligned}
(uv)' &= q'(u+v)(u'+v') - q'(u-v)(u'-v') \\
&= \left(\tfrac{1}{2}(u+v)(u'+v')\right) - \left(\tfrac{1}{2}(u-v)(u'-v')\right) \\
&= \tfrac{1}{2}(uu'+vu'+uv'+vv') - \tfrac{1}{2}(uu'-vu'-uv'+vv') \\
&= vu' + uv'.
\end{aligned}$$
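Both the quarter-square identity and the derivative computed from it can be checked symbolically; a minimal sketch with sympy, using undefined functions u and v (the names are ours):

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
v = sp.Function('v')(x)
q = lambda t: t**2 / 4          # the quarter square function

# The identity uv = q(u+v) - q(u-v) ...
assert sp.expand(q(u + v) - q(u - v) - u*v) == 0

# ... and differentiating it reproduces the product rule.
lhs = sp.diff(q(u + v) - q(u - v), x)
rhs = v*sp.diff(u, x) + u*sp.diff(v, x)
assert sp.simplify(lhs - rhs) == 0
```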
Multivariable chain rule
The product rule can be considered a special case of the chain rule for several variables, applied to the multiplication function $m(u,v) = uv$:
$$\frac{d(uv)}{dx} = \frac{\partial(uv)}{\partial u}\frac{du}{dx} + \frac{\partial(uv)}{\partial v}\frac{dv}{dx} = v\frac{du}{dx} + u\frac{dv}{dx}.$$
Non-standard analysis
Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals within the framework of non-standard analysis, specifically the hyperreal numbers. Using st to denote the standard part function that associates to a finite hyperreal number the real number infinitely close to it, this gives
$$\begin{aligned}
\frac{d(uv)}{dx} &= \operatorname{st}\left(\frac{(u+du)(v+dv)-uv}{dx}\right) \\
&= \operatorname{st}\left(\frac{uv+u\cdot dv+v\cdot du+du\cdot dv-uv}{dx}\right) \\
&= \operatorname{st}\left(\frac{u\cdot dv+v\cdot du+du\cdot dv}{dx}\right) \\
&= \operatorname{st}\left(u\frac{dv}{dx}+(v+dv)\frac{du}{dx}\right) \\
&= u\frac{dv}{dx}+v\frac{du}{dx}.
\end{aligned}$$
This was essentially Leibniz's proof exploiting the transcendental law of homogeneity (in place of the standard part above).
Smooth infinitesimal analysis
In the context of Lawvere's approach to infinitesimals, let $dx$ be a nilsquare infinitesimal. Then $du = u'\,dx$ and $dv = v'\,dx$, so that
$$\begin{aligned}
d(uv) &= (u+du)(v+dv) - uv \\
&= uv + u\cdot dv + v\cdot du + du\cdot dv - uv \\
&= u\cdot dv + v\cdot du + du\cdot dv \\
&= u\cdot dv + v\cdot du
\end{aligned}$$
since $du\,dv = u'v'(dx)^2 = 0$.
Dividing by $dx$ then gives
$$\frac{d(uv)}{dx} = u\frac{dv}{dx} + v\frac{du}{dx}$$
or $(uv)' = u\cdot v' + v\cdot u'$.
Logarithmic differentiation
Let $h(x) = f(x)g(x)$. Taking the absolute value of each function and the natural log of both sides of the equation,
$$\ln|h(x)| = \ln|f(x)g(x)|$$
Applying properties of the absolute value and logarithms,
$$\ln|h(x)| = \ln|f(x)| + \ln|g(x)|$$
Taking the logarithmic derivative of both sides and then solving for $h'(x)$:
$$\frac{h'(x)}{h(x)} = \frac{f'(x)}{f(x)} + \frac{g'(x)}{g(x)}$$
Solving for $h'(x)$ and substituting back $f(x)g(x)$ for $h(x)$ gives:
$$\begin{aligned}
h'(x) &= h(x)\left(\frac{f'(x)}{f(x)}+\frac{g'(x)}{g(x)}\right) \\
&= f(x)g(x)\left(\frac{f'(x)}{f(x)}+\frac{g'(x)}{g(x)}\right) \\
&= f'(x)g(x)+f(x)g'(x).
\end{aligned}$$
Note: Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. The method still works because $\tfrac{d}{dx}(\ln|u|) = \tfrac{u'}{u}$ holds regardless of the sign of $u$.
Generalizations
Product of more than two factors
The product rule can be generalized to products of more than two factors. For example, for three factors we have
$$\frac{d(uvw)}{dx} = \frac{du}{dx}vw + u\frac{dv}{dx}w + uv\frac{dw}{dx}.$$
For a collection of functions $f_1,\dots,f_k$, we have
$$\frac{d}{dx}\left[\prod_{i=1}^k f_i(x)\right] = \sum_{i=1}^k\left(\left(\frac{d}{dx}f_i(x)\right)\prod_{j=1,\,j\neq i}^k f_j(x)\right) = \left(\prod_{i=1}^k f_i(x)\right)\left(\sum_{i=1}^k\frac{f_i'(x)}{f_i(x)}\right).$$
The logarithmic derivative provides a simpler expression of the last form, as well as a direct proof that does not involve any recursion. The logarithmic derivative of a function $f$, denoted here $\operatorname{Logder}(f)$, is the derivative of the logarithm of the function. It follows that
$$\operatorname{Logder}(f) = \frac{f'}{f}.$$
Using that the logarithm of a product is the sum of the logarithms of the factors, the sum rule for derivatives gives immediately
$$\operatorname{Logder}(f_1\cdots f_k) = \sum_{i=1}^k \operatorname{Logder}(f_i).$$
The last expression above for the derivative of a product is obtained by multiplying both sides of this equation by the product of the $f_i$.
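Both forms of the formula can be verified symbolically for a fixed number of factors; a minimal sketch with sympy for three factors (the factor names f1, f2, f3 are ours):

```python
import sympy as sp

x = sp.symbols('x')
fs = [sp.Function(f'f{i}')(x) for i in range(1, 4)]   # three factors f1, f2, f3
product = sp.Mul(*fs)

# Sum over i of f_i' times the product of the remaining factors.
by_rule = sum(sp.diff(fi, x) * product / fi for fi in fs)

assert sp.simplify(sp.diff(product, x) - by_rule) == 0
```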
Higher derivatives
The product rule can also be generalized to the general Leibniz rule for the $n$th derivative of a product of two factors, by symbolically expanding according to the binomial theorem:
$$d^n(uv) = \sum_{k=0}^n \binom{n}{k}\, d^{(n-k)}(u)\, d^{(k)}(v).$$
Applied at a specific point $x$, the above formula gives:
$$(uv)^{(n)}(x) = \sum_{k=0}^n \binom{n}{k}\, u^{(n-k)}(x)\, v^{(k)}(x).$$
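The formula can be checked symbolically for any fixed order; a minimal sketch with sympy, here with the arbitrary choice n = 4:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
v = sp.Function('v')(x)
n = 4                            # any fixed order works for this check

# Right-hand side of the general Leibniz rule.
leibniz = sum(sp.binomial(n, k) * sp.diff(u, x, n - k) * sp.diff(v, x, k)
              for k in range(n + 1))

assert sp.simplify(sp.diff(u*v, x, n) - leibniz) == 0
```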
Furthermore, for the $n$th derivative of an arbitrary number of factors, one has a similar formula with multinomial coefficients:
$$\left(\prod_{i=1}^k f_i\right)^{\!(n)} = \sum_{j_1+j_2+\cdots+j_k=n} \binom{n}{j_1,j_2,\ldots,j_k} \prod_{i=1}^k f_i^{(j_i)}.$$
Higher partial derivatives
For partial derivatives, we have[4]
$$\frac{\partial^n}{\partial x_1\cdots\partial x_n}(uv) = \sum_S \frac{\partial^{|S|}u}{\prod_{i\in S}\partial x_i}\cdot\frac{\partial^{n-|S|}v}{\prod_{i\notin S}\partial x_i}$$
where the index $S$ runs through all $2^n$ subsets of $\{1,\ldots,n\}$, and $|S|$ is the cardinality of $S$. For example, when $n = 3$,
$$\begin{aligned}
&\frac{\partial^3}{\partial x_1\,\partial x_2\,\partial x_3}(uv) \\
={}& u\cdot\frac{\partial^3 v}{\partial x_1\,\partial x_2\,\partial x_3} + \frac{\partial u}{\partial x_1}\cdot\frac{\partial^2 v}{\partial x_2\,\partial x_3} + \frac{\partial u}{\partial x_2}\cdot\frac{\partial^2 v}{\partial x_1\,\partial x_3} + \frac{\partial u}{\partial x_3}\cdot\frac{\partial^2 v}{\partial x_1\,\partial x_2} \\
&+ \frac{\partial^2 u}{\partial x_1\,\partial x_2}\cdot\frac{\partial v}{\partial x_3} + \frac{\partial^2 u}{\partial x_1\,\partial x_3}\cdot\frac{\partial v}{\partial x_2} + \frac{\partial^2 u}{\partial x_2\,\partial x_3}\cdot\frac{\partial v}{\partial x_1} + \frac{\partial^3 u}{\partial x_1\,\partial x_2\,\partial x_3}\cdot v.
\end{aligned}$$
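The n = 3 expansion can be checked symbolically by summing over all subsets of the variables; a minimal sketch with sympy (the helper d is ours):

```python
import sympy as sp
from itertools import combinations

x1, x2, x3 = sp.symbols('x1 x2 x3')
u = sp.Function('u')(x1, x2, x3)
v = sp.Function('v')(x1, x2, x3)
xs = (x1, x2, x3)

def d(expr, variables):
    """Differentiate expr once with respect to each listed variable."""
    return sp.diff(expr, *variables) if variables else expr

# Sum over all subsets S of {x1, x2, x3}: u differentiated in S,
# v differentiated in the complementary variables.
rhs = sum(d(u, S) * d(v, tuple(y for y in xs if y not in S))
          for r in range(len(xs) + 1) for S in combinations(xs, r))

assert sp.simplify(sp.diff(u*v, x1, x2, x3) - rhs) == 0
```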
Banach space
Suppose X, Y, and Z are Banach spaces (which include Euclidean space) and B : X × Y → Z is a continuous bilinear operator. Then B is differentiable, and its derivative at the point (x, y) in X × Y is the linear map $D_{(x,y)}B : X\times Y\to Z$ given by
$$(D_{(x,y)}B)(u,v) = B(u,y) + B(x,v) \qquad \forall (u,v)\in X\times Y.$$
This result can be extended[5] to more general topological vector spaces.
In vector calculus
The product rule extends to various product operations of vector functions on $\mathbb{R}^n$:[6]
For scalar multiplication:
$$(f\cdot\mathbf{g})' = f'\cdot\mathbf{g} + f\cdot\mathbf{g}'$$
For dot product:
$$(\mathbf{f}\cdot\mathbf{g})' = \mathbf{f}'\cdot\mathbf{g} + \mathbf{f}\cdot\mathbf{g}'$$
For cross product of vector functions on $\mathbb{R}^3$:
$$(\mathbf{f}\times\mathbf{g})' = \mathbf{f}'\times\mathbf{g} + \mathbf{f}\times\mathbf{g}'$$
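Each of these rules can be verified symbolically; a minimal sketch with sympy, using two arbitrary curves in R^3 of our own choosing:

```python
import sympy as sp

t = sp.symbols('t')
# Two arbitrary smooth curves in R^3 (the component choices are ours).
f = sp.Matrix([sp.sin(t), sp.cos(t), t])
g = sp.Matrix([t**2, sp.exp(t), 1])

fp, gp = f.diff(t), g.diff(t)

# Dot product rule: (f . g)' = f' . g + f . g'
assert sp.simplify(f.dot(g).diff(t) - (fp.dot(g) + f.dot(gp))) == 0

# Cross product rule: (f x g)' = f' x g + f x g'
assert sp.simplify(f.cross(g).diff(t) - (fp.cross(g) + f.cross(gp))) == sp.zeros(3, 1)
```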
Analogous rules hold for other analogs of the derivative: if f and g are scalar fields, then there is a product rule with the gradient:
$$\nabla(f\cdot g) = \nabla f\cdot g + f\cdot\nabla g$$
Such a rule will hold for any continuous bilinear product operation. Let B : X × Y → Z be a continuous bilinear map between vector spaces, and let f and g be differentiable functions into X and Y, respectively. The only properties of multiplication used in the proof via the limit definition of the derivative are that multiplication is continuous and bilinear. So for any continuous bilinear operation B,
$$B(f,g)' = B(f',g) + B(f,g').$$
This is also a special case of the product rule for bilinear maps in Banach space.
Derivations in abstract algebra and differential geometry
In abstract algebra, the product rule is the defining property of a derivation. In this terminology, the product rule states that the derivative operator is a derivation on functions.
In differential geometry, a tangent vector to a manifold M at a point p may be defined abstractly as an operator on real-valued functions which behaves like a directional derivative at p: that is, a linear functional v which is a derivation,
$$v(fg) = v(f)\,g(p) + f(p)\,v(g).$$
Generalizing (and dualizing) the formulas of vector calculus to an $n$-dimensional manifold $M$, one may take differential forms of degrees $k$ and $\ell$, denoted $\alpha\in\Omega^k(M)$ and $\beta\in\Omega^\ell(M)$, with the wedge or exterior product operation $\alpha\wedge\beta\in\Omega^{k+\ell}(M)$, as well as the exterior derivative $d:\Omega^m(M)\to\Omega^{m+1}(M)$. Then one has the graded Leibniz rule:
$$d(\alpha\wedge\beta) = d\alpha\wedge\beta + (-1)^k\,\alpha\wedge d\beta.$$
Applications
Among the applications of the product rule is a proof that
$$\frac{d}{dx}x^n = nx^{n-1}$$
when $n$ is a positive integer (this rule is true even if $n$ is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent $n$. If $n = 0$ then $x^n$ is constant and $nx^{n-1} = 0$. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent $n$, then for the next value, $n + 1$, we have
$$\begin{aligned}
\frac{dx^{n+1}}{dx} &= \frac{d}{dx}\left(x^n\cdot x\right) \\
&= x\frac{d}{dx}x^n + x^n\frac{d}{dx}x && \text{(the product rule is used here)} \\
&= x\left(nx^{n-1}\right) + x^n\cdot 1 && \text{(the induction hypothesis is used here)} \\
&= (n+1)x^n.
\end{aligned}$$
Therefore, if the proposition is true for $n$, it is true also for $n + 1$, and therefore for all natural $n$.
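The conclusion can be confirmed with a one-line symbolic check; a minimal sketch with sympy, where n is a symbolic positive exponent:

```python
import sympy as sp

x, n = sp.symbols('x n', positive=True)

# The power rule, verified symbolically.
assert sp.simplify(sp.diff(x**n, x) - n*x**(n - 1)) == 0
```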
References
^ "Leibniz rule – Encyclopedia of Mathematics" .
^ Michelle Cirillo (August 2007). "Humanizing Calculus" . The Mathematics Teacher . 101 (1): 23–27. doi :10.5951/MT.101.1.0023 .
^ Leibniz, G. W. (2005) [1920], The Early Mathematical Manuscripts of Leibniz (PDF) , translated by J.M. Child, Dover, p. 28, footnote 58, ISBN 978-0-486-44596-0
^ Micheal Hardy (January 2006). "Combinatorics of Partial Derivatives" (PDF) . The Electronic Journal of Combinatorics . 13 . arXiv :math/0601149 . Bibcode :2006math......1149H .
^ Kreigl, Andreas; Michor, Peter (1997). The Convenient Setting of Global Analysis (PDF) . American Mathematical Society. p. 59. ISBN 0-8218-0780-3 .
^ Stewart, James (2016), Calculus (8 ed.), Cengage , Section 13.2.