Polynomial sequence
Figure: Plot of the first five Tn Chebyshev polynomials (first kind); plot of the first five Un Chebyshev polynomials (second kind).
The Chebyshev polynomials are two sequences of orthogonal polynomials related to the cosine and sine functions, notated as T n ( x ) {\displaystyle T_{n}(x)} and U n ( x ) {\displaystyle U_{n}(x)} . They can be defined in several equivalent ways, one of which starts with trigonometric functions:
The Chebyshev polynomials of the first kind T n {\displaystyle T_{n}} are defined by
T n ( cos θ ) = cos ( n θ ) . {\displaystyle T_{n}(\cos \theta )=\cos(n\theta ).}
Similarly, the Chebyshev polynomials of the second kind U n {\displaystyle U_{n}} are defined by
U n ( cos θ ) sin θ = sin ( ( n + 1 ) θ ) . {\displaystyle U_{n}(\cos \theta )\sin \theta =\sin {\big (}(n+1)\theta {\big )}.}
That these expressions define polynomials in cos θ {\displaystyle \cos \theta } is not obvious at first sight but can be shown using de Moivre's formula (see below).
The Chebyshev polynomials Tn are polynomials with the largest possible leading coefficient whose absolute value on the interval [−1, 1] is bounded by 1. They are also the "extremal" polynomials for many other properties.[1]
In 1952, Cornelius Lanczos showed that the Chebyshev polynomials are important in approximation theory for the solution of linear systems;[2] the roots of Tn(x), which are also called Chebyshev nodes, are used as matching points for optimizing polynomial interpolation. The resulting interpolation polynomial minimizes the problem of Runge's phenomenon and provides an approximation that is close to the best polynomial approximation to a continuous function under the maximum norm, also called the "minimax" criterion. This approximation leads directly to the method of Clenshaw–Curtis quadrature.
These polynomials were named after Pafnuty Chebyshev.[3] The letter T is used because of the alternative transliterations of the name Chebyshev as Tchebycheff, Tchebyshev (French) or Tschebyschow (German).
Recurrence definition
The Chebyshev polynomials of the first kind can be defined by the recurrence relation
T 0 ( x ) = 1 , T 1 ( x ) = x , T n + 1 ( x ) = 2 x T n ( x ) − T n − 1 ( x ) . {\displaystyle {\begin{aligned}T_{0}(x)&=1,\\T_{1}(x)&=x,\\T_{n+1}(x)&=2x\,T_{n}(x)-T_{n-1}(x).\end{aligned}}}
The Chebyshev polynomials of the second kind can be defined by the recurrence relation
U 0 ( x ) = 1 , U 1 ( x ) = 2 x , U n + 1 ( x ) = 2 x U n ( x ) − U n − 1 ( x ) , {\displaystyle {\begin{aligned}U_{0}(x)&=1,\\U_{1}(x)&=2x,\\U_{n+1}(x)&=2x\,U_{n}(x)-U_{n-1}(x),\end{aligned}}} which differs from the above only by the rule for n=1.
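Both recurrences translate directly into code. The following Python sketch (the function names chebyshev_T and chebyshev_U are chosen here for illustration and belong to no particular library) evaluates T_n(x) and U_n(x) at a point by iterating the three-term recurrence; only the seed for n = 1 differs between the two kinds.

def chebyshev_T(n, x):
    # Evaluate T_n(x) via T_{k+1} = 2x T_k - T_{k-1}, seeded with T_0 = 1, T_1 = x.
    if n == 0:
        return 1.0
    t_prev, t_curr = 1.0, x
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

def chebyshev_U(n, x):
    # Same recurrence, but seeded with U_0 = 1, U_1 = 2x.
    if n == 0:
        return 1.0
    u_prev, u_curr = 1.0, 2 * x
    for _ in range(n - 1):
        u_prev, u_curr = u_curr, 2 * x * u_curr - u_prev
    return u_curr

print(chebyshev_T(4, 0.3))   # 0.3448, i.e. 8*0.3**4 - 8*0.3**2 + 1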
Trigonometric definition
The Chebyshev polynomials of the first and second kind can be defined as the unique polynomials satisfying
T n ( cos θ ) = cos ( n θ ) {\displaystyle T_{n}(\cos \theta )=\cos(n\theta )}
and
U n ( cos θ ) = sin ( ( n + 1 ) θ ) sin θ , {\displaystyle U_{n}(\cos \theta )={\frac {\sin {\big (}(n+1)\theta {\big )}}{\sin \theta }},}
for n = 0, 1, 2, 3, ….
An equivalent way to state this is via exponentiation of a complex number: given a complex number z = a + bi with absolute value 1,
z n = T n ( a ) + i b U n − 1 ( a ) . {\displaystyle z^{n}=T_{n}(a)+ibU_{n-1}(a).}
Chebyshev polynomials can be defined in this form when studying trigonometric polynomials.[4]
That cos ( n x ) {\displaystyle \cos(nx)} is an n {\displaystyle n} th-degree polynomial in cos ( x ) {\displaystyle \cos(x)} can be seen by observing that cos ( n x ) {\displaystyle \cos(nx)} is the real part of one side of de Moivre's formula:
cos n θ + i sin n θ = ( cos θ + i sin θ ) n . {\displaystyle \cos n\theta +i\sin n\theta =(\cos \theta +i\sin \theta )^{n}.}
The real part of the other side is a polynomial in cos ( x ) {\displaystyle \cos(x)} and sin ( x ) {\displaystyle \sin(x)} , in which all powers of sin ( x ) {\displaystyle \sin(x)} are even and thus replaceable through the identity cos 2 ( x ) + sin 2 ( x ) = 1 {\displaystyle \cos ^{2}(x)+\sin ^{2}(x)=1} . By the same reasoning, sin ( n x ) {\displaystyle \sin(nx)} is the imaginary part of the polynomial, in which all powers of sin ( x ) {\displaystyle \sin(x)} are odd and thus, if one factor of sin ( x ) {\displaystyle \sin(x)} is factored out, the remaining factors can be replaced to create an ( n − 1 ) {\displaystyle (n-1)} st-degree polynomial in cos ( x ) {\displaystyle \cos(x)} .
For x {\displaystyle x} outside the interval [−1, 1], the above definition implies
T n ( x ) = { cos ( n arccos x ) if | x | ≤ 1 , cosh ( n arcosh x ) if x ≥ 1 , ( − 1 ) n cosh ( n arcosh ( − x ) ) if x ≤ − 1. {\displaystyle T_{n}(x)={\begin{cases}\cos(n\arccos x)&{\text{ if }}~|x|\leq 1,\\\cosh(n\operatorname {arcosh} x)&{\text{ if }}~x\geq 1,\\(-1)^{n}\cosh(n\operatorname {arcosh} (-x))&{\text{ if }}~x\leq -1.\end{cases}}}
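A quick numerical check of this piecewise form, using the Chebyshev basis polynomials from NumPy (numpy.polynomial.chebyshev.Chebyshev is a standard NumPy class; the helper T_trig is defined here only for the comparison):

import math
import numpy as np

def T_trig(n, x):
    # Piecewise trigonometric/hyperbolic form of T_n, valid for all real x.
    if abs(x) <= 1:
        return math.cos(n * math.acos(x))
    if x >= 1:
        return math.cosh(n * math.acosh(x))
    return (-1) ** n * math.cosh(n * math.acosh(-x))

# Compare against polynomial evaluation at points inside and outside [-1, 1].
for n in (3, 6):
    for x in (-2.5, -0.4, 0.9, 1.7):
        poly_value = np.polynomial.chebyshev.Chebyshev.basis(n)(x)
        assert math.isclose(T_trig(n, x), poly_value, rel_tol=1e-9)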
Commuting polynomials definition
Chebyshev polynomials can also be characterized by the following theorem:[5]
If F n ( x ) {\displaystyle F_{n}(x)} is a family of monic polynomials with coefficients in a field of characteristic 0 {\displaystyle 0} such that deg F n ( x ) = n {\displaystyle \deg F_{n}(x)=n} and F m ( F n ( x ) ) = F n ( F m ( x ) ) {\displaystyle F_{m}(F_{n}(x))=F_{n}(F_{m}(x))} for all m {\displaystyle m} and n {\displaystyle n} , then, up to a simple change of variables, either F n ( x ) = x n {\displaystyle F_{n}(x)=x^{n}} for all n {\displaystyle n} or F n ( x ) = 2 ⋅ T n ( x / 2 ) {\displaystyle F_{n}(x)=2\cdot T_{n}(x/2)} for all n {\displaystyle n} .
Pell equation definition
The Chebyshev polynomials can also be defined as the solutions to the Pell equation:
T n ( x ) 2 − ( x 2 − 1 ) U n − 1 ( x ) 2 = 1 {\displaystyle T_{n}(x)^{2}-\left(x^{2}-1\right)U_{n-1}(x)^{2}=1}
in a ring R [ x ] {\displaystyle R[x]} .[6] Thus, they can be generated by the standard technique for Pell equations of taking powers of a fundamental solution:
T n ( x ) + U n − 1 ( x ) x 2 − 1 = ( x + x 2 − 1 ) n . {\displaystyle T_{n}(x)+U_{n-1}(x)\,{\sqrt {x^{2}-1}}=\left(x+{\sqrt {x^{2}-1}}\right)^{n}~.}
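The Pell identity is easy to verify numerically. A minimal sketch, using ad-hoc recurrence helpers rather than any library routine:

def cheb_T(n, x):
    a, b = 1.0, x
    for _ in range(n):
        a, b = b, 2 * x * b - a
    return a              # after n steps, a holds T_n(x)

def cheb_U(n, x):
    a, b = 1.0, 2 * x
    for _ in range(n):
        a, b = b, 2 * x * b - a
    return a              # after n steps, a holds U_n(x)

# T_n(x)^2 - (x^2 - 1) U_{n-1}(x)^2 should equal 1 for every n >= 1 and every x.
for n in range(1, 8):
    for x in (-1.3, 0.2, 0.75, 2.0):
        pell = cheb_T(n, x) ** 2 - (x * x - 1) * cheb_U(n - 1, x) ** 2
        assert abs(pell - 1.0) < 1e-6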
Generating functions
The ordinary generating function for T n {\displaystyle T_{n}} is
∑ n = 0 ∞ T n ( x ) t n = 1 − t x 1 − 2 t x + t 2 . {\displaystyle \sum _{n=0}^{\infty }T_{n}(x)\,t^{n}={\frac {1-tx}{1-2tx+t^{2}}}.}
There are several other generating functions for the Chebyshev polynomials; the exponential generating function is
∑ n = 0 ∞ T n ( x ) t n n ! = 1 2 ( exp ( t ( x − x 2 − 1 ) ) + exp ( t ( x + x 2 − 1 ) ) ) = e t x cosh ( t x 2 − 1 ) . {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }T_{n}(x){\frac {t^{n}}{n!}}&={\tfrac {1}{2}}{\Bigl (}{\exp }{\Bigl (}{\textstyle t{\bigl (}x-{\sqrt {x^{2}-1}}~\!{\bigr )}}{\Bigr )}+{\exp }{\Bigl (}{\textstyle t{\bigl (}x+{\sqrt {x^{2}-1}}~\!{\bigr )}}{\Bigr )}{\Bigr )}\\&=e^{tx}\cosh \left({\textstyle t{\sqrt {x^{2}-1}}}~\!\right).\end{aligned}}}
The generating function relevant for 2-dimensional potential theory and multipole expansion is
∑ n = 1 ∞ T n ( x ) t n n = ln ( 1 1 − 2 t x + t 2 ) . {\displaystyle \sum \limits _{n=1}^{\infty }T_{n}(x)\,{\frac {t^{n}}{n}}=\ln \left({\frac {1}{\sqrt {1-2tx+t^{2}}}}\right).}
The ordinary generating function for Un is
∑ n = 0 ∞ U n ( x ) t n = 1 1 − 2 t x + t 2 , {\displaystyle \sum _{n=0}^{\infty }U_{n}(x)\,t^{n}={\frac {1}{1-2tx+t^{2}}},}
and the exponential generating function is
∑ n = 0 ∞ U n ( x ) t n n ! = e t x ( cosh ( t x 2 − 1 ) + x x 2 − 1 sinh ( t x 2 − 1 ) ) . {\displaystyle \sum _{n=0}^{\infty }U_{n}(x){\frac {t^{n}}{n!}}=e^{tx}{\biggl (}\!\cosh \left(t{\sqrt {x^{2}-1}}\right)+{\frac {x}{\sqrt {x^{2}-1}}}\sinh \left(t{\sqrt {x^{2}-1}}\right){\biggr )}.}
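The ordinary generating function for Un can be checked by truncating the series. A small sketch, assuming SciPy is available for eval_chebyu (a standard scipy.special routine):

from scipy.special import eval_chebyu

x, t, N = 0.3, 0.1, 30
partial_sum = sum(eval_chebyu(n, x) * t ** n for n in range(N))
closed_form = 1.0 / (1 - 2 * t * x + t * t)
# For |x| <= 1 and small t the truncated series matches the closed form to machine precision.
assert abs(partial_sum - closed_form) < 1e-12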
Relations between the two kinds of Chebyshev polynomials
The Chebyshev polynomials of the first and second kinds correspond to a complementary pair of Lucas sequences V ~ n ( P , Q ) {\displaystyle {\tilde {V}}_{n}(P,Q)} and U ~ n ( P , Q ) {\displaystyle {\tilde {U}}_{n}(P,Q)} with parameters P = 2 x {\displaystyle P=2x} and Q = 1 {\displaystyle Q=1} :
U ~ n ( 2 x , 1 ) = U n − 1 ( x ) , V ~ n ( 2 x , 1 ) = 2 T n ( x ) . {\displaystyle {\begin{aligned}{\tilde {U}}_{n}(2x,1)&=U_{n-1}(x),\\{\tilde {V}}_{n}(2x,1)&=2\,T_{n}(x).\end{aligned}}}
It follows that they also satisfy a pair of mutual recurrence equations:
T n + 1 ( x ) = x T n ( x ) − ( 1 − x 2 ) U n − 1 ( x ) , U n + 1 ( x ) = x U n ( x ) + T n + 1 ( x ) . {\displaystyle {\begin{aligned}T_{n+1}(x)&=x\,T_{n}(x)-(1-x^{2})\,U_{n-1}(x),\\U_{n+1}(x)&=x\,U_{n}(x)+T_{n+1}(x).\end{aligned}}}
The second of these may be rearranged using the recurrence definition for the Chebyshev polynomials of the second kind to give:
T n ( x ) = 1 2 ( U n ( x ) − U n − 2 ( x ) ) . {\displaystyle T_{n}(x)={\frac {1}{2}}{\big (}U_{n}(x)-U_{n-2}(x){\big )}.}
Using this formula iteratively gives the sum formula:
U n ( x ) = { 2 ∑ odd j > 0 n T j ( x ) for odd n , 2 ∑ even j ≥ 0 n T j ( x ) − 1 for even n , {\displaystyle U_{n}(x)={\begin{cases}2\sum _{{\text{ odd }}j>0}^{n}T_{j}(x)&{\text{ for odd }}n,\\2\sum _{{\text{ even }}j\geq 0}^{n}T_{j}(x)-1&{\text{ for even }}n,\end{cases}}}
while replacing U n ( x ) {\displaystyle U_{n}(x)} and U n − 2 ( x ) {\displaystyle U_{n-2}(x)} using the derivative formula for T n ( x ) {\displaystyle T_{n}(x)} gives the recurrence relationship for the derivative of T n {\displaystyle T_{n}} :
2 T n ( x ) = 1 n + 1 d d x T n + 1 ( x ) − 1 n − 1 d d x T n − 1 ( x ) , n = 2 , 3 , … {\displaystyle 2\,T_{n}(x)={\frac {1}{n+1}}\,{\frac {\mathrm {d} }{\mathrm {d} x}}\,T_{n+1}(x)-{\frac {1}{n-1}}\,{\frac {\mathrm {d} }{\mathrm {d} x}}\,T_{n-1}(x),\qquad n=2,3,\ldots }
This relationship is used in the Chebyshev spectral method of solving differential equations.
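Both the relation T_n = (U_n − U_{n−2})/2 and the derivative recurrence above are easy to spot-check numerically. A sketch assuming NumPy and SciPy are available (Chebyshev.basis and eval_chebyt/eval_chebyu are standard routines in those libraries):

import numpy as np
from scipy.special import eval_chebyt, eval_chebyu

x = np.linspace(-0.9, 0.9, 7)
T = np.polynomial.chebyshev.Chebyshev.basis

for n in range(2, 8):
    # T_n = (U_n - U_{n-2}) / 2
    assert np.allclose(eval_chebyt(n, x),
                       0.5 * (eval_chebyu(n, x) - eval_chebyu(n - 2, x)))
    # 2 T_n = T'_{n+1}/(n+1) - T'_{n-1}/(n-1)
    lhs = 2 * eval_chebyt(n, x)
    rhs = T(n + 1).deriv()(x) / (n + 1) - T(n - 1).deriv()(x) / (n - 1)
    assert np.allclose(lhs, rhs)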
Turán's inequalities for the Chebyshev polynomials are:[8]
T n ( x ) 2 − T n − 1 ( x ) T n + 1 ( x ) = 1 − x 2 > 0 for − 1 < x < 1 and U n ( x ) 2 − U n − 1 ( x ) U n + 1 ( x ) = 1 > 0 . {\displaystyle {\begin{aligned}T_{n}(x)^{2}-T_{n-1}(x)\,T_{n+1}(x)&=1-x^{2}>0&&{\text{ for }}-1<x<1&&{\text{ and }}\\U_{n}(x)^{2}-U_{n-1}(x)\,U_{n+1}(x)&=1>0~.\end{aligned}}}
The integral relations are[10]
∫ − 1 1 T n ( y ) y − x d y 1 − y 2 = π U n − 1 ( x ) , ∫ − 1 1 U n − 1 ( y ) y − x 1 − y 2 d y = − π T n ( x ) {\displaystyle {\begin{aligned}\int _{-1}^{1}{\frac {T_{n}(y)}{y-x}}\,{\frac {\mathrm {d} y}{\sqrt {1-y^{2}}}}&=\pi \,U_{n-1}(x)~,\\[1.5ex]\int _{-1}^{1}{\frac {U_{n-1}(y)}{y-x}}\,{\sqrt {1-y^{2}}}\mathrm {d} y&=-\pi \,T_{n}(x)\end{aligned}}}
where the integrals are understood as Cauchy principal values.
Explicit expressions
Using the complex number exponentiation definition of the Chebyshev polynomial, one can derive the following expressions, valid for any real x {\displaystyle x} :[citation needed]
T n ( x ) = 1 2 ( ( x − x 2 − 1 ) n + ( x + x 2 − 1 ) n ) = 1 2 ( ( x − x 2 − 1 ) n + ( x − x 2 − 1 ) − n ) . {\displaystyle {\begin{aligned}T_{n}(x)&={\tfrac {1}{2}}{\Big (}{\bigl (}{\textstyle x-{\sqrt {x^{2}-1}}\!~}{\bigr )}^{n}+{\bigl (}{\textstyle x+{\sqrt {x^{2}-1}}\!~}{\bigr )}^{n}{\Big )}\\[5mu]&={\tfrac {1}{2}}{\Big (}{\bigl (}{\textstyle x-{\sqrt {x^{2}-1}}\!~}{\bigr )}^{n}+{\bigl (}{\textstyle x-{\sqrt {x^{2}-1}}\!~}{\bigr )}^{-n}{\Big )}.\end{aligned}}}
The two are equivalent because ( x + x 2 − 1 ) ( x − x 2 − 1 ) = 1 {\displaystyle \textstyle {\bigl (}x+{\sqrt {x^{2}-1}}\!~{\bigr )}{\bigl (}x-{\sqrt {x^{2}-1}}\!~{\bigr )}=1} .
An explicit form of the Chebyshev polynomial in terms of monomials x k {\displaystyle x^{k}} follows from de Moivre's formula:
T n ( cos ( θ ) ) = Re ( cos n θ + i sin n θ ) = Re ( ( cos θ + i sin θ ) n ) , {\displaystyle T_{n}(\cos(\theta ))=\operatorname {Re} (\cos n\theta +i\sin n\theta )=\operatorname {Re} ((\cos \theta +i\sin \theta )^{n}),}
where R e {\displaystyle \mathrm {Re} } denotes the real part of a complex number. Expanding the formula, one gets
( cos θ + i sin θ ) n = ∑ j = 0 n ( n j ) i j sin j θ cos n − j θ . {\displaystyle (\cos \theta +i\sin \theta )^{n}=\sum \limits _{j=0}^{n}{\binom {n}{j}}i^{j}\sin ^{j}\theta \cos ^{n-j}\theta .}
The real part of the expression is obtained from summands corresponding to even indices. Noting i 2 j = ( − 1 ) j {\displaystyle i^{2j}=(-1)^{j}} and sin 2 j θ = ( 1 − cos 2 θ ) j {\displaystyle \sin ^{2j}\theta =(1-\cos ^{2}\theta )^{j}} , one gets the explicit formula:
cos n θ = ∑ j = 0 ⌊ n / 2 ⌋ ( n 2 j ) ( cos 2 θ − 1 ) j cos n − 2 j θ , {\displaystyle \cos n\theta =\sum \limits _{j=0}^{\lfloor n/2\rfloor }{\binom {n}{2j}}(\cos ^{2}\theta -1)^{j}\cos ^{n-2j}\theta ,}
which in turn means that
T n ( x ) = ∑ j = 0 ⌊ n / 2 ⌋ ( n 2 j ) ( x 2 − 1 ) j x n − 2 j . {\displaystyle T_{n}(x)=\sum \limits _{j=0}^{\lfloor n/2\rfloor }{\binom {n}{2j}}(x^{2}-1)^{j}x^{n-2j}.}
This can be written as a 2F1 hypergeometric function:
T n ( x ) = ∑ k = 0 ⌊ n 2 ⌋ ( n 2 k ) ( x 2 − 1 ) k x n − 2 k = x n ∑ k = 0 ⌊ n 2 ⌋ ( n 2 k ) ( 1 − x − 2 ) k = n 2 ∑ k = 0 ⌊ n 2 ⌋ ( − 1 ) k ( n − k − 1 ) ! k ! ( n − 2 k ) ! ( 2 x ) n − 2 k for n > 0 = n ∑ k = 0 n ( − 2 ) k ( n + k − 1 ) ! ( n − k ) ! ( 2 k ) ! ( 1 − x ) k for n > 0 = 2 F 1 ( − n , n ; 1 2 ; 1 2 ( 1 − x ) ) {\displaystyle {\begin{aligned}T_{n}(x)&=\sum _{k=0}^{\left\lfloor {\frac {n}{2}}\right\rfloor }{\binom {n}{2k}}\left(x^{2}-1\right)^{k}x^{n-2k}\\&=x^{n}\sum _{k=0}^{\left\lfloor {\frac {n}{2}}\right\rfloor }{\binom {n}{2k}}\left(1-x^{-2}\right)^{k}\\&={\frac {n}{2}}\sum _{k=0}^{\left\lfloor {\frac {n}{2}}\right\rfloor }(-1)^{k}{\frac {(n-k-1)!}{k!(n-2k)!}}~(2x)^{n-2k}\quad {\text{ for }}n>0\\\\&=n\sum _{k=0}^{n}(-2)^{k}{\frac {(n+k-1)!}{(n-k)!(2k)!}}(1-x)^{k}\quad {\text{ for }}n>0\\\\&={}_{2}F_{1}\!\left(-n,n;{\tfrac {1}{2}};{\tfrac {1}{2}}(1-x)\right)\\\end{aligned}}}
Conversely, the monomials can be expanded in Chebyshev polynomials of the first kind; the inverse relation is
x n = 2 1 − n ∑ ′ j = 0 j ≡ n ( mod 2 ) n ( n n − j 2 ) T j ( x ) , {\displaystyle x^{n}=2^{1-n}\mathop {{\sum }'} _{j=0 \atop j\equiv n{\pmod {2}}}^{n}\!\!{\binom {n}{\tfrac {n-j}{2}}}\!\;T_{j}(x),}
where the prime at the summation symbol indicates that the contribution of j = 0 {\displaystyle j=0} needs to be halved if it appears.
A related expression for T n {\displaystyle T_{n}} as a sum of monomials with binomial coefficients and powers of two is
T n ( x ) = ∑ m = 0 ⌊ n 2 ⌋ ( − 1 ) m ( ( n − m m ) + ( n − m − 1 n − 2 m ) ) ⋅ 2 n − 2 m − 1 ⋅ x n − 2 m . {\displaystyle T_{n}(x)=\sum \limits _{m=0}^{\left\lfloor {\frac {n}{2}}\right\rfloor }(-1)^{m}\left({\binom {n-m}{m}}+{\binom {n-m-1}{n-2m}}\right)\cdot 2^{n-2m-1}\cdot x^{n-2m}.}
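Both explicit monomial expansions of T_n can be checked against the tabulated polynomials. A short sketch using only the Python standard library (math.comb returns zero when the lower index exceeds the upper one, which silently handles the vanishing binomial terms):

import math

def T_monomial(n, x):
    # First form: sum over j of C(n, 2j) (x^2 - 1)^j x^(n - 2j).
    return sum(math.comb(n, 2 * j) * (x * x - 1) ** j * x ** (n - 2 * j)
               for j in range(n // 2 + 1))

def T_binomial(n, x):
    # Second form, with binomial coefficients and powers of two (valid for n >= 1).
    return sum((-1) ** m
               * (math.comb(n - m, m) + math.comb(n - m - 1, n - 2 * m))
               * 2 ** (n - 2 * m - 1) * x ** (n - 2 * m)
               for m in range(n // 2 + 1))

x = 0.37
reference = 16 * x**5 - 20 * x**3 + 5 * x        # T_5(x)
assert abs(T_monomial(5, x) - reference) < 1e-12
assert abs(T_binomial(5, x) - reference) < 1e-12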
Similarly, U n {\displaystyle U_{n}} can be expressed in terms of hypergeometric functions:
U n ( x ) = ( x + x 2 − 1 ) n + 1 − ( x − x 2 − 1 ) n + 1 2 x 2 − 1 = ∑ k = 0 ⌊ n / 2 ⌋ ( n + 1 2 k + 1 ) ( x 2 − 1 ) k x n − 2 k = x n ∑ k = 0 ⌊ n / 2 ⌋ ( n + 1 2 k + 1 ) ( 1 − x − 2 ) k = ∑ k = 0 ⌊ n / 2 ⌋ ( 2 k − ( n + 1 ) k ) ( 2 x ) n − 2 k for n > 0 = ∑ k = 0 ⌊ n / 2 ⌋ ( − 1 ) k ( n − k k ) ( 2 x ) n − 2 k for n > 0 = ∑ k = 0 n ( − 2 ) k ( n + k + 1 ) ! ( n − k ) ! ( 2 k + 1 ) ! ( 1 − x ) k for n > 0 = ( n + 1 ) 2 F 1 ( − n , n + 2 ; 3 2 ; 1 2 ( 1 − x ) ) . {\displaystyle {\begin{aligned}U_{n}(x)&={\frac {\left(x+{\sqrt {x^{2}-1}}\right)^{n+1}-\left(x-{\sqrt {x^{2}-1}}\right)^{n+1}}{2{\sqrt {x^{2}-1}}}}\\&=\sum _{k=0}^{\left\lfloor {n}/{2}\right\rfloor }{\binom {n+1}{2k+1}}\left(x^{2}-1\right)^{k}x^{n-2k}\\&=x^{n}\sum _{k=0}^{\left\lfloor {n}/{2}\right\rfloor }{\binom {n+1}{2k+1}}\left(1-x^{-2}\right)^{k}\\&=\sum _{k=0}^{\left\lfloor {n}/{2}\right\rfloor }{\binom {2k-(n+1)}{k}}~(2x)^{n-2k}&{\text{ for }}n>0\\&=\sum _{k=0}^{\left\lfloor {n}/{2}\right\rfloor }(-1)^{k}{\binom {n-k}{k}}~(2x)^{n-2k}&{\text{ for }}n>0\\&=\sum _{k=0}^{n}(-2)^{k}{\frac {(n+k+1)!}{(n-k)!(2k+1)!}}(1-x)^{k}&{\text{ for }}n>0\\&=(n+1)\,{}_{2}F_{1}{\big (}-n,n+2;{\tfrac {3}{2}};{\tfrac {1}{2}}(1-x){\big )}.\end{aligned}}}
Under the reflection x ↦ −x, the Chebyshev polynomials of both kinds satisfy:
T n ( − x ) = ( − 1 ) n T n ( x ) , U n ( − x ) = ( − 1 ) n U n ( x ) . {\displaystyle {\begin{aligned}T_{n}(-x)&=(-1)^{n}\,T_{n}(x),\\[1ex]U_{n}(-x)&=(-1)^{n}\,U_{n}(x).\end{aligned}}}
That is, Chebyshev polynomials of even order have even symmetry and therefore contain only even powers of x {\displaystyle x} . Chebyshev polynomials of odd order have odd symmetry and therefore contain only odd powers of x {\displaystyle x} .
A Chebyshev polynomial of either kind with degree n has n different simple roots, called Chebyshev roots, in the interval [−1, 1]. The roots of the Chebyshev polynomial of the first kind are sometimes called Chebyshev nodes because they are used as nodes in polynomial interpolation. Using the trigonometric definition and the fact that:
cos ( ( 2 k + 1 ) π 2 ) = 0 {\displaystyle \cos \left((2k+1){\frac {\pi }{2}}\right)=0}
one can show that the roots of T n {\displaystyle T_{n}} are:
x k = cos ( π ( k + 1 / 2 ) n ) , k = 0 , … , n − 1. {\displaystyle x_{k}=\cos \left({\frac {\pi (k+1/2)}{n}}\right),\quad k=0,\ldots ,n-1.}
Similarly, the roots of U n {\displaystyle U_{n}} are:
x k = cos ( k n + 1 π ) , k = 1 , … , n . {\displaystyle x_{k}=\cos \left({\frac {k}{n+1}}\pi \right),\quad k=1,\ldots ,n.}
The extrema of T n {\displaystyle T_{n}} on the interval − 1 ≤ x ≤ 1 {\displaystyle -1\leq x\leq 1} are located at:
x k = cos ( k n π ) , k = 0 , … , n . {\displaystyle x_{k}=\cos \left({\frac {k}{n}}\pi \right),\quad k=0,\ldots ,n.}
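These closed forms for the roots and extrema are convenient to generate directly. A brief sketch with NumPy (Chebyshev.basis is a standard NumPy class method; the variable names are illustrative):

import numpy as np

n = 6
# Roots of T_n (the Chebyshev nodes): cos(pi (k + 1/2) / n), k = 0, ..., n - 1.
roots_T = np.cos(np.pi * (np.arange(n) + 0.5) / n)
# Extrema of T_n on [-1, 1]: cos(pi k / n), k = 0, ..., n.
extrema_T = np.cos(np.pi * np.arange(n + 1) / n)

Tn = np.polynomial.chebyshev.Chebyshev.basis(n)
print(np.max(np.abs(Tn(roots_T))))    # essentially zero: the nodes are the zeros of T_6
print(Tn(extrema_T))                  # alternates between +1 and -1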
One unique property of the Chebyshev polynomials of the first kind is that on the interval − 1 ≤ x ≤ 1 {\displaystyle -1\leq x\leq 1} all of the extrema have values that are either −1 or 1. Thus these polynomials have only two finite critical values, the defining property of Shabat polynomials. Both the first and second kinds of Chebyshev polynomial have extrema at the endpoints, given by:
T n ( 1 ) = 1 T n ( − 1 ) = ( − 1 ) n U n ( 1 ) = n + 1 U n ( − 1 ) = ( − 1 ) n ( n + 1 ) . {\displaystyle {\begin{aligned}T_{n}(1)&=1\\T_{n}(-1)&=(-1)^{n}\\U_{n}(1)&=n+1\\U_{n}(-1)&=(-1)^{n}(n+1).\end{aligned}}}
The extrema of T n ( x ) {\displaystyle T_{n}(x)} on the interval − 1 ≤ x ≤ 1 {\displaystyle -1\leq x\leq 1} where n > 0 {\displaystyle n>0} are located at n + 1 {\displaystyle n+1} values of x {\displaystyle x} . They are ± 1 {\displaystyle \pm 1} , or cos ( 2 π k d ) {\displaystyle \cos \left({\frac {2\pi k}{d}}\right)} where d > 2 {\displaystyle d>2} , d | 2 n {\displaystyle d\;|\;2n} , 0 < k < d / 2 {\displaystyle 0<k<d/2} and ( k , d ) = 1 {\displaystyle (k,d)=1} , i.e., k {\displaystyle k} and d {\displaystyle d} are relatively prime numbers.
Specifically (see Minimal polynomial of 2cos(2π/n)),[13][14] when n {\displaystyle n} is even:
When n {\displaystyle n} is odd:
Differentiation and integration
The derivatives of the polynomials can be less than straightforward. By differentiating the polynomials in their trigonometric forms, it can be shown that:
d T n d x = n U n − 1 d U n d x = ( n + 1 ) T n + 1 − x U n x 2 − 1 d 2 T n d x 2 = n n T n − x U n − 1 x 2 − 1 = n ( n + 1 ) T n − U n x 2 − 1 . {\displaystyle {\begin{aligned}{\frac {\mathrm {d} T_{n}}{\mathrm {d} x}}&=nU_{n-1}\\{\frac {\mathrm {d} U_{n}}{\mathrm {d} x}}&={\frac {(n+1)T_{n+1}-xU_{n}}{x^{2}-1}}\\{\frac {\mathrm {d} ^{2}T_{n}}{\mathrm {d} x^{2}}}&=n\,{\frac {nT_{n}-xU_{n-1}}{x^{2}-1}}=n\,{\frac {(n+1)T_{n}-U_{n}}{x^{2}-1}}.\end{aligned}}}
The last two formulas can be numerically troublesome due to the division by zero (0/0 indeterminate form, specifically) at x = 1 {\displaystyle x=1} and x = − 1 {\displaystyle x=-1} . By L'Hôpital's rule:
d 2 T n d x 2 | x = 1 = n 4 − n 2 3 , d 2 T n d x 2 | x = − 1 = ( − 1 ) n n 4 − n 2 3 . {\displaystyle {\begin{aligned}\left.{\frac {\mathrm {d} ^{2}T_{n}}{\mathrm {d} x^{2}}}\right|_{x=1}\!\!&={\frac {n^{4}-n^{2}}{3}},\\\left.{\frac {\mathrm {d} ^{2}T_{n}}{\mathrm {d} x^{2}}}\right|_{x=-1}\!\!&=(-1)^{n}{\frac {n^{4}-n^{2}}{3}}.\end{aligned}}}
More generally,
d p T n d x p | x = ± 1 = ( ± 1 ) n + p ∏ k = 0 p − 1 n 2 − k 2 2 k + 1 , {\displaystyle \left.{\frac {\mathrm {d} ^{p}T_{n}}{\mathrm {d} x^{p}}}\right|_{x=\pm 1}\!\!=(\pm 1)^{n+p}\prod _{k=0}^{p-1}{\frac {n^{2}-k^{2}}{2k+1}}~,}
which is of great use in the numerical solution of eigenvalue problems.
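The endpoint formula can be compared with derivatives computed directly from the polynomial coefficients. A sketch assuming NumPy (Chebyshev.basis(n).deriv(p) differentiates the basis polynomial p times):

import math
import numpy as np

def endpoint_derivative(n, p, sign):
    # Closed form for the p-th derivative of T_n at x = +1 (sign = +1) or x = -1 (sign = -1).
    value = 1.0
    for k in range(p):
        value *= (n * n - k * k) / (2 * k + 1)
    return (sign ** (n + p)) * value

n, p = 7, 3
Tn_deriv_p = np.polynomial.chebyshev.Chebyshev.basis(n).deriv(p)
assert math.isclose(Tn_deriv_p(1.0), endpoint_derivative(n, p, +1), rel_tol=1e-10)
assert math.isclose(Tn_deriv_p(-1.0), endpoint_derivative(n, p, -1), rel_tol=1e-10)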
Also, we have:
d p d x p T n ( x ) = 2 p n ∑ ′ 0 ≤ k ≤ n − p k ≡ n − p ( mod 2 ) ( n + p − k 2 − 1 n − p − k 2 ) ( n + p + k 2 − 1 ) ! ( n − p + k 2 ) ! T k ( x ) , p ≥ 1 , {\displaystyle {\frac {\mathrm {d} ^{p}}{\mathrm {d} x^{p}}}\,T_{n}(x)=2^{p}\,n\mathop {{\sum }'} _{0\leq k\leq n-p \atop k\,\equiv \,n-p{\pmod {2}}}{\binom {{\frac {n+p-k}{2}}-1}{\frac {n-p-k}{2}}}{\frac {\left({\frac {n+p+k}{2}}-1\right)!}{\left({\frac {n-p+k}{2}}\right)!}}\,T_{k}(x),~\qquad p\geq 1,}
where the prime at the summation symbols means that the term contributed by k = 0 is to be halved, if it appears.
Concerning integration, the first derivative of the Tn implies that:
∫ U n d x = T n + 1 n + 1 {\displaystyle \int U_{n}\,\mathrm {d} x={\frac {T_{n+1}}{n+1}}}
and the recurrence relation for the first kind polynomials involving derivatives establishes that for n ≥ 2 {\displaystyle n\geq 2} :
∫ T n d x = 1 2 ( T n + 1 n + 1 − T n − 1 n − 1 ) = n T n + 1 n 2 − 1 − x T n n − 1 . {\displaystyle \int T_{n}\,\mathrm {d} x={\frac {1}{2}}\,\left({\frac {T_{n+1}}{n+1}}-{\frac {T_{n-1}}{n-1}}\right)={\frac {n\,T_{n+1}}{n^{2}-1}}-{\frac {x\,T_{n}}{n-1}}.}
The last formula can be further manipulated to express the integral of T n {\displaystyle T_{n}} as a function of Chebyshev polynomials of the first kind only:
∫ T n d x = n n 2 − 1 T n + 1 − 1 n − 1 T 1 T n = n n 2 − 1 T n + 1 − 1 2 ( n − 1 ) ( T n + 1 + T n − 1 ) = 1 2 ( n + 1 ) T n + 1 − 1 2 ( n − 1 ) T n − 1 . {\displaystyle {\begin{aligned}\int T_{n}\,\mathrm {d} x&={\frac {n}{n^{2}-1}}T_{n+1}-{\frac {1}{n-1}}T_{1}T_{n}\\&={\frac {n}{n^{2}-1}}\,T_{n+1}-{\frac {1}{2(n-1)}}\,(T_{n+1}+T_{n-1})\\&={\frac {1}{2(n+1)}}\,T_{n+1}-{\frac {1}{2(n-1)}}\,T_{n-1}.\end{aligned}}}
Furthermore, we have:
∫ − 1 1 T n ( x ) d x = { ( − 1 ) n + 1 1 − n 2 if n ≠ 1 0 if n = 1. {\displaystyle \int _{-1}^{1}T_{n}(x)\,\mathrm {d} x={\begin{cases}{\frac {(-1)^{n}+1}{1-n^{2}}}&{\text{ if }}~n\neq 1\\0&{\text{ if }}~n=1.\end{cases}}}
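This definite integral can be reproduced by integrating the Chebyshev series coefficients. A small check with NumPy (Chebyshev.basis(n).integ() returns an antiderivative as another Chebyshev series):

import numpy as np

T = np.polynomial.chebyshev.Chebyshev.basis

def integral_exact(n):
    # ((-1)^n + 1) / (1 - n^2) for n != 1, and 0 for n = 1.
    return 0.0 if n == 1 else ((-1) ** n + 1) / (1 - n * n)

for n in range(8):
    antiderivative = T(n).integ()
    numeric = antiderivative(1.0) - antiderivative(-1.0)
    assert abs(numeric - integral_exact(n)) < 1e-12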
Products of Chebyshev polynomials
The Chebyshev polynomials of the first kind satisfy the relation:
T m ( x ) T n ( x ) = 1 2 ( T m + n ( x ) + T | m − n | ( x ) ) , ∀ m , n ≥ 0 , {\displaystyle T_{m}(x)\,T_{n}(x)={\tfrac {1}{2}}\!\left(T_{m+n}(x)+T_{|m-n|}(x)\right)\!,\qquad \forall m,n\geq 0,}
which is easily proved from the product-to-sum formula for the cosine:
2 cos α cos β = cos ( α + β ) + cos ( α − β ) . {\displaystyle 2\cos \alpha \,\cos \beta =\cos(\alpha +\beta )+\cos(\alpha -\beta ).}
For n = 1 {\displaystyle n=1} this reproduces the recurrence formula already given, merely arranged differently. For n = 2 {\displaystyle n=2} it gives a recurrence relation connecting all Chebyshev polynomials of even index, or all of odd index (depending on the parity of the lowest m), which implies the evenness or oddness of these polynomials. Three more useful formulas for evaluating Chebyshev polynomials can be concluded from this product expansion:
T 2 n ( x ) = 2 T n 2 ( x ) − T 0 ( x ) = 2 T n 2 ( x ) − 1 , T 2 n + 1 ( x ) = 2 T n + 1 ( x ) T n ( x ) − T 1 ( x ) = 2 T n + 1 ( x ) T n ( x ) − x , T 2 n − 1 ( x ) = 2 T n − 1 ( x ) T n ( x ) − T 1 ( x ) = 2 T n − 1 ( x ) T n ( x ) − x . {\displaystyle {\begin{aligned}T_{2n}(x)&=2\,T_{n}^{2}(x)-T_{0}(x)&&=2T_{n}^{2}(x)-1,\\T_{2n+1}(x)&=2\,T_{n+1}(x)\,T_{n}(x)-T_{1}(x)&&=2\,T_{n+1}(x)\,T_{n}(x)-x,\\T_{2n-1}(x)&=2\,T_{n-1}(x)\,T_{n}(x)-T_{1}(x)&&=2\,T_{n-1}(x)\,T_{n}(x)-x.\end{aligned}}}
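The first of these, T_{2n} = 2T_n^2 − 1, is the basis of evaluation by repeated squaring. A quick check with SciPy's eval_chebyt (a standard scipy.special routine):

from scipy.special import eval_chebyt

x = 0.61
for n in range(1, 10):
    assert abs(eval_chebyt(2 * n, x) - (2 * eval_chebyt(n, x) ** 2 - 1)) < 1e-12
    assert abs(eval_chebyt(2 * n + 1, x)
               - (2 * eval_chebyt(n + 1, x) * eval_chebyt(n, x) - x)) < 1e-12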
The polynomials of the second kind satisfy the similar relation:
T m ( x ) U n ( x ) = { 1 2 ( U m + n ( x ) + U n − m ( x ) ) , if n ≥ m − 1 , 1 2 ( U m + n ( x ) − U m − n − 2 ( x ) ) , if n ≤ m − 2. {\displaystyle T_{m}(x)\,U_{n}(x)={\begin{cases}{\frac {1}{2}}\left(U_{m+n}(x)+U_{n-m}(x)\right),&~{\text{ if }}~n\geq m-1,\\\\{\frac {1}{2}}\left(U_{m+n}(x)-U_{m-n-2}(x)\right),&~{\text{ if }}~n\leq m-2.\end{cases}}}
(with the convention U − 1 ≡ 0 {\displaystyle U_{-1}\equiv 0} ). They also satisfy:
U m ( x ) U n ( x ) = ∑ k = 0 n U m − n + 2 k ( x ) = ∑ p = m − n step 2 m + n U p ( x ) . {\displaystyle U_{m}(x)\,U_{n}(x)=\sum _{k=0}^{n}\,U_{m-n+2k}(x)=\sum _{\underset {\text{ step 2 }}{p=m-n}}^{m+n}U_{p}(x)~.}
for m ≥ n {\displaystyle m\geq n} . For n = 2 {\displaystyle n=2} this recurrence reduces to:
U m + 2 ( x ) = U 2 ( x ) U m ( x ) − U m ( x ) − U m − 2 ( x ) = U m ( x ) ( U 2 ( x ) − 1 ) − U m − 2 ( x ) , {\displaystyle U_{m+2}(x)=U_{2}(x)\,U_{m}(x)-U_{m}(x)-U_{m-2}(x)=U_{m}(x)\,{\big (}U_{2}(x)-1{\big )}-U_{m-2}(x)~,}
which establishes the evenness or oddness of the even- or odd-indexed Chebyshev polynomials of the second kind, depending on whether m {\displaystyle m} starts at 2 or at 3.
Composition and divisibility properties
The trigonometric definitions of T n {\displaystyle T_{n}} and U n {\displaystyle U_{n}} imply the composition or nesting properties:[15]
T m n ( x ) = T m ( T n ( x ) ) , U m n − 1 ( x ) = U m − 1 ( T n ( x ) ) U n − 1 ( x ) . {\displaystyle {\begin{aligned}T_{mn}(x)&=T_{m}(T_{n}(x)),\\U_{mn-1}(x)&=U_{m-1}(T_{n}(x))U_{n-1}(x).\end{aligned}}}
For T m n {\displaystyle T_{mn}} the order of composition may be reversed, making the family of polynomial functions T n {\displaystyle T_{n}} a commutative semigroup under composition.
Since T m ( x ) {\displaystyle T_{m}(x)} is divisible by x {\displaystyle x} if m {\displaystyle m} is odd, it follows that T m n ( x ) {\displaystyle T_{mn}(x)} is divisible by T n ( x ) {\displaystyle T_{n}(x)} if m {\displaystyle m} is odd. Furthermore, U m n − 1 ( x ) {\displaystyle U_{mn-1}(x)} is divisible by U n − 1 ( x ) {\displaystyle U_{n-1}(x)} , and in the case that m {\displaystyle m} is even, divisible by T n ( x ) U n − 1 ( x ) {\displaystyle T_{n}(x)U_{n-1}(x)} .
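The nesting property T_mn = T_m ∘ T_n is straightforward to verify pointwise. A sketch with NumPy:

import numpy as np

T = np.polynomial.chebyshev.Chebyshev.basis
x = np.linspace(-1, 1, 11)

# T_{mn}(x) = T_m(T_n(x)) for several (m, n) pairs.
for m, n in [(2, 3), (3, 4), (5, 2)]:
    assert np.allclose(T(m * n)(x), T(m)(T(n)(x)))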
Both T n {\displaystyle T_{n}} and U n {\displaystyle U_{n}} form sequences of orthogonal polynomials. The polynomials of the first kind T n {\displaystyle T_{n}} are orthogonal with respect to the weight:
1 1 − x 2 , {\displaystyle {\frac {1}{\sqrt {1-x^{2}}}},}
on the interval [−1, 1], i.e. we have:
∫ − 1 1 T n ( x ) T m ( x ) d x 1 − x 2 = { 0 if n ≠ m , π if n = m = 0 , π 2 if n = m ≠ 0. {\displaystyle \int _{-1}^{1}T_{n}(x)\,T_{m}(x)\,{\frac {\mathrm {d} x}{\sqrt {1-x^{2}}}}={\begin{cases}0&~{\text{ if }}~n\neq m,\\[5mu]\pi &~{\text{ if }}~n=m=0,\\[5mu]{\frac {\pi }{2}}&~{\text{ if }}~n=m\neq 0.\end{cases}}}
This can be proven by letting x = cos ( θ ) {\displaystyle x=\cos(\theta )} and using the defining identity T n ( cos ( θ ) ) = cos ( n θ ) {\displaystyle T_{n}(\cos(\theta ))=\cos(n\theta )} .
Similarly, the polynomials of the second kind Un are orthogonal with respect to the weight:
1 − x 2 {\displaystyle {\sqrt {1-x^{2}}}} on the interval [−1, 1], i.e. we have:
∫ − 1 1 U n ( x ) U m ( x ) 1 − x 2 d x = { 0 if n ≠ m , π 2 if n = m . {\displaystyle \int _{-1}^{1}U_{n}(x)\,U_{m}(x)\,{\sqrt {1-x^{2}}}\,\mathrm {d} x={\begin{cases}0&~{\text{ if }}~n\neq m,\\[5mu]{\frac {\pi }{2}}&~{\text{ if }}~n=m.\end{cases}}}
(The measure 1 − x 2 d x {\displaystyle {\sqrt {1-x^{2}}}\,dx} is, to within a normalizing constant, the Wigner semicircle distribution.)
These orthogonality properties follow from the fact that the Chebyshev polynomials solve the Chebyshev differential equations:
( 1 − x 2 ) T n ″ − x T n ′ + n 2 T n = 0 , ( 1 − x 2 ) U n ″ − 3 x U n ′ + n ( n + 2 ) U n = 0 , {\displaystyle {\begin{aligned}(1-x^{2})T_{n}''-xT_{n}'+n^{2}T_{n}&=0,\\[1ex](1-x^{2})U_{n}''-3xU_{n}'+n(n+2)U_{n}&=0,\end{aligned}}} which are Sturm–Liouville differential equations. It is a general feature of such differential equations that there is a distinguished orthonormal set of solutions. (Another way to define the Chebyshev polynomials is as the solutions to those equations.)
The T n {\displaystyle T_{n}} also satisfy a discrete orthogonality condition:
∑ k = 0 N − 1 T i ( x k ) T j ( x k ) = { 0 if i ≠ j , N if i = j = 0 , N 2 if i = j ≠ 0 , {\displaystyle \sum _{k=0}^{N-1}{T_{i}(x_{k})\,T_{j}(x_{k})}={\begin{cases}0&~{\text{ if }}~i\neq j,\\[5mu]N&~{\text{ if }}~i=j=0,\\[5mu]{\frac {N}{2}}&~{\text{ if }}~i=j\neq 0,\end{cases}}}
where N {\displaystyle N} is any integer greater than max ( i , j ) {\displaystyle \max(i,j)} ,[10] and the x k {\displaystyle x_{k}} are the N {\displaystyle N} Chebyshev nodes (see above) of T N ( x ) {\displaystyle T_{N}(x)} :
x k = cos ( π 2 k + 1 2 N ) for k = 0 , 1 , … , N − 1. {\displaystyle x_{k}=\cos \left(\pi \,{\frac {2k+1}{2N}}\right)\quad ~{\text{ for }}~k=0,1,\dots ,N-1.}
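The discrete orthogonality condition for the first kind can be demonstrated in a few lines. A sketch assuming SciPy for eval_chebyt (any other evaluation of T_n would do):

import numpy as np
from scipy.special import eval_chebyt

N = 9
k = np.arange(N)
x = np.cos(np.pi * (2 * k + 1) / (2 * N))     # the N Chebyshev nodes of T_N

def discrete_inner(i, j):
    return np.sum(eval_chebyt(i, x) * eval_chebyt(j, x))

print(discrete_inner(3, 5))   # ~0     (i != j)
print(discrete_inner(0, 0))   # N      (i = j = 0)
print(discrete_inner(4, 4))   # N / 2  (i = j != 0)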
For the polynomials of the second kind and any integer N > i + j {\displaystyle N>i+j} with the same Chebyshev nodes x k {\displaystyle x_{k}} , there are similar sums:
∑ k = 0 N − 1 U i ( x k ) U j ( x k ) ( 1 − x k 2 ) = { 0 if i ≠ j , N 2 if i = j , {\displaystyle \sum _{k=0}^{N-1}{U_{i}(x_{k})\,U_{j}(x_{k})\left(1-x_{k}^{2}\right)}={\begin{cases}0&{\text{ if }}~i\neq j,\\[5mu]{\frac {N}{2}}&{\text{ if }}~i=j,\end{cases}}}
and without the weight function:
∑ k = 0 N − 1 U i ( x k ) U j ( x k ) = { 0 if i ≢ j ( mod 2 ) , N ⋅ ( 1 + min { i , j } ) if i ≡ j ( mod 2 ) . {\displaystyle \sum _{k=0}^{N-1}{U_{i}(x_{k})\,U_{j}(x_{k})}={\begin{cases}0&~{\text{ if }}~i\not \equiv j{\pmod {2}},\\[5mu]N\cdot (1+\min\{i,j\})&~{\text{ if }}~i\equiv j{\pmod {2}}.\end{cases}}}
For any integer N > i + j {\displaystyle N>i+j} , based on the N {\displaystyle N} zeros of U N ( x ) {\displaystyle U_{N}(x)} :
y k = cos ( π k + 1 N + 1 ) for k = 0 , 1 , … , N − 1 , {\displaystyle y_{k}=\cos \left(\pi \,{\frac {k+1}{N+1}}\right)\quad ~{\text{ for }}~k=0,1,\dots ,N-1,}
one can get the sum:
∑ k = 0 N − 1 U i ( y k ) U j ( y k ) ( 1 − y k 2 ) = { 0 if i ≠ j , N + 1 2 if i = j , {\displaystyle \sum _{k=0}^{N-1}{U_{i}(y_{k})\,U_{j}(y_{k})(1-y_{k}^{2})}={\begin{cases}0&~{\text{ if }}i\neq j,\\[5mu]{\frac {N+1}{2}}&~{\text{ if }}i=j,\end{cases}}}
and again without the weight function:
∑ k = 0 N − 1 U i ( y k ) U j ( y k ) = { 0 if i ≢ j ( mod 2 ) , ( min { i , j } + 1 ) ( N − max { i , j } ) if i ≡ j ( mod 2 ) . {\displaystyle \sum _{k=0}^{N-1}{U_{i}(y_{k})\,U_{j}(y_{k})}={\begin{cases}0&~{\text{ if }}~i\not \equiv j{\pmod {2}},\\[5mu]{\bigl (}\min\{i,j\}+1{\bigr )}{\bigl (}N-\max\{i,j\}{\bigr )}&~{\text{ if }}~i\equiv j{\pmod {2}}.\end{cases}}}
For any given n ≥ 1 {\displaystyle n\geq 1} , among the polynomials of degree n {\displaystyle n} with leading coefficient 1 (monic polynomials):
f ( x ) = 1 2 n − 1 T n ( x ) {\displaystyle f(x)={\frac {1}{\,2^{n-1}\,}}\,T_{n}(x)}
is the one whose maximal absolute value on the interval [−1, 1] is minimal.
This maximal absolute value is:
1 2 n − 1 {\displaystyle {\frac {1}{2^{n-1}}}}
and | f ( x ) | {\displaystyle |f(x)|} reaches this maximum exactly n + 1 {\displaystyle n+1} times at:
x = cos k π n for 0 ≤ k ≤ n . {\displaystyle x=\cos {\frac {k\pi }{n}}\quad {\text{for }}0\leq k\leq n.}
Proof. Assume that w n ( x ) {\displaystyle w_{n}(x)} is a polynomial of degree n {\displaystyle n} with leading coefficient 1 whose maximal absolute value on the interval [−1, 1] is less than 1 / 2^{n−1}.
Define
f n ( x ) = 1 2 n − 1 T n ( x ) − w n ( x ) {\displaystyle f_{n}(x)={\frac {1}{\,2^{n-1}\,}}\,T_{n}(x)-w_{n}(x)}
Because at extreme points of Tn we have
| w n ( x ) | < | 1 2 n − 1 T n ( x ) | f n ( x ) > 0 for x = cos 2 k π n where 0 ≤ 2 k ≤ n f n ( x ) < 0 for x = cos ( 2 k + 1 ) π n where 0 ≤ 2 k + 1 ≤ n {\displaystyle {\begin{aligned}|w_{n}(x)|&<\left|{\frac {1}{2^{n-1}}}T_{n}(x)\right|\\f_{n}(x)&>0\qquad {\text{ for }}~x=\cos {\frac {2k\pi }{n}}~&&{\text{ where }}0\leq 2k\leq n\\f_{n}(x)&<0\qquad {\text{ for }}~x=\cos {\frac {(2k+1)\pi }{n}}~&&{\text{ where }}0\leq 2k+1\leq n\end{aligned}}}
From the intermediate value theorem, fn(x) has at least n roots. However, this is impossible, as fn(x) is a polynomial of degree n − 1, so the fundamental theorem of algebra implies it has at most n − 1 roots.
By the equioscillation theorem, among all the polynomials of degree ≤ n, the polynomial f minimizes ‖ f ‖∞ on [−1, 1] if and only if there are n + 2 points −1 ≤ x0 < x1 < ⋯ < xn + 1 ≤ 1 such that | f(xi)| = ‖ f ‖∞.
Of course, the null polynomial on the interval [−1, 1] can be approximated by itself and minimizes the ∞-norm.
Above, however, | f | reaches its maximum only n + 1 times because we are searching for the best polynomial of degree n ≥ 1 (so the theorem invoked previously cannot be used).
Chebyshev polynomials as special cases of more general polynomial families
The Chebyshev polynomials are a special case of the ultraspherical or Gegenbauer polynomials C n ( λ ) ( x ) {\displaystyle C_{n}^{(\lambda )}(x)} , which themselves are a special case of the Jacobi polynomials P n ( α , β ) ( x ) {\displaystyle P_{n}^{(\alpha ,\beta )}(x)} :
T n ( x ) = n 2 lim q → 0 1 q C n ( q ) ( x ) if n ≥ 1 , = 1 ( n − 1 2 n ) P n ( − 1 2 , − 1 2 ) ( x ) = 2 2 n ( 2 n n ) P n ( − 1 2 , − 1 2 ) ( x ) , U n ( x ) = C n ( 1 ) ( x ) = n + 1 ( n + 1 2 n ) P n ( 1 2 , 1 2 ) ( x ) = 2 2 n + 1 ( 2 n + 2 n + 1 ) P n ( 1 2 , 1 2 ) ( x ) . {\displaystyle {\begin{aligned}T_{n}(x)&={\frac {n}{2}}\lim _{q\to 0}{\frac {1}{q}}\,C_{n}^{(q)}(x)\qquad ~{\text{ if }}~n\geq 1,\\&={\frac {1}{\binom {n-{\frac {1}{2}}}{n}}}P_{n}^{\left(-{\frac {1}{2}},-{\frac {1}{2}}\right)}(x)={\frac {2^{2n}}{\binom {2n}{n}}}P_{n}^{\left(-{\frac {1}{2}},-{\frac {1}{2}}\right)}(x)~,\\[2ex]U_{n}(x)&=C_{n}^{(1)}(x)\\&={\frac {n+1}{\binom {n+{\frac {1}{2}}}{n}}}P_{n}^{\left({\frac {1}{2}},{\frac {1}{2}}\right)}(x)={\frac {2^{2n+1}}{\binom {2n+2}{n+1}}}P_{n}^{\left({\frac {1}{2}},{\frac {1}{2}}\right)}(x)~.\end{aligned}}}
Chebyshev polynomials are also a special case of Dickson polynomials:
D n ( 2 x α , α 2 ) = 2 α n T n ( x ) {\displaystyle D_{n}(2x\alpha ,\alpha ^{2})=2\alpha ^{n}T_{n}(x)\,}
E n ( 2 x α , α 2 ) = α n U n ( x ) . {\displaystyle E_{n}(2x\alpha ,\alpha ^{2})=\alpha ^{n}U_{n}(x).\,}
In particular, when α = 1 2 {\displaystyle \alpha ={\tfrac {1}{2}}} , they are related by D n ( x , 1 4 ) = 2 1 − n T n ( x ) {\displaystyle D_{n}(x,{\tfrac {1}{4}})=2^{1-n}T_{n}(x)} and E n ( x , 1 4 ) = 2 − n U n ( x ) {\displaystyle E_{n}(x,{\tfrac {1}{4}})=2^{-n}U_{n}(x)} .
The curves given by y = Tn(x), or equivalently, by the parametric equations y = Tn(cos θ) = cos nθ, x = cos θ, are a special case of Lissajous curves with frequency ratio equal to n.
Similar to the formula:
T n ( cos θ ) = cos ( n θ ) , {\displaystyle T_{n}(\cos \theta )=\cos(n\theta ),}
we have the analogous formula:
T 2 n + 1 ( sin θ ) = ( − 1 ) n sin ( ( 2 n + 1 ) θ ) . {\displaystyle T_{2n+1}(\sin \theta )=(-1)^{n}\sin \left(\left(2n+1\right)\theta \right).}
For x ≠ 0:
T n ( x + x − 1 2 ) = x n + x − n 2 {\displaystyle T_{n}\!\left({\frac {x+x^{-1}}{2}}\right)={\frac {x^{n}+x^{-n}}{2}}}
and:
x n = T n ( x + x − 1 2 ) + x − x − 1 2 U n − 1 ( x + x − 1 2 ) , {\displaystyle x^{n}=T_{n}\!\left({\frac {x+x^{-1}}{2}}\right)+{\frac {x-x^{-1}}{2}}\ U_{n-1}\!\left({\frac {x+x^{-1}}{2}}\right),} which follows from the fact that this holds by definition for x = eiθ.
There are relations between the Legendre polynomials and the Chebyshev polynomials:
∑ k = 0 n P k ( x ) T n − k ( x ) = ( n + 1 ) P n ( x ) {\displaystyle \sum _{k=0}^{n}P_{k}\left(x\right)T_{n-k}\left(x\right)=\left(n+1\right)P_{n}\left(x\right)}
∑ k = 0 n P k ( x ) P n − k ( x ) = U n ( x ) {\displaystyle \sum _{k=0}^{n}P_{k}\left(x\right)P_{n-k}\left(x\right)=U_{n}\left(x\right)}
These identities can be proven using generating functions and discrete convolution.
Chebyshev polynomials as determinants
From their definition by recurrence it follows that the Chebyshev polynomials can be obtained as determinants of special tridiagonal matrices of size k × k {\displaystyle k\times k} :
T k ( x ) = det [ x 1 0 ⋯ 0 1 2 x 1 ⋱ ⋮ 0 1 2 x ⋱ 0 ⋮ ⋱ ⋱ ⋱ 1 0 ⋯ 0 1 2 x ] , {\displaystyle T_{k}(x)=\det {\begin{bmatrix}x&1&0&\cdots &0\\1&2x&1&\ddots &\vdots \\0&1&2x&\ddots &0\\\vdots &\ddots &\ddots &\ddots &1\\0&\cdots &0&1&2x\end{bmatrix}},} and similarly for U k {\displaystyle U_{k}} .
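The determinant form follows from expanding along the last row, which reproduces the three-term recurrence. A small numerical illustration with NumPy (the helper T_det is ad hoc):

import numpy as np

def T_det(k, x):
    # k x k tridiagonal matrix: x in the top-left entry, 2x on the rest of the
    # diagonal, and 1 on the sub- and superdiagonals.
    M = (np.diag([x] + [2 * x] * (k - 1))
         + np.diag([1.0] * (k - 1), 1)
         + np.diag([1.0] * (k - 1), -1))
    return np.linalg.det(M)

x = 0.42
print(T_det(4, x), 8 * x**4 - 8 * x**2 + 1)   # both approximately equal T_4(x)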
Figure: The first few Chebyshev polynomials of the first kind in the domain −1 < x < 1: the flat T0, T1, T2, T3, T4 and T5.
The first few Chebyshev polynomials of the first kind are OEIS: A028297
T 0 ( x ) = 1 T 1 ( x ) = x T 2 ( x ) = 2 x 2 − 1 T 3 ( x ) = 4 x 3 − 3 x T 4 ( x ) = 8 x 4 − 8 x 2 + 1 T 5 ( x ) = 16 x 5 − 20 x 3 + 5 x T 6 ( x ) = 32 x 6 − 48 x 4 + 18 x 2 − 1 T 7 ( x ) = 64 x 7 − 112 x 5 + 56 x 3 − 7 x T 8 ( x ) = 128 x 8 − 256 x 6 + 160 x 4 − 32 x 2 + 1 T 9 ( x ) = 256 x 9 − 576 x 7 + 432 x 5 − 120 x 3 + 9 x T 10 ( x ) = 512 x 10 − 1280 x 8 + 1120 x 6 − 400 x 4 + 50 x 2 − 1 {\displaystyle {\begin{aligned}T_{0}(x)&=1\\T_{1}(x)&=x\\T_{2}(x)&=2x^{2}-1\\T_{3}(x)&=4x^{3}-3x\\T_{4}(x)&=8x^{4}-8x^{2}+1\\T_{5}(x)&=16x^{5}-20x^{3}+5x\\T_{6}(x)&=32x^{6}-48x^{4}+18x^{2}-1\\T_{7}(x)&=64x^{7}-112x^{5}+56x^{3}-7x\\T_{8}(x)&=128x^{8}-256x^{6}+160x^{4}-32x^{2}+1\\T_{9}(x)&=256x^{9}-576x^{7}+432x^{5}-120x^{3}+9x\\T_{10}(x)&=512x^{10}-1280x^{8}+1120x^{6}-400x^{4}+50x^{2}-1\end{aligned}}}
Figure: The first few Chebyshev polynomials of the second kind in the domain −1 < x < 1: the flat U0, U1, U2, U3, U4 and U5. Although not visible in the image, Un(1) = n + 1 and Un(−1) = (n + 1)(−1)n.
The first few Chebyshev polynomials of the second kind are OEIS: A053117
U 0 ( x ) = 1 U 1 ( x ) = 2 x U 2 ( x ) = 4 x 2 − 1 U 3 ( x ) = 8 x 3 − 4 x U 4 ( x ) = 16 x 4 − 12 x 2 + 1 U 5 ( x ) = 32 x 5 − 32 x 3 + 6 x U 6 ( x ) = 64 x 6 − 80 x 4 + 24 x 2 − 1 U 7 ( x ) = 128 x 7 − 192 x 5 + 80 x 3 − 8 x U 8 ( x ) = 256 x 8 − 448 x 6 + 240 x 4 − 40 x 2 + 1 U 9 ( x ) = 512 x 9 − 1024 x 7 + 672 x 5 − 160 x 3 + 10 x U 10 ( x ) = 1024 x 10 − 2304 x 8 + 1792 x 6 − 560 x 4 + 60 x 2 − 1 {\displaystyle {\begin{aligned}U_{0}(x)&=1\\U_{1}(x)&=2x\\U_{2}(x)&=4x^{2}-1\\U_{3}(x)&=8x^{3}-4x\\U_{4}(x)&=16x^{4}-12x^{2}+1\\U_{5}(x)&=32x^{5}-32x^{3}+6x\\U_{6}(x)&=64x^{6}-80x^{4}+24x^{2}-1\\U_{7}(x)&=128x^{7}-192x^{5}+80x^{3}-8x\\U_{8}(x)&=256x^{8}-448x^{6}+240x^{4}-40x^{2}+1\\U_{9}(x)&=512x^{9}-1024x^{7}+672x^{5}-160x^{3}+10x\\U_{10}(x)&=1024x^{10}-2304x^{8}+1792x^{6}-560x^{4}+60x^{2}-1\end{aligned}}}
Figure: The non-smooth function (top) y = −x^3 H(−x), where H is the Heaviside step function, and (bottom) the 5th partial sum of its Chebyshev expansion. The 7th sum is indistinguishable from the original function at the resolution of the graph.
In the appropriate Sobolev space, the set of Chebyshev polynomials forms an orthonormal basis, so that a function in the same space can, on −1 ≤ x ≤ 1, be expressed via the expansion:[16]
f ( x ) = ∑ n = 0 ∞ a n T n ( x ) . {\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}T_{n}(x).}
Furthermore, as mentioned previously, the Chebyshev polynomials form an orthogonal basis which (among other things) implies that the coefficients an can be determined easily through the application of an inner product. This sum is called a Chebyshev series or a Chebyshev expansion.
Since a Chebyshev series is related to a Fourier cosine series through a change of variables, all of the theorems, identities, etc. that apply to Fourier series have a Chebyshev counterpart.[16] These attributes include:
The abundance of theorems and identities inherited from Fourier series makes the Chebyshev polynomials important tools in numerical analysis; for example, they are the most popular general-purpose basis functions used in the spectral method,[16] often preferred over trigonometric series due to generally faster convergence for continuous functions (Gibbs' phenomenon is still a problem).
The Chebfun software package supports function manipulation based on their expansion in the Chebyshev basis.
Consider the Chebyshev expansion of log(1 + x). One can express:
log ( 1 + x ) = ∑ n = 0 ∞ a n T n ( x ) . {\displaystyle \log(1+x)=\sum _{n=0}^{\infty }a_{n}T_{n}(x)~.}
One can find the coefficients an either through the application of an inner product or by the discrete orthogonality condition. For the inner product:
∫ − 1 + 1 T m ( x ) log ( 1 + x ) 1 − x 2 d x = ∑ n = 0 ∞ a n ∫ − 1 + 1 T m ( x ) T n ( x ) 1 − x 2 d x , {\displaystyle \int _{-1}^{+1}\,{\frac {T_{m}(x)\,\log(1+x)}{\sqrt {1-x^{2}}}}\,\mathrm {d} x=\sum _{n=0}^{\infty }a_{n}\int _{-1}^{+1}{\frac {T_{m}(x)\,T_{n}(x)}{\sqrt {1-x^{2}}}}\,\mathrm {d} x,} which gives: a n = { − log 2 for n = 0 , − 2 ( − 1 ) n n for n > 0. {\displaystyle a_{n}={\begin{cases}-\log 2&{\text{ for }}~n=0,\\{\frac {-2(-1)^{n}}{n}}&{\text{ for }}~n>0.\end{cases}}}
Alternatively, when the inner product of the function being approximated cannot be evaluated, the discrete orthogonality condition gives an often useful result for approximate coefficients:
a n ≈ 2 − δ 0 n N ∑ k = 0 N − 1 T n ( x k ) log ( 1 + x k ) , {\displaystyle a_{n}\approx {\frac {\,2-\delta _{0n}\,}{N}}\,\sum _{k=0}^{N-1}T_{n}(x_{k})\,\log(1+x_{k}),}
where δij is the Kronecker delta function and the xk are the N Gauss–Chebyshev zeros of TN (x):
x k = cos ( π ( k + 1 2 ) N ) . {\displaystyle x_{k}=\cos \left({\frac {\pi \left(k+{\tfrac {1}{2}}\right)}{N}}\right).}
For any N, these approximate coefficients provide an exact approximation to the function at xk with a controlled error between those points. The exact coefficients are obtained with N = ∞, thus representing the function exactly at all points in [−1,1]. The rate of convergence depends on the function and its smoothness.
This allows us to compute the approximate coefficients an very efficiently through the discrete cosine transform:
a n ≈ 2 − δ 0 n N ∑ k = 0 N − 1 cos ( n π ( k + 1 2 ) N ) log ( 1 + x k ) . {\displaystyle a_{n}\approx {\frac {2-\delta _{0n}}{N}}\sum _{k=0}^{N-1}\cos \left({\frac {n\pi \left(\,k+{\tfrac {1}{2}}\right)}{N}}\right)\log(1+x_{k}).}
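A sketch of this computation in Python (the helper a_approx simply evaluates the sum above; the agreement with the exact coefficients −2(−1)^n/n improves as N grows, since the remaining discrepancy is aliasing error):

import numpy as np

N = 64
k = np.arange(N)
xk = np.cos(np.pi * (k + 0.5) / N)            # the N Gauss-Chebyshev zeros of T_N

def a_approx(n):
    # Discrete estimate of the n-th Chebyshev coefficient of log(1 + x).
    scale = (2.0 if n else 1.0) / N
    return scale * np.sum(np.cos(n * np.pi * (k + 0.5) / N) * np.log(1 + xk))

for n in range(1, 6):
    exact = -2.0 * (-1) ** n / n
    print(n, a_approx(n), exact)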
To provide another example:
( 1 − x 2 ) α = − 1 π Γ ( 1 2 + α ) Γ ( α + 1 ) + 2 1 − 2 α ∑ n = 0 ( − 1 ) n ( 2 α α − n ) T 2 n ( x ) = 2 − 2 α ∑ n = 0 ( − 1 ) n ( 2 α + 1 α − n ) U 2 n ( x ) . {\displaystyle {\begin{aligned}\left(1-x^{2}\right)^{\alpha }&=-{\frac {1}{\sqrt {\pi }}}\,{\frac {\Gamma \left({\tfrac {1}{2}}+\alpha \right)}{\Gamma (\alpha +1)}}+2^{1-2\alpha }\,\sum _{n=0}\left(-1\right)^{n}\,{2\alpha \choose \alpha -n}\,T_{2n}(x)\\[1ex]&=2^{-2\alpha }\,\sum _{n=0}\left(-1\right)^{n}\,{2\alpha +1 \choose \alpha -n}\,U_{2n}(x).\end{aligned}}}
The partial sums of:
f ( x ) = ∑ n = 0 ∞ a n T n ( x ) {\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}T_{n}(x)}
are very useful in the approximation of various functions and in the solution of differential equations (see spectral method). Two common methods for determining the coefficients an are through the use of the inner product as in Galerkin's method and through the use of collocation which is related to interpolation.
As an interpolant, the N coefficients of the (N − 1)st partial sum are usually obtained on the Chebyshev–Gauss–Lobatto[17] points (or Lobatto grid), which results in minimum error and avoids Runge's phenomenon associated with a uniform grid. This collection of points corresponds to the extrema of the highest order polynomial in the sum, plus the endpoints and is given by:
x k = − cos ( k π N − 1 ) ; k = 0 , 1 , … , N − 1. {\displaystyle x_{k}=-\cos \left({\frac {k\pi }{N-1}}\right);\qquad k=0,1,\dots ,N-1.}
Polynomial in Chebyshev form
An arbitrary polynomial of degree N can be written in terms of the Chebyshev polynomials of the first kind.[10] Such a polynomial p(x) is of the form:
p ( x ) = ∑ n = 0 N a n T n ( x ) . {\displaystyle p(x)=\sum _{n=0}^{N}a_{n}T_{n}(x).}
Polynomials in Chebyshev form can be evaluated using the Clenshaw algorithm.
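A compact sketch of Clenshaw's backward recurrence (the function name clenshaw is illustrative; the result is compared with NumPy's chebval, which evaluates the same Chebyshev-form polynomial):

import numpy as np

def clenshaw(a, x):
    # Evaluate p(x) = sum_n a[n] T_n(x) using b_k = a_k + 2x b_{k+1} - b_{k+2}.
    b1 = b2 = 0.0
    for c in a[:0:-1]:            # coefficients a[N], ..., a[1]
        b1, b2 = 2 * x * b1 - b2 + c, b1
    return x * b1 - b2 + a[0]

coeffs = [1.0, -0.5, 0.25, 0.125]   # arbitrary example coefficients
x = 0.3
assert np.isclose(clenshaw(coeffs, x), np.polynomial.chebyshev.chebval(x, coeffs))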
Polynomials denoted C n ( x ) {\displaystyle C_{n}(x)} and S n ( x ) {\displaystyle S_{n}(x)} closely related to Chebyshev polynomials are sometimes used. They are defined by:
C n ( x ) = 2 T n ( x 2 ) , S n ( x ) = U n ( x 2 ) {\displaystyle C_{n}(x)=2T_{n}\left({\frac {x}{2}}\right),\qquad S_{n}(x)=U_{n}\left({\frac {x}{2}}\right)}
and satisfy:
C n ( x ) = S n ( x ) − S n − 2 ( x ) . {\displaystyle C_{n}(x)=S_{n}(x)-S_{n-2}(x).}
A. F. Horadam called the polynomials C n ( x ) {\displaystyle C_{n}(x)} Vieta–Lucas polynomials and denoted them v n ( x ) {\displaystyle v_{n}(x)} . He called the polynomials S n ( x ) {\displaystyle S_{n}(x)} Vieta–Fibonacci polynomials and denoted them V n ( x ) {\displaystyle V_{n}(x)} .[19] Lists of both sets of polynomials are given in Viète's Opera Mathematica, Chapter IX, Theorems VI and VII.[20] The Vieta–Lucas and Vieta–Fibonacci polynomials of real argument are, up to a power of i {\displaystyle i} and a shift of index in the case of the latter, equal to Lucas and Fibonacci polynomials Ln and Fn of imaginary argument.
Shifted Chebyshev polynomials of the first and second kinds are related to the Chebyshev polynomials by:
T n ∗ ( x ) = T n ( 2 x − 1 ) , U n ∗ ( x ) = U n ( 2 x − 1 ) . {\displaystyle T_{n}^{*}(x)=T_{n}(2x-1),\qquad U_{n}^{*}(x)=U_{n}(2x-1).}
When the argument of the Chebyshev polynomial satisfies 2x − 1 ∈ [−1, 1] the argument of the shifted Chebyshev polynomial satisfies x ∈ [0, 1]. Similarly, one can define shifted polynomials for generic intervals [a, b].
Around 1990 the terms "third-kind" and "fourth-kind" came into use in connection with Chebyshev polynomials, although the polynomials denoted by these terms had an earlier development under the name airfoil polynomials. According to J. C. Mason and G. H. Elliott, the terminology "third-kind" and "fourth-kind" is due to Walter Gautschi, "in consultation with colleagues in the field of orthogonal polynomials."[21] The Chebyshev polynomials of the third kind are defined as:
V n ( x ) = cos ( ( n + 1 2 ) θ ) cos ( θ 2 ) = 2 1 + x T 2 n + 1 ( x + 1 2 ) {\displaystyle V_{n}(x)={\frac {\cos \left(\left(n+{\frac {1}{2}}\right)\theta \right)}{\cos \left({\frac {\theta }{2}}\right)}}={\sqrt {\frac {2}{1+x}}}T_{2n+1}\left({\sqrt {\frac {x+1}{2}}}\right)} and the Chebyshev polynomials of the fourth kind are defined as: W n ( x ) = sin ( ( n + 1 2 ) θ ) sin ( θ 2 ) = U 2 n ( x + 1 2 ) , {\displaystyle W_{n}(x)={\frac {\sin \left(\left(n+{\frac {1}{2}}\right)\theta \right)}{\sin \left({\frac {\theta }{2}}\right)}}=U_{2n}\left({\sqrt {\frac {x+1}{2}}}\right),}
where θ = arccos x {\displaystyle \theta =\arccos x} .[21][22] The polynomials of the fourth kind coincide with the Dirichlet kernel: W n ( cos θ ) = D n ( θ ) {\displaystyle W_{n}(\cos \theta )=D_{n}(\theta )} .
In the airfoil literature V n ( x ) {\displaystyle V_{n}(x)} and W n ( x ) {\displaystyle W_{n}(x)} are denoted t n ( x ) {\displaystyle t_{n}(x)} and u n ( x ) {\displaystyle u_{n}(x)} . The polynomial families T n ( x ) {\displaystyle T_{n}(x)} , U n ( x ) {\displaystyle U_{n}(x)} , V n ( x ) {\displaystyle V_{n}(x)} , and W n ( x ) {\displaystyle W_{n}(x)} are orthogonal with respect to the weights:
( 1 − x 2 ) − 1 / 2 , ( 1 − x 2 ) 1 / 2 , ( 1 − x ) − 1 / 2 ( 1 + x ) 1 / 2 , ( 1 + x ) − 1 / 2 ( 1 − x ) 1 / 2 {\displaystyle \left(1-x^{2}\right)^{-1/2},\quad \left(1-x^{2}\right)^{1/2},\quad (1-x)^{-1/2}(1+x)^{1/2},\quad (1+x)^{-1/2}(1-x)^{1/2}}
and are proportional to Jacobi polynomials P n ( α , β ) ( x ) {\displaystyle P_{n}^{(\alpha ,\beta )}(x)} with:[22]
( α , β ) = ( − 1 2 , − 1 2 ) , ( α , β ) = ( 1 2 , 1 2 ) , ( α , β ) = ( − 1 2 , 1 2 ) , ( α , β ) = ( 1 2 , − 1 2 ) . {\displaystyle (\alpha ,\beta )=\left(-{\frac {1}{2}},-{\frac {1}{2}}\right),\quad (\alpha ,\beta )=\left({\frac {1}{2}},{\frac {1}{2}}\right),\quad (\alpha ,\beta )=\left(-{\frac {1}{2}},{\frac {1}{2}}\right),\quad (\alpha ,\beta )=\left({\frac {1}{2}},-{\frac {1}{2}}\right).}
All four families satisfy the recurrence p n ( x ) = 2 x p n − 1 ( x ) − p n − 2 ( x ) {\displaystyle p_{n}(x)=2xp_{n-1}(x)-p_{n-2}(x)} with p 0 ( x ) = 1 {\displaystyle p_{0}(x)=1} , where p n = T n {\displaystyle p_{n}=T_{n}} , U n {\displaystyle U_{n}} , V n {\displaystyle V_{n}} , or W n {\displaystyle W_{n}} , but they differ according to whether p 1 ( x ) {\displaystyle p_{1}(x)} equals x {\displaystyle x} , 2 x {\displaystyle 2x} , 2 x − 1 {\displaystyle 2x-1} , or 2 x + 1 {\displaystyle 2x+1} .[21]
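A brief numerical check of the shared recurrence and of the trigonometric forms of the third and fourth kinds (the helper kind_poly is ad hoc; only the seed p_1 distinguishes the four families):

import math

def kind_poly(p1, n, x):
    # p_n = 2x p_{n-1} - p_{n-2}, with p_0 = 1 and the given p_1.
    a, b = 1.0, p1
    for _ in range(n):
        a, b = b, 2 * x * b - a
    return a

theta = 0.7
x = math.cos(theta)
n = 5
V = kind_poly(2 * x - 1, n, x)    # third kind
W = kind_poly(2 * x + 1, n, x)    # fourth kind
assert math.isclose(V, math.cos((n + 0.5) * theta) / math.cos(0.5 * theta))
assert math.isclose(W, math.sin((n + 0.5) * theta) / math.sin(0.5 * theta))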
Even order modified Chebyshev polynomials
Some applications rely on Chebyshev polynomials but may be unable to accommodate the lack of a root at zero, which rules out the use of standard Chebyshev polynomials for these kinds of applications. Even order Chebyshev filter designs using equally terminated passive networks are an example of this.[23] However, even order Chebyshev polynomials may be modified to move the lowest roots down to zero while still maintaining the desirable Chebyshev equi-ripple effect. Such modified polynomials contain two roots at zero, and may be referred to as even order modified Chebyshev polynomials. Even order modified Chebyshev polynomials may be created from the Chebyshev nodes in the same manner as standard Chebyshev polynomials.
P N = ∏ i = 1 N ( x − C i ) {\displaystyle P_{N}=\prod _{i=1}^{N}(x-C_{i})}
where the C i {\displaystyle C_{i}} are the Chebyshev nodes.
In the case of even order modified Chebyshev polynomials, the even order modified Chebyshev nodes are used to construct the even order modified Chebyshev polynomials.
P e N = ∏ i = 1 N ( x − C e i ) {\displaystyle Pe_{N}=\prod _{i=1}^{N}(x-Ce_{i})}
where the C e i {\displaystyle Ce_{i}} are the even order modified Chebyshev nodes.
For example, the 4th order Chebyshev polynomial from the example above is x 4 − x 2 + 0.125 {\displaystyle x^{4}-x^{2}+0.125} , which by inspection contains no roots of zero. Creating the polynomial from the even order modified Chebyshev nodes creates a 4th order even order modified Chebyshev polynomial of x 4 − 0.828427 x 2 {\displaystyle x^{4}-0.828427x^{2}} , which by inspection contains two roots at zero, and may be used in applications requiring roots at zero.