**Contents**:

**Types of Polynomial Function:**

- Chebyshev Polynomials
- Hermite Polynomials
- Orthogonal Polynomials
- Symmetric Polynomials
- Univariate Polynomial
- Zernike Polynomials

**See also:** Leading Coefficients.

## What is a Polynomial Function?

A polynomial function is made up of terms called *monomials*; if the expression has exactly two monomials, it’s called a *binomial*. The terms can be:

- Constants, like 3 or 523,
- Variables, like a, x, or z,
- A combination of numbers and variables like 88x or 7xyz.

You can’t have:

- Fractional exponents, like x^{½}.
- Negative exponents, like x^{-2}.
- Variables inside a radical (square root) sign, like √x.
- Division by a variable.
- An infinite number of terms.

## Domain and Range of a Polynomial

The domain and range depend on the degree of the polynomial and the sign of the leading coefficient.


## What is a Symmetric Polynomial?

A polynomial is a **symmetric polynomial** if it is unchanged under any permutation (i.e. reordering) of its variables. In other words, if you switch any two of the variables, you end up with the same polynomial.

## Examples

The polynomial x + y + z is symmetric because if you switch any of the variables, it remains the same. In other words,

**x + y + z = y + z + x = z + x + y**

The polynomial x_{1}x_{2} + x_{1}x_{3} + x_{2}x_{3} + 3x_{1}x_{2}x_{3} is a symmetric polynomial, because swapping any two of the variables still gives the same polynomial. For example, swapping x_{1} and x_{3} gives:

x_{3}x_{2} + x_{3}x_{1} + x_{2}x_{1} + 3x_{3}x_{2}x_{1},

which is the same polynomial with its terms reordered.

On the other hand, x_{1}x_{2} + x_{2}x_{3} is *not* symmetric. If you swap two of the variables (say, x_{2} and x_{3}), you get a completely different expression: x_{1}x_{3} + x_{3}x_{2}.
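The permutation test above can be automated. Below is a minimal Python sketch (the dict-of-exponent-tuples representation is an illustrative choice, not from the source) that checks whether a polynomial is unchanged under every permutation of its variables:

```python
from itertools import permutations

def is_symmetric(monomials, nvars):
    """Check permutation invariance of a polynomial given as a dict
    mapping exponent tuples to coefficients, e.g. x1*x3 -> {(1, 0, 1): 1}."""
    for perm in permutations(range(nvars)):
        permuted = {}
        for exps, coeff in monomials.items():
            # Apply the permutation to the variable slots.
            new_exps = tuple(exps[perm[i]] for i in range(nvars))
            permuted[new_exps] = permuted.get(new_exps, 0) + coeff
        if permuted != monomials:
            return False
    return True

# x1 + x2 + x3 is symmetric:
p = {(1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1}
# x1*x2 + x2*x3 is not:
q = {(1, 1, 0): 1, (0, 1, 1): 1}
```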

## Elementary Symmetric Polynomial

**Elementary symmetric polynomials** (sometimes called *elementary symmetric functions*) are the building blocks of all symmetric polynomials. For the variables x_{1}, x_{2}, x_{3},… x_{n}, they are defined mathematically as follows:

- S_{1} = x_{1} + x_{2} + x_{3} + … + x_{n}
- S_{2} = x_{1}x_{2} + x_{1}x_{3} + x_{1}x_{4} + … + x_{n-1}x_{n}
- S_{3} = x_{1}x_{2}x_{3} + x_{1}x_{2}x_{4} + … + x_{n-2}x_{n-1}x_{n}
- …
- S_{n} = x_{1}x_{2}x_{3} … x_{n}

As an example, the elementary symmetric polynomials for the variables x_{1}, x_{2} and x_{3} are:

- S_{1} = x_{1} + x_{2} + x_{3}
- S_{2} = x_{1}x_{2} + x_{1}x_{3} + x_{2}x_{3}
- S_{3} = x_{1}x_{2}x_{3}
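The definitions above translate directly into code: S_{k} is the sum of the products of all k-element subsets of the variables. A short Python sketch (the function name is illustrative), evaluated at sample values:

```python
from itertools import combinations
from math import prod

def elementary_symmetric(values, k):
    """S_k: sum of products over all k-element subsets of the variables."""
    return sum(prod(c) for c in combinations(values, k))

# For x1=1, x2=2, x3=3:
#   S1 = 1 + 2 + 3       = 6
#   S2 = 1*2 + 1*3 + 2*3 = 11
#   S3 = 1*2*3           = 6
```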

## Graphs of a Symmetric Polynomial

They are called “symmetric” not because their graphs show symmetry, but because they remain the same if you permute their variables. As far as graphing goes, the graph of a symmetric polynomial looks exactly the same no matter which variables you switch around. If the graph changes, then the expression is *not* symmetric.

## Why are Symmetric Polynomials Important?

Symmetric polynomials are particularly important in number theory because two types (the elementary symmetric polynomials and the power sum symmetric polynomials) can **completely represent any set of points** in the complex plane.

See also: Testing for Symmetry of a Function.

## Univariate Polynomial

A **univariate polynomial** has one variable—usually *x* or *t*. For example, P(x) = 4x^{2} + 2x – 9. In common usage, univariate polynomials are sometimes just called “polynomials”.

For real-valued polynomials, the general form is:

P(x) = p_{n}x^{n} + p_{n-1}x^{n-1} + … + p_{1}x + p_{0}.

The univariate polynomial is called a **monic polynomial** if p_{n} ≠ 0 and it is normalized to p_{n} = 1 (Parillo, 2006). In other words, the nonzero coefficient of the highest-degree term is equal to 1.
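Normalizing a polynomial to monic form is a one-step operation on its coefficient list. A minimal sketch (the highest-degree-first coefficient ordering is an assumption for illustration):

```python
def make_monic(coeffs):
    """Normalize [p_n, ..., p_1, p_0] (highest degree first) so p_n = 1."""
    lead = coeffs[0]
    if lead == 0:
        raise ValueError("leading coefficient must be nonzero")
    return [c / lead for c in coeffs]

# 4x^2 + 2x - 9  becomes  x^2 + 0.5x - 2.25
```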

## Zernike Polynomials

**Zernike polynomials** are sets of orthonormal functions that describe optical aberrations. Sometimes these polynomials describe the whole aberration and sometimes they describe a part. For example, “myopia with astigmatism” could be described as ρ^{2} cos(2θ). This description doesn’t quantify the aberration: to do that, you would need the complete Rx, which describes both the aberration and its magnitude. Different polynomials can be added together to describe multiple aberrations of the eye (Jagerman, 2007).

Zernike polynomials aren’t the only way to describe aberrations: Seidel polynomials can do the same thing, but they are not as easy to work with and are less reliable than Zernike polynomials.

## Chebyshev Polynomials

**Chebyshev polynomials** crop up in many areas of calculus, including numerical integration, orthogonal polynomials and spectral methods for partial differential equations. They can also be used for curve fitting (finding a function that models a curve), interpolation and in multiple other areas of numerical analysis.

The general formula for a Chebyshev polynomial, for an integer n ≥ 0, is:

**T_{n}(x) = cos(n cos^{-1}x)**, for -1 ≤ x ≤ 1.

## Properties

For n ≥ 2 (Smith, 2011):

- T_{n}(x) is an nth-order polynomial in x.
- When n is an even integer, T_{n}(x) is an even function.
- When n is an odd integer, T_{n}(x) is an odd function.
- T_{n}(x) has n zeros in the open interval (-1, 1).
- T_{n}(x) has n + 1 extrema in the closed interval [-1, 1].

## Chebyshev Polynomials of the First Kind

Some authors refer to Chebyshev polynomials as just the Chebyshev polynomial of the first kind (T_{n})—a polynomial in x of degree n, defined by the relation (Mason & Handscomb, 2002):

**T_{n}(x) = cos nθ, where x = cos θ.**

The following table (Culham, 2020) lists the first 12 Chebyshev polynomials of the first kind, which can be obtained from Rodrigues’ formula:

- T_{0}(x) = 1
- T_{1}(x) = x
- T_{2}(x) = 2x^{2} – 1
- T_{3}(x) = 4x^{3} – 3x
- T_{4}(x) = 8x^{4} – 8x^{2} + 1
- T_{5}(x) = 16x^{5} – 20x^{3} + 5x
- T_{6}(x) = 32x^{6} – 48x^{4} + 18x^{2} – 1
- T_{7}(x) = 64x^{7} – 112x^{5} + 56x^{3} – 7x
- T_{8}(x) = 128x^{8} – 256x^{6} + 160x^{4} – 32x^{2} + 1
- T_{9}(x) = 256x^{9} – 576x^{7} + 432x^{5} – 120x^{3} + 9x
- T_{10}(x) = 512x^{10} – 1280x^{8} + 1120x^{6} – 400x^{4} + 50x^{2} – 1
- T_{11}(x) = 1024x^{11} – 2816x^{9} + 2816x^{7} – 1232x^{5} + 220x^{3} – 11x
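These polynomials can also be generated with the standard three-term recurrence T_{0} = 1, T_{1} = x, T_{n+1} = 2x·T_{n} – T_{n-1}, which agrees with the cosine definition on [-1, 1]. A Python sketch:

```python
import math

def chebyshev_T(n, x):
    """T_n(x) via the recurrence T_0 = 1, T_1 = x, T_{n+1} = 2x*T_n - T_{n-1}."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# Agrees with the cosine definition on [-1, 1]:
x = 0.3
assert abs(chebyshev_T(5, x) - math.cos(5 * math.acos(x))) < 1e-12
```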

## Hermite Polynomials

**Hermite polynomials** are a widely used family of polynomials, defined over (-∞, ∞), with a weight function proportional to w(x) = e^{-x²}.

## Definition of Hermite Polynomials

There are several definitions of “Hermite polynomials”, which can be a source of confusion. First, **two different starting points result in two different sets of polynomials**, often called the “physicists’” and “probabilists’” polynomials. Most authors simply refer to Hermite polynomials without any clarification, assuming the reader is working in one field or the other (i.e. physics or probability) and therefore doesn’t need the “other” definition.

If you’re in calculus or physics, you’re likely dealing with the “physicists’” Hermite polynomials, which go with the weight function w(x) = e^{-x²}. The first few are (Sawitzki, 2009):

- H_{0}(x) = 1
- H_{1}(x) = 2x
- H_{2}(x) = 4x^{2} – 2
- H_{3}(x) = 8x^{3} – 12x
- H_{4}(x) = 16x^{4} – 48x^{2} + 12
- H_{5}(x) = 32x^{5} – 160x^{3} + 120x
- H_{6}(x) = 64x^{6} – 480x^{4} + 720x^{2} – 120

An alternate definition, with weight function w(x) = e^{-x²/2}, is sometimes used, especially in statistics. These “probabilists’” polynomials (often written He_{n} to distinguish them) are sometimes called *Chebyshev-Hermite polynomials* (Sawitzki, 2009). The first few are:

- He_{0}(x) = 1
- He_{1}(x) = x
- He_{2}(x) = x^{2} – 1
- He_{3}(x) = x^{3} – 3x
- He_{4}(x) = x^{4} – 6x^{2} + 3
- He_{5}(x) = x^{5} – 10x^{3} + 15x
- He_{6}(x) = x^{6} – 15x^{4} + 45x^{2} – 15
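Both families satisfy simple three-term recurrences (physicists’: H_{n+1} = 2x·H_{n} – 2n·H_{n-1}; probabilists’: He_{n+1} = x·He_{n} – n·He_{n-1}), which makes them easy to evaluate. A Python sketch:

```python
def hermite_phys(n, x):
    """Physicists' H_n via H_0 = 1, H_1 = 2x, H_{n+1} = 2x*H_n - 2n*H_{n-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

def hermite_prob(n, x):
    """Probabilists' He_n via He_0 = 1, He_1 = x, He_{n+1} = x*He_n - n*He_{n-1}."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

# Matches the tabulated H_2(x) = 4x^2 - 2 and He_2(x) = x^2 - 1:
assert hermite_phys(2, 3.0) == 4 * 9 - 2
assert hermite_prob(2, 3.0) == 9 - 1
```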

## Hermite Interpolation and Other Uses

Hermite polynomials are very useful as interpolation functions because their values and their derivatives’ values up to order n are either unity or zero at the endpoints of the closed interval [0, 1] (Huebner et al., 2001). They provide an alternative way of representing cubic curves, allowing a curve to be defined in terms of its endpoints and the derivatives at those endpoints (Buss, 2003).

Hermite polynomials occur in various areas of physics, including as part of the solution to the quantum harmonic oscillator Hamiltonian. They also arise in numerical analysis, in Gauss-Hermite quadrature.

## Orthogonal Polynomials

**Orthogonal polynomials** (also called an *orthogonal polynomial sequence*) are a set of polynomials that are orthogonal (perpendicular, or at right angles) to each other.

As a simple example, the two coordinate axes {x, y} are perpendicular to each other, so two polynomials that each fit along the x- and y-axes are orthogonal to each other. When we talk about “orthogonal polynomials”, though, we actually mean an **orthogonal polynomial sequence**: there must be an infinite number of them in order to meet the formal definition.

## Formal Definition

Orthogonal polynomials are the infinite sequence:

p_{0}(x), p_{1}(x), p_{2}(x), …, p_{n}(x), …

Where:

- p_{n}(x) is a polynomial of degree n,
- any two polynomials in the sequence are orthogonal to each other.

This can be represented by the following integral, which basically means that if you multiply the two functions together and integrate, the result is zero:

∫_{a}^{b} p_{m}(x) p_{n}(x) w(x) dx = 0, for m ≠ n,

where w(x) is a weight function.

The closed interval [a, b] is called the *interval of orthogonality*; the interval can be infinite at one end, or both.

As an example of what this integral means, consider the two orthogonal polynomials ½(3x^{2} – 1) and ½(5x^{3} – 3x) on the closed interval [-1, 1]. Each one integrates to zero on that interval, and the integral of their product over the same interval is also zero, which is exactly the orthogonality condition.
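The orthogonality of these two polynomials can be checked numerically; the midpoint-rule integrator below is a simple stand-in for a proper quadrature routine:

```python
def p2(x):  # Legendre P2
    return 0.5 * (3 * x**2 - 1)

def p3(x):  # Legendre P3
    return 0.5 * (5 * x**3 - 3 * x)

def integrate(f, a, b, n=100_000):
    """Simple midpoint-rule numerical integration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The product integrates to (approximately) zero on [-1, 1]:
val = integrate(lambda x: p2(x) * p3(x), -1.0, 1.0)
assert abs(val) < 1e-6
```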

## Examples of Orthogonal Polynomials

The two polynomials above are part of a sequence called **Legendre polynomials**, which form solutions to the Legendre differential equation. Another widely used set of orthogonal polynomials is the Hermite polynomials, which are part of the solution to the quantum harmonic oscillator Hamiltonian.

## Degrees of a Polynomial Function

“Degree of a polynomial” refers to the highest degree among its terms. To find the degree of a polynomial:

- Add up the exponents in each individual term.
- Choose the term with the largest sum; that sum is the degree.
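The two steps above can be sketched in Python; the term representation (a dict mapping variable names to exponents) is an illustrative choice:

```python
def degree(terms):
    """Degree of a polynomial given as a list of terms, each a dict of
    variable -> exponent, e.g. the term 88*x*y^2 becomes {'x': 1, 'y': 2}."""
    # Sum the exponents in each term, then take the largest sum.
    return max(sum(t.values()) for t in terms)

# 7xyz + 2x^2 + 5 has term degrees 3, 2, and 0, so the polynomial degree is 3:
assert degree([{'x': 1, 'y': 1, 'z': 1}, {'x': 2}, {}]) == 3
```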

## First Degree Polynomial Function

**First degree polynomials **have terms with a maximum degree of 1. In other words, you wouldn’t usually find any exponents in the terms of a first degree polynomial. For example, the following are first degree polynomials:

- 2x + 1,
- x + 50,
- 10a + 4b + 20.

The **shape of the graph** of a first degree polynomial is a straight line (although note that the line can’t be horizontal or vertical). The linear function *f(x) = mx + b* is an example of a first degree polynomial.

First degree polynomials have the following **additional characteristics**:

- A single *root*, solvable with a linear equation.
- A constant rate of change, with no *extreme values* or inflection points.
- The entire graph can be drawn from just two points.
- Symmetry about every point on the line.
- The *range* is the set of all real numbers.

## Second Degree Polynomial Function

Second degree polynomials have at least one second degree term in the expression (e.g. 2x^{2}, a^{2}, xyz^{2}). There are no higher terms (like x^{3} or abc^{5}). The quadratic function *f(x) = ax^{2} + bx + c* is an example of a second degree polynomial.

The graphs of second degree polynomials have one fundamental shape: a curve that either looks like a cup (U), or an upside down cup that looks like a cap (∩).

Second degree polynomials have these **additional features**:

- One extreme value (the vertex), with a line of symmetry through the vertex.
- Zero inflection points.
- Three points are needed to construct the graph; unlike a first degree polynomial, those three points do not lie on the same line.
- Up to 2 roots.

## Third Degree Polynomial

A **cubic function** (or *third-degree polynomial*) can be written as:

f(x) = ax^{3} + bx^{2} + cx + d,

where *a*, *b*, *c*, and *d* are constant terms, and a is nonzero.

Unlike quadratic functions, which always are graphed as parabolas, **cubic functions take on several different shapes**. We can figure out the shape if we know how many roots, critical points and inflection points the function has.

Third degree polynomials have been studied for a long time. In fact, Babylonian cuneiform tablets have tables for calculating cubes and cube roots. Chinese and Greek scholars also puzzled over cubic functions, and later mathematicians built upon their work.

## Roots and Critical Points of a Cubic Function

Let’s suppose you have a cubic function f(x) and set f(x) = 0. Together, they form a **cubic equation**:

ax^{3} + bx^{2} + cx + d = 0

The solutions of this equation are called the roots of the polynomial. There can be up to three real roots; if *a*, *b*, *c*, and *d* are all real numbers, the function has at least one real root.

The critical points of the function are at points where the first derivative is zero:

f′(x) = 3ax^{2} + 2bx + c = 0

We can use the quadratic formula to solve this, and we’d get:

x = (-b ± √(b^{2} – 3ac)) / (3a)

It’s the part of that expression *within the square root sign*, b^{2} – 3ac, that tells us what kind of critical points our function has. If the expression inside the square root sign is positive, the cubic function has a local maximum and a local minimum.

If *b^{2} – 3ac* is 0, then the function has just one critical point, which happens to also be an inflection point. An inflection point is a point where the function changes concavity.

What about if the expression inside the square root sign was less than zero? Then we have no critical points whatsoever, and our cubic function is a monotonic function.
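The case analysis above can be collected into a small Python function (the function name is illustrative) that returns the critical points, if any, of f(x) = ax³ + bx² + cx + d:

```python
import math

def cubic_critical_points(a, b, c, d=0.0):
    """Classify f(x) = ax^3 + bx^2 + cx + d by the sign of b^2 - 3ac."""
    disc = b * b - 3 * a * c
    if disc > 0:
        # Local max and local min, from x = (-b ± sqrt(b^2 - 3ac)) / (3a).
        r = math.sqrt(disc)
        return sorted([(-b - r) / (3 * a), (-b + r) / (3 * a)])
    if disc == 0:
        return [-b / (3 * a)]  # single critical point (also an inflection point)
    return []  # negative: no critical points, the cubic is monotonic

# f(x) = x^3 - 3x has critical points at x = -1 and x = 1:
assert cubic_critical_points(1, 0, -3) == [-1.0, 1.0]
```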

## Limit for a Polynomial Function

There’s more than one way to skin a cat, and there are multiple ways to find a limit for polynomial functions. This can be extremely confusing if you’re new to calculus. But the good news is—if one way doesn’t make sense to you (say, numerically), you can usually try another way (e.g. graphically).

You can find a limit for polynomial functions or radical functions in three main ways:

- Graphically,
- Numerically,
- Algebraically.

Graphical and numerical methods work for all types of functions, and both work well for finding limits of polynomial functions (or radical functions) that are very simple. You might also be able to use direct substitution to find limits, which is a very easy method for simple functions; however, you can’t use that method if you have a complicated function (like f(x) + g(x)).

This next section walks you through finding limits algebraically using **properties of limits**. *Properties of limits* are **shortcuts** to finding limits: they give you very specific rules for finding the limit of a more complicated function. For example, you can find limits for functions that are added, subtracted, multiplied or divided together.

## Limit for a Polynomial Function (Algebraic Method)

**Example problem:** What is the limit at x = 2 for the function

f(x) = x^{2} + √(2x)?

Step 1: **Look at the properties of limits rules** and identify the rule that is related to the type of function you have. The function given in this question is a combination of a polynomial function (x^{2}) and a radical function (√(2x)). It’s what’s called **an additive function, f(x) + g(x)**. The rule that applies (found in the properties of limits list) is:

lim_{x→a} [ f(x) ± g(x) ] = lim_{x→a} f(x) ± lim_{x→a} g(x)

Step 2: **Insert your function into the rule **you identified in Step 1.

lim_{x→2} [ x^{2} + √(2x) ] = lim_{x→2} (x^{2}) + lim_{x→2} √(2x).

Step 3: **Evaluate the limits** for the parts of the function. If you’ve broken your function into parts, in most cases you can find the limit with direct substitution:

lim_{x→2} [ x^{2} + √(2x) ] = 2^{2} + √(2(2)) = 4 + 2

Step 4: **Perform the addition** (or subtraction, or whatever the rule indicates):

lim_{x→2} [ x^{2} + √(2x) ] = 4 + 2 = 6

*That’s it!*
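The result can be sanity-checked numerically: direct substitution gives the limit, and nearby values approach it from either side:

```python
import math

def f(x):
    return x**2 + math.sqrt(2 * x)

# Direct substitution at x = 2 gives the limit:
assert f(2) == 6.0

# Values near x = 2 approach 6 from either side:
for h in (1e-3, 1e-6):
    assert abs(f(2 + h) - 6) < 0.01 and abs(f(2 - h) - 6) < 0.01
```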


## Polynomial Sequences

A **polynomial sequence **can be generated by various degree polynomials. As there are an infinite number of any particular type of polynomial, there are an infinite number of possible polynomial sequences.


These sequences are usually integer valued (i.e. their inputs are 1, 2, 3, …).

**Examples:**

| Degree | Generating Function | Polynomial Sequence |
|---|---|---|
| 1 | f(x) = 3x | {3, 6, 9, …} |
| 2 | f(x) = 2x^{2} | {2, 8, 18, 32, …} |
| 3 | f(x) = 4x^{3} | {4, 32, 108, 256, …} |
| 4 | f(x) = x^{4} + 1 | {2, 17, 82, 257, …} |
| 5 | f(x) = x^{5} – 99 | {-98, -67, 144, 925, …} |

## Finding a Generating Polynomial Function for a Polynomial Sequence

One way to identify the generating polynomial function is to **plot points on a graph**.

**Example question:** What function generates the polynomial sequence {1, 4, 7, 10, …}?

**Solution:**

Step 1: **Make a table of x and y values.** Your x-values are the positions of each term, starting at 0 (0, 1, 2, 3), and your y-values are the terms of the sequence: {1, 4, 7, 10, …}.

| x | y |
|---|---|
| 0 | 1 |
| 1 | 4 |
| 2 | 7 |
| 3 | 10 |

Step 2: **Sketch a graph** of the points from Step 1:

Step 3: **Find the formula:** From the graph, it’s clear that this sequence is generated by a linear function. The formula for a linear function is y = mx + b.

The slope is the common difference between the points. The common difference is 3, so m = 3.

“b” is the y-intercept. For this graph, it looks like that’s at 1. So our formula is:

f(x) = 3x + 1

Step 4: **Test a couple of points in the formula.** Plugging in a couple of points to the formula will confirm the formula you found in Step 3 is correct.

- f(0) = 3(0) + 1 = 1
- f(1) = 3(1) + 1 = 4.

The polynomial function generating the sequence is f(x) = 3x + 1.
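Steps 1 through 4 can be sketched in Python, taking the sequence as {1, 4, 7, 10, …} with positions starting at x = 0, so that f(x) = 3x + 1:

```python
seq = [1, 4, 7, 10]

# The common difference between consecutive terms gives the slope;
# the term at position x = 0 gives the intercept.
diffs = {b - a for a, b in zip(seq, seq[1:])}
assert diffs == {3}           # constant difference: a linear generator
m, b = 3, seq[0]              # f(x) = 3x + 1 for x = 0, 1, 2, ...

# Check the formula against every term of the sequence:
assert [m * x + b for x in range(4)] == seq
```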

That’s it!

## Finding the Degree of the Generating Polynomial Function

Finding the common difference is the key to finding out which degree polynomial function generated any particular sequence. In general, keep taking differences until you get a constant in a row. The number of times you have to take differences is the degree of your polynomial.

**Example question:** What is the degree of the polynomial that generated the sequence {2, 8, 18, 32}?

**Solution**: Find the differences between consecutive terms:

6, 10, 14

And again:

4, 4

We had to find the common difference twice to get a constant row, so the polynomial sequence {2, 8, 18, 32} was generated by a **second-degree polynomial**.
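The difference-taking procedure generalizes to any degree and can be sketched as a short Python function:

```python
def generator_degree(seq):
    """Take successive differences until a row is constant; the number of
    difference steps taken is the degree of the generating polynomial."""
    row, steps = list(seq), 0
    while len(set(row)) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        steps += 1
    return steps

# {2, 8, 18, 32}: differences 6, 10, 14, then 4, 4 -> degree 2
assert generator_degree([2, 8, 18, 32]) == 2
```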

## Polynomial Function: References


Arfken, G. “Orthogonal Polynomials.” Mathematical Methods for Physicists, 3rd ed. Orlando, FL: Academic Press, pp. 520-521, 1985.

Aufmann, R. et al. (2005). Intermediate Algebra: An Applied Approach. Cengage Learning.

Buss, S. (2003). 3D Computer Graphics. A Mathematical Introduction with OpenGL. Cambridge University Press.

Culham, J. (2020). Chebyshev Polynomials. Retrieved August 22, 2020 from: mhtl.uwaterloo.ca/courses/me755/web_chap6.pdf

Davidson, J. (1998). First Degree Polynomials. Retrieved 10/20/2018 from: https://www.sscc.edu/home/jdavidso/Math/Catalog/Polynomials/First/First.html

Egge, E. (2018). Combinatorics of Symmetric Functions. Retrieved December 2, 2019 from: https://d31kydh6n6r5j5.cloudfront.net/uploads/sites/66/2019/04/eggecompsdescription.pdf

Huebner, K. et al. (2001). The Finite Element Method for Engineers. Wiley.

Iseri, Howard. Lecture Notes: Shapes of Cubic Functions. MA 1165 – Lecture 05. Retrieved from http://faculty.mansfield.edu/hiseri/Old%20Courses/SP2009/MA1165/1165L05.pdf

Iyanaga, S. and Kawada, Y. (Eds.). “Systems of Orthogonal Functions.” Appendix A, Table 20 in Encyclopedic Dictionary of Mathematics. Cambridge, MA: MIT Press, p. 1477, 1980

Orthogonal Polynomials. Retrieved February 12, 2020 from: https://sydney.edu.au/science/chemistry/~mjtj/CHEM3117/Resources/poly_etc.pdf

Jagerman, L. (2007). Ophthalmologists, Meet Zernike and Fourier! Trafford Publishing.

Mason, J. & Handscomb, S. (2002). Chebyshev Polynomials. CRC Press.


Negrinho, R. (2013). Shape Representation Via Symmetric Polynomials: a Complete Invariant Inspired by the Bispectrum. Retrieved December 2, 2019 from: https://www.cs.cmu.edu/~negrinho/assets/papers/msc_thesis.pdf

Parillo, P. (2006). MIT 6.972 Algebraic techniques and semidefinite optimization. Retrieved September 26, 2020 from: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-972-algebraic-techniques-and-semidefinite-optimization-spring-2006/lecture-notes/lecture_05.pdf

Sansone, G. Orthogonal Functions. New York: Dover, 1991

Sawitzki, G. (2009). Computational Statistics: An Introduction to R, CRC Press.

Smith, J.O. Spectral Audio Signal Processing, http://ccrma.stanford.edu/~jos/sasp/, online book, 2011 edition, accessed August 23, 2020.

Singhal, M. (2017). Generalizations of Hall-Littlewood Polynomials. Retrieved December 2, 2019 from: https://math.mit.edu/research/highschool/primes/materials/2017/conf/5-4-Singhal.pdf

**CITE THIS AS:**

**Stephanie Glen**. "Polynomial Function: Definition, Examples, Degrees" From

**CalculusHowTo.com**: Calculus for the rest of us! https://www.calculushowto.com/types-of-functions/polynomial-function/
