
Kernel smoother

From Wikipedia, the free encyclopedia

A kernel smoother is a statistical technique for estimating a real-valued function from its noisy observations when no parametric model for the function is known. The estimated function is smooth, and the level of smoothness is set by a single parameter.

This technique is most appropriate for low-dimensional (p < 3) data visualization. In effect, the kernel smoother represents a set of irregular data points as a smooth line or surface.


Definitions

Let $K_{h_\lambda}(X_0, X)$ be a kernel defined by

$$K_{h_\lambda}(X_0, X) = D\left(\frac{\left\| X - X_0 \right\|}{h_\lambda(X_0)}\right)$$

where:

  • ||X − X0|| is the Euclidean norm
  • hλ(X0) is a parameter (kernel radius)
  • D(t) typically is a positive real-valued function whose value decreases (or at least does not increase) as the distance between X and X0 grows.

Popular kernels used for smoothing include

  • Epanechnikov
  • Tri-cube
  • Gaussian
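For reference, these three kernels can be written as short Python functions (a sketch; the D(t) formulas below are the standard forms, truncated to |t| ≤ 1 except for the Gaussian, whose support is unbounded):

```python
import math

def epanechnikov(t):
    """Epanechnikov kernel: D(t) = 3/4 (1 - t^2) for |t| <= 1, else 0."""
    return 0.75 * (1.0 - t * t) if abs(t) <= 1.0 else 0.0

def tricube(t):
    """Tri-cube kernel: D(t) = (1 - |t|^3)^3 for |t| <= 1, else 0."""
    return (1.0 - abs(t) ** 3) ** 3 if abs(t) <= 1.0 else 0.0

def gaussian(t):
    """Gaussian kernel: D(t) = exp(-t^2 / 2) / sqrt(2*pi); never zero."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
```

The compact-support kernels assign zero weight outside the kernel radius, while the Gaussian gives every observation a positive (if tiny) weight.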

Let $Y(X)$ be a continuous function of X. For each $X_0$, the Nadaraya-Watson kernel-weighted average (the smooth estimate of Y(X)) is defined by

$$\hat{Y}(X_0) = \frac{\sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i)\, Y(X_i)}{\sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i)}$$

where:

  • N is the number of observed points
  • Y(Xi) are the observations at the points Xi.
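A minimal Python sketch of this weighted average for one-dimensional X (the choice of the Epanechnikov kernel and the helper names are illustrative, not part of the article):

```python
def epanechnikov(t):
    """Epanechnikov kernel D(t) = 3/4 (1 - t^2) for |t| <= 1, else 0."""
    return 0.75 * (1.0 - t * t) if abs(t) <= 1.0 else 0.0

def nadaraya_watson(x0, xs, ys, h, kernel=epanechnikov):
    """Nadaraya-Watson estimate of Y at x0.

    xs, ys -- observed points X_i and observations Y(X_i)
    h      -- kernel radius h_lambda(X0), taken constant here
    kernel -- the decreasing function D(t)
    """
    weights = [kernel(abs(x - x0) / h) for x in xs]
    total = sum(weights)
    if total == 0.0:  # no observation falls inside the kernel support
        raise ValueError("empty kernel window at x0")
    return sum(w * y for w, y in zip(weights, ys)) / total

# Noisy samples around the constant function Y(X) = 1 smooth back to ~1.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0, 1.2, 0.8, 1.1, 0.9]
estimate = nadaraya_watson(1.0, xs, ys, h=1.2)
```

Observations near x0 dominate the average, and the denominator renormalizes so the weights sum to one.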

In the following sections, we describe some particular cases of kernel smoothers.

Nearest neighbor smoother

The idea of the nearest neighbor smoother is the following. For each point X0, take the m nearest neighbors and estimate the value of Y(X0) by averaging the values of these neighbors.

Formally, $h_m(X_0) = \left\| X_0 - X_{[m]} \right\|$, where $X_{[m]}$ is the mth closest neighbor to X0, and

$$D(t) = \begin{cases} 1/m & \text{if } |t| \le 1 \\ 0 & \text{otherwise} \end{cases}$$

Example:

In this example, X is one-dimensional. For each X0, the estimate Ŷ(X0) is the average value of the 16 points closest to X0 (marked in red in the original figure). The result is not smooth enough.
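A sketch of this smoother in Python (the function name is an illustrative choice):

```python
def nearest_neighbor_smooth(x0, xs, ys, m):
    """Estimate Y(x0) as the plain average of the m observations
    whose X values are closest to x0 (one-dimensional X)."""
    if m > len(xs):
        raise ValueError("m exceeds the number of observations")
    nearest = sorted(zip(xs, ys), key=lambda p: abs(p[0] - x0))[:m]
    return sum(y for _, y in nearest) / m

# With m = 3, the estimate at x0 = 2 averages Y at X = 1, 2, 3.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 4.0, 9.0, 16.0]   # Y = X^2, noise-free
estimate = nearest_neighbor_smooth(2.0, xs, ys, m=3)   # (1 + 4 + 9) / 3
```

Note that the estimate 14/3 ≈ 4.67 overshoots the true value 4: averaging the neighbors of a curved function biases the estimate, which is one reason the result is not smooth or accurate enough.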

Kernel average smoother

The idea of the kernel average smoother is the following. For each data point X0, choose a constant distance size λ (kernel radius, or window width for p = 1 dimension), and compute a weighted average for all data points that are closer than λ to X0 (the closer to X0 points get higher weights).

Formally, hλ(X0) = λ = constant, and D(t) is one of the popular kernels.

Example:

For each X0 the window width is constant, and the weight of each point in the window is indicated schematically by the yellow shape in the original figure. The estimate is smooth, but the boundary points are biased. The reason is that when X0 is close enough to the boundary, the window contains unequal numbers of points to the left and to the right of X0.
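The boundary bias is easy to reproduce numerically: for noise-free samples of the line Y = X, the kernel average is exact at interior points but pulled upward at the left boundary, where the window contains only points to the right of X0 (a sketch, with an illustrative Epanechnikov kernel and helper names):

```python
def epanechnikov(t):
    """Epanechnikov kernel D(t) = 3/4 (1 - t^2) for |t| <= 1, else 0."""
    return 0.75 * (1.0 - t * t) if abs(t) <= 1.0 else 0.0

def kernel_average(x0, xs, ys, lam):
    """Weighted average over all points within a constant radius lam of x0."""
    w = [epanechnikov(abs(x - x0) / lam) for x in xs]
    return sum(wi * y for wi, y in zip(w, ys)) / sum(w)

xs = [i / 10.0 for i in range(11)]   # 0.0, 0.1, ..., 1.0
ys = list(xs)                        # Y = X, no noise
interior = kernel_average(0.5, xs, ys, lam=0.25)  # symmetric window: ~0.5
boundary = kernel_average(0.0, xs, ys, lam=0.25)  # one-sided window: > 0
```

At x0 = 0.5 the window is symmetric and the weighted average recovers Y exactly; at x0 = 0 every in-window neighbor lies to the right, so the estimate exceeds the true value 0.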

Local linear regression

Main article: Local regression

In the two previous sections we assumed that the underlying Y(X) function is locally constant, and therefore we were able to use a weighted average for the estimate. The idea of local linear regression is to fit locally a straight line (or a hyperplane for higher dimensions) rather than a constant (horizontal line). After fitting the line, the estimate is given by the value of this line at the point X0. Repeating this procedure for each X0 yields the estimation function Ŷ(X). As in the previous section, the window width is constant: hλ(X0) = λ = constant. Formally, local linear regression is computed by solving a weighted least squares problem.

For one dimension (p = 1):

$$\min_{\alpha(X_0),\,\beta(X_0)} \; \sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i) \left( Y(X_i) - \alpha(X_0) - \beta(X_0) X_i \right)^2$$

The closed form solution is given by:

$$\hat{Y}(X_0) = \left(1, X_0\right) \left(B^{T} W(X_0) B\right)^{-1} B^{T} W(X_0) y$$

where:

  • $y = (Y(X_1), \dots, Y(X_N))^T$ is the vector of observations
  • $W(X_0)$ is the $N \times N$ diagonal matrix whose ith diagonal element is $K_{h_\lambda}(X_0, X_i)$
  • $B$ is the $N \times 2$ regression matrix whose ith row is $(1, X_i)$.

Example:

The resulting function is smooth, and the problem with the biased boundary points is solved.
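The boundary fix can be checked with a small Python sketch that solves the 2 × 2 weighted normal equations directly (helper names and the Epanechnikov kernel are illustrative): unlike the kernel average, local linear regression reproduces a straight line exactly, even at the boundary.

```python
def epanechnikov(t):
    """Epanechnikov kernel D(t) = 3/4 (1 - t^2) for |t| <= 1, else 0."""
    return 0.75 * (1.0 - t * t) if abs(t) <= 1.0 else 0.0

def local_linear(x0, xs, ys, lam):
    """Weighted least-squares line fit around x0, evaluated at x0 (p = 1).

    Minimizes sum_i w_i (y_i - a - b x_i)^2 with w_i = D(|x_i - x0| / lam)
    via the 2x2 normal equations, then returns a + b * x0.
    """
    w = [epanechnikov(abs(x - x0) / lam) for x in xs]
    sw   = sum(w)
    swx  = sum(wi * x for wi, x in zip(w, xs))
    swy  = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    det = sw * swxx - swx * swx   # singular if fewer than 2 distinct points
    a = (swxx * swy - swx * swxy) / det
    b = (sw * swxy - swx * swy) / det
    return a + b * x0

xs = [i / 10.0 for i in range(11)]
ys = [2.0 * x + 1.0 for x in xs]                 # Y = 2X + 1, no noise
boundary = local_linear(0.0, xs, ys, lam=0.25)   # exact: ~1.0
interior = local_linear(0.5, xs, ys, lam=0.25)   # exact: ~2.0
```

Because the fitted line can tilt, the one-sided window at x0 = 0 no longer drags the estimate upward.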

Local polynomial regression

Instead of fitting locally linear functions, one can fit polynomial functions.

For p = 1, one should minimize:

$$\min_{\alpha(X_0),\,\beta_j(X_0)} \; \sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i) \left( Y(X_i) - \alpha(X_0) - \sum_{j=1}^{d} \beta_j(X_0) X_i^{j} \right)^2$$

with d the degree of the local polynomial.

In the general case (p > 1), one should minimize:

$$\hat{\beta}(X_0) = \arg\min_{\beta(X_0)} \sum_{i=1}^{N} K_{h_\lambda}(X_0, X_i) \left( Y(X_i) - b(X_i)^{T} \beta(X_0) \right)^2$$

with $b(X) = (1,\, X_1,\, X_2,\, \dots,\, X_1^2,\, X_2^2,\, \dots,\, X_1 X_2,\, \dots)$ the polynomial basis in the components of X.
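For p = 1 and arbitrary degree d, the minimization above can be sketched by forming the (d+1) × (d+1) normal equations and solving them by Gaussian elimination (a sketch with illustrative names; a degree-d local fit reproduces a degree-d polynomial exactly):

```python
def epanechnikov(t):
    """Epanechnikov kernel D(t) = 3/4 (1 - t^2) for |t| <= 1, else 0."""
    return 0.75 * (1.0 - t * t) if abs(t) <= 1.0 else 0.0

def local_polynomial(x0, xs, ys, lam, degree):
    """Weighted least-squares polynomial fit around x0, evaluated at x0.

    Basis b(x) = (1, x, ..., x^degree); weights w_i = D(|x_i - x0| / lam).
    Solves the normal equations (B^T W B) beta = B^T W y directly.
    """
    w = [epanechnikov(abs(x - x0) / lam) for x in xs]
    n = degree + 1
    A = [[sum(wi * x ** (j + k) for wi, x in zip(w, xs)) for k in range(n)]
         for j in range(n)]
    c = [sum(wi * x ** j * y for wi, x, y in zip(w, xs, ys)) for j in range(n)]
    # Gaussian elimination with partial pivoting.
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for k in range(i, n):
                A[r][k] -= f * A[i][k]
            c[r] -= f * c[i]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        beta[i] = (c[i] - sum(A[i][k] * beta[k] for k in range(i + 1, n))) / A[i][i]
    return sum(b * x0 ** j for j, b in enumerate(beta))

xs = [i / 10.0 for i in range(11)]
ys = [x * x for x in xs]                                       # Y = X^2
estimate = local_polynomial(0.5, xs, ys, lam=0.35, degree=2)   # exact: ~0.25
```

With degree = 1 this reduces to the local linear regression of the previous section; higher degrees trade variance for reduced bias in regions of high curvature.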

See also

  • Kernel (statistics)
  • Kernel methods
  • Kernel density estimation
  • Kernel regression
  • Local regression

Retrieved from "http://en.wikipedia.org/w/index.php?title=Kernel_smoother&oldid=401885878"

Categories: Non-parametric statistics