Parametric versus nonparametric models

Parametric and nonparametric models are two broad classes of statistical models used in machine learning and other data analysis tasks.

Parametric models assume a fixed functional form for the data, often tied to a specific distribution such as the normal distribution, and estimate a fixed, finite set of parameters from the training data. Once the parameters have been estimated, the model can be used to make predictions or decisions on new, unseen data. Examples of parametric models include linear regression, logistic regression, and Naive Bayes.
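As a minimal sketch of the parametric idea, the snippet below fits a straight line to some made-up 1-D data with ordinary least squares. The data and the true slope/intercept values are invented for illustration; the point is that the fitted model is completely described by exactly two numbers, no matter how many training points were used.

```python
import numpy as np

# Hypothetical training data: y is roughly 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=50)

# Design matrix with an intercept column. The entire fitted model
# is just two parameters (slope, intercept), regardless of how
# many rows the training set has.
X = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]

def predict(x_new):
    # Predictions need only the two estimated parameters,
    # not the original training data.
    return slope * x_new + intercept

print(slope, intercept)  # close to the true values 2.0 and 1.0
```

Because the parameter count is fixed in advance, the training data can be discarded after fitting; only `slope` and `intercept` are needed at prediction time.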

Nonparametric models, on the other hand, make far weaker assumptions about the underlying distribution of the data. Instead of committing to a specific form for the probability distribution, they learn the structure of the data directly from the training examples. Nonparametric models are typically more flexible and can handle a wider variety of data distributions, but they may require more data to learn that structure reliably. Examples of nonparametric models include decision trees, random forests, and kernel support vector machines.
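To make the contrast concrete, here is a sketch of a simple nonparametric method, k-nearest-neighbour classification, written from scratch with numpy. The two training clusters and their labels are invented for illustration. Notice that the "model" is nothing but the stored training set itself: its effective size grows with the data, and no distributional form is assumed.

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest
    training points (Euclidean distance). The model is just
    the stored training data, so it grows with the dataset."""
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique(y_train[nearest], return_counts=True)
    return values[np.argmax(counts)]

# Two hypothetical clusters, labelled 0 and 1.
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                    [2.0, 2.0], [2.1, 1.9], [1.8, 2.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.1, 0.1])))  # 0
print(knn_predict(X_train, y_train, np.array([2.0, 2.1])))  # 1
```

There is no training step at all here: every prediction consults the raw data directly, which is why nonparametric methods can adapt to complicated distributions but also why they tend to need more data and more memory.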

In general, parametric models have a fixed number of parameters, while the effective number of parameters in a nonparametric model grows with the size of the training data. Parametric models are often easier to interpret and faster to train, but may not be as flexible as nonparametric models. Nonparametric models are often used when there is no clear parametric form for the data, or when the data is too complex for a simple parametric model to capture.
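The fixed-versus-growing distinction above can be shown directly. The sketch below (with invented data) fits a least-squares line at two training-set sizes and compares the size of the fitted parametric model against the amount of data a nearest-neighbour-style model would have to store.

```python
import numpy as np

rng = np.random.default_rng(1)

for n in (100, 1000):
    # Hypothetical data for illustration: y ≈ 2x + 1 plus noise.
    x = rng.uniform(0, 10, size=n)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=n)

    # Parametric: the fitted linear model is always 2 numbers.
    X = np.column_stack([x, np.ones_like(x)])
    params = np.linalg.lstsq(X, y, rcond=None)[0]

    # Nonparametric (nearest-neighbour style): the "model"
    # retains every training point, so it scales with n.
    stored = x.size + y.size

    print(n, params.size, stored)  # parameter count stays 2; storage grows
```

At n = 100 the parametric model holds 2 values while the nonparametric one holds 200; at n = 1000 it is still 2 versus 2000. This is the practical meaning of "fixed" versus "variable" numbers of parameters.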
