Loss function


In mathematical optimization, statistics, econometrics, decision theory, machine learning and computational neuroscience, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized.
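To make the definition concrete, here is a minimal sketch in Python (the squared-error loss and the toy data are illustrative choices, not part of the definition):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# A loss function maps a candidate value to a real number, its "cost".
# Here: total squared error between a parameter guess and observed data.
data = np.array([2.0, 3.5, 4.0, 3.0])

def squared_error_loss(theta):
    """Total squared error of a single parameter guess against the data."""
    return np.sum((data - theta) ** 2)

# An optimization problem seeks to minimize the loss; for squared error
# the minimizer is the sample mean, here about 3.125.
result = minimize_scalar(squared_error_loss)
print(result.x)

# An objective framed as a reward or utility is instead maximized,
# which is equivalent to minimizing its negative.
def reward(theta):
    return -squared_error_loss(theta)
```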

In statistics, a loss function is typically used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In economics, for example, it is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.
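As a sketch of the difference between the estimation and classification settings described above (the function names are hypothetical; squared error and zero-one loss are standard textbook choices):

```python
def squared_error(true_value, estimate):
    # Estimation: the loss is a function of the difference
    # between the estimated and true values.
    return (true_value - estimate) ** 2

def zero_one_loss(true_label, predicted_label):
    # Classification: a flat penalty for any incorrect
    # classification, regardless of how "close" it was.
    return 0 if predicted_label == true_label else 1
```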

In classical statistics, both frequentist and Bayesian, a loss function is typically treated as something of a background mathematical convention. Critics such as W. Edwards Deming and Nassim Nicholas Taleb have argued that loss functions require much greater attention than they have traditionally been given, and that loss functions used in real-world decision making need to reflect actual empirical experience. They argue that real-world loss functions are often very different from the smooth, symmetric ones used by classical convention, and are often highly asymmetric, nonlinear, and discontinuous.
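To make the contrast concrete, here is a sketch comparing a symmetric loss with one standard asymmetric alternative, the quantile (pinball) loss; the quantile level tau = 0.9 is an arbitrary illustrative choice:

```python
def squared_error(error):
    # Symmetric: over- and under-prediction of equal
    # magnitude incur the same cost.
    return error ** 2

def pinball_loss(error, tau=0.9):
    # Asymmetric: with tau = 0.9, a positive error (under-prediction)
    # costs nine times as much as a negative error of equal size.
    return tau * error if error >= 0 else (tau - 1) * error
```

Losses like this arise whenever the two directions of error have unequal consequences, for example understocking versus overstocking inventory.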