
Overfitting


Overfitting means creating a model that matches (memorizes) the training set so closely that the model fails to make correct predictions on new data. An overfit model is analogous to an invention that performs well in the lab but is worthless in the real world.

Tip: Overfitting is a common problem in machine learning, not an academic hypothetical.
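As a rough illustration of that gap (a sketch, not part of the course material), the snippet below trains an unconstrained decision tree on a noisy synthetic dataset. Near-perfect accuracy on the training set paired with much lower accuracy on held-out data is the signature of memorization. The dataset and library choices here are assumptions made purely for illustration.

```python
# Sketch: a decision tree with no depth limit can memorize noisy training data,
# giving near-perfect training accuracy but noticeably worse accuracy on new data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, deliberately noisy data (flip_y adds label noise).
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)  # unlimited depth -> can memorize
model.fit(X_train, y_train)

print("training accuracy:", model.score(X_train, y_train))  # close to 1.0
print("test accuracy:    ", model.score(X_test, y_test))    # noticeably lower
```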

In Figure 11, imagine that each geometric shape represents a tree's position in a square forest. The blue diamonds mark the locations of healthy trees, while the orange circles mark the locations of sick trees.

Figure 11. Training set: locations of healthy and sick trees in a square forest.

Mentally draw any shapes—lines, curves, ovals...anything—to separate the healthy trees from the sick trees. Then compare your separation to the one shown in Figure 12.

Figure 12. A complex model for distinguishing sick from healthy trees.

The complex shapes shown in Figure 12 successfully categorized all but two of the trees. If we think of the shapes as a model, then this is a fantastic model.

Or is it? A truly excellent model successfully categorizes new examples. Figure 13 shows what happens when that same model makes predictions on new examples from the test set:

Figure 13. Test set: a complex model for distinguishing sick from healthy trees.

So, the complex model shown in Figure 12 did a great job on the training set but a pretty bad job on the test set. This is a classic case of a model overfitting to the training set data.

Fitting, overfitting, and underfitting

A model must make good predictions on new data. That is, you're aiming to create a model that "fits" new data.

As you've seen, an overfit model makes excellent predictions on the training set but poor predictions on new data. An underfit model doesn't even make good predictions on the training data. If an overfit model is like a product that performs well in the lab but poorly in the real world, then an underfit model is like a product that doesn't even do well in the lab.

Figure 14. Underfit, fit, and overfit models.

Generalization is the opposite of overfitting. That is, a model that generalizes well makes good predictions on new data. Your goal is to create a model that generalizes well to new data.
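One way to see underfitting, fitting, and overfitting on the same data (a sketch, not from the course) is to fit polynomials of increasing degree to noisy points: degree 1 underfits, a moderate degree fits, and a very high degree overfits, showing low error on the training points but high error on new points. The data-generating function and the specific degrees below are arbitrary assumptions.

```python
# Sketch: compare training error and new-data error for polynomial fits
# of different complexity on the same noisy data.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 30)
x_new = np.sort(rng.uniform(0, 1, 30))   # stands in for new data
y_new = np.sin(2 * np.pi * x_new) + rng.normal(0, 0.2, 30)

for degree in (1, 4, 20):                # underfit, fit, overfit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, new-data MSE {new_mse:.3f}")
```

The high-degree fit typically shows the lowest training error and the highest new-data error, which is exactly the overfitting pattern described above.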

Detecting overfitting

The following curves help you detect overfitting:

A loss curve plots a model's loss against the number of training iterations. A graph that shows two or more loss curves is called a generalization curve. The following generalization curve shows two loss curves:

Figure 15. A generalization curve that strongly implies overfitting.

Notice that the two loss curves behave similarly at first and then diverge. That is, after a certain number of iterations, loss declines or holds steady (converges) for the training set, but increases for the validation set. This suggests overfitting.

In contrast, a generalization curve for a well-fit model shows two loss curves that have similar shapes.
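As a minimal sketch of reading a generalization curve, the snippet below plots a training loss curve and a validation loss curve from hypothetical per-iteration values and flags the iteration at which validation loss stops improving. The numbers are made up purely for illustration.

```python
# Sketch: plot the two loss curves of a generalization curve and spot divergence.
import matplotlib.pyplot as plt

# Hypothetical per-iteration losses recorded during training.
train_loss = [0.90, 0.60, 0.45, 0.35, 0.30, 0.27, 0.25, 0.24, 0.23, 0.22]
val_loss   = [0.95, 0.65, 0.50, 0.42, 0.40, 0.41, 0.44, 0.48, 0.53, 0.60]

plt.plot(train_loss, label="training loss")
plt.plot(val_loss, label="validation loss")
plt.xlabel("training iteration")
plt.ylabel("loss")
plt.title("Generalization curve")
plt.legend()
plt.show()

# Validation loss rising while training loss keeps falling suggests overfitting.
best_iteration = val_loss.index(min(val_loss))
print("validation loss starts increasing after iteration", best_iteration)
```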

What causes overfitting?

Very broadly speaking, overfitting is caused by one or both of the following problems: the training set doesn't adequately represent real-world data, or the model is too complex.

Generalization conditions

A model trains on a training set, but the real test of a model's worth is how well it makes predictions on new examples, particularly on real-world data. While developing a model, your test set serves as a proxy for real-world data. Training a model that generalizes well implies the following dataset conditions: examples are independent and identically distributed, the dataset is stationary (it doesn't change much over time), and all partitions (training, validation, and test) have similar statistical distributions.
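A minimal sketch of the shuffle-then-partition idea explored in the first exercise below; the 80/10/10 split fractions and the stand-in dataset are assumptions, not course requirements.

```python
# Sketch: shuffle examples BEFORE splitting so that the training, validation,
# and test sets are more likely to share the same statistical distribution.
import numpy as np

rng = np.random.default_rng(0)
examples = np.arange(1000)          # stands in for a full dataset of examples
rng.shuffle(examples)               # shuffle first

n = len(examples)
train = examples[: int(0.8 * n)]                      # 80% training
validation = examples[int(0.8 * n): int(0.9 * n)]     # 10% validation
test = examples[int(0.9 * n):]                        # 10% test
print(len(train), len(validation), len(test))
```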

Explore the preceding conditions through the following exercises.

Exercises: Check your understanding

Consider a dataset partitioned into a training set, a validation set, and a test set.

What should you do to ensure that the examples in the training set have a similar statistical distribution to the examples in the validation set and the test set?

Shuffle the examples in the dataset extensively before partitioning them.

Yes. Good shuffling of examples makes partitions much more likely to be statistically similar.

Sort the examples from earliest to most recent.

If the examples in the dataset are not stationary, then sorting makes the partitions less similar.

Do nothing. Given enough examples, the law of averages naturally ensures that the distributions will be statistically similar.

Unfortunately, this is not the case. The examples in certain sections of the dataset may differ from those in other sections.

A streaming service is developing a model to predict the popularity of potential new television shows for the next three years. The streaming service plans to train the model on a dataset containing hundreds of millions of examples, spanning the previous ten years. Will this model encounter a problem?

Probably. Viewers' tastes change in ways that past behavior can't predict.

Yes. Viewer tastes are not stationary. They constantly change.

Definitely not. The dataset is large enough to make good predictions.

Unfortunately, viewers' tastes are nonstationary.

Probably not. Viewers' tastes change in predictably cyclical ways. Ten years of data will enable the model to make good predictions on future trends.

Although certain aspects of entertainment are somewhat cyclical, a model trained from past entertainment history will almost certainly have trouble making predictions about the next few years.

A model aims to predict the time it takes for people to walk a mile based on weather data (temperature, dew point, and precipitation) collected over one year in a city whose weather varies significantly by season. Can you build and test a model from this dataset, even though the weather readings change dramatically by season?

Yes

Yes, it is possible to build and test a model from this dataset. You just have to ensure that the data is partitioned equally, so that data from all four seasons is distributed equally into the different partitions.

No

Assuming this dataset contains enough examples of temperature, dew point, and precipitation, then you can build and test a model from this dataset. You just have to ensure that the data is partitioned equally, so that data from all four seasons is distributed equally into the different partitions.
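One possible way to carry out that even seasonal partitioning is a stratified split on a season label, so every partition contains examples from all four seasons. The column names, synthetic values, and split fraction below are assumptions for illustration only.

```python
# Sketch: stratify the train/test split on season so the seasonal mix
# is similar across partitions.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
seasons = np.repeat(["winter", "spring", "summer", "fall"], 90)  # ~one year
df = pd.DataFrame({
    "season": seasons,
    "temperature": rng.normal(15, 10, len(seasons)),   # hypothetical readings
    "dew_point": rng.normal(8, 5, len(seasons)),
    "precipitation": rng.random(len(seasons)),
})

train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["season"], random_state=0)

# Each partition now contains roughly the same proportion of every season.
print(train_df["season"].value_counts())
print(test_df["season"].value_counts())
```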

Challenge exercise

You are creating a model that predicts the ideal date for riders to buy a train ticket for a particular route. For example, the model might recommend that users buy their ticket on July 8 for a train that departs July 23. The train company updates prices hourly, basing their updates on a variety of factors but mainly on the current number of available seats. That is, when many seats are available, ticket prices are typically low, and when few seats are available, ticket prices are typically high.

Your model exhibits low loss on the validation set and the test set but sometimes makes terrible predictions on real-world data. Why?


Answer: The real-world model is struggling with a feedback loop.

For example, suppose the model recommends that users buy tickets on July 8. Some riders who use the model's recommendation buy their tickets at 8:30 in the morning on July 8. At 9:00, the train company raises prices because fewer seats are now available. Riders using the model's recommendation have altered prices. By evening, ticket prices might be much higher than in the morning.
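A toy simulation of that feedback loop (a sketch with a made-up pricing rule and made-up numbers) shows how acting on the recommendation invalidates the price the recommendation assumed:

```python
# Sketch: riders acting on the model's recommendation reduce seat availability,
# which pushes the price up, so the price the model assumed no longer holds.
seats_available = 100

def hourly_price(seats):
    # Hypothetical pricing rule: fewer available seats -> higher price.
    return 50.0 + (100 - seats) * 2.0

predicted_price = hourly_price(seats_available)   # what the model assumed

riders_following_model = 30
seats_available -= riders_following_model         # the recommendation changes the world
actual_price = hourly_price(seats_available)

print("price the model assumed:", predicted_price)  # 50.0
print("price after riders act: ", actual_price)     # 110.0
```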

