Numerical overflow when using contextual metrics · Issue #212 · sintel-dev/Orion

Description

I am trying to reproduce the results from the notebook Pipeline Scoring Example.ipynb in the GitHub repository.
This notebook evaluates the F1 score of the lstm_dynamic_threshold pipeline on the S-1 time series.
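
For reference, the scoring performed by the notebook looks roughly like the following. This is only a minimal sketch based on my understanding of the Orion API (`Orion`, `load_signal`, `load_anomalies`, `contextual_f1_score`); the exact calls in the notebook may differ:

```python
from orion import Orion
from orion.data import load_signal, load_anomalies
from orion.evaluation import contextual_f1_score

# Load the S-1 signal and its known anomalies
# (helper names assumed from the Orion docs)
data = load_signal('S-1')
ground_truth = load_anomalies('S-1')

# Fit the lstm_dynamic_threshold pipeline and detect anomalies
orion = Orion(pipeline='lstm_dynamic_threshold')
orion.fit(data)
detected = orion.detect(data)

# Compare the detected intervals against the ground truth
# using the contextual F1 score
score = contextual_f1_score(ground_truth, detected, data)
print(score)
```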

When I run the notebook, without changing anything, it works without raising any exception.

But the resulting F1 score is very different from the expected result shown on GitHub.

It seems that I am hitting some kind of instability issue.

What I Did

When I run the notebook, the final F1 score I get is:

But according to GitHub, the result should be:

So there is a huge difference, and I don't know where it could come from.
When I rerun the notebook, I get different results: the F1 score ranges from 0.08 to 0.3.
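
One way to check whether this spread comes from random weight initialization (just a sketch, assuming the LSTM runs on a TensorFlow 2.x backend) is to fix the random seeds before each run:

```python
import random

import numpy as np
import tensorflow as tf

def set_seeds(seed=0):
    # Fix the Python, NumPy and TensorFlow seeds so that the LSTM
    # weights are initialized identically on every run
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)

scores = []
for run in range(5):
    set_seeds(0)
    # Re-create the pipeline, fit, detect and score exactly as in the
    # notebook, then collect the resulting F1 score:
    # scores.append(f1)

print(scores)
```

If the scores agree once the seeds are fixed, the spread comes from random weight initialization; if they still differ, something else (e.g. non-deterministic GPU ops) is involved.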

To give a little more detail:

According to the training logs, there does not seem to be a huge difference from the GitHub source:
My results:

Result from Orion's GitHub:

But the anomalies I detect form a longer sequence, which explains why the precision and recall I get are worse (a quick way to check this is sketched after the results below):
My result:

Result from Orion's GitHub:
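
That interpretation could be checked directly with the contextual precision and recall helpers (again just a sketch, assuming orion.evaluation exposes contextual_precision and contextual_recall alongside contextual_f1_score):

```python
from orion.evaluation import contextual_precision, contextual_recall

# `data`, `ground_truth` and `detected` are the objects from the
# snippet in the description above
precision = contextual_precision(ground_truth, detected, data)
recall = contextual_recall(ground_truth, detected, data)
print(f"precision={precision:.3f}, recall={recall:.3f}")
```

A detected interval that is much longer than the true anomaly still overlaps the ground truth, so recall holds up, but it also covers a lot of normal data, which drives the contextual precision (and therefore the F1 score) down.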

