Proposing two new reconstruction-based models
LSTM AE
The LSTM autoencoder is an encoder-decoder architecture containing two LSTM layers, each with 60 units.
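As a sketch of this architecture, the encoder LSTM can compress each input window into a latent vector that the decoder LSTM then unrolls back into a sequence. The window length (100) and feature count (1) below are hypothetical choices, not specified in the text; the layer sizes follow the 60-unit description above.

```python
# A minimal sketch of the two-layer LSTM autoencoder described above.
# window_size and n_features are assumed values, not from the source.
import numpy as np
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense, Input
from tensorflow.keras.models import Model

window_size, n_features = 100, 1  # assumptions for illustration

inputs = Input(shape=(window_size, n_features))
# Encoder: an LSTM with 60 units compresses the window into one vector.
encoded = LSTM(60)(inputs)
# Repeat the latent vector so the decoder emits one step per time step.
repeated = RepeatVector(window_size)(encoded)
# Decoder: a second LSTM with 60 units reconstructs the sequence.
decoded = LSTM(60, return_sequences=True)(repeated)
outputs = TimeDistributed(Dense(n_features))(decoded)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(4, window_size, n_features).astype("float32")
y_hat = model.predict(x, verbose=0)
print(y_hat.shape)  # same shape as the input windows
```

The `RepeatVector`/`TimeDistributed` pattern is one common way to wire a sequence-to-sequence autoencoder in Keras; the actual pipeline primitive may differ in detail.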
Dense AE
The dense autoencoder is also an encoder-decoder architecture, consisting of three Dense layers with 60, 20, and 60 units respectively.
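The dense variant operates on flattened windows. The sketch below assumes a window length of 100 and appends a final linear layer to map back to the input dimension (both assumptions for illustration; only the 60-20-60 hidden sizes come from the text).

```python
# A minimal sketch of the 60-20-60 dense autoencoder described above.
# window_size and the final linear output layer are assumptions.
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

window_size = 100  # assumed value, not from the source

inputs = Input(shape=(window_size,))
# Encoder: Dense layers with 60 and 20 units compress the window.
h = Dense(60, activation="relu")(inputs)
bottleneck = Dense(20, activation="relu")(h)
# Decoder: a Dense layer with 60 units, then a linear reconstruction.
h = Dense(60, activation="relu")(bottleneck)
outputs = Dense(window_size)(h)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(4, window_size).astype("float32")
y_hat = model.predict(x, verbose=0)
print(y_hat.shape)  # one reconstruction per input window
```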
Both approaches use `reconstruction_errors` to compute the distance between `y` and `y_hat`, namely the point-wise difference. To eliminate code repetition, we refactor `reconstruction_errors` out of `tadgan` and reuse it.
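The point-wise distance can be sketched as below. This assumes the absolute element-wise difference as the distance measure; it is an illustrative stand-in, not the shared `reconstruction_errors` primitive itself.

```python
# A minimal sketch of point-wise reconstruction errors, assuming the
# distance is the absolute difference |y - y_hat| (an illustrative
# choice, not necessarily what the shared primitive computes).
import numpy as np

def pointwise_errors(y, y_hat):
    """Return the element-wise absolute difference between y and y_hat."""
    return np.abs(np.asarray(y) - np.asarray(y_hat))

y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.1, 1.8, 3.5])
errors = pointwise_errors(y, y_hat)
print(errors)  # larger values mark points that were reconstructed poorly
```

Thresholding these errors (e.g., flagging points several standard deviations above the mean error) is the usual final step that turns reconstruction quality into anomaly labels.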