We are going to change the way we serve the binaries, so we want to ensure that the binaries are properly migrated. We can also take this opportunity to add some scripts (potentially GitHub Actions) that check that the binaries are intact and the releases are correct.
Historical Context

We have been suffering from cache problems for a while:
It seems the long-term solution will be to relocate the binaries to R2:
Implementation

I started building a simple GitHub Action that collects all the releases and generates the URLs for all the available binaries. It then performs a basic HTTP request with curl to check the response headers, generates some metrics from the results, and presents a simple report in Markdown format.
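As a rough sketch of that flow (assuming Node.js 18+ for the global `fetch`; `https://nodejs.org/dist/index.json` is the public release index, but the artifact-name mapping below is simplified for illustration and the real action covers every platform):

```typescript
// Sketch: list recent releases, build binary URLs, and check response headers.
// Assumes Node.js 18+ (global fetch). Only a few artifact types are modeled.

const DIST = "https://nodejs.org/dist";

interface Release {
  version: string; // e.g. "v20.11.0"
  files: string[]; // e.g. ["linux-x64", "win-x64-zip", ...]
}

// Map an index.json "files" entry to an artifact name; skip entries
// this sketch doesn't model.
function artifactName(version: string, file: string): string | undefined {
  if (file === "linux-x64") return `node-${version}-linux-x64.tar.gz`;
  if (file === "win-x64-zip") return `node-${version}-win-x64.zip`;
  if (file === "osx-arm64-tar") return `node-${version}-darwin-arm64.tar.gz`;
  return undefined;
}

async function main() {
  const releases: Release[] = await (await fetch(`${DIST}/index.json`)).json();

  const rows = ["| URL | Status | Content-Length |", "| --- | --- | --- |"];
  for (const release of releases.slice(0, 3)) { // keep the sketch small
    for (const file of release.files) {
      const name = artifactName(release.version, file);
      if (!name) continue;
      const url = `${DIST}/${release.version}/${name}`;
      // HEAD request: we only need the headers, not the binary itself.
      const res = await fetch(url, { method: "HEAD" });
      rows.push(`| ${url} | ${res.status} | ${res.headers.get("content-length") ?? "?"} |`);
    }
  }
  console.log(rows.join("\n")); // simple Markdown report
}

main().catch((err) => { console.error(err); process.exit(1); });
```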
While presenting this proof of concept in Slack, collaborators provided very useful feedback and suggested features that we could implement.
Current approach
Using a CRON job to collect availability metrics may not be very effective for the cache-issues scenario, but many of the requested features can still be valuable to us.
Features requested/ideas
- Check `iojs.org/dist`, as NVM depends on it (@ljharb)
- Check that the `SHASUMS256` files are correctly signed (@UlisesGascon)
- Check that the `SHASUMS256` files are available (see the checksum sketch after this list)
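As a minimal sketch of the checksum side of these checks (GPG signature verification is out of scope here; this only recomputes a SHA-256 and compares it against the published `SHASUMS256.txt` entry — the version and file name in the usage example are assumptions for illustration):

```typescript
import { createHash } from "node:crypto";

const DIST = "https://nodejs.org/dist";

// Sketch: confirm a downloaded binary matches its published SHA-256.
// A real check would iterate over every artifact listed in SHASUMS256.txt.
async function verifyChecksum(version: string, fileName: string): Promise<boolean> {
  // 1. Fetch the published checksums ("<sha256>  <file>" per line).
  const text = await (await fetch(`${DIST}/${version}/SHASUMS256.txt`)).text();
  const expected = new Map(
    text.trim().split("\n").map((line) => {
      const [hash, file] = line.trim().split(/\s+/);
      return [file, hash] as const;
    }),
  );

  const want = expected.get(fileName);
  if (!want) throw new Error(`${fileName} not listed in SHASUMS256.txt`);

  // 2. Download the artifact and hash it.
  const res = await fetch(`${DIST}/${version}/${fileName}`);
  const buf = Buffer.from(await res.arrayBuffer());
  const got = createHash("sha256").update(buf).digest("hex");

  return got === want;
}

// Illustrative usage (version and artifact name are assumptions):
verifyChecksum("v20.11.0", "node-v20.11.0-linux-x64.tar.gz")
  .then((ok) => console.log(ok ? "checksum OK" : "checksum MISMATCH"));
```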
I will request a transfer of the repo to the Node.js org when the code is stable and documented; currently the code is quite hacky.
Next steps

I have started to consolidate the feedback into issues:
There are also some things that bubbled to the surface while implementing the systematic checks: