A processing environment for HyP3 Plugins in AWS.
Clone the repository:

```
git clone git@github.com:ASFHyP3/hyp3.git
cd hyp3
```

Create and activate a conda environment:

```
conda env create -f environment.yml
conda activate hyp3
```
Run the tests:

Alternatively, you can invoke pytest directly (e.g. for passing command-line arguments):

```
eval $(make env)
make render && pytest
```
In particular, to skip tests that require a network connection, run:

And to run only those tests:

When writing new tests, decorate such tests with `@pytest.mark.network`.

Also, remember to re-run `make render` after making changes to rendered files.
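Assuming the standard pytest marker syntax (the repository's `Makefile` may already wrap these in convenience targets), the two marker-filtering invocations above would look like:

```shell
# Skip tests that require a network connection
pytest -m "not network"

# Run only the tests that require a network connection
pytest -m network
```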
Important
It's not currently possible to deploy HyP3 fully independent of ASF due to our integration with ASF Vertex. If you'd like your own deployment of HyP3, please open an issue here or email our User Support Office at uso@asf.alaska.edu with your request.
We currently have HyP3 deployments in various AWS accounts managed by three different organizations, also referred to as "security environments" throughout our code and docs (because the AWS accounts have different security requirements depending on the organization):
Important

JPL deployments must start with the `JPL` security environment, but can be migrated to `JPL-public` after they are fully deployed and approved to have a public bucket.
For JPL, these deployment docs assume that:

- `power_user` role

For a new EDC deployment, you need the following items (not necessarily a comprehensive list):
EDC UAT/prod deployment steps are not fully documented here. When deploying HyP3 to a new EDC account for the first time, you should also refer to the SOP for deploying HyP3 to EDC. You should then be able to deploy additional copies of HyP3 to an EDC Sandbox account by following this README alone.
After deploying HyP3 to an EDC Sandbox account, you'll need to follow our documentation on Accessing Private API Gateways in Earthdata Cloud.
Tip
You can expand and collapse details specific to a security environment as you go through this README. Make sure you're looking at the details for the security environment you're deploying into!
Ensure there is a CloudFormation templates bucket

In order to deploy HyP3, the AWS account will need an S3 bucket, in the same region as the deployment, for storing AWS CloudFormation templates.

For the JPL and EDC security environments, this will likely have already been set up for you by the respective cloud services team. You can confirm this by going to the AWS S3 console and looking for a bucket named something like `cf-templates-<HASH>-<region>`. If not, follow the ASF steps below.
Note: This section only needs to be completed once per region used in an AWS account.

A new account will not have a bucket for storing AWS CloudFormation templates, which is needed to deploy a CloudFormation stack. AWS will automatically create a suitable bucket, named something like `cf-templates-<HASH>-<region>`, if you try to create a new CloudFormation stack in the AWS Console.
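If you'd rather not click through the console, a templates bucket can also be created directly with the AWS CLI. This is only a sketch — the bucket name and region below are illustrative placeholders, not the auto-generated name AWS would choose:

```shell
# Create a private S3 bucket for CloudFormation templates
# (bucket name and region are illustrative placeholders;
# omit --create-bucket-configuration when using us-east-1)
aws s3api create-bucket \
    --bucket my-cf-templates-bucket \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2
```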
The primary and recommended way to deploy HyP3 is through our GitHub Actions CI/CD pipeline. For ASF and JPL, this requires some setup:
ASF: Create a service user and deployment role

In order to integrate an ASF deployment, we'll need:

These can be created by deploying the ASF CI stack.
Warning: This stack should only be deployed once per AWS account. This stack also assumes you are only deploying into a single AWS Region. If you are deploying into multiple regions in the same AWS account, you'll need to adjust the IAM permissions that are limited to a single region.
From the repository root, run the following command, replacing `<profile>` and `<template-bucket>` with the appropriate values for your AWS account:

```
aws --profile <profile> cloudformation deploy \
    --stack-name hyp3-ci \
    --template-file cicd-stacks/ASF-deployment-ci-cf.yml \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameter-overrides TemplateBucketName=<template-bucket>
```
Once the `github-actions` IAM user has been created, you can create an AWS access key for that user, which we will use to deploy HyP3 via CI/CD tooling:
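As a sketch, the access key can also be created with the AWS CLI (assuming the user is named `github-actions` as above):

```shell
# Create an access key for the github-actions IAM user;
# record the AccessKeyId and SecretAccessKey from the output
aws --profile <profile> iam create-access-key --user-name github-actions
```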
JPL restricts developers from creating IAM roles or policies inside their AWS commercial cloud accounts. However, HyP3 can be deployed into a JPL-managed AWS commercial account as long as JPL's `roles-as-code` tooling is provided in the account and in the same region as the deployment. Currently, the only supported regions are `us-west-1`, `us-west-2`, `us-east-1`, and `us-east-2`.
To request that `roles-as-code` tooling be deployed in a JPL account, open a Cloud Team Service Desk request here: https://itsd-jira.jpl.nasa.gov/servicedesk/customer/portal/13/create/461?q=roles&q_time=1644889220558

For more information about `roles-as-code`, see:

Note: You must be on the JPL VPN to view the `.jpl.nasa.gov` links in this document.
In order to integrate a JPL deployment into our CI/CD pipelines, a JPL-created "service user" is needed to get long-term (90-day) AWS access keys. When requesting a service user, you'll need to request that an appropriate deployment policy containing all the necessary permissions for deployment is attached to the user. An appropriate deployment policy can be created in a JPL account by deploying the JPL CI stack.
From the repository root, run:
```
aws cloudformation deploy \
    --stack-name hyp3-ci \
    --template-file cicd-stacks/JPL-deployment-policy-cf.yml
```
Warning: This stack should only be deployed once per AWS account. This stack also assumes you are only deploying into a single AWS Region. If you are deploying into multiple regions in the same AWS account, you'll need to adjust the IAM permissions that are limited to a single region.
Then open a Cloud Team Service Desk request for a service user account here: https://itsd-jira.jpl.nasa.gov/servicedesk/customer/portal/13/create/416?q=service%20user&q_time=1643746791578 with the deployed policy name in the "Managed Permissions to be Attached" field. The policy name should look like `hyp3-ci-DeployPolicy-*`, and can be found either in the IAM console or listed under the `hyp3-ci` CloudFormation stack's Resources.
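One way to look up the policy name from the command line (a sketch; the query assumes the policy is the stack's only IAM managed-policy resource):

```shell
# List the hyp3-ci stack's resources and print the managed policy name
aws cloudformation describe-stack-resources \
    --stack-name hyp3-ci \
    --query "StackResources[?ResourceType=='AWS::IAM::ManagedPolicy'].PhysicalResourceId" \
    --output text
```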
Once the JPL service user has been created, you should receive an AWS access key that can be used to deploy HyP3 via CI/CD tooling.

Important: These keys are stored in the associated JPL-managed AWS account in an AWS Secrets Manager secret with the same name as the service user. JPL automatically rotates them every 90 days, so they will need to be periodically refreshed in the GitHub deploy environment secrets (described below).
Create Earthdata Login user

Assuming the job spec(s) for your chosen job type(s) require the `EARTHDATA_USERNAME` and `EARTHDATA_PASSWORD` secrets, you will need to create an Earthdata Login user for your deployment if you do not already have one:
Go to AWS console -> Secrets Manager, then:
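The equivalent secret can be sketched with the AWS CLI. The secret name and JSON keys below are illustrative placeholders — match whatever your deployment's job specs actually expect:

```shell
# Create a Secrets Manager secret holding the Earthdata Login credentials
# (secret name and JSON keys are illustrative placeholders)
aws secretsmanager create-secret \
    --name <deployment-name> \
    --secret-string '{"EARTHDATA_USERNAME": "<username>", "EARTHDATA_PASSWORD": "<password>"}'
```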
Warning
This step must be done by an ASF employee.
To allow HTTPS connections, HyP3 needs an SSL certificate that is valid for its deployment domain name (URL).
If HyP3 is being deployed to an ASF-managed AWS account, we can use the master certificate that covers all `*.asf.alaska.edu` domains. Otherwise, we'll need a deployment-specific certificate.
Important: Skip this step for EDC Sandbox deployments.
ASF-managed AWS account: Upload the ASF master SSL certificate

Upload the `*.asf.alaska.edu` SSL certificate to AWS Certificate Manager (ACM):

- Contents of the `asf.alaska.edu.cer` file go in "Certificate body"
- Contents of the `asf.alaska.edu.key` file go in "Certificate private key"
- Contents of the `intermediates.pem` file go in "Certificate chain"

Submit a Platform request in ASF JIRA for a new certificate, including the domain name (e.g. `hyp3-foobar.asf.alaska.edu`).
Once you receive the certificate's private key and links to download the certificate in various formats, download these files:
and then upload them to AWS Certificate Manager (ACM):
Warning
This step must be done by someone with admin access to the ASFHyP3/hyp3 repository, which is generally only possible for ASF employees on HyP3 development teams.
(This is typically `main` for prod deployments, `develop` for test deployments, or a feature branch name for sandbox deployments.)

Add the following environment secrets:

- `AWS_REGION` - e.g. `us-west-2`
- `BUCKET_READ_PRINCIPALS` (EDC only) - List of AWS IAM principals granted read access to data in S3 for Earthdata Cloud deployments. For EDC Sandbox deployments, if you don't know what to put here, you can simply set it to `arn:aws:iam::<edc-sandbox-account-id>:root`, where `<edc-sandbox-account-id>` is the AWS account ID for the EDC Sandbox account.
- `CERTIFICATE_ARN` (ASF and JPL only) - ARN of the AWS Certificate Manager certificate that you imported manually (AWS console -> Certificate Manager -> List certificates, e.g. `arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`)
- `CLOUDFORMATION_ROLE_ARN` (ASF only) - Part of the `hyp3-ci` stack that you deployed, e.g. `arn:aws:iam::xxxxxxxxxxxx:role/hyp3-ci-CloudformationDeploymentRole-XXXXXXXXXXXXX`
- `SECRET_ARN` - ARN for the AWS Secrets Manager secret that you created manually
- `V2_AWS_ACCESS_KEY_ID` - AWS access key ID (e.g. for the `github-actions` user)
- `V2_AWS_SECRET_ACCESS_KEY` - The corresponding secret access key
- `VPC_ID` - ID of the default VPC for this AWS account and region (AWS console -> VPC -> Your VPCs, e.g. `vpc-xxxxxxxxxxxxxxxxx`)
- `SUBNET_IDS` - Comma-delimited list (no spaces) of the default subnets for the VPC specified in `VPC_ID` (AWS console -> VPC -> Subnets, e.g. `subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx`)

You will need to add the deployment to the matrix in an existing GitHub Actions `deploy-*.yml` workflow or create a new one for the deployment. If you need to create a new one, we recommend copying one of the `deploy-*-sandbox.yml` workflows and then updating all of the fields (`environment`, `domain`, `template_bucket`, etc.) as appropriate for your deployment. Also make sure to update the top-level `name` of the workflow and the name of the branch to deploy from. (This is typically `main` for prod deployments, `develop` for test deployments, or a feature branch name for sandbox deployments.)
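If you prefer the GitHub CLI to the web UI, environment secrets can be set from the command line. This is a sketch — the environment name and value are placeholders, and it assumes `gh` is authenticated with access to the ASFHyP3/hyp3 repository:

```shell
# Set a deployment environment secret with the GitHub CLI
gh secret set AWS_REGION \
    --repo ASFHyP3/hyp3 \
    --env <environment-name> \
    --body us-west-2
```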
Tip
If you're deploying from a feature branch, make sure to protect it from accidental deletion.
The deployment workflow will run as soon as you merge your changes into the branch specified in the workflow file.
Once HyP3 is deployed, there are a few follow on tasks you may need to do for a fully functional HyP3 deployment.
Create DNS record for new HyP3 API

Warning

This step must be done by an ASF employee.

Important: Skip this step for EDC Sandbox deployments.
Open a PR adding a line to https://gitlab.asf.alaska.edu/operations/puppet/-/blob/production/modules/legacy_dns/files/asf.alaska.edu.db for the new custom domain name (AWS console -> api gateway -> custom domain names -> "API Gateway domain name").
Ask the Platform team in the ~development-support
channel in Mattermost to review/merge the PR.
Changes should take effect within 15-60 minutes after merging. Confirm that a Swagger UI is available at your chosen API URL.
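One way to check from the command line once DNS has propagated (the URL is a placeholder for your deployment's domain, and the `/ui` path is an assumption — verify it matches where your deployment serves its Swagger UI):

```shell
# Expect an HTTP 200 status line for the Swagger UI page
curl -sSI https://<deployment-name>.asf.alaska.edu/ui/ | head -n 1
```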
Update the AWS Accounts and HyP3 Deployments spreadsheet.
Tip
While waiting for the DNS PR, you can edit your local DNS name resolution so you can connect to your deployment:

- Run `nslookup <gateway-domain-name>`
- Add a line to `/etc/hosts` to connect one of the returned IP addresses with your custom domain name, like:

```
XX.XX.XXX.XXX <deployment-name>.asf.alaska.edu
```

Remember to remove this after the DNS PR is merged!
JPL: Allow a public HyP3 content bucket for JPL accounts

By default, JPL commercial AWS accounts have an S3 account-level Block All Public Access setting which must be disabled by the JPL Cloud team in order to attach a public bucket policy to the HyP3 content bucket.

The steps to disable the account-level Block All Public Access setting are outlined in the S3 section here: https://wiki.jpl.nasa.gov/display/cloudcomputing/AWS+Service+Policies+at+JPL

Once this setting has been disabled, you can attach a public bucket policy to the HyP3 content bucket by redeploying HyP3 using the `JPL-public` security environment.
Warning: This step must be done by an ASF employee.
If your HyP3 deployment uses the `RTC_GAMMA` or `INSAR_GAMMA` job types and is the first such deployment in this AWS account, you will need to grant the AWS account permission to pull the `hyp3-gamma` container.
In the HyP3 AWS account (not the AWS account for the new deployment), go to AWS console -> Elastic Container Registry -> hyp3-gamma -> Permissions -> "Edit policy JSON":

- Add the new AWS account to the `Principal` list (e.g. `arn:aws:iam::xxxxxxxxxxxx:root`)
- Update the `SID` as appropriate
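To review the current repository policy from the command line before editing it (a sketch; assumes credentials for the HyP3 AWS account):

```shell
# Show the existing hyp3-gamma repository policy
aws ecr get-repository-policy --repository-name hyp3-gamma
```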
To delete a HyP3 deployment, delete any of the resources created above that are no longer needed.

Before deleting the HyP3 CloudFormation stack, you should manually empty and delete the `contentbucket` and `logbucket` for the deployment via the S3 console.
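The buckets can also be emptied and removed with the AWS CLI. The bucket names below are placeholders — look up the actual names in the stack's Resources tab — and note that if the buckets are versioned, object versions must be deleted separately before the bucket can be removed:

```shell
# Empty and delete the content and log buckets (names are placeholders)
aws s3 rm --recursive s3://<content-bucket-name>
aws s3 rb s3://<content-bucket-name>
aws s3 rm --recursive s3://<log-bucket-name>
aws s3 rb s3://<log-bucket-name>
```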
The API can be run locally for testing and development purposes:

1. Edit `tests/cfg.env` to specify the names of the DynamoDB tables from the HyP3 deployment. Delete all of the `AWS_*` variables.
2. Run the following, replacing `<profile>` with the AWS config profile that corresponds to the HyP3 deployment:

   ```
   AWS_PROFILE=<profile> make run
   ```

3. You should see `Running on http://127.0.0.1:8080` in the output. Open the URL in your browser and verify that you see the Swagger UI for the locally running API.
4. Try the `GET /user` endpoint and verify that it returns the correct information for your username.