This plugin uses a Hammerspace backend as distributed data storage for containers.

Supports CSI Spec 1.1.0. Implements the Identity, Node, and Controller interfaces as a single Golang binary.
Supported volume types:

- File-backed Block Volume (raw device)
- File-backed Mounted Volume (filesystem)
- Share-backed Mounted Volume (shared filesystem)
Ensure that the NFS client utilities (nfs-utils) are installed on the Kubernetes hosts (Ubuntu, CentOS).
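The package name differs between distributions; a typical install (assuming apt- and yum-based hosts with sudo access) looks like:

```shell
# Ubuntu / Debian: the NFS client tools ship in the nfs-common package
sudo apt-get install -y nfs-common

# CentOS / RHEL: the package is named nfs-utils
sudo yum install -y nfs-utils
```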
The plugin container(s) must run as privileged containers
Kubernetes-specific deployment instructions are located here.
Configuration parameters for the driver (passed as environment variables to the plugin container). Parameters marked with `*` are required:

| Name | Default | Description |
|------|---------|-------------|
| `CSI_ENDPOINT` * | | Location on host for the gRPC socket (Ex: /tmp/csi.sock) |
| `CSI_NODE_NAME` * | | Identifier for the host the plugin is running on |
| `HS_ENDPOINT` * | | Hammerspace API gateway |
| `HS_USERNAME` * | | Hammerspace username (admin role credentials) |
| `HS_PASSWORD` * | | Hammerspace password |
| `HS_TLS_VERIFY` | false | Whether to validate the Hammerspace API gateway certificates |
| `HS_DATA_PORTAL_MOUNT_PREFIX` | | Override the prefix for data portal mounts. Ex: /mnt/data-portal |
| `CSI_MAJOR_VERSION` | "1" | The major version of the CSI interface used to communicate with the plugin. Valid values are "1" and "0" |
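As a sketch, these variables can be collected into an env file and handed to the container runtime. The hostname, node name, and credentials below are illustrative placeholders, not defaults:

```shell
# Write a sample environment file for the plugin container.
# All values here are placeholders for this sketch.
cat > csi-plugin.env <<'EOF'
CSI_ENDPOINT=/tmp/csi.sock
CSI_NODE_NAME=node-1
HS_ENDPOINT=https://anvil.example.com
HS_USERNAME=admin
HS_PASSWORD=admin
HS_TLS_VERIFY=false
CSI_MAJOR_VERSION=1
EOF

# The file can then be passed to docker via --env-file:
#   docker run --env-file csi-plugin.env ...
```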
Supported volume parameters for CreateVolume requests (maps to Kubernetes storage class params):

| Name | Default | Description |
|------|---------|-------------|
| `exportOptions` | | Export options applied to shares created by the plugin. Format is a ';'-separated list of subnet,access,rootSquash triples. Ex: `*,RW,false; 172.168.0.0/20,RO,true` |
| `deleteDelay` | -1 | The value of the delete delay parameter passed to Hammerspace when the share is deleted. '-1' implies Hammerspace cluster defaults |
| `volumeNameFormat` | `%s` | The name format to use when creating shares or files on the backend. Must contain a single '%s' that will be replaced with unique volume id information. Ex: `csi-volume-%s-us-east` |
| `objectives` | "" | Comma-separated list of objectives to set on created shares and files, in addition to default objectives |
| `blockBackingShareName` | | The share in which to store Block Volume files. If it does not exist, the plugin will create it; alternatively, a preexisting share can be used. Must be specified if provisioning Block Volumes |
| `mountBackingShareName` | | The share in which to store File-backed Mount Volume files. If it does not exist, the plugin will create it; alternatively, a preexisting share can be used. Must be specified if provisioning Filesystem Volumes other than 'nfs' |
| `fsType` | nfs | The file system type to place on created mount volumes. If a value other than "nfs" is given, a file-backed volume is created instead of an NFS share |
| `additionalMetadataTags` | | Tags to set on files and shares created by the plugin. Format is a ','-separated list of key=value pairs. Ex: `storageClassName=hs-storage,fsType=nfs` |

For topology, currently only the `topology.csi.hammerspace.com/is-data-portal` key is supported. Values are 'true' and 'false'.
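These parameters surface in Kubernetes through a StorageClass. A minimal sketch, assuming the driver is registered under the provisioner name `com.hammerspace.csi` (verify against your deployment manifests) and using illustrative values:

```shell
# Generate an illustrative StorageClass manifest using the parameters above.
# The provisioner name and all parameter values are assumptions for this sketch.
cat > hs-storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hs-storage
provisioner: com.hammerspace.csi
parameters:
  deleteDelay: "0"
  objectives: "test-objective"
  exportOptions: "*,RW,false"
  volumeNameFormat: "csi-volume-%s"
  additionalMetadataTags: "storageClassName=hs-storage"
EOF

# Apply with: kubectl apply -f hs-storageclass.yaml
```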
To build the plugin image:

```
sudo make build
```

To publish, update the VERSION file, then:

```
docker push hammerspaceinc/csi-plugin:$(cat VERSION)
```
Manual tests can be facilitated by using the Dev Image. Local files can be exposed to the container to facilitate iterative development and testing.
Example usage:

Building the image:
Create an ENV file for plugin and csi-sanity configuration:

```
echo "
CSI_ENDPOINT=/tmp/csi.sock
HS_ENDPOINT=https://anvil.example.com
HS_USERNAME=admin
HS_PASSWORD=admin
HS_TLS_VERIFY=false
CSI_NODE_NAME=test
SANITY_PARAMS_FILE=/tmp/csi_sanity_params.yaml
" > ~/csi-env
```
Create a params file for csi-sanity (defines the parameters passed to CreateVolume). A heredoc avoids the nested double quotes that would otherwise break the shell command:

```
cat > ~/csi_sanity_params.yaml <<EOF
blockBackingShareName: test-csi-block
deleteDelay: 0
objectives: "test-objective"
EOF
```
Running the image:

```
docker run --privileged=true \
  --cap-add ALL \
  --cap-add CAP_SYS_ADMIN \
  -v /tmp/:/tmp/:shared \
  -v /dev/:/dev/ \
  --env-file ~/csi-env \
  -it \
  -v ~/csi_sanity_params.yaml:/tmp/csi_sanity_params.yaml \
  -v ~/csi-plugin:/csi-plugin/:shared \
  --name=csi-dev \
  hammerspaceinc/csi-plugin-dev
```
Running the CSI plugin in the dev image:

```
make compile  # Recompile ./bin/hs-csi-plugin
```
Using csc to call the plugin:

```
# Open an additional shell into the dev container
docker exec -it csi-dev /bin/sh

# Use the csc tool
## Call GetPluginInfo
CSI_DEBUG=true CSI_ENDPOINT=/tmp/csi.sock csc identity plugin-info

## Make a 1 GiB file-backed mount volume
CSI_DEBUG=true CSI_ENDPOINT=/tmp/csi.sock csc controller create --cap 5,mount,ext4 --req-bytes 1073741824 --params mountBackingShareName=file-backed test-filesystem

## Delete the volume
CSI_DEBUG=true CSI_ENDPOINT=/tmp/csi.sock csc controller delete /file-backed/test-filesystem

## Explore additional commands
csc -h
```
```
make unittest
```
These tests are functional and will create and delete volumes on the backend. The host must be able to connect to the HS_ENDPOINT; the tests can be run from within the Dev image. They use the CSI sanity package.
Create the parameters file (a heredoc avoids the nested double quotes that would otherwise break the shell command):

```
cat > ~/csi_sanity_params.yaml <<EOF
fsType: nfs
blockBackingShareName: test-csi-block
deleteDelay: 0
objectives: "test-objective"
EOF
```
Run the sanity tests:

```
export CSI_ENDPOINT=/tmp/csi.sock
export HS_ENDPOINT="https://anvil.example.com"
export HS_USERNAME=admin
export HS_PASSWORD=admin
export HS_TLS_VERIFY=false
export CSI_NODE_NAME=test
export SANITY_PARAMS_FILE=~/csi_sanity_params.yaml
make sanity
```