```python
from cloudvolume import CloudVolume

vol = CloudVolume('gs://mylab/mouse/image', parallel=True, progress=True)
image = vol[:,:,:] # Download a whole image stack into a numpy array from the cloud
vol[:,:,:] = image # Upload an entire image stack from a numpy array to the cloud

label = 1
mesh = vol.mesh.get(label)
skel = vol.skeleton.get(label)
```
CloudVolume is a serverless Python client for random access reading and writing of Neuroglancer volumes in "Precomputed" format, a set of representations for arbitrarily large volumetric images, meshes, and skeletons. CloudVolume is typically paired with Igneous, a Kubernetes compatible system for generating image hierarchies, meshes, skeletons, and other dependency free jobs that can be applied to petavoxel scale images.
Precomputed volumes are typically stored on AWS S3, Google Storage, or locally. CloudVolume can read and write to these object storage providers given a service account token with appropriate permissions. However, these volumes can be stored on any service, including an ordinary webserver or local filesystem, that supports key-value access.
The combination of Neuroglancer, Igneous, and CloudVolume comprises a system for visualizing, processing, and sharing (via browser-viewable URLs) petascale datasets within and between laboratories. A typical example usage would be to visualize raw electron microscope scans of mouse, fish, or fly brains up to a cubic millimeter in physical dimension. Neuroglancer and Igneous would enable you to visualize each step of the process of montaging the image, fine-tuning alignment vector fields, creating segmentation layers, ROI masks, or performing other types of analysis. CloudVolume enables you to read from and write to each of these layers. Recently, we have introduced the ability to interact with the graph server ("PyChunkGraph") that backs proofreading of automated segmentations via the `graphene://` format.
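As a sketch of what that looks like (the server URL below is a placeholder, not a real endpoint), a graphene-backed segmentation opens like any other volume, with `agglomerate` selecting between proofread and watershed output:

```python
from cloudvolume import CloudVolume

# Hypothetical graphene endpoint; substitute your PyChunkGraph/CAVE server's URL.
vol = CloudVolume(
    'graphene://https://example.com/segmentation/table/my_dataset',
    agglomerate=True, # True: proofread segmentation; False: watershed base layer
)
img = vol[64:128, 64:128, 64:128] # reads agglomerated labels
```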
You can find a collection of CloudVolume accessible and Neuroglancer viewable datasets at https://neurodata.io/project/ocp/, an open data project by some of our collaborators.
- Supports the `graphene://` format used by proofreading systems.
- Multi-threaded, multi-process, and multi-cloud.
- Reads and writes the `precomputed`, `graphene`, `zarr`, (read-only) `n5`, and `boss` formats.
- Supports a wide variety of compression codecs (`compressed_segmentation`, `compresso`, `crackle` (BETA), `fpzip`, `zfpc`, `png`, and `brotli`).
- Missing chunks can be rendered as background (`fill_missing=True`).
- Entirely black chunks can be deleted instead of stored (`delete_black_uploads=True`).
CloudVolume is regularly tested on Ubuntu with Python 3.8, 3.9, 3.10, 3.11, and 3.12. We officially support Linux and Mac OS. Windows is community supported. After installation, you'll also need to set up your cloud credentials if you're planning on writing files or reading from a private dataset. Once you're finished setting up, you can try reading from a public dataset.
```bash
pip install cloud-volume # standard installation
```
CloudVolume depends on several PyPI packages which are Cython bindings for C++. We have provided compiled binaries for many platforms and python versions, however if you are on an unsupported system, pip will attempt to install from source. In that case, follow the instructions below.
Windows Note: If you get errors related to a missing C++ compiler, this blog post might help you: https://www.scivision.dev/python-windows-visual-c-14-required/
| Tag | Description | Dependencies |
|-----|-------------|--------------|
| boss | `boss://` format support | intern, blosc |
| zarr | `zarr://` format support | blosc |
| test | Supports testing | pytest |
| mesh_viewer | `mesh.viewer()` GUI | vtk |
| skeleton_viewer | `skeleton.viewer()` GUI | matplotlib |
| all_viewers | All viewers now and in the future. | vtk, matplotlib |
| dask | Supports converting to/from dask arrays | dask[array] |
| em_codecs | Image codecs: JPEG, JPEG-XL, and PNG | imagecodecs, simplejpeg, pyspng-seunglab |
| seg_codecs | Segmentation codecs: compressed-segmentation, compresso, crackle | compressed-segmentation, compresso, crackle-codec |
| fp_codecs | Floating point codecs: fpzip, zfpc | fpzip, zfpc |
| all_codecs | Installs all optional compression codecs: em_codecs, seg_codecs, fp_codecs, blosc | see above |
Example:
```bash
pip install cloud-volume[all_codecs,test,all_viewers]
```
gzip, brotli, JPEG, and compressed-segmentation codecs are installed by default.
C++ compiler required.
```bash
sudo apt-get install g++ python3-dev
pip install numpy
pip install cloud-volume
```
Due to packaging problems endemic to Python, Cython packages that depend on numpy require numpy's header files to be installed before you attempt to install the package you want. The numpy headers are not recognized unless numpy is installed in a separate process that runs first. There are hacks for this issue, but I haven't gotten them to work. If you think binaries should be available for your platform, please let us know by opening an issue.
This can be desirable if you want to hack on CloudVolume itself.
```bash
git clone git@github.com:seung-lab/cloud-volume.git
cd cloud-volume

# With virtualenvwrapper
mkvirtualenv cv
workon cv
# With only virtualenv
virtualenv venv
source venv/bin/activate

sudo apt-get install g++ python3-dev
pip install numpy # additional step needed for accelerated compressed_segmentation and fpzip
pip install -e . # without optional dependencies
pip install -e .[all_viewers] # with e.g. the all_viewers optional dependency
```
By default, CloudVolume's configuration and cache files are stored in `$HOME/.cloudvolume` (or in `$HOME/.cloudfiles` since we use CloudFiles for the backend). You can configure where CloudVolume looks for these files with the environment variable `$CLOUD_VOLUME_DIR`.
Credentials are stored in `$CLOUD_VOLUME_DIR/secrets`. You'll need credentials only for the services you'll use. If you plan to use the local filesystem, you won't need any. For Google Storage (setup instructions here), default account credentials will be used if available and no service account is provided.

If neither of those two conditions apply, you need a service account credential. If you have your credentials handy, you can provide them as a dict, a JSON string, or a bare token if the service will accept that:
```python
cv = CloudVolume(..., secrets=...)
```
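For instance, a sketch using an S3-style credential dict (the fields mirror the `aws-secret.json` example below; the bucket path is a placeholder):

```python
from cloudvolume import CloudVolume

# Pass credentials directly instead of reading them from disk.
secrets = {
  "AWS_ACCESS_KEY_ID": "$MY_AWS_ACCESS_KEY_ID",
  "AWS_SECRET_ACCESS_KEY": "$MY_SECRET_ACCESS_TOKEN",
}
vol = CloudVolume('s3://mybucket/dataset/layer', secrets=secrets)
```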
However, it may be simpler to save your credential to disk so you don't have to always provide it. `google-secret.json` is a service account credential for Google Storage, `aws-secret.json` is a service account for S3, etc. You can support multiple projects at once by prefixing the bucket you are planning to access to the credential filename. `google-secret.json` will be your default service account, but if you also want to access bucket ABC, you can provide `ABC-google-secret.json` and you'll have simultaneous access to your ordinary buckets and ABC. The secondary credentials are accessed on the basis of the bucket name, not the project name.
```bash
mkdir -p ~/.cloudvolume/secrets/
mv aws-secret.json ~/.cloudvolume/secrets/ # needed for Amazon
mv google-secret.json ~/.cloudvolume/secrets/ # needed for Google
mv boss-secret.json ~/.cloudvolume/secrets/ # needed for the BOSS
mv matrix-secret.json ~/.cloudvolume/secrets/ # needed for Matrix
mv tigerdata-secret.json ~/.cloudvolume/secrets/ # needed for Tigerdata
```
`aws-secret.json` and `matrix-secret.json`

Create an IAM user service account that can read, write, and delete objects from at least one bucket.

```json
{
  "AWS_ACCESS_KEY_ID": "$MY_AWS_ACCESS_KEY_ID",
  "AWS_SECRET_ACCESS_KEY": "$MY_SECRET_ACCESS_TOKEN"
}
```
You can create the `google-secret.json` file here. You don't need to fill in the JSON by hand; the example below is provided to show you what the end result should look like. You should be able to read, write, and delete objects from at least one bucket.

```json
{
  "type": "service_account",
  "project_id": "$YOUR_GOOGLE_PROJECT_ID",
  "private_key_id": "...",
  "private_key": "...",
  "client_email": "...",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": ""
}
```
Note: this file used to be called `chunkedgraph-secret.json`. That name is still supported but deprecated.

If you have a token from a Graphene/ChunkedGraph server, create the `cave-secret.json` file as shown in the example below. You may also pass the token directly via `CloudVolume(..., secrets=token)`.

```json
{
  "token": "<your_token>"
}
```
Note that to take advantage of multiple credential files, prepend the fully qualified domain name (FQDN) of the server instead of the bucket for GCS and S3. For example, `subdomain.domain.com-cave-secret.json`. A secrets directory might then contain `google-secret.json` (default), `ABC-google-secret.json` (bucket ABC only), and `subdomain.domain.com-cave-secret.json` (one graphene server only) side by side.
CloudVolume supports reading and writing to Neuroglancer data layers on Amazon S3, Google Storage, The BOSS, and the local file system.
Supported URLs are of the forms:
```
$FORMAT://$PROTOCOL://$BUCKET/$DATASET/$LAYER
```
The format or protocol fields may be omitted where they can be inferred. In particular, for the precomputed format, the format specifier is optional.
| Format | Protocols | Default | Example |
|--------|-----------|---------|---------|
| precomputed | gs, s3, http, https, file, matrix, tigerdata | Yes | gs://mybucket/dataset/layer |
| graphene | gs, s3, http, https, file, matrix, tigerdata | | graphene://gs://mybucket/dataset/layer |
| boss | N/A | | boss://collection/experiment/channel |
| n5 | gs, s3, http, https, file, matrix, tigerdata | | n5://gs://mybucket/dataset/layer |
| zarr | gs, s3, http, https, file, matrix, tigerdata | | zarr://gs://mybucket/dataset/layer |

CloudVolume also supports alternative s3 aliases via CloudFiles.
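A quick sketch of the path grammar in practice (bucket and layer names are placeholders):

```python
from cloudvolume import CloudVolume

vol = CloudVolume('gs://mybucket/dataset/layer') # precomputed is assumed
vol = CloudVolume('precomputed://gs://mybucket/dataset/layer') # identical to the above
vol = CloudVolume('zarr://s3://mybucket/dataset/layer') # zarr on S3
vol = CloudVolume('n5://file:///tmp/dataset/layer') # read-only n5 on local disk
```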
Neuroglancer relies on an `info` file located at the root of a dataset layer to tell it how to compute file locations and interpret the data in each file. CloudVolume piggybacks on this functionality.

In the example below, assume you are creating a new segmentation volume from a 3D numpy array `rawdata`. Note that Precomputed stores data in Fortran (column major, aka CZYX) order. You should do a small test to see if the image is written transposed; you can fix this by uploading `rawdata.T`. A more detailed example for uploading a local volume is located here.
```python
from cloudvolume import CloudVolume

info = CloudVolume.create_new_info(
    num_channels = 1,
    layer_type = 'segmentation',
    data_type = 'uint64', # Channel images might be 'uint8'
    # raw, png, jpeg, compressed_segmentation, fpzip, kempressed, zfpc, compresso, crackle
    encoding = 'raw',
    resolution = [4, 4, 40], # Voxel scaling, units are in nanometers
    voxel_offset = [0, 0, 0], # x,y,z offset in voxels from the origin
    mesh = 'mesh',
    # Pick a convenient size for your underlying chunk representation
    # Powers of two are recommended, doesn't need to cover image exactly
    chunk_size = [ 512, 512, 16 ], # units are voxels
    volume_size = [ 250000, 250000, 25000 ], # e.g. a cubic millimeter dataset
)
vol = CloudVolume(cfg.path, info=info)
vol.commit_info()
vol[cfg.x:cfg.x+cfg.length, cfg.y:cfg.y+cfg.length, cfg.z:cfg.z+cfg.length] = rawdata[:,:,:]
```

| Encoding | Image Type | Lossless | Neuroglancer Viewable | Description |
|----------|------------|----------|-----------------------|-------------|
| raw | Any | Y | Y | Serialized numpy arrays. |
| png | Image | Y | Y | Multiple slices stitched into a single PNG. |
| jpeg | Image | N | Y | Multiple slices stitched into a single JPEG. |
| jxl | Image | Optional | Y* | Multiple slices stitched into a single JPEG-XL. |
| compressed_segmentation | Segmentation | Y | Y | Renumbered numpy arrays to reduce data width. Also used by Neuroglancer internally. |
| compresso | Segmentation | Y | Y | Lossless high compression algorithm for connectomics segmentation. |
| crackle | Segmentation | Y | Y* | Lossless high compression algorithm for connectomics segmentation. |
| fpzip | Floating Point | Y | Y* | Takes advantage of IEEE 754 structure + L1 Lorenzo predictor to get higher compression. |
| kempressed | Anisotropic Z Floating Point | N** | Y* | Adds manipulations on top of fpzip to achieve higher compression. |
| zfpc | Alignment Vector Fields | N*** | Y* | zfp stream container. |
\* Not integrated into official Neuroglancer yet, but available on a fork which can be seen on GitHub here.
\*\* Lossless if your data can handle adding and then subtracting 2.
\*\*\* Lossless by default, but you probably want to use the lossy mode.
Note on `compressed_segmentation`: to use it, make sure `compressed_segmentation_block_size` is specified (usually `[8,8,8]`). This field will appear in the `info` file in the relevant scale.
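A sketch of what that looks like when building an info file (the volume dimensions and path are placeholders; the field placement follows the note above):

```python
from cloudvolume import CloudVolume

info = CloudVolume.create_new_info(
    num_channels = 1,
    layer_type = 'segmentation',
    data_type = 'uint64',
    encoding = 'compressed_segmentation',
    resolution = [4, 4, 40],
    voxel_offset = [0, 0, 0],
    chunk_size = [512, 512, 16],
    volume_size = [1024, 1024, 128],
)
# The block size lives in the relevant scale of the info file.
info['scales'][0]['compressed_segmentation_block_size'] = [8, 8, 8]

vol = CloudVolume('gs://mybucket/dataset/layer', info=info)
vol.commit_info()
```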
Note on `zfpc`: to configure it, use the fields `zfpc_rate`, `zfpc_precision`, `zfpc_tolerance`, and `zfpc_correlated_dims` in the relevant scale of the `info` file.
```python
# Basic Examples
vol = CloudVolume('gs://mybucket/retina/image')
vol = CloudVolume('gs://mybucket/retina/image', secrets=token) # token may be a dict, JSON string, or bare token
vol = CloudVolume('gs://bucket/dataset/channel', mip=0, bounded=True, fill_missing=False)
vol = CloudVolume('gs://bucket/dataset/channel', mip=[ 8, 8, 40 ], bounded=True, fill_missing=False) # set mip at this resolution
vol = CloudVolume('gs://bucket/dataset/channel', info=info) # New info file from scratch

image = vol[:,:,:] # Download the entire image stack into a numpy array
image = vol.download(bbox, mip=2, renumber=True) # download w/ smaller dtype
image = vol.download(bbox, mip=2, label=777) # download binary image for label
uniq = vol.unique(bbox, mip=0) # efficient extraction of unique labels
listing = vol.exists( np.s_[0:64, 0:128, 0:64] ) # get a report on which chunks actually exist
exists = vol.image.has_data(mip=0) # boolean check to see if any data is there
listing = vol.delete( np.s_[0:64, 0:128, 0:64] ) # delete this region (bbox must be chunk aligned)
vol[64:128, 64:128, 64:128] = image # Write a 64^3 image to the volume
img = vol.download_point( (x,y,z), size=256, mip=3 ) # download region around (mip 0) x,y,z at mip 3
pts = vol.scattered_points([ (x1,y1,z1), (x2,y2,z2) ]) # download voxel labels located at indicated points

# download image files without decompressing or rendering them. Good for caching!
files = vol.download_files(bbox, mip, decompress=False)

# creates an anonymous in-memory CloudVolume that
# will self-clean when the reference count drops to zero.
# Store compressed images in memory for quick access!
mem_vol = vol.image.memory_cutout(bbox, mip=1, encoding="compresso")

# Server
vol.viewer() # launches neuroglancer compatible web server on http://localhost:1337

# Microviewer (outdated, see https://github.com/seung-lab/microviewer/)
img = vol[64:1028, 64:1028, 64:128]
img.viewer() # launches web viewer on http://localhost:8080

# Meshes
vol.mesh.save(12345) # save 12345 as ./12345.ply on disk
vol.mesh.save([12345, 12346, 12347]) # merge three segments into one file
vol.mesh.save(12345, file_format='obj') # 'ply' and 'obj' are both supported
vol.mesh.get(12345) # return the mesh as vertices and faces instead of writing to disk
vol.mesh.get([ 12345, 12346 ]) # return these two segids fused into a single mesh
vol.mesh.get([ 12345, 12346 ], fuse=False) # return { 12345: mesh, 12346: mesh }
vol.mesh.put(meshes) # works for unsharded legacy only
vol.mesh.delete(segids) # works for unsharded meshes only

mesh.viewer() # Opens GUI. Requires vtk.

# Skeletons
skel = vol.skeleton.get(12345)
vol.skeleton.upload_raw(segid, skel.vertices, skel.edges, skel.radii, skel.vertex_types)
vol.skeleton.upload(skel)

# specified in nm, only available for datasets with a generated index
skels = vol.skeleton.get_by_bbox( Bbox( (0,0,0), (500, 500, 500) ) )
vol.skeleton.spatial_index # None if not available

skel.empty() # boolean
bytes = skel.encode() # encode to Precomputed format (bytes)
skel = Skeleton.decode(bytes) # decode from Precomputed format
skel = skel.crop(slices or bbox) # eliminate vertices and edges outside bbox
skel = skel.consolidate() # eliminate duplicate vertices and edges
skel3 = skel.merge(skel2) # merge two skeletons into one
skel = skel.clone() # create copy
skel = Skeleton.from_swc(swcstr) # decode an SWC file
skel_str = skel.to_swc() # convert to SWC file in string representation
skel.viewer() # Opens GUI. Requires matplotlib
skel.cable_length() # sum of all edge lengths
skel = skel.downsample(2) # reduce size of skeleton by factor of 2
skel = skel.average_smoothing(3) # rolling average, n=3
skel1 == skel2 # check if contents of internal arrays match
Skeleton.equivalent(skel1, skel2) # ...even if there are differences like differently numbered edges

# Parallel Operation
vol = CloudVolume('gs://mybucket/retina/image', parallel=True) # Use all cores
vol.parallel = 4 # e.g. any number > 1, use this many cores
data = vol[:] # uses shared memory to coordinate processes under the hood

# Shared Memory Output (can be used by other processes)
vol = CloudVolume(...)
# data backed by a shared memory buffer
# location is optional (defaults to vol.shared_memory_id)
data = vol.download_to_shared_memory(np.s_[:], location='some-example')
vol.unlink_shared_memory() # delete the shared memory associated with this cloudvolume
vol.shared_memory_id # get/set the default shared memory location for this instance

# Shared Memory Upload
vol = CloudVolume(...)
vol.upload_from_shared_memory('my-shared-memory-id', # do not prefix with /dev/shm
    bbox=Bbox( (0,0,0), (10000, 7500, 64) ))

# Download or Upload directly with Files
# The files must be in Precomputed raw format.
vol.download_to_file('/path/to/file', bbox=Bbox(...), mip=0) # bbox is the download region
vol.upload_from_file('/path/to/file', bbox=Bbox(...), mip=0) # bbox is the region it represents

# Transfer w/o Excess Memory Allocation
vol = CloudVolume(...)
# single core, send all of vol to destination, no painting memory
# you can also transcode the image encoding and compression type
vol.transfer_to('gs://bucket/dataset/layer', vol.bounds)

# Caching, default located at $HOME/.cloudvolume/cache/$PROTOCOL/$BUCKET/$DATASET/$LAYER/$RESOLUTION
# You can also set the cache location using cache=str
# or with the environment variable CLOUD_VOLUME_CACHE_DIR
vol = CloudVolume('gs://mybucket/retina/image', cache=True) # Basic Example
image = vol[0:10,0:10,0:10] # Download partial image and cache
vol[0:10,0:10,0:10] = image # Upload partial image and cache

# Resizing and clearing the LRU in-memory cache
vol = CloudVolume(..., lru_bytes=num_bytes) # >= 0, 0 means disabled
vol.image.lru.resize(num_bytes) # same
vol.image.lru.clear()
len(vol.image.lru) # number of items in lru
vol.image.lru.nbytes # size in bytes (not counting LRU structures, nor recursive)
vol.image.lru.items() # etc, also functions as a dict

# Can use more memory, but generally faster access to LRU cache.
# You can set the encoding to anything valid for this image type
# to e.g. save space and/or accelerate certain query types.
vol = CloudVolume(..., lru_bytes=num_bytes, lru_encoding='raw')

# Evaluating the on-disk Cache
vol.cache.list() # list files in cache at this mip level
vol.cache.list(mip=1) # list files in cache at mip 1
vol.cache.list_meshes()
vol.cache.list_skeletons()
vol.cache.num_files() # number of files at this mip level
vol.cache.num_files(all_mips=True) # Return num files for each mip level in a list
vol.cache.num_bytes() # number of bytes taken up by files, size on disk can be bigger
vol.cache.num_bytes(all_mips=True) # Return num bytes for each mip level in a list

vol.cache.enabled = True/False # Turn the cache on/off
vol.cache.path = Str # set the cache location
vol.cache.compress = None/True/False # None: Link to cloud setting, Boolean: Force cache to compressed (True) or uncompressed (False)

# Deleting Cache
vol.cache.flush() # Delete local cache for this layer at this mip level
vol.cache.flush(preserve=Bbox(...)) # Same, but preserve cache in a region of space
vol.cache.flush_region(region=Bbox(...), mips=[...]) # Delete the cached files in this region at these mip levels (default all mips)
vol.cache.flush_info()
vol.cache.flush_provenance()

# Using Green Threads
import gevent.monkey
gevent.monkey.patch_all(thread=False)
cv = CloudVolume(..., green_threads=True)
img = cv[...] # now green threads will be used

# Dask Interface (requires dask installation)
arr = cv.to_dask()
arr = cloudvolume.dask.from_cloudvolume(cloudpath) # same as to_dask
res = cloudvolume.dask.to_cloudvolume(arr, cloudpath, compute=bool, return_store=bool)
```
```python
CloudVolume(
    cloudpath:str, mip:int=0, bounded:bool=True, autocrop:bool=False,
    fill_missing:bool=False, cache:CacheType=False, compress_cache:CompressType=None,
    cdn_cache:bool=True, progress:bool=INTERACTIVE, info:dict=None, provenance:dict=None,
    compress:CompressType=None, compress_level:Optional[int]=None,
    non_aligned_writes:bool=False, parallel:ParallelType=1,
    delete_black_uploads:bool=False, background_color:int=0,
    green_threads:bool=False, use_https:bool=False,
    max_redirects:int=10, mesh_dir:Optional[str]=None, skel_dir:Optional[str]=None,
    agglomerate:bool=False, secrets:SecretsType=None,
    spatial_index_db:Optional[str]=None, lru_bytes:int=0,
    cache_locking:bool=True,
)
```
agglomerate: (bool, graphene only) sets the default mode for downloading
images to agglomerated (True) vs watershed (False).
autocrop: (bool) If the specified retrieval bounding box exceeds the
volume bounds, process only the area contained inside the volume.
This can be a useful way to ensure that you are staying inside the
bounds when `bounded=False`.
background_color: (number) Specifies what the "background value" of the
volume is (traditionally 0). This is mainly for changing the behavior
of delete_black_uploads.
bounded: (bool) If a region outside of volume bounds is accessed:
True: Throw an error
False: Allow accessing the region. If no files are present, an error
will still be thrown. Consider combining this option with
`fill_missing=True`. However, this can be dangerous as it allows
missing files and potentially network errors to be interpreted as
zeros.
cache: (bool or str) Store downloads and uploads in a cache on disk
and preferentially read from it before redownloading.
- falsey value: no caching will occur.
- True: cache will be located in a standard location.
- non-empty string: cache is located at this file path
After initialization, you can adjust this setting via:
`cv.cache.enabled = ...` which accepts the same values.
Note: This cache is totally separate from the LRU controlled by
lru_bytes.
cache_locking: (bool) The local cache will use file locks via fasteners to prevent issues with multi-process cache access. If this is not a concern, performance can be slightly improved by setting this to False. This uses CloudFiles' locking mechanism.
cdn_cache: (int, bool, or str) Sets Cache-Control HTTP header on uploaded
image files. Most cloud providers perform some kind of caching. As of
this writing, Google defaults to 3600 seconds. Most of the time you'll
want to go with the default.
- int: number of seconds for cache to be considered fresh (max-age)
- bool: True: max-age=3600, False: no-cache
- str: set the header manually
compress: (bool, str, None) pick which compression method to use.
None: (default) gzip for raw arrays and no additional compression
for compressed_segmentation and fpzip.
bool:
True=gzip,
False=no compression, Overrides defaults
str:
'gzip': Extension so that we can add additional methods in the future
like lz4 or zstd.
'br': Brotli compression, better compression rate than gzip
'': no compression (same as False).
compress_level: (int, None) level for compression. Higher number results
in better compression but takes longer.
Defaults to 9 for gzip (ranges from 0 to 9).
Defaults to 5 for brotli (ranges from 0 to 11).
compress_cache: (None or bool) If not None, override default compression
behavior for the cache.
delete_black_uploads: (bool) If True, on uploading an entirely black chunk,
issue a DELETE request instead of a PUT. This can be useful for avoiding storing
tiny files in the region around an ROI. Some storage systems using erasure coding
don't do well with tiny file sizes.
fill_missing: (bool) If a chunk file is unable to be fetched:
True: Use a block of zeros
False: Throw an error
green_threads: (bool) Use green threads instead of preemptive threads. This
can result in higher download performance for some compression types. Preemptive
threads seem to reduce performance on multi-core machines that aren't densely
loaded as the CPython threads are assigned to multiple cores and the thrashing
+ GIL reduces performance. You'll need to add the following code to the top
of your program to use green threads:
import gevent.monkey
gevent.monkey.patch_all(thread=False)
lru_bytes: (int) number of bytes used to cache recently used image
tiles in memory. This is an in-memory cache and is completely separate from
the `cache` parameter that handles disk IO. Tiles are stripped of only
their second stage compression.
info: (dict) In lieu of fetching a neuroglancer info file, use this one.
This is useful when creating new datasets and for repeatedly initializing
a new cloudvolume instance.
max_redirects: (int) if > 0, allow up to this many redirects via info file 'redirect'
data fields. If <= 0, allow no redirections and access the current info file directly
without raising an error.
mesh_dir: (str) if not None, override the info['mesh'] key before pulling the
mesh info file.
mip: (int or iterable) Which level of downsampling to read and write from.
0 is the highest resolution. You can also specify the voxel resolution
like mip=[6,6,30] which will search for the appropriate mip level.
non_aligned_writes: (bool) Enable non-aligned writes. Not multiprocessing
safe without careful design. When not enabled, a
cloudvolume.exceptions.AlignmentError is thrown for non-aligned writes.
https://github.com/seung-lab/cloud-volume/wiki/Advanced-Topic:-Non-Aligned-Writes
parallel (int: 1, bool): Number of extra processes to launch, 1 means only
use the main process. If parallel is True use the number of CPUs
returned by multiprocessing.cpu_count(). When parallel > 1, shared
memory (Linux) or emulated shared memory via files (other platforms)
is used by the underlying download.
progress: (bool) Show progress bars.
Defaults to True in interactive python, False in script execution mode.
provenance: (string, dict) In lieu of fetching a provenance
file, use this one.
secrets: (dict) provide per-instance authorization tokens. If not provided,
defaults to looking in .cloudvolume/secrets for necessary tokens.
skel_dir: (str) if not None, override the info['skeletons'] key before
pulling the skeleton info file.
spatial_index_db: (str) A path to an sqlite3 or mysql database that follows
one of the URI schemas below. sqlite is assumed if no scheme is present in
the uri.
[sqlite://]filename.db
mysql://<username>:<password>@<host>:<port>/<db_name>
Igneous generated datasets include a JSON based spatial
database that tiles the dataset. This can be fast enough up to about 100 TVx
datasets. Above that, a proper database is required for efficient queries.
We provide multiple SQL database types that the index can be hosted on.
use_https: (bool) maps gs:// and s3:// to their respective https paths. The
https paths hit a cached, read-only version of the data and may be faster.
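Putting a few of these parameters together (a sketch; the path is a placeholder):

```python
from cloudvolume import CloudVolume

vol = CloudVolume(
    'gs://mybucket/dataset/layer',
    mip=[8, 8, 40],       # select the scale by its voxel resolution
    bounded=False,        # permit requests that extend past the volume bounds...
    autocrop=True,        # ...but crop them back to the volume
    fill_missing=True,    # render missing chunks as background
    cache=True,           # cache chunks on disk in the standard location
    parallel=4,           # use 4 processes for downloads and uploads
    progress=True,        # show progress bars
)
```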
Better documentation is coming later, but for now, here's a summary of the most useful method calls. Use `help(cloudvolume.CloudVolume.$method)` for more info.
- Meshes can be saved in `.obj` format. You can combine equivalences into a single object too.
- Cached files are stored in `$HOME/.cloudvolume/cache` and can be managed with ordinary tools like `gsutil` or `aws s3`.
- Shared memory downloads default to the location given by `vol.shared_memory_id`.
- Shared memory can be cleaned up with `vol.unlink_shared_memory()`.
Properties are accessed as `vol.$PROPERTY`, like `vol.mip`. Parens next to each property mean (data type:default, writability). (r) means read only, (w) means write only, (rw) means read/write.

After editing the `info` property, use `vol.commit_info()` to save your changes to storage. Likewise, after editing the `provenance` property, use `vol.commit_provenance()` to save your changes to storage.

\* These properties can also be accessed with a function named like `vol.mip_$PROPERTY($MIP)`. By default they return the current mip level assigned to the CloudVolume, but any mip level can be accessed via the corresponding `mip_` function. Example: `vol.mip_resolution(2)` would return the resolution of mip 2.
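For instance, assuming a multi-resolution volume (a sketch):

```python
vol.mip               # the mip level currently assigned to this CloudVolume
vol.resolution        # resolution at the current mip
vol.mip_resolution(2) # resolution of mip 2, regardless of the current mip
```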
When you download an image using CloudVolume, it gives you a `VolumeCutout`. These are `numpy.ndarray` subclasses that support a few extra properties to help make bookkeeping easier. The major advantage is `save_images()`, which can help you view your dataset as PNG slices.
- `dataset_name` - The dataset this image came from.
- `layer` - Which layer it came from.
- `mip` - Which mip it came from.
- `layer_type` - "image" or "segmentation".
- `bounds` - The bounding box of the cutout.
- `num_channels` - Alias for `vol.shape[3]`.
- `save_images()` - Save Z slice PNGs of the current image to `./saved_images` for manual inspection.
- `viewer()` - Start a local web server (http://localhost:8080) that can view small volumes interactively. This was recently changed from `view`, as `view` is a useful numpy method.

If you have a Precomputed volume on local disk and would like to point Neuroglancer at it:
```python
vol = CloudVolume(...)
vol.viewer()
```
You can then point any version of Neuroglancer at it using `precomputed://http://localhost:1337/NAME_OF_LAYER`.
CloudVolume includes a built-in dependency free viewer for 3D volumetric datasets smaller than about 2GB uncompressed. It supports bool, uint8, uint16, uint32, float32, and float64 numpy data types for both images and segmentation and can render a composite overlay of image and segmentation.
You can launch a viewer using the `.viewer()` method of a VolumeCutout object or by using the `view(...)` or `hyperview(...)` functions that come with the cloudvolume module. This launches a web server on `http://localhost:8080`. You can read more on the wiki.
```python
from cloudvolume import CloudVolume, view, hyperview

channel_vol = CloudVolume(...)
seg_vol = CloudVolume(...)
img = channel_vol[...]
seg = seg_vol[...]

img.viewer() # works on VolumeCutouts
seg.viewer() # segmentation type derived from info
view(img) # alternative for arbitrary numpy arrays
view(seg, segmentation=True)
hyperview(img, seg) # img and seg shape must match

>>> Viewer server listening to http://localhost:8080
```
There are also separate viewers for skeleton and mesh objects that can be invoked by calling `.viewer()` on either object. However, skeletons depend on `matplotlib`, and meshes depend on `vtk` and OpenGL to function.
```bash
pip install vtk matplotlib
```
Python 2.7 is no longer supported by CloudVolume. Updated versions of `pip` will download the last supported release, 1.21.1. You can read more on the policy page: https://github.com/seung-lab/cloud-volume/wiki/Policy#python-27-end-of-life
Thank you to everyone who has contributed, past or present, to CloudVolume or the ecosystem it serves. We love you!
Jeremy Maitin-Shepard created Neuroglancer and defined the Precomputed format. Yann Leprince provided a pure Python codec for the compressed_segmentation format. Jeremy Maitin-Shepard and Stephen Plaza created C++ code defining the compression and decompression (respectively) protocol for compressed_segmentation. Peter Lindstrom et al. created the fpzip algorithm, and contributed a C++ implementation and advice. Nico Kemnitz adapted our data to fpzip using the "Kempression" protocol (we named it, not him). Dan Bumbarger contributed code and information helpful for getting CloudVolume working on Windows. Fredrik Kihlander wrote a pure python implementation of murmurhash3, and Austin Appleby developed murmurhash3, which is necessary for the sharded format. Ben Falk advocated for and did the bulk of the work on brotli compression. Some of the ideas in CloudVolume are based on work by Jingpeng Wu in BigArrays.jl. Sven Dorkenwald, Manuel Castro, and Akhilesh Halageri contributed advice and code towards implementing the graphene interface. Oluwaseun Ogedengbe contributed documentation for the sharded format. Eric Perlman wrote the reader for Neuroglancer Multi-LOD meshes. Ignacio Tartavull and William Silversmith wrote the initial version of CloudVolume.
Please cite the Igneous paper if you used this package in your research.