Command Line Tools#
The Open Data Cube provides a command-line interface (CLI) for common administrative tasks.
datacube#
Data Cube command-line interface
datacube [OPTIONS] COMMAND [ARGS]...
Options
- --version#
- -v, --verbose#
Use multiple times for more verbosity
- --log-file <log_file>#
Specify log file
- -E, --env <env>#
- -C, --config, --config_file <config>#
- --log-queries#
Print database queries.
dataset#
Dataset management commands
datacube dataset [OPTIONS] COMMAND [ARGS]...
add#
Add datasets to the Data Cube
datacube dataset add [OPTIONS] [DATASET_PATHS]...
Options
- -p, --product <product_names>#
Only match against the products specified with this option. Supply several by repeating the option with a different product name each time.
- -x, --exclude-product <exclude_product_names>#
Attempt to match against all products in the DB except those specified with this option. Supply several by repeating the option with a different product name each time.
- --auto-add-lineage, --no-auto-add-lineage#
WARNING: will be deprecated in datacube v1.9. By default, lineage datasets are automatically added if they are missing from the database. This can be disabled when lineage is expected to already be present in the DB; in that case, add will abort on encountering a missing lineage dataset.
- --verify-lineage, --no-verify-lineage#
WARNING: will be deprecated in datacube v1.9. Lineage referenced in the metadata document should match what is in the DB. By default, top-level datasets whose lineage differs from the version in the DB are skipped; this option skips that verification step.
- --dry-run#
Check that everything is OK without making any changes
- --ignore-lineage#
Pretend that there is no lineage data in the datasets being indexed
- --confirm-ignore-lineage#
WARNING: this flag has been deprecated and will be removed in datacube v1.9. Pretend that there is no lineage data in the datasets being indexed, without confirmation
- --archive-less-mature <archive_less_mature>#
Find and archive less mature versions of the dataset; fails if more mature versions of the dataset already exist. A delta in milliseconds can also be specified, to be taken into account when comparing timestamps. The default delta is 500ms.
Arguments
- DATASET_PATHS#
Optional argument(s)
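A typical indexing run might look like this (a sketch only: the product name and dataset paths are placeholders, and the commands require a configured Data Cube index):

```shell
# Preview what would be indexed, matching only against one product
datacube dataset add --dry-run --product my_product /data/scenes/scene1/metadata.yaml

# Index for real, archiving any less mature duplicates (500 ms timestamp tolerance)
datacube dataset add --product my_product --archive-less-mature 500 /data/scenes/scene1/metadata.yaml
```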
archive#
Archive datasets
datacube dataset archive [OPTIONS] [IDS]...
Options
- -d, --archive-derived#
Also recursively archive derived datasets
- --dry-run#
Don’t archive. Display datasets that would get archived
- --all#
Ignore id list - archive ALL non-archived datasets (warning: may be slow on large databases)
Arguments
- IDS#
Optional argument(s)
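For example (a sketch; the UUID is a placeholder and the commands require a configured Data Cube index):

```shell
# Preview which datasets would be archived, including derived datasets
datacube dataset archive --dry-run --archive-derived <dataset-uuid>

# Archive ALL non-archived datasets (may be slow on large databases)
datacube dataset archive --all
```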
info#
Display dataset information
datacube dataset info [OPTIONS] [IDS]...
Options
- --show-sources#
Also show source datasets
- --show-derived#
Also show derived datasets
- -f <f>#
Output format
- Default
yaml
- Options
csv | yaml
- --max-depth <max_depth>#
Maximum sources/derived depth to travel
Arguments
- IDS#
Optional argument(s)
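For example (a sketch; the UUID is a placeholder):

```shell
# Show a dataset and its source datasets as CSV, following sources two levels deep
datacube dataset info --show-sources --max-depth 2 -f csv <dataset-uuid>
```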
purge#
Purge archived datasets
datacube dataset purge [OPTIONS] [IDS]...
Options
- --dry-run#
Don’t purge. Display datasets that would get purged
- --all#
Ignore id list - purge ALL archived datasets (warning: may be slow on large databases)
Arguments
- IDS#
Optional argument(s)
restore#
Restore datasets
datacube dataset restore [OPTIONS] [IDS]...
Options
- -d, --restore-derived#
Also recursively restore derived datasets
- --dry-run#
Don’t restore. Display datasets that would get restored
- --derived-tolerance-seconds <derived_tolerance_seconds>#
Only restore derived datasets that were archived within this many seconds of the original dataset
- --all#
Ignore id list - restore ALL archived datasets (warning: may be slow on large databases)
Arguments
- IDS#
Optional argument(s)
search#
Search available Datasets
EXPRESSIONS
Select datasets using [EXPRESSIONS] to filter by date, product type, spatial extents or other searchable fields.
FIELD: x, y, lat, lon, time, product, …
datacube dataset search [OPTIONS] [EXPRESSIONS]...
Options
- --limit <limit>#
Limit the number of results
- -f <f>#
Output format
- Default
yaml
- Options
csv | yaml
Arguments
- EXPRESSIONS#
Optional argument(s)
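Using the expression syntax above, a search might look like this (a sketch; the product name is a placeholder, and time ranges use the FIELD in [start, end] form):

```shell
# Up to 10 matching datasets for one product in a time range, as CSV
datacube dataset search --limit 10 -f csv product=my_product 'time in [2023-01-01, 2023-06-30]'
```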
update#
Update datasets in the Data Cube
datacube dataset update [OPTIONS] [DATASET_PATHS]...
Options
- --allow-any <keys_that_can_change>#
Allow any changes to the specified key (a.b.c)
- --dry-run#
Check that everything is OK without making any changes
- --location-policy <location_policy>#
What to do with previously recorded dataset location(s)
- ‘keep’: keep as alternative location [default]
- ‘archive’: mark as archived
- ‘forget’: remove from the index
- Options
keep | archive | forget
- --archive-less-mature <archive_less_mature>#
Find and archive less mature versions of the dataset; fails if more mature versions of the dataset already exist. A delta in milliseconds can also be specified, to be taken into account when comparing timestamps. The default delta is 500ms.
Arguments
- DATASET_PATHS#
Optional argument(s)
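For example (a sketch; the paths are placeholders, and a.b.c stands for a dotted metadata key of your choosing):

```shell
# Re-read updated metadata documents, archiving previously recorded locations
datacube dataset update --location-policy archive /data/scenes/scene1/metadata.yaml

# Permit changes under a specific metadata key
datacube dataset update --allow-any a.b.c /data/scenes/scene1/metadata.yaml
```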
uri-search#
Search by dataset locations
PATHS may be either file paths or URIs
datacube dataset uri-search [OPTIONS] [PATHS]...
Options
- --search-mode <search_mode>#
Exact, prefix, or guess-based searching
- Options
exact | prefix | guess
Arguments
- PATHS#
Optional argument(s)
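For example (a sketch; the URI prefix is a placeholder):

```shell
# Find indexed datasets whose location starts with a given URI prefix
datacube dataset uri-search --search-mode prefix file:///data/scenes/
```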
ingest#
WARNING: Ingestion has been deprecated in v1.8.14 and will be removed in v1.9. Ingest datasets
datacube ingest [OPTIONS]
Options
- -c, --config-file <config_file>#
Ingest configuration file
- --year <year>#
Limit the process to a particular year
- --queue-size <queue_size>#
Task queue size
- --save-tasks <save_tasks>#
Save tasks to the specified file
- --load-tasks <load_tasks>#
Load tasks from the specified file
- -d, --dry-run#
Check that everything is OK without making any changes
- --allow-product-changes#
Allow the output product definition to be updated if it differs.
- --executor <executor>#
WARNING: executors have been deprecated in v1.8.14 and will be removed in v1.9. Run parallelized, either locally or distributed, e.g. --executor multiproc 4 or --executor distributed 10.0.0.8:8888
metadata#
Metadata type commands
datacube metadata [OPTIONS] COMMAND [ARGS]...
add#
Add or update metadata types in the index
datacube metadata add [OPTIONS] [FILES]...
Options
- --allow-exclusive-lock, --forbid-exclusive-lock#
Allow index to be locked from other users while updating (default: false)
Arguments
- FILES#
Optional argument(s)
list#
List metadata types that are defined in the generic index.
datacube metadata list [OPTIONS]
show#
Show information about a metadata type.
datacube metadata show [OPTIONS] [METADATA_TYPE_NAME]...
Options
- -f <output_format>#
Output format
- Default
yaml
- Options
yaml | json
Arguments
- METADATA_TYPE_NAME#
Optional argument(s)
update#
Update existing metadata types.
An error will be thrown if a change is potentially unsafe.
(An unsafe change is anything that may potentially make the metadata type incompatible with existing types of the same name)
datacube metadata update [OPTIONS] [FILES]...
Options
- --allow-unsafe, --forbid-unsafe#
Allow unsafe updates (default: false)
- --allow-exclusive-lock, --forbid-exclusive-lock#
Allow index to be locked from other users while updating (default: false)
- -d, --dry-run#
Check that everything is OK without making any changes
Arguments
- FILES#
Optional argument(s)
product#
Product commands
datacube product [OPTIONS] COMMAND [ARGS]...
add#
Add or update products in the generic index.
datacube product add [OPTIONS] [FILES]...
Options
- --allow-exclusive-lock, --forbid-exclusive-lock#
Allow index to be locked from other users while updating (default: false)
Arguments
- FILES#
Optional argument(s)
list#
List products that are defined in the generic index.
datacube product list [OPTIONS]
Options
- -f <output_format>#
Output format
- Default
default
- Options
default | csv | yaml | tab
show#
Show details about a product in the generic index.
datacube product show [OPTIONS] [PRODUCT_NAME]...
Options
- -f <output_format>#
Output format
- Default
yaml
- Options
yaml | json
Arguments
- PRODUCT_NAME#
Optional argument(s)
update#
Update existing products.
An error will be thrown if a change is potentially unsafe.
(An unsafe change is anything that may potentially make the product incompatible with existing datasets of that type)
datacube product update [OPTIONS] [FILES]...
Options
- --allow-unsafe, --forbid-unsafe#
Allow unsafe updates (default: false)
- --allow-exclusive-lock, --forbid-exclusive-lock#
Allow index to be locked from other users while updating (default: false)
- -d, --dry-run#
Check that everything is OK without making any changes
Arguments
- FILES#
Optional argument(s)
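For example (a sketch; the product definition file path is a placeholder):

```shell
# Preview the effect of a changed product definition
datacube product update --dry-run products/my_product.yaml

# Apply it even if the change is flagged as unsafe
datacube product update --allow-unsafe products/my_product.yaml
```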
system#
System commands
datacube system [OPTIONS] COMMAND [ARGS]...
check#
Check and display current configuration
datacube system check [OPTIONS]
clone#
Clone an existing ODC index into a new, empty index
datacube system clone [OPTIONS] SOURCE_ENV
Options
- --batch-size <batch_size>#
Size of batches for bulk-adding to the new index
Arguments
- SOURCE_ENV#
Required argument
init#
Initialise the database
datacube system init [OPTIONS]
Options
- --default-types, --no-default-types#
Add default types? (default: true)
- --init-users, --no-init-users#
Include user roles and grants. (default: true)
- --recreate-views, --no-recreate-views#
Recreate dynamic views
- --rebuild, --no-rebuild#
Rebuild all dynamic fields (caution: slow)
- --lock-table, --no-lock-table#
Allow table to be locked (e.g. while creating missing indexes)
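A typical first-time setup might look like this (a sketch; it assumes the database connection is already configured):

```shell
# Create the database schema, skipping the default metadata types
datacube system init --no-default-types

# Confirm the resulting configuration and database connection
datacube system check
```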
user#
User management commands
datacube user [OPTIONS] COMMAND [ARGS]...
create#
Create a User
datacube user create [OPTIONS] {user|ingest|manage|admin} USER
Options
- --description <description>#
Arguments
- ROLE#
Required argument
- USER#
Required argument
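For example (a sketch; the username and description are placeholders):

```shell
# Create a user with the 'manage' role
datacube user create --description "Operations account" manage ops_user
```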
delete#
Delete a User
datacube user delete [OPTIONS] [USERS]...
Arguments
- USERS#
Optional argument(s)
grant#
Grant a role to users
datacube user grant [OPTIONS] {user|ingest|manage|admin} [USERS]...
Arguments
- ROLE#
Required argument
- USERS#
Optional argument(s)
list#
List users
datacube user list [OPTIONS]
Options
- -f <f>#
Output format
- Default
yaml
- Options
csv | yaml
datacube-search#
Search the Data Cube
datacube-search [OPTIONS] COMMAND [ARGS]...
Options
- --version#
- -v, --verbose#
Use multiple times for more verbosity
- --log-file <log_file>#
Specify log file
- -E, --env <env>#
- -C, --config, --config_file <config>#
- --log-queries#
Print database queries.
- -f <f>#
Output format
- Default
pretty
- Options
csv | pretty
datasets#
Search available Datasets
EXPRESSIONS
Select datasets using [EXPRESSIONS] to filter by date, product type, spatial extents or other searchable fields.
FIELD: x, y, lat, lon, time, product, …
datacube-search datasets [OPTIONS] [EXPRESSIONS]...
Arguments
- EXPRESSIONS#
Optional argument(s)
product-counts#
Count product Datasets available by period
PERIOD: e.g. 1 month, 6 months, 1 year
EXPRESSIONS
Select datasets using [EXPRESSIONS] to filter by date, product type, spatial extents or other searchable fields.
FIELD: x, y, lat, lon, time, product, …
datacube-search product-counts [OPTIONS] PERIOD [EXPRESSIONS]...
Arguments
- PERIOD#
Required argument
- EXPRESSIONS#
Optional argument(s)
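For example (a sketch; the product name is a placeholder):

```shell
# Dataset counts per product in 6-month buckets, restricted to one product
datacube-search product-counts '6 months' product=my_product
```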
datacube-worker#
datacube-worker [OPTIONS]
Options
- --executor <executor>#
WARNING: executors have been deprecated in v1.8.14 and will be removed in v1.9. Format: (distributed|dask (alias for distributed)) host:port
- --nprocs <nprocs>#
Number of worker processes to launch
Note
To see all available CLI scripts, go to https://github.com/opendatacube/datacube-core/tree/develop/datacube/scripts