# datacube.utils.cog.write_cog

`datacube.utils.cog.write_cog(geo_im, fname, blocksize=None, ovr_blocksize=None, overview_resampling=None, overview_levels=None, **extra_rio_opts)`

Save an xarray.DataArray to a file in Cloud Optimized GeoTIFF (COG) format.

This function is "Dask aware". If geo_im is a Dask-backed array, the output of this function is a Dask Delayed object, which allows multiple images to be saved concurrently across a Dask cluster. If you are not familiar with Dask this can be confusing: no work is performed until the .compute() method is called on the returned object, so calling this function with a Dask array returns immediately without writing anything to disk.

```python
# Example: save the red band of the first time slice to "red.tif"
write_cog(xx.isel(time=0).red, "red.tif").compute()
# or compute the input first instead
write_cog(xx.isel(time=0).red.compute(), "red.tif")
```
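Building on the example above, the Delayed objects returned for several bands can be collected and computed together, which is what enables writing multiple images concurrently on a Dask cluster. A sketch, assuming `xx` is a Dask-backed dataset with `red`, `green`, and `blue` bands and a Dask client is already running:

```python
import dask

# each call returns a Dask Delayed immediately; nothing is written yet
tasks = [write_cog(xx.isel(time=0)[band], f"{band}.tif")
         for band in ("red", "green", "blue")]

# trigger all three writes concurrently
dask.compute(*tasks)
```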

Parameters

- geo_im – xarray.DataArray to save
- fname – Output path, or ":mem:" to compress in memory and return bytes
- blocksize – Size of the internal TIFF tiles
- ovr_blocksize – Size of the internal tiles in overview images
- overview_resampling – Resampling method to use when computing overviews
- overview_levels – List of shrink factors to compute overviews for
- extra_rio_opts – Any remaining keyword arguments are passed through to rasterio

Returns

- Path to which the output was written
- bytes if fname == ":mem:"
- dask.Delayed object if the input is a Dask array

Return type

Union[Path, bytes, Delayed]
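Because the return type depends on the inputs, calling code may need to branch on what it receives. A minimal, hypothetical helper (the name `handle_write_cog_result` is illustrative, not part of the library):

```python
from pathlib import Path

def handle_write_cog_result(result):
    """Dispatch on the three possible return values of write_cog."""
    if isinstance(result, bytes):
        # fname was ":mem:" -- result is the compressed COG itself
        return ("in-memory", len(result))
    if isinstance(result, Path):
        # a plain (non-Dask) input was written straight to disk
        return ("file", str(result))
    # otherwise assume a Dask Delayed; .compute() performs the write
    return ("delayed", result.compute())
```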

Note

Memory requirements

This function first generates a temporary, uncompressed TIFF file in memory to speed things up. It then adds overviews to that file, and only then copies it to the final destination with the requested compression settings. This two-pass approach is currently the only way to produce a compliant COG, since the COG standard requires overviews to be placed before the native-resolution data.

As a result, this function uses roughly 1.5 to 2 times the memory occupied by geo_im.
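The note above can be turned into a rough sizing rule: budget about twice the in-memory footprint of the array. A small sketch (the helper name and the default 2x factor are assumptions derived from the note, not a library API):

```python
import numpy as np

def estimate_write_cog_memory(shape, dtype, factor=2.0):
    # uncompressed in-memory copy plus overviews, per the note above
    base_bytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    return int(base_bytes * factor)

# e.g. a 10980x10980 uint16 band (one Sentinel-2 tile at 10 m)
print(estimate_write_cog_memory((10980, 10980), "uint16"))  # 482241600
```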