datacube.utils.cog.to_cog(geo_im, blocksize=None, ovr_blocksize=None, overview_resampling=None, overview_levels=None, **extra_rio_opts)[source]

Compress xarray.DataArray into Cloud Optimized GeoTIFF bytes in memory.

This function does not write to disk; it compresses entirely in RAM, which is useful when saving data to S3 or other cloud object stores.

This function is “Dask aware”. If geo_im is a Dask array, the output is a Dask Delayed object instead of bytes, which allows multiple images to be compressed concurrently across a Dask cluster. If you are not familiar with Dask this can be confusing: no work is performed until the .compute() method is called, so calling this function with a Dask array returns immediately without compressing any data.

Parameters

  • geo_im (DataArray) – xarray.DataArray with crs

  • blocksize (Optional[int]) – Size of internal TIFF tiles (defaults to 512x512 pixels)

  • ovr_blocksize (Optional[int]) – Size of internal tiles in overview images (defaults to blocksize)

  • overview_resampling (Optional[str]) – Use this resampling when computing overviews

  • overview_levels (Optional[List[int]]) – List of shrink factors to compute overviews for (defaults to [2, 4, 8, 16, 32])

  • nodata – Set the nodata flag to this value if supplied; by default nodata is read from the attributes of the input array (geo_im.attrs['nodata']).

  • extra_rio_opts – Any other keyword option is passed through to rasterio when creating the GeoTIFF.


Returns

In-memory GeoTIFF file as bytes, or a dask.Delayed object if the input is a Dask array.

Return type

Union[bytes, Delayed]

See also write_cog()