- hosting and archiving catalogs at /pool/data/catalogs and in the cloud (`archive-catalog.sh`)
- creating statistics for catalogs, including KPIs like the number of files and datasets
One **main catalog** collects all catalogs in /pool/data/catalogs and serves as the *entry point* for DKRZ's intake users.
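A minimal sketch of how a user could start from such a main catalog with intake; the catalog file name below is only a placeholder, not necessarily the real location of the main catalog:

```python
import intake

# Open the main catalog; the file name is a placeholder for wherever the
# main catalog is published under /pool/data/catalogs.
main_cat = intake.open_catalog("/pool/data/catalogs/main_catalog.yaml")

# The main catalog bundles all sub-catalogs, so listing it shows the entry points
print(list(main_cat))
```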
## builder/
This folder contains scripts for generating the catalog databases (`.csv.gz`).
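As a rough illustration of what such a builder produces (not the actual builder code), the sketch below crawls a hypothetical data directory and writes a compressed catalog database with pandas:

```python
from pathlib import Path

import pandas as pd

# Hypothetical data root; the real builders parse project-specific directory
# structures and file-name conventions into the catalog columns.
data_root = Path("/pool/data/example_project")

rows = [{"path": str(f), "filename": f.name} for f in data_root.rglob("*.nc")]

# pandas infers gzip compression from the .csv.gz suffix
pd.DataFrame(rows).to_csv("example_catalog.csv.gz", index=False)
```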
## environment.yml
Use that file with `conda env create -f environment.yml` to generate a software environment which allows you to use the notebooks within this repository.
## esm-collections/
All **esm-collections** available at DKRZ are saved within this folder. These are `.json` files which can be opened with `intake.open_esm_datastore()`.
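For example, such a collection file can be opened like this (assuming `intake-esm` is installed; the file name and the search column are placeholders based on CMIP6-style collections):

```python
import intake  # intake-esm registers the esm_datastore driver

# Open one of the collection description files in this folder (placeholder name)
col = intake.open_esm_datastore("esm-collections/example_collection.json")

# The catalog database (.csv.gz) behind the collection is exposed as a pandas DataFrame
print(col.df.head())

# Collections can be filtered, e.g. by a column such as variable_id
subset = col.search(variable_id="tas")
```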
## archive-catalog.sh
This script is part of the updating cron job. It
* tests the newly created catalogs (a minimal check is sketched after this list)
* writes those catalogs to
  * a place which is linked to /pool/data/catalogs
  * the swift cloud store
* archives the old version of the catalog
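A basic sanity check of a newly created catalog could look roughly like this (the collection file name is a placeholder):

```python
import intake

# Open the freshly built collection (placeholder file name)
col = intake.open_esm_datastore("esm-collections/new_collection.json")

# A rebuilt catalog database should not come back empty
assert len(col.df) > 0, "catalog database contains no assets"
print(f"OK: {len(col.df)} assets listed")
```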