* static / ad hoc tool generation -> static catalog generation? / test STAC ingest?
* Expect (cross catalog and non-ESGF catalogs)
* static cross-catalog overlay?
* EERIE
* see [cloudify](https://gitlab.dkrz.de/data-infrastructure-services/cloudify/-/tree/main/workshop?ref_type=heads)
* NextGEMS etc.: intake
* (intake -> STAC) !?
* (Freva: tbd. in February meeting)
* crawler -> direct Solr ingest -> (STAC generation?)
- status of test servers, prototyping setup, etc.
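As a concrete sketch of what a "crawler -> STAC generation" step could look like, the snippet below maps one crawled dataset record to a minimal STAC Item dictionary. The record field names (`path`, `project`, `variable`, `time_start`, `time_end`) and the example values are made up for illustration and are not an agreed schema; only the STAC Item keys themselves follow the STAC 1.0.0 Item specification.

```python
import json

def record_to_stac_item(record: dict) -> dict:
    """Map one hypothetical crawled dataset record to a minimal STAC Item dict."""
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        # derive a stable item id from the file path (illustrative convention)
        "id": record["path"].strip("/").replace("/", "."),
        "geometry": None,          # no per-item spatial footprint in this sketch
        "properties": {
            # STAC allows datetime to be null if start/end datetimes are given
            "datetime": None,
            "start_datetime": record["time_start"],
            "end_datetime": record["time_end"],
            "project": record["project"],
            "variable": record["variable"],
        },
        "assets": {
            "data": {"href": record["path"], "type": "application/netcdf"},
        },
        "links": [],
    }

# made-up example record, as a crawler or an intake catalog row might yield it
record = {
    "path": "/work/project/exp1/tas.nc",
    "project": "EERIE",
    "variable": "tas",
    "time_start": "1950-01-01T00:00:00Z",
    "time_end": "2014-12-31T23:59:59Z",
}
item = record_to_stac_item(record)
print(item["id"])  # -> work.project.exp1.tas.nc
print(json.dumps(item, indent=2))
```

The same mapping could sit behind either a static catalog generator or a test STAC ingest; the open question from the agenda (one-off generation vs. continuous ingest) is orthogonal to it.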
# Discussion
## discussion figures:
- DKRZ metadata crawler / indexer approach: build new vs. reuse existing approaches, etc.
* CEDA indexer: developer left; very generic tool with a relatively good code base, but more generic than what we need; generation of kerchunk files / aggregations is done separately; not clear whether building on this has major advantages ...
* esgf-pub: crawling / gridmapfile generation is done separately anyway; not a good code base; quite CMIP/ESGF-specific; unclear future development roadmap (current funding problems, freeze situation) ...
* EERIE approach: the Zarr / kerchunking approach is central, yet we probably cannot borrow much from other approaches (ESGF etc.)
* Freva indexer: disadvantages are that we would need to build our higher aggregation levels on the Freva base level, and those developments would then be quite dependent on the Freva/Solr base layer, which is old and DKRZ-specific. The major advantage would be that we could rely on a shared, stable DKRZ production indexing solution
- separate discussion about the catalog-of-catalogs approach vs. a specific DKRZ catalog solution
#### action items:
* continue discussion with the Freva people
* @carsten: look once more into the CEDA crawling approach, to see whether there is a major advantage in reusing/building upon their code base
* carsten/fabi: discuss the kerchunking approach (VirtualiZarr etc.), follow new Zarr v3 related approaches
* continue the discussion as part of our regular Thursday meetings
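For the kerchunking action item, the snippet below sketches the kerchunk "version 1" reference format that the kerchunk/VirtualiZarr tooling produces and that fsspec's reference filesystem consumes: instead of copying data, a JSON mapping points each Zarr chunk key either to inline metadata or to a `[url, offset, length]` byte range inside the original file. The URL, array shape, and byte offsets here are made-up illustrations, not real data.

```python
import json

# minimal kerchunk v1 reference set for one hypothetical variable "tas"
refs = {
    "version": 1,
    "refs": {
        # Zarr metadata is stored inline as JSON strings
        ".zgroup": json.dumps({"zarr_format": 2}),
        "tas/.zarray": json.dumps({
            "zarr_format": 2,
            "shape": [2, 96, 192],     # 2 time steps, illustrative grid
            "chunks": [1, 96, 192],    # one chunk per time step
            "dtype": "<f4",
            "compressor": None,
            "fill_value": None,
            "filters": None,
            "order": "C",
        }),
        # chunk keys -> [url, offset, length] into the original NetCDF file
        "tas/0.0.0": ["https://example.org/data/tas.nc", 8192, 73728],
        "tas/1.0.0": ["https://example.org/data/tas.nc", 81920, 73728],
    },
}
print(json.dumps(refs, indent=2))
```

Such a mapping can be opened read-only as a Zarr store via fsspec's `reference://` filesystem; VirtualiZarr produces equivalent references and, with Zarr v3, the related "virtual chunk manifest" ideas point in the same direction, which is why following those developments is on the action list.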