This notebook series guides you through the *cloudify* service: serving Xarray datasets as Zarr datasets with xpublish, with server-side processing enabled via Dask. It introduces the basic concepts with some examples. It was designed to work on DKRZ's HPC.
In the following, you will learn how to start and control the cloudify service.
**Is there any other reason to run cloudify on DKRZ's HPC, which is only accessible internally?**
If you *cloudify* a virtual dataset prepared as a highly aggregated, analysis-ready dataset, clients can subset from this *one* large aggregated dataset instead of searching the file system.
2. To allow secure *https* access, we need an SSL certificate. For testing purposes and for Levante, we can use a self-signed one. Additionally, some applications currently only allow access via https. We can create the certificate like this:
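A minimal sketch of creating such a self-signed certificate from within the notebook, assuming `openssl` is available on the node; the file names `key.pem` and `cert.pem`, the common name, and the validity period are arbitrary choices:

```python
import subprocess

# Create a self-signed certificate (valid for one year) without a passphrase.
# The subject common name "localhost" and the file names are placeholders.
subprocess.run(
    [
        "openssl", "req", "-x509", "-newkey", "rsa:4096",
        "-keyout", "key.pem", "-out", "cert.pem",
        "-days", "365", "-nodes", "-subj", "/CN=localhost",
    ],
    check=True,
)
```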
3. We write a cloudify script for data serving and start to host an example dataset in a background process. We need to consider some settings:
**Port**
The resulting service listens on a specific *port*. If we share a node, we can only use ports that are not already allocated. To enable all of us to run our own app, we agree to use port `90XX`, where XX are the last two digits of our account.
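As a purely illustrative sketch, the port could be derived from the user name, assuming the account ends in two digits (e.g. an account ending in `49` would map to `9049`):

```python
import getpass

# Hypothetical helper: map an account ending in two digits to port 90XX.
user = getpass.getuser()
port = 9000 + int(user[-2:])
print(f"Using port {port} for user {user}")
```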
**Dask Cluster**
Dask is necessary for lazy access to the data. Additionally, a dask cluster can help us with server-side processing such as uniform encoding. When started, the imported predefined dask cluster will use the following resources:
```python
n_workers=2,
threads_per_worker=8,
memory_limit="16GB"
```
which should be sufficient for at least two clients in parallel. We store it in an environment variable so that xpublish can find it. We furthermore have to align the two event loops of dask and xpublish's asyncio with `nest_asyncio.apply()`. Event loops can be seen as *while* loops for a permanently running main worker.
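A minimal sketch of that setup, assuming a `LocalCluster` stands in for the predefined cluster and that `DASK_SCHEDULER_ADDRESS` is the environment variable the serving script reads (both are assumptions, not fixed by cloudify):

```python
import os
import nest_asyncio
from dask.distributed import LocalCluster

# Align dask's event loop with xpublish's asyncio event loop.
nest_asyncio.apply()

# Stand-in for the predefined cluster imported in the actual cloudify script.
cluster = LocalCluster(n_workers=2, threads_per_worker=8, memory_limit="16GB")

# Expose the scheduler address so other components can connect to the same cluster.
os.environ["DASK_SCHEDULER_ADDRESS"] = cluster.scheduler_address
```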
**Plug-ins**
Xpublish finds pre-installed plugins like the intake-plugin by itself. Our own plugins need to be registered explicitly, as sketched below.
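A minimal sketch of registering a custom plugin, following the hello-world pattern from the xpublish documentation; the plugin name, route, dataset, and port are placeholders:

```python
import xarray as xr
import xpublish
from xpublish import Plugin, hookimpl
from fastapi import APIRouter

class HelloPlugin(Plugin):
    """Hypothetical custom plugin adding a /hello endpoint."""

    name: str = "hello"

    @hookimpl
    def app_router(self):
        router = APIRouter()

        @router.get("/hello")
        def get_hello():
            return {"message": "hello from a custom plugin"}

        return router

# A small in-memory example dataset to serve.
ds = xr.Dataset({"tas": ("time", [280.0, 281.5, 279.9])})
rest = xpublish.Rest({"air": ds})

# Pre-installed plugins (e.g. the intake-plugin) are discovered automatically;
# our own plugin has to be registered by hand.
rest.register_plugin(HelloPlugin())

# rest.serve(host="0.0.0.0", port=9012)  # port follows the 90XX convention
```

With the service running, a `GET` request to `/hello` on the chosen port would hit the new endpoint.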