Commit 2c0bc902 authored by Marco Kulüke

Merge branch 'dev_maria' into 'master'

add link to server-side explanation and fix typo

See merge request mipdata/tutorials-and-use-cases!7
%% Cell type:markdown id: tags:
# Calculate a climate index in a server hosting all the climate model data
We will show here how to count the annual summer days for a particular geolocation of your choice using the results of a climate model. In particular, we can choose one of the historical or one of the shared socioeconomic pathway (ssp) experiments of the Coupled Model Intercomparison Project [CMIP6](https://pcmdi.llnl.gov/CMIP6/).
This Jupyter notebook is meant to run in the Jupyterhub server of the German Climate Computing Center [DKRZ](https://www.dkrz.de/), which is an [ESGF](https://esgf.llnl.gov/) repository that hosts 4 petabytes of CMIP6 data. Please choose the Python 3 unstable kernel on the Kernel tab above; it contains all the common geoscience packages. See more information on how to run Jupyter notebooks at DKRZ [here](https://www.dkrz.de/up/systems/mistral/programming/jupyter-notebook). There you will also find how to run this Jupyter notebook on the DKRZ server outside of the Jupyterhub, which entails creating an environment that accounts for the required package dependencies. Running this Jupyter notebook on your own premises, which is also known as [client-side](https://en.wikipedia.org/wiki/Client-side) computing, will also require that you install the necessary packages on your own, but it will fail anyway because you will not have direct access to the data pool. Direct access to the data pool is one of the main benefits of the [server-side](https://en.wikipedia.org/wiki/Server-side) data-near computing we demonstrate in this use case.
Thanks to the data and computer scientists Marco Kulüke, Fabian Wachsmann, Regina Kwee-Hinzmann, Caroline Arnold, Felix Stiehler, Maria Moreno, and Stephan Kindermann at DKRZ for their contribution to this notebook.
%% Cell type:markdown id: tags:
In this use case you will learn the following:
- How to access a dataset from the DKRZ CMIP6 model data archive
- How to count the annual number of summer days for a particular geolocation using this model dataset
- How to visualize the results
You will use:
- [Intake](https://github.com/intake/intake) for finding the data in the catalog of the DKRZ archive
- [Xarray](http://xarray.pydata.org/en/stable/) for loading and processing the data
- [hvPlot](https://hvplot.holoviz.org/index.html) for visualizing the data in the Jupyter notebook and saving the plots to your local computer
%% Cell type:markdown id: tags:
## 0. Load Packages
%% Cell type:code id: tags:
``` python
import numpy as np # fundamental package for scientific computing
import pandas as pd # data analysis and manipulation tool
import xarray as xr # handling labelled multi-dimensional arrays
import intake # to find data in a catalog, this notebook explains how it works
from ipywidgets import widgets # to use widgets in the Jupyter Notebook
from geopy.geocoders import Nominatim # Python client for several popular geocoding web services
import folium # visualization tool for maps
import hvplot.pandas # visualization tool for interactive plots
```
%% Cell type:markdown id: tags:
## 1. Which dataset do we need? -> Choose Shared Socioeconomic Pathway, Place, and Year
<a id='selection'></a>
%% Cell type:code id: tags:
``` python
# Produce the widget where we can select which experiment we are interested in
experiments = {'historical':range(1850, 2015), 'ssp585':range(2015, 2101), 'ssp126':range(2015, 2101),
               'ssp245':range(2015, 2101), 'ssp119':range(2015, 2101), 'ssp434':range(2015, 2101),
               'ssp460':range(2015, 2101)}
experiment_box = widgets.Dropdown(options=experiments, description="Select experiment: ", disabled=False,)
display(experiment_box)
```
%% Cell type:code id: tags:
``` python
# Produce the widget where we can select which geolocation and year we are interested in
place_box = widgets.Text(description="Enter place:")
display(place_box)
x = experiment_box.value
year_box = widgets.Dropdown(options=x, description="Select year: ", disabled=False,)
display(year_box)
```
%% Cell type:markdown id: tags:
### 1.1 Find Coordinates of chosen Place
If the place name is ambiguous, the most likely coordinates will be chosen, e.g. "Hamburg" results in "Hamburg, 20095, Deutschland" (53.55 North, 10.00 East).
%% Cell type:code id: tags:
``` python
# The Nominatim module gives us the geographical coordinates of the place we selected above
geolocator = Nominatim(user_agent="any_agent")
location = geolocator.geocode(place_box.value)
print(location.address)
print((location.latitude, location.longitude))
```
%% Cell type:markdown id: tags:
### 1.2 Show Place on a Map
%% Cell type:code id: tags:
``` python
# We use the folium package to plot our selected geolocation on a map
m = folium.Map(location=[location.latitude, location.longitude])
tooltip = location.latitude, location.longitude
folium.Marker([location.latitude, location.longitude], tooltip=tooltip).add_to(m)
display(m)
```
%% Cell type:markdown id: tags:
We have defined the place and time. Now, we can search for the climate model dataset.
%% Cell type:markdown id: tags:
## 2. Intake Catalog
Similar to the shopping catalog of your favorite online bookstore, the intake catalog contains information (e.g. model, variables, and time range) about each dataset (the title, author, and number of pages of the book, for instance) that you can access before loading the data. Thanks to the catalog, you can find where the book is just by using some keywords, and you do not need to hold it in your hand to know, for instance, its number of pages.
### 2.1 Load the Intake Catalog
We load the catalog descriptor with the intake package. The catalog is updated daily. The catalog descriptor is created by the DKRZ developers that manage the catalog; you do not need to care much about it, knowing where it is and loading it is enough:
%% Cell type:code id: tags:
``` python
# Path to the catalog descriptor on the DKRZ server
col_url = "/work/ik1017/Catalogs/mistral-cmip6.json"
# Open the catalog with the intake package and name it "col" as short for "collection"
col = intake.open_esm_datastore(col_url)
```
%% Cell type:markdown id: tags:
Let's see what is inside the intake catalog. The underlying database is given as a pandas dataframe, which we can access with "col.df". Then, "col.df.head()" shows us the first rows of the table of the catalog:
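For example, to display the first rows of the catalog table:
%% Cell type:code id: tags:
``` python
# Inspect the underlying pandas dataframe of the catalog
col.df.head()
```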
%% Cell type:markdown id: tags:
This catalog contains all datasets of the CMIP6 archive at DKRZ. In the next step we narrow the results down by choosing a model and a variable.
%% Cell type:markdown id: tags:
### 2.2 Browse the Intake Catalog
In this example we choose the Max Planck Institute Earth System Model in low resolution ("MPI-ESM1-2-LR") and the maximum temperature near the surface ("tasmax") as the variable. We also choose an experiment. CMIP6 comprises several kinds of experiments, and each experiment has various simulation members. You can find more information in the [CMIP6 Model and Experiment Documentation](https://pcmdi.llnl.gov/CMIP6/Guide/dataUsers.html#5-model-and-experiment-documentation).
%% Cell type:code id: tags:
``` python
# Store the name of the model we chose in a variable named "climate_model"
climate_model = "MPI-ESM1-2-LR" # here we choose the Max Planck Institute Earth System Model in low resolution
# This is how we tell intake what data we want
query = dict(
    source_id = climate_model, # the model
    variable_id = "tasmax", # temperature at surface, maximum
    table_id = "day", # daily maximum
    experiment_id = experiment_box.label, # what we selected in the drop down menu, e.g. ssp245
    member_id = "r10i1p1f1", # "r" realization, "i" initialization, "p" physics, "f" forcing
)
# Intake looks for the query we just defined in the catalog of the CMIP6 data pool at DKRZ
cat = col.search(**query)
# Show query results
cat.df
```
%% Cell type:markdown id: tags:
The results of the query are like the list of results you get when you search for articles on the internet by typing keywords into your search engine (DuckDuckGo, Ecosia, Google, ...). Thanks to the intake package, we did not need to know the path of each dataset; selecting some keywords (the model name, the variable, ...) was enough to obtain the results. If advanced users are still interested in the location of the data inside the DKRZ archive, intake also provides the path and the OpenDAP URL (see the last columns above).
Now we will find which file in the dataset contains our selected year, so that in the next section we can load just that specific file and not the whole dataset.
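For instance, a minimal way to list the catalog columns and look at the file paths of the query results (the "path" column is used again below; the exact set of columns depends on the catalog version):
%% Cell type:code id: tags:
``` python
# List the available catalog columns and show the file paths of the query results
print(list(cat.df.columns))
cat.df["path"].head()
```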
%% Cell type:markdown id: tags:
### 2.3 Find the Dataset That Contains the Year You Selected in the Drop Down Menu Above
%% Cell type:code id: tags:
``` python
# Create a copy of cat.df, so that further modifications do not affect it
query_result_df = cat.df.copy() # new dataframe to play with
# Each dataset contains many files, extract the initial and final year of each file
query_result_df["start_year"] = query_result_df["time_range"].str[0:4].astype(int) # add column with start year
query_result_df["end_year"] = query_result_df["time_range"].str[9:13].astype(int) # add column with end year
# Delete the time range column
query_result_df.drop(columns=["time_range"], inplace = True) # with "inplace = False" a new dataframe would be returned and the original would keep the column
query_result_df.iloc[0:3]
```
%% Cell type:code id: tags:
``` python
# Select the file that contains the year we selected in the drop down menu above, e.g. 2015
selected_file = query_result_df[(year_box.value >= query_result_df["start_year"]) & (
                                 year_box.value <= query_result_df["end_year"])]
# Path of the file that contains the selected year
selected_path = selected_file["path"].values[0]
# Show the path of the file that contains the selected year
selected_path
```
%% Cell type:markdown id: tags:
## 3. Load the model data
%% Cell type:code id: tags:
``` python
# Load the data with the open_dataset() xarray method
ds_tasmax = xr.open_dataset(selected_path)
# Open variable "tasmax" over the whole time range
tasmax_xr = ds_tasmax["tasmax"]
# Define start and end time strings
time_start = str(year_box.value) + "-01-01T12:00:00.000000000"
time_end = str(year_box.value) + "-12-31T12:00:00.000000000"
# Slice the selected year
tasmax_year_xr = tasmax_xr.loc[time_start:time_end, :, :]
```
%% Cell type:code id: tags:
``` python
# Let's have a look at the xarray data array
tasmax_year_xr
```
%% Cell type:markdown id: tags:
We see not only the numbers, but also information about them, such as the long name, units, and the data history. This information is called metadata.
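The metadata can also be read programmatically. A minimal sketch; the exact attributes available depend on the file:
%% Cell type:code id: tags:
``` python
# Access the metadata attributes of the data array directly (None is returned if an attribute is missing)
print(tasmax_year_xr.attrs.get("long_name"))
print(tasmax_year_xr.attrs.get("units"))
```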
%% Cell type:markdown id: tags:
## 4. Compare Model Grid Cell with chosen Location
%% Cell type:code id: tags:
``` python
# Find the nearest model grid point by looking for the smallest distance in latitude and longitude
abslat = np.abs(tasmax_year_xr["lat"] - location.latitude)
abslon = np.abs(tasmax_year_xr["lon"] - location.longitude)
c = np.maximum(abslon, abslat)
([xloc], [yloc]) = np.where(c == np.min(c)) # xloc and yloc are the indices of the nearest model grid point
```
%% Cell type:code id: tags:
``` python
# Draw the map again
m = folium.Map(location=[location.latitude, location.longitude], zoom_start=8)
tooltip = location.latitude, location.longitude
folium.Marker(
    [location.latitude, location.longitude],
    tooltip=tooltip,
    popup="Location selected by you",
).add_to(m)
# Add a marker for the center of the nearest model grid cell
tooltip = float(tasmax_year_xr["lat"][yloc].values), float(tasmax_year_xr["lon"][xloc].values)
folium.Marker(
    [tasmax_year_xr["lat"][yloc], tasmax_year_xr["lon"][xloc]],
    tooltip=tooltip,
    popup="Model Grid Cell Center",
).add_to(m)
# Define the corner coordinates of the model grid cell (just for visualization)
rect_lat1_model = (tasmax_year_xr["lat"][yloc - 1] + tasmax_year_xr["lat"][yloc]) / 2
rect_lon1_model = (tasmax_year_xr["lon"][xloc - 1] + tasmax_year_xr["lon"][xloc]) / 2
rect_lat2_model = (tasmax_year_xr["lat"][yloc + 1] + tasmax_year_xr["lat"][yloc]) / 2
rect_lon2_model = (tasmax_year_xr["lon"][xloc + 1] + tasmax_year_xr["lon"][xloc]) / 2
# Draw the model grid cell
folium.Rectangle(
    bounds=[[rect_lat1_model, rect_lon1_model], [rect_lat2_model, rect_lon2_model]],
    color="#ff7800",
    fill=True,
    fill_color="#ffff00",
    fill_opacity=0.2,
).add_to(m)
m
```
%% Cell type:markdown id: tags:
Climate models have a finite resolution. Hence, models do not provide the data of a particular point, but the mean over a model grid cell. Keep this in mind when comparing model data with observed data (e.g. from weather stations).
Now, we will visualize the daily maximum temperature time series of the model grid cell.
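As a rough indication of that resolution, a minimal sketch that prints the grid spacing around the selected cell (assuming a regular latitude-longitude grid, as used for the cell rectangle above):
%% Cell type:code id: tags:
``` python
# Approximate size of the model grid cell in degrees, from the spacing of neighbouring grid points
lat_spacing = float(tasmax_year_xr["lat"][yloc + 1] - tasmax_year_xr["lat"][yloc])
lon_spacing = float(tasmax_year_xr["lon"][xloc + 1] - tasmax_year_xr["lon"][xloc])
print("Grid spacing: about", round(lat_spacing, 2), "degrees latitude by", round(lon_spacing, 2), "degrees longitude")
```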
%% Cell type:markdown id: tags:
## 5. Draw Temperature Time Series and Count Summer days
%% Cell type:markdown id: tags:
The definition of a summer day varies from region to region. According to the [German Weather Service](https://www.dwd.de/EN/ourservices/germanclimateatlas/explanations/elements/_functions/faqkarussel/sommertage.html), "a summer day is a day on which the maximum air temperature is at least 25.0°C". Depending on the place you selected, you might want to apply a different threshold to calculate the summer days index.
%% Cell type:code id: tags:
``` python
tasmax_year_place_xr = tasmax_year_xr[:, yloc, xloc] - 273.15 # convert Kelvin to °C
tasmax_year_place_df = pd.DataFrame(index = tasmax_year_place_xr['time'].values,
                                    columns = ['Model Temperature', 'Summer Day Threshold']) # create the dataframe
tasmax_year_place_df.loc[:, 'Model Temperature'] = tasmax_year_place_xr.values # insert the model data into the dataframe
tasmax_year_place_df.loc[:, 'Summer Day Threshold'] = 25 # insert the threshold into the dataframe
# Plot the data and define the title and legend
tasmax_year_place_df.hvplot.line(y=['Model Temperature', 'Summer Day Threshold'],
                                 value_label='Temperature in °C', legend='bottom',
                                 title='Daily maximum Temperature near Surface for '+place_box.value,
                                 height=500, width=620)
```
%% Cell type:markdown id: tags:
As we can see, the maximum daily temperature is highly variable over the year. As we are using the mean temperature of a model grid cell, the number of summer days might be different from what you would expect at a single location.
%% Cell type:code id: tags:
``` python
# Summer days index calculation
no_summer_days_model = tasmax_year_place_xr[tasmax_year_place_xr > 25].size # count the number of summer days
# Print the results in a sentence
print("According to the German Weather Service definition, in the " +experiment_box.label +" experiment the "
      +climate_model +" model shows " +str(no_summer_days_model) +" summer days for " +str(place_box.value)
      + " in " + str(year_box.value) +".")
```
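%% Cell type:markdown id: tags:
If a different threshold suits your region better, the same calculation can be repeated with another value. A minimal sketch, using a hypothetical 30 °C threshold:
%% Cell type:code id: tags:
``` python
# Hypothetical example: count the days that exceed a stricter 30 °C threshold
stricter_threshold = 30
no_hot_days_model = tasmax_year_place_xr[tasmax_year_place_xr > stricter_threshold].size
print(str(no_hot_days_model) + " days exceed " + str(stricter_threshold) + " °C in " + str(year_box.value) + ".")
```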
%% Cell type:markdown id: tags:
[Try another location and year](#selection)