Commit 1ea423e5 authored by Oliver Heidmann

Merge branch 'develop' of gitlab.dkrz.de:mpim-sw/cdo into develop

parents a8007e43 9c6886f0
Uwe Schulzweida, <uwe.schulzweida AT mpimet.mpg.de>, is the main author.
Ralf Mueller, <ralf.mueller AT mpimet.mpg.de>
Oliver Heidmann, <oliver.heidmann AT mpimet.mpg.de>
Luis Kornblueh, <luis.kornblueh AT mpimet.mpg.de>
Fabian Wachsmann <wachsmann AT dkrz.de>
Cedrick Ansorge, <cedrick.ansorge AT mpimet.mpg.de>
Modali Kameswarrao <kameswarrao.modali AT mpimet.mpg.de>
Ralf Quast, <ralf.quast AT brockmann-consult.de>
......@@ -26,8 +26,11 @@ Authors:
This program was developed at the Max-Planck-Institute for Meteorology.
Uwe Schulzweida, <uwe.schulzweida AT mpimet.mpg.de>, is the main author.
Ralf Mueller, <ralf.mueller AT mpimet.mpg.de>
Oliver Heidmann, <oliver.heidmann AT mpimet.mpg.de>
Luis Kornblueh, <luis.kornblueh AT mpimet.mpg.de>
Fabian Wachsmann <wachsmann AT dkrz.de>
Cedrick Ansorge, <cedrick.ansorge AT mpimet.mpg.de>
Modali Kameswarrao <kameswarrao.modali AT mpimet.mpg.de>
Ralf Quast, <ralf.quast AT brockmann-consult.de>
Send questions, comments and bug reports to <https://code.mpimet.mpg.de/projects/cdo>
......
\subsection{Operator chaining}\label{generalChaining}
\emph{Operator chaining} allows the combination of two or more operators on the command line into a single
{\CDO} call. This allows the creation of complex operations out of simpler ones: reductions over
several dimensions, file merges, and all kinds of analysis processes. All operators with a fixed
number of input streams and one output stream can pass their result directly to another operator.
To distinguish between files and operators, all operators must be written with a prepended "-"
......@@ -9,14 +10,15 @@ when chaining.
cdo -monmean -add -mulc,2.0 infile1 -daymean infile2 outfile (CDO example call)
\end{verbatim}
Here \texttt{monmean} receives the output of \texttt{add}, while \texttt{add} takes the output of
\texttt{mulc,2.0} and \texttt{daymean}. \texttt{infile1} and \texttt{infile2} are inputs for their respective predecessors.
When mixing operators with an arbitrary number of
input streams, extra care needs to be taken. The following examples illustrate why.
\begin{enumerate}
\item \texttt{cdo info -timavg infile1 infile2}
\item \texttt{cdo info -timavg infile?}
\item \texttt{cdo timavg infile1 tmpfile} \\
\texttt{cdo info tmpfile infile2} \\
\texttt{rm tmpfile}
\end{enumerate}
All three examples produce identical results. The time average will be computed only on the first
input file.\\\\
......@@ -40,10 +42,9 @@ Operator chaining is implemented over POSIX Threads (pthreads).
Therefore this {\CDO} feature is not available on operating systems without POSIX Threads
support!
\subsection{Chaining Benefits}
Combining operators can have several benefits. The most obvious is a
performance increase through reduced disk I/O:
\begin{verbatim}
cdo sub -dayavg infile2 -timavg infile1 outfile
\end{verbatim}
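For comparison, the same result without chaining would require intermediate files, along these lines (the temporary file names are illustrative, not part of the original example):
\begin{verbatim}
cdo dayavg infile2 tmpfile1
cdo timavg infile1 tmpfile2
cdo sub tmpfile1 tmpfile2 outfile
rm tmpfile1 tmpfile2
\end{verbatim}
Each intermediate result is written to disk and read back, which the chained call avoids.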
......@@ -59,25 +60,21 @@ files can have a big influence on the overall performance.\\
A second aspect is the execution of operators: limited by the algorithms involved, potentially
all operators of a chain can run in parallel.
\section{Advanced Usage}
In this section we introduce advanced features of {\CDO}. These include operator grouping, which
allows writing more complex {\CDO} calls; the apply keyword, which allows shortening calls that
need an operator to be executed on multiple files; as well as wildcards, which allow searching paths
for file patterns. These features have several restrictions and follow rules that depend on the
input/output properties of the operators. These properties can be investigated with the
following command:
\begin{verbatim}
cdo --attribs [arbitrary/filesOnly/onlyFirst/noOutput/obase]
\end{verbatim}
\begin{itemize}
\item \emph{arbitrary} describes all operators where the number of inputs is not fixed.
\item \emph{filesOnly} are operators that can only have files, not other operators, as input.
\item \emph{onlyFirst} shows which operators can only be at the leftmost position of the Polish notation argument chain.
\item \emph{noOutput} are all operators that do not write any output file (e.g. \texttt{info}).
\item \emph{obase} describes an operator that does not use the output argument as a file name but e.g. as a file
name base (output base). This is almost exclusively used for operators that split input files.
......@@ -93,9 +90,9 @@ Wildcards are a standard feature of command line interpreters (shells)
on many operating systems. They are placeholder characters used in file paths that are expanded by
the interpreter into file lists. For further information the
\href{https://tldp.org/LDP/abs/html}{Advanced Bash-Scripting Guide} is a
valuable source of information. Handling of input is a central issue for {\CDO},
and in some circumstances it is not enough to use the wildcards from the shell.
That is why {\CDO} can handle them on its own.\newline
\begin{tabular}{|l|l|}
\hline
\textbf{all files} &
......@@ -107,19 +104,19 @@ That's why CDO can handle them on its own.\newline
\hline
2020-3* and 2020-3-??.txt & 2020-3-01.txt 2020-3-02.txt 2020-3-12.txt 2020-3-13.txt 2020-3-15.txt \\
\hline
2020-3-?1.txt & 2020-3-01.txt \\
\hline
*.grb & 2021.grb 2020.grb \\
\hline
\end{tabular}\newline
\\
Use single quotes if the input stream names are matched by a single wildcard expression. In this case
{\CDO} will do the pattern matching and the output can be combined with other operators. Here is an
example of this feature:
\begin{verbatim}
cdo timavg -select,name=temperature 'infile?' outfile
\end{verbatim}
In earlier versions of {\CDO} this was necessary to have the right files passed to the right
operator. Newer versions support this with the argument grouping
feature (see \ref{argGroups}). We advise the use of the grouping mechanism instead of single-quoted wildcards, since this
feature could be deprecated in future versions. \newline\newline
......@@ -131,7 +128,7 @@ systems without the \textit{glob()} function!\newline
In section \ref{generalChaining} we described that it is not possible to chain operators
with an arbitrary number of inputs. In this section we want to show
how this can be achieved through the use of \emph{operator grouping} with
square brackets \texttt{[]}. Using these brackets, {\CDO} can assign the inputs
to their corresponding operators during the execution of the command line. The
ability to write operator combinations in a parenthesis-free way is partly given
up in favor of allowing operators with an arbitrary number of inputs. This allows
......@@ -139,27 +136,15 @@ a much more compact way to handle large number of input files.\\ The following
example shows a call which we will transform from a non-working solution to
a working one.
\begin{verbatim}
cdo -infon -div -fldmean -cat infile1 -mulc,2.0 infile2 -fldmax infile3
\end{verbatim}
This example will throw the following error:
\begin{verbatim}
cdo -infon -div -fldmean -cat infile1 -mulc,2.0 infile2 -fldmax infile3
cdo (Warning): Did you forget to use '[' and/or ']' for multiple variable input operators?
cdo (Warning): use option --variableInput, for description
cdo (Abort): Too few streams specified! Operator div needs 2 input streams and 1 output stream!
\end{verbatim}
The error is raised by the operator \emph{div}. This operator needs two input
streams and one output stream, but the \emph{cat} operator has claimed all
possible streams on its right hand side as input because it accepts an
......@@ -167,20 +152,20 @@ arbitrary number of inputs. Hence it didn't leave anything for the remaining
input or output streams of \emph{div}. To resolve this, we can declare a group which will be
passed to the operator left of the group.
\begin{verbatim}
cdo -infon -div -fldmean -cat [ infile1 -mulc,2.0 infile2 ] -fldmax infile3
\end{verbatim}
For full flexibility it is possible to have groups inside groups:
\begin{verbatim}
cdo -infon -div -fldmean -cat [ fileA1 infileC2 -merge [ infileB1 infileB2 ] ] -fldmax infileD
\end{verbatim}
\subsection{Apply Keyword}\label{applykeyword}
When working with a medium or large number of similar files, there is a common
problem: a processing step (often a reduction) needs to be performed on
all of them before a more
specific analysis can be applied. Usually this can be done in two ways: One
option is to use
\texttt{merge} to glue everything together and chain the reduction step
after it. The second option is to write a for-loop over all inputs which performs
the basic processing on each of the files separately and calls \texttt{merge} on
the results. Unfortunately both options
have side effects: The first one needs a lot of memory because all files are
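The for-loop variant described above could be sketched as follows (the file names are illustrative):
\begin{verbatim}
for file in infile1 infile2 infile3 ; do
    cdo daymean $file daymean_$file
done
cdo merge daymean_infile1 daymean_infile2 daymean_infile3 outfile
rm daymean_infile1 daymean_infile2 daymean_infile3
\end{verbatim}
The apply keyword replaces this entire loop with a single {\CDO} call.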
......@@ -202,11 +187,10 @@ The following is an example with three input files:\\
\end{verbatim}
\caption{Usage and result of apply keyword}
\end{figure}
Apply is especially useful when combined with wildcards. The previous example can be shortened further.
\begin{verbatim}
cdo -merge -apply,-daymean [ file? ] outfile
\end{verbatim}
As shown, this feature makes it possible to simplify commands with a medium number of files and to move reductions further
back. This can also have a positive impact on performance.
\begin{figure}[H]
......@@ -230,7 +214,7 @@ back. This can also have a positive impact on the performance.
In the example in figure \ref{simpApply} the resulting call will dramatically reduce process
interaction as well as execution time, since the reduction (daymean) is applied to the files first. That means
that the merge operator will receive the reduced files, and the work of merging the whole
data set is saved. For other {\CDO} calls further improvements can be made by adding more arguments to
apply (\ref{multiArgApply}).
\begin{figure}[H]
A less efficient example.
......@@ -246,7 +230,7 @@ apply (\ref{multiArgApply})
\paragraph{Restrictions:}While the apply keyword can be extremely helpful, it has several restrictions (for now!).
\begin{itemize}
\item Apply inputs can only be files, wildcards, and operators that have 0 inputs and 1 output.
\item Apply cannot be used as the first {\CDO} operator.
\item Apply arguments can only be operators with 1 input and 1 output.
\item Grouping inside the apply argument or input is not allowed.
\end{itemize}
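As a rough illustration of these rules (the file names are placeholders, and the invalid calls are expected to be rejected by {\CDO}):
\begin{verbatim}
cdo -merge -apply,-daymean [ infile1 infile2 ] outfile
  (valid: one 1-input/1-output operator as apply argument)
cdo -apply,-daymean [ infile1 infile2 ] outfile
  (invalid: apply cannot be the first operator)
cdo -merge -apply,-merge [ infile1 infile2 ] outfile
  (invalid: merge takes an arbitrary number of inputs)
\end{verbatim}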
......
......@@ -54,6 +54,8 @@ The main {\CDO} features are:
\input{usage}
\input{cdo_advanced_usage}
\input{grid}
\input{zaxis}
......
......@@ -15,14 +15,7 @@ The following {\CDO} operators are parallelized with OpenMP:
\textbf{Module} & \textbf{Operator} & \textbf{Description} \\ \hline
Afterburner & after & ECHAM standard post processor \\ \hline
Detrend & detrend & Detrend \\ \hline
Ensstat & ens<STAT> & Statistical values over an ensemble \\ \hline
EOF & eof & Empirical Orthogonal Functions \\ \hline
Filter & bandpass & Bandpass filtering \\ \hline
Filter & lowpass & Lowpass filtering \\ \hline
......@@ -35,16 +28,11 @@ Genweights & gennn & Generate nearest neighbor remap weights \\ \hli
Genweights & gencon & Generate 1st order conservative remap weights \\ \hline
Genweights & gencon2 & Generate 2nd order conservative remap weights \\ \hline
Genweights & genlaf & Generate largest area fraction remap weights \\ \hline
Gridboxstat & gridbox<STAT> & Statistical values over grid boxes\\ \hline
Intlevel & intlevel & Linear level interpolation \\ \hline
Intlevel3d & intlevel3d & Linear level interpolation from/to 3D vertical coordinates \\ \hline
Remapeta & remapeta & Remap vertical hybrid level \\ \hline
Remap & remapbil & Bilinear interpolation \\ \hline
Remap & remapbic & Bicubic interpolation \\ \hline
Remap & remapdis & Distance-weighted average remapping \\ \hline
Remap & remapnn & Nearest neighbor remapping \\ \hline
......@@ -54,6 +42,7 @@ Remap & remaplaf & Largest area fraction remapping \\ \hline
Smooth & smooth & Smooth grid points \\ \hline
Spectral & sp2gp, gp2sp & Spectral transformation \\ \hline
Vertintap & ap2pl, ap2hl & Vertical interpolation on hybrid sigma height coordinates \\ \hline
Vertintgh & gh2hl & Vertical height interpolation \\ \hline
Vertintml & ml2pl, ml2hl & Vertical interpolation on hybrid sigma pressure coordinates \\ \hline
\end{tabular}
......@@ -70,7 +70,7 @@ prompt> cat mypartab
out_name = ta
standard_name = air_temperature
units = "K"
missing_value = 1.0e+20
valid_min = 157.1
valid_max = 336.3
/
......@@ -81,7 +81,7 @@ To apply this parameter table to a dataset use:
cdo -f nc cmorlite,mypartab,convert infile outfile
@EndVerbatim
This command renames the variable @boldtt{t} to @boldtt{ta}. The standard name of this variable is set to @boldtt{air_temperature} and
the unit is set to [@boldtt{K}] (converts the unit if necessary). The missing value will be set to @boldtt{1.0e+20}.
In addition it will be checked whether the values of the variable are in the range of @bold{157.1} to @boldtt{336.3}.
The result will be stored in NetCDF.
@EndExample
......@@ -81,7 +81,7 @@ prompt> cat mypartab
out_name = ta
standard_name = air_temperature
units = "K"
missing_value = 1.0e+20
valid_min = 157.1
valid_max = 336.3
/
......@@ -92,7 +92,7 @@ To apply this parameter table to a dataset use:
cdo setpartabn,mypartab,convert infile outfile
@EndVerbatim
This command renames the variable @bold{t} to @bold{ta}. The standard name of this variable is set to @bold{air_temperature} and
the unit is set to [@bold{K}] (converts the unit if necessary). The missing value will be set to @bold{1.0e+20}.
In addition it will be checked whether the values of the variable are in the range of @bold{157.1} to @bold{336.3}.
@EndExample
......@@ -15,6 +15,7 @@ In this section the abbreviations as in the following table are used:
\vspace{3mm}
\fbox{\parbox{15cm}{
\begin{eqnarray*}
\begin{array}{l}
\makebox[3cm][l]{\textbf{sum}} \\
......@@ -80,7 +81,13 @@ In this section the abbreviations as in the following table are used:
\sqrt{
\left ( \sum_{j=1}^{n} w_j \right )^{-1} \sum\limits_{i=1}^{n} w_i \,
\left ( x_i - \left ( \sum_{j=1}^{n} w_j \right )^{-1} \sum\limits_{j=1}^{n} w_j \, x_j \right)^2 } \\
\\
\end{eqnarray*}
}}
\fbox{\parbox{15cm}{
\begin{eqnarray*}
\begin{array}{l}
\makebox[3cm][l]{Kurtosis} \\
\makebox[3cm][l]{\textbf{kurt}} \\
......
......@@ -155,8 +155,6 @@ There are more than 700 operators available.
A detailed description of all operators can be found in the
\textbf{\htmlref{Reference Manual}{refman}} section.
\input{cdo_advanced_usage}
\subsection{Parallelized operators}
Some of the {\CDO} operators are shared memory parallelized with OpenMP.
......
......@@ -251,6 +251,7 @@ cdo_usage()
fprintf(stderr, " Options:\n");
set_text_color(stderr, BLUE);
fprintf(stderr, " -a Generate an absolute time axis\n");
fprintf(stderr, " --attribs [arbitrary/filesOnly/onlyFirst/noOutput/obase]\n");
fprintf(stderr, " -b <nbits> Set the number of bits for the output precision\n");
fprintf(stderr, " (I8/I16/I32/F32/F64 for nc1/nc2/nc4/nc4c/nc5; U8/U16/U32 for nc4/nc4c/nc5;"
" F32/F64 for grb2/srv/ext/ieg; P1 - P24 for grb1/grb2)\n");
......@@ -266,7 +267,7 @@ cdo_usage()
fprintf(stderr, " -g <grid> Set default grid name or file. Available grids: \n");
fprintf(stderr,
" F<XXX>, t<RES>, tl<RES>, global_<DXY>, r<NX>x<NY>, g<NX>x<NY>, gme<NI>, lon=<LON>/lat=<LAT>\n");
fprintf(stderr, " -h, --help Help information for the operators\n");
fprintf(stderr, " --no_history Do not append to NetCDF \"history\" global attribute\n");
fprintf(stderr, " --netcdf_hdr_pad, --hdr_pad, --header_pad <nbr>\n");
fprintf(stderr, " Pad NetCDF output header with nbr bytes\n");
......@@ -1716,18 +1717,19 @@ print_openmp_info()
}
#endif
extern "C" int (*proj_lonlat_to_lcc_func)(double, double, double, double, double, double, double, size_t, double*, double*);
extern "C" int (*proj_lcc_to_lonlat_func)(double, double, double, double, double, double, double, double, double, size_t, double*, double*);
extern "C" int (*proj_lonlat_to_stere_func)(double, double, double, double, double, size_t, double*, double*);
extern "C" int (*proj_stere_to_lonlat_func)(double, double, double, double, double, double, double, size_t, double*, double*);
static void
set_external_proj_func(void)
{
proj_lonlat_to_lcc_func = proj_lonlat_to_lcc;
proj_lcc_to_lonlat_func = proj_lcc_to_lonlat;
proj_lonlat_to_stere_func = proj_lonlat_to_stere;
proj_stere_to_lonlat_func = proj_stere_to_lonlat;
}
const char *
......
......@@ -181,9 +181,9 @@ cdo_compute_concave_overlap_areas(size_t N, search_t &search, const grid_cell &t
auto overlap_cells = search.overlap_buffer.data();
auto source_cell = search.src_grid_cells.data();
double coordinates_x[3] = { -1.0, -1.0, -1.0 };
double coordinates_y[3] = { -1.0, -1.0, -1.0 };
double coordinates_xyz[3][3] = { { -1.0, -1.0, -1.0 }, { -1.0, -1.0, -1.0 }, { -1.0, -1.0, -1.0 } };
enum yac_edge_type edge_types[3] = { GREAT_CIRCLE, GREAT_CIRCLE, GREAT_CIRCLE };
grid_cell target_partial_cell;
......
......@@ -46,7 +46,7 @@ remapDistwgtWeights(size_t numNeighbors, RemapSearch &rsearch, RemapVars &rv)
std::vector<knnWeightsType> knnWeights;
for (int i = 0; i < Threading::ompNumThreads; ++i) knnWeights.push_back(knnWeightsType(numNeighbors));
double start = Options::cdoVerbose ? cdo_get_wtime() : 0.0;
// Loop over destination grid
......@@ -197,7 +197,7 @@ intgriddis(Field &field1, Field &field2, size_t numNeighbors)
remapSearchInit(mapType, remap.search, remap.src_grid, remap.tgt_grid);
double start = Options::cdoVerbose ? cdo_get_wtime() : 0.0;
// Loop over destination grid
......
......@@ -86,35 +86,6 @@ remapWriteDataScrip(const char *interp_file, RemapMethod mapType, SubmapType sub
#ifdef HAVE_LIBNETCDF
int nc_dims2_id[2]; /* NetCDF ids for 2d array dims */
const char *map_name = "SCRIP remapping with CDO";
......@@ -221,6 +192,7 @@ remapWriteDataScrip(const char *interp_file, RemapMethod mapType, SubmapType sub
}
// Create NetCDF file for mapping and define some global attributes
int nc_file_id = -1;
nce(nc_create(interp_file, writemode, &nc_file_id));
// Map name
......@@ -263,32 +235,39 @@ remapWriteDataScrip(const char *interp_file, RemapMethod mapType, SubmapType sub
// Prepare NetCDF dimension info
// Define grid size dimensions
int nc_srcgrdsize_id = -1, nc_dstgrdsize_id = -1;
nce(nc_def_dim(nc_file_id, "src_grid_size", src_grid.size, &nc_srcgrdsize_id));
nce(nc_def_dim(nc_file_id, "dst_grid_size", tgt_grid.size, &nc_dstgrdsize_id));
// Define grid corner dimension
int nc_srcgrdcorn_id = -1, nc_dstgrdcorn_id = -1;
if (src_grid.lneed_cell_corners) nce(nc_def_dim(nc_file_id, "src_grid_corners", src_grid.num_cell_corners, &nc_srcgrdcorn_id));
if (tgt_grid.lneed_cell_corners) nce(nc_def_dim(nc_file_id, "dst_grid_corners", tgt_grid.num_cell_corners, &nc_dstgrdcorn_id));
// Define grid rank dimension
int nc_srcgrdrank_id = -1, nc_dstgrdrank_id = -1;
nce(nc_def_dim(nc_file_id, "src_grid_rank", src_grid.rank, &nc_srcgrdrank_id));
nce(nc_def_dim(nc_file_id, "dst_grid_rank", tgt_grid.rank, &nc_dstgrdrank_id));
// Define map size dimensions
int nc_numlinks_id = -1, nc_numwgts_id = -1;
nce(nc_def_dim(nc_file_id, "num_links", rv.num_links, &nc_numlinks_id));
nce(nc_def_dim(nc_file_id, "num_wgts", rv.num_wts, &nc_numwgts_id));
// Define grid dimensions
int nc_srcgrddims_id = -1, nc_dstgrddims_id = -1;
nce(nc_def_var(nc_file_id, "src_grid_dims", sizetype, 1, &nc_srcgrdrank_id, &nc_srcgrddims_id));
nce(nc_def_var(nc_file_id, "dst_grid_dims", sizetype, 1, &nc_dstgrdrank_id, &nc_dstgrddims_id));
// Define all arrays for NetCDF descriptors
// Define grid center latitude array
int nc_srcgrdcntrlat_id = -1, nc_dstgrdcntrlat_id = -1;
nce(nc_def_var(nc_file_id, "src_grid_center_lat", NC_DOUBLE, 1, &nc_srcgrdsize_id, &nc_srcgrdcntrlat_id));
nce(nc_def_var(nc_file_id, "dst_grid_center_lat", NC_DOUBLE, 1, &nc_dstgrdsize_id, &nc_dstgrdcntrlat_id));
// Define grid center longitude array
int nc_srcgrdcntrlon_id = -1, nc_dstgrdcntrlon_id = -1;
nce(nc_def_var(nc_file_id, "src_grid_center_lon", NC_DOUBLE, 1, &nc_srcgrdsize_id, &nc_srcgrdcntrlon_id));
nce(nc_def_var(nc_file_id, "dst_grid_center_lon", NC_DOUBLE, 1, &nc_dstgrdsize_id, &nc_dstgrdcntrlon_id));
......@@ -297,6 +276,7 @@ remapWriteDataScrip(const char *interp_file, RemapMethod mapType, SubmapType sub
nc_dims2_id[0] = nc_srcgrdsize_id;
nc_dims2_id[1] = nc_srcgrdcorn_id;
int nc_srcgrdcrnrlat_id = -1, nc_srcgrdcrnrlon_id = -1;
if (src_grid.lneed_cell_corners)
{
nce(nc_def_var(nc_file_id, "src_grid_corner_lat", NC_DOUBLE, 2, nc_dims2_id, &nc_srcgrdcrnrlat_id));
......@@ -306,6 +286,7 @@ remapWriteDataScrip(const char *interp_file, RemapMethod mapType, SubmapType sub
nc_dims2_id[0] = nc_dstgrdsize_id;
nc_dims2_id[1] = nc_dstgrdcorn_id;
int nc_dstgrdcrnrlat_id = -1, nc_dstgrdcrnrlon_id = -1;
if (tgt_grid.lneed_cell_corners)
{
nce(nc_def_var(nc_file_id, "dst_grid_corner_lat", NC_DOUBLE, 2, nc_dims2_id, &nc_dstgrdcrnrlat_id));
......@@ -331,14 +312,17 @@ remapWriteDataScrip(const char *interp_file, RemapMethod mapType, SubmapType sub
// Define grid mask
int nc_srcgrdimask_id = -1;
nce(nc_def_var(nc_file_id, "src_grid_imask", NC_INT, 1, &nc_srcgrdsize_id, &nc_srcgrdimask_id));
nce(nc_put_att_text(nc_file_id, nc_srcgrdimask_id, "units", 8, "unitless"));
int nc_dstgrdimask_id = -1;
nce(nc_def_var(nc_file_id, "dst_grid_imask", NC_INT, 1, &nc_dstgrdsize_id, &nc_dstgrdimask_id));
nce(nc_put_att_text(nc_file_id, nc_dstgrdimask_id, "units", 8, "unitless"));
// Define grid area arrays
int nc_srcgrdarea_id = -1, nc_dstgrdarea_id = -1;
if (lgridarea)
{
nce(nc_def_var(nc_file_id, "src_grid_area", NC_DOUBLE, 1, &nc_srcgrdsize_id, &nc_srcgrdarea_id));
......@@ -350,20 +334,24 @@ remapWriteDataScrip(const char *interp_file, RemapMethod mapType, SubmapType sub
// Define grid fraction arrays
int nc_srcgrdfrac_id = -1;
nce(nc_def_var(nc_file_id, "src_grid_frac", NC_DOUBLE, 1, &nc_srcgrdsize_id, &nc_srcgrdfrac_id));
nce(nc_put_att_text(nc_file_id, nc_srcgrdfrac_id, "units", 8, "unitless"));
int nc_dstgrdfrac_id = -1;
nce(nc_def_var(nc_file_id, "dst_grid_frac", NC_DOUBLE, 1, &nc_dstgrdsize_id, &nc_dstgrdfrac_id));
nce(nc_put_att_text(nc_file_id, nc_dstgrdfrac_id, "units", 8, "unitless"));
// Define mapping arrays
int nc_srcadd_id = -1, nc_dstadd_id = -1;
nce(nc_def_var(nc_file_id, "src_address", sizetype, 1, &nc_numlinks_id, &nc_srcadd_id));
nce(nc_def_var(nc_file_id, "dst_address", sizetype, 1, &nc_numlinks_id, &nc_dstadd_id));
nc_dims2_id[0] = nc_numlinks_id;
nc_dims2_id[1] = nc_numwgts_id;
int nc_rmpmatrix_id = -1;
nce(nc_def_var(nc_file_id, "remap_matrix", NC_DOUBLE, 2, nc_dims2_id, &nc_rmpmatrix_id));
// End definition stage
......@@ -501,33 +489,6 @@ remapReadDataScrip(const char *interp_file, int gridID1, int gridID2, RemapMetho
bool lgridarea = false;
int status;
char map_name[1024];
char normalize_opt[64]; /* character string for normalization option */
......@@ -618,16 +579,19 @@ remapReadDataScrip(const char *interp_file, int gridID1, int gridID2, RemapMetho
remapGridInit(tgt_grid);
// Read dimension information
int nc_srcgrdsize_id;
nce(nc_inq_dimid(nc_file_id, "src_grid_size", &nc_srcgrdsize_id));
nce(nc_inq_dimlen(nc_file_id, nc_srcgrdsize_id, &dimlen));
src_grid.size = dimlen;
// if (src_grid.size != gridInqSize(gridID1)) cdoAbort("Source grids have different size!");
int nc_dstgrdsize_id;
nce(nc_inq_dimid(nc_file_id, "dst_grid_size", &nc_dstgrdsize_id));
nce(nc_inq_dimlen(nc_file_id, nc_dstgrdsize_id, &dimlen));
tgt_grid.size = dimlen;
// if (tgt_grid.size != gridInqSize(gridID2)) cdoAbort("Target grids have different size!");
int nc_srcgrdcorn_id;
status = nc_inq_dimid(nc_file_id, "src_grid_corners", &nc_srcgrdcorn_id);
if (status == NC_NOERR)
{
......@@ -637,6 +601,7 @@ remapReadDataScrip(const char *interp_file, int gridID1, int gridID2, RemapMetho
src_grid.lneed_cell_corners = true;
}
int nc_dstgrdcorn_id;
status = nc_inq_dimid(nc_file_id, "dst_grid_corners", &nc_dstgrdcorn_id);