====== Python distribution based on CDAT 8.2.1 notes ======

[ [[.:index|Back to all versions]] ]

If you have questions about Python, read:
  * [[other:python:starting|Working with Python]]
  * [[other:python:jyp_steps|JYP's recommended steps for learning python]]!

===== Using the JYP version =====

If you mostly want to use the **python distribution based on CDAT 8.2.1** installed by JYP, just use the following steps and then read the [[#what_should_i_do_now|What should I do now?]] section below

  * Initialize //conda// with:

^ Server ^ tcsh ^ bash ^
| LSCE | ''source ~jypeter/.conda3_21-02_jyp.csh'' | ''source ~jypeter/.conda3_21-02_jyp.sh'' |
| ciclad | ''source ~jypmce/.conda3_jyp.csh''\\ n/a | ''source ~jypmce/.conda3_jyp.sh''\\ n/a |

  * Choose one of the [[#environments_summary|installed environments]] and activate it with: ''conda activate env_name''
    * python 2.7.x: deprecated! If you still need to use python 2.7, please use [[other:uvcdat:cdat_conda:cdat_8_1|CDAT 8.1]]
    * python 3.x: ''conda activate cdatm_py3''
  * Type ''which python'' (or the ''wp'' alias) and make sure you get something looking like ''[...]/miniconda3/envs/env_name/bin/python''
    * e.g. ''cdatm_py3'' environment at LSCE:\\ ''/home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm_py3/bin/python''

===== What should I do now? =====

==== You just want to use the installed version ====

You are ready if you have typed the ''conda activate'' command specified in the previous section!

  * You may want to add the ''source'' line to your shell configuration file, so that you don't have to remember it and type it each time you open a new terminal
  * Note: it's better to type the ''activate'' command in a terminal only when you want to use this specific version of python, rather than having the ''activate'' in your shell configuration file.
This may have some side effects in some cases

  * Have a look at the [[#extra_packages_list|Extra package list]] at the bottom of this page to see what is available in this distribution
  * If you have not done it yet, read (or at least have a look at the list of sections):
    * [[other:python:starting|Working with Python]]
    * [[other:python:jyp_steps|JYP's recommended steps for learning python]]

==== You want to replicate this installation on your own server or desktop/laptop ====

Read the next sections of this web page. There are some slight differences depending on whether you want to install this distribution:
  * just for you. This is the usual case, and if you break something you can always remove everything and start from scratch
  * for sharing a stable version with several other people

The notes at the beginning of each section will tell you if you can skip the section when you are just installing for yourself

===== What's New in CDAT 8.2.1? =====

  * [[https://github.com/CDAT/cdat/releases/tag/v8.2.1|8.2.1 announcement]] and features summary ([[https://github.com/UV-CDAT/uvcdat/releases|all versions]])
  * Full [[https://uvcdat.llnl.gov/changelog.html#8.2.1|Change log]]

Note: this particular CDAT installation at LSCE (and on other machines/servers) provides **many extra (non-CDAT) packages**. You can jump directly to the [[#extra_packages_list|Extra package list]] at the bottom of this page if you want to see what is available

===== Installation with Miniconda3 =====

==== Installing Miniconda3 ====

If //Miniconda3// is not already installed, or if you want to start over with a brand new version, read [[other:uvcdat:cdat_conda:miniconda3_install|JYP's steps for installing Miniconda3]].
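Once Miniconda3 is in place and initialized in the current shell, a quick sanity check confirms which //conda// you are about to use. This is a sketch (the paths printed on your machine will differ from the LSCE ones), using only standard ''conda'' sub-commands:

```shell
# Sanity check: is conda on the PATH, and which installation does it use?
if command -v conda >/dev/null 2>&1; then
    conda --version      # e.g. conda 4.9.2
    conda info --base    # prints the base environment prefix
else
    echo "conda not found - source the initialization file first"
fi
```

If the reported base prefix is not the one you expect (e.g. an old personal install shadowing the shared one), fix your ''PATH'' or the sourced initialization file before going further.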
==== Installing CDAT 8.2.1 ====

=== Prerequisites ===

We first check that we do have access to a //(mini)conda// installation (the //base// environment we get if we have correctly installed miniconda3), and we assume that we have write access to the conda disk hierarchy, and a few GB of available disk space

<code>
> conda env list | grep base
base   *   /home/share/unix_files/cdat/miniconda3_21-02

> df -h /home/share/
Filesystem      Size  Used Avail Use% Mounted on
prolix3:/share  917G  245G  626G  29% /home/share

> du -sh /home/share/unix_files/cdat/miniconda3_21-02
577M    /home/share/unix_files/cdat/miniconda3_21-02
</code>

We can also use ''conda env list'' and remove some obsolete environments with ''conda remove -n obsolete_environment %%--%%all'', in order to free up some space

=== Python 2.7 version ===

Python 2.x is deprecated in CDAT, starting with CDAT 8.2.1. If you still need to use python 2.7, please use [[other:uvcdat:cdat_conda:cdat_8_1|CDAT 8.1]]

=== Python 3.8 version ===

It's possible that, when using the //tcsh// shell and trying to use the ''conda create -n cdat-8.2.1_py3 lots_of_parameters_with_wildcards'' line, you will get the following error message: **conda: No match**

Apparently the //tcsh// version of conda does not handle wildcards well ([[https://gitmemory.com/issue/conda/conda/9771/608209654|CSH install/search fail with '*' variable in an argument]])... In that case, just type ''bash'' and run the installation command in a //bash// shell

The installation line below is based on information from the [[https://github.com/CDAT/cdat/wiki/install#for-linux-python|CDAT wiki page]].
See also [[https://github.com/CDAT/cdat/issues/2265|Some questions and notes about 8.2.1 installation]]

<code>
bash-4.2$ conda config --env --add channels conda-forge
bash-4.2$ conda config --set channel_priority strict
bash-4.2$ cat .condarc
channels:
  - conda-forge
  - defaults

$ conda config --describe channel_priority

$ conda create -n cdat-8.2.1_nompi_py3 -c conda-forge -c cdat/label/v8.2.1 cdat "libnetcdf=*=nompi_*" "mesalib=18.3.1" "python=3"
$ conda create -n cdat-8.2.1_py3 -c conda-forge -c cdat/label/v8.2.1 cdat "libnetcdf=*=mpi_openmpi_*" "mesalib=18.3.1" "python=3"
</code>

Resulting list of installed packages
  * ''conda list -n cdat-8.2.1_nompi_py3 > cdat_8.2.1_nompi_21-03-05.txt''
  * no mpi: {{ :other:uvcdat:cdat_conda:cdat_8.2.1_nompi_21-03-05.txt |}}
  * with mpi: {{ :other:uvcdat:cdat_conda:cdat_8.2.1_installed_packages.txt |}}

[[https://github.com/CDAT/cdat/issues/2265|Some extra installation notes]]

===== Cloning the base CDAT environment before adding specific packages for LSCE =====

You can skip this section if you are **installing CDAT just for one user (yourself)** on a Linux machine (or a Windows 10 machine with WSL). You will directly make changes and install packages in **your** main python environment

This section is about the creation of the **cdatm19** environment

Notes about [[https://wiki.lsce.ipsl.fr/pmip3/doku.php/other:python:starting#conda-based_versions_of_uv-cdat|actually using the cdatm19 conda-based python]]

Notes:
  * Why //cloning//? The initial CDAT environment is strictly the one created at PCMDI and certified by PCMDI. Rather than making changes directly in there, we keep it as it is, clone it, and make changes in the cloned environment
  * Carefully working on different python environments (possibly cloned) is safer in a multi-user environment (i.e.
you can have people use a specific environment, and make tests in another environment)
  * Cloning a full environment uses Linux hard links and requires less disk space than making a real copy

<code>
$ conda create -n cdatm19_py3 --clone cdat-8.2.1_py3

$ cd /home/share/unix_files/cdat/miniconda3_21-02/envs
$ du -sh cdat-8.2.1_py3 cdatm19_py3
2.5G    cdat-8.2.1_py3
538M    cdatm19_py3
</code>

===== Getting ready for a moving default CDAT environment =====

You can skip this section if you are **installing CDAT just for one user (yourself)** on a Linux machine (or a Windows 10 machine with WSL). You will directly make changes and install packages in **your** main python environment

This step could probably be listed at the **end**, especially in a multi-user environment! If there is already a ''cdatm_py3'' link (pointing to an older environment), make sure that the new CDAT environment is stable and working correctly before updating the ''cdatm_py3'' link

We create a **//cdatm_py3// symbolic link** in the ''envs'' directory, that has a //stable name// but can be moved to point to the latest default (and hopefully stable) CDAT environment. Most users can then just activate this //cdatm_py3// environment and always get the latest stable version.

<code>
$ cd /home/share/unix_files/cdat/miniconda3_21-02/envs
$ ln -s cdatm19_py3 cdatm_py3

$ conda env list
# conda environments:
#
base           *  /home/share/unix_files/cdat/miniconda3_21-02
cdat-8.2.1_py3    /home/share/unix_files/cdat/miniconda3_21-02/envs/cdat-8.2.1_py3
cdatm19_py3       /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm19_py3
cdatm_py3         /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm_py3

$ ls -l /home/share/unix_files/cdat/miniconda3_21-02/envs/
drwxr-xr-x [...] cdat-8.2.1_py3/
drwxr-xr-x [...] cdatm19_py3/
lrwxrwxrwx [...] cdatm_py3 -> cdatm19_py3/
</code>

===== Customizing CDAT for LSCE =====

==== A quick test of cdms2 and vcs ====

You can use the quick test shown in [[https://github.com/CDAT/cdat/wiki/install#to-test-your-cdat-installation|To Test Your CDAT Installation]], and check if you get the expected png file.

You can also check the [[other:uvcdat:cdat_conda:cdat_8_1#testing_vcs|Testing vcs]] section of the LSCE installation of CDAT 8.1

==== Downloading cdms2/vcs test data ====

You should download the test data (174M of data...) and use it in the example scripts that you want to distribute, and in the scripts you write for reporting the errors you find (if any...). The downloaded data files will be available in ''vcs.sample_data''

<code>
$ conda activate cdatm19_py3
(cdatm19_py3) $ python -c 'import vcs; vcs.download_sample_data_files(); print("\nFinished downloading sample data to " + vcs.sample_data)'
[...]
Finished downloading sample data to /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm19_py3/share/cdat/sample_data

(cdatm19_py3) $ du -sh /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm19_py3/share/cdat/sample_data
174M    /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm19_py3/share/cdat/sample_data

$ python
>>> import os, vcs
>>> vcs.sample_data
'/home/jypeter/miniconda3_21-02/envs/cdat-8.2.1_py3/share/cdat/sample_data'
>>> os.listdir(vcs.sample_data)
['BlueMarble.ppm', 'clt.nc', 'geo.1deg.ctl', 'geo.1deg.gmp', 'geo.1deg.grb', ...]
</code>

==== Packages that have no dependency problems ====

After [[#cloning_the_base_cdat_environment_before_adding_specific_packages_for_lsce|cloning]], we are ready to install some extra (e.g.
not directly related to CDAT) packages that may be useful to LSCE users

  * We first install together as many packages as possible that don't require channels other than //conda-forge//, and that don't request a significant downgrade of what is already installed
  * We use the //dry-run// option of conda in order to check the version (theoretically the latest) and origin (theoretically //conda-forge//) of the packages that would be installed. Or maybe the result will be that there are too many conflicts and the installation cannot proceed with the requested combination of packages
    * conda may take a very long time to report that there are too many conflicts, so we have started using [[https://github.com/mamba-org/mamba|mamba]], which uses different solving libraries and is much (**much!**) faster than conda
    * if the dry-run reports too many conflicts or requests too many downgrades, we try to remove some (not really important) packages and check if it works better
    * the last resort is to create a new environment with a smaller set of packages if a user really needs a conflicting package
  * We later install individual extra packages with ''conda install'', ''mamba install'' or ''pip install''

=== Pre-installation check with the dry-run option ===

<code>
/usr/bin/time mamba install -n cdatm19_nompi_py3 --dry-run -c conda-forge basemap basemap-data basemap-data-hires cartopy cmocean cmor ferret_datasets gdal geopandas glances gsw iris joblib netcdf4 palettable pandas pillow pyferret rpy2 scikit-image scikit-learn seaborn seawater shapely spyder statsmodels windspharm wrf-python xarray xlsxwriter

[... lots of information about what would be done, followed by some information about how long it took ...]

Summary:

  Install: 167 packages
  Downgrade: 3 packages

  Total download: 474 MB

DryRunExit: Dry run. Exiting.
8.31user 0.51system 0:11.45elapsed 77%CPU
</code>

=== Actual installation ===

<code>
/usr/bin/time mamba install -n cdatm19_nompi_py3 -c conda-forge basemap basemap-data basemap-data-hires cartopy cmocean cmor ferret_datasets gdal geopandas glances gsw iris joblib netcdf4 palettable pandas pillow pyferret rpy2 scikit-image scikit-learn seaborn seawater shapely spyder statsmodels windspharm wrf-python xarray xlsxwriter
</code>

Resulting list of installed packages
  * ''conda list -n cdatm19_nompi_py3 > cdatm19_nompi_py3_21-03-06.txt''
  * no mpi: {{ :other:uvcdat:cdat_conda:cdatm19_nompi_py3_21-03-06.txt |}}

=== Extra packages installed with pip ===

  * [[https://earthsystemcog.org/projects/wip/CMIP6DataRequest|dreqPy]]: CMIP6 Data Request Python API
    * ''conda activate cdatm19_py3''\\ ''pip install dreqPy''
    * Update with: ''pip install %%--%%upgrade dreqPy''
    * Get version number with:\\ ''drq -v''\\ ''dreqPy version 01.00.29 [Version 01.00.29]''
  * ''ipython_ferretmagic'': more details in the [[#extra_packages_list|Extra packages list section]]

=== Packages with no dependency problems that were added (or updated) later ===

  * [[https://cmor.llnl.gov/|CMOR]]: CMOR (//Climate Model Output Rewriter//) is used to produce CF-compliant netCDF files
    * ''conda install -n cdatm19_py3 -c conda-forge cmor''
  * [[http://scitools.org.uk/cartopy/|cartopy]]: a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses
    * ''conda install -n cdatm19_py3 -c conda-forge cartopy''
  * [[https://joblib.readthedocs.io/en/latest/|joblib]]: running Python functions as pipeline jobs
    * ''conda install -n cdatm19_py3 -c conda-forge joblib''
  * [[https://climaf.readthedocs.io/en/latest/|CliMAF]]: a Climate Model Assessment Framework
    * [[other:uvcdat:cdat_conda:climaf_install|Installation notes]]

==== TODO ====

//Add here packages that would be useful and have not been installed yet, or have some problems that prevent their installation//

  * [[http://wrf-python.readthedocs.io/en/latest/|wrf-python]]: a collection of diagnostic and interpolation routines for use with output from the Weather Research and Forecasting (WRF-ARW) Model
    * ''conda install -n cdatm19_py3 -c conda-forge wrf-python''
  * [[https://glances.readthedocs.io/en/stable/index.html|glances]]: a cross-platform monitoring tool (similar to ''top'')
    * ''conda install -n cdatm19_py3 -c conda-forge glances''
    * [[https://github.com/nicolargo/glances|glances@github]]

==== Other packages ====

There is no guarantee that the packages listed below will work correctly, because it was required to bypass the compatibility checks in order to install them...

  * NO such packages now!

==== Updating some packages ====

Some packages change more often than others, and can easily be updated the following way:

  * [[https://earthsystemcog.org/projects/wip/CMIP6DataRequest|dreqPy]]
    * Update with: ''pip install %%--%%upgrade dreqPy''
    * Get version number with: ''drq -v''

==== Cleaning up things ====

Some packages may have files that can only be read by the person who installed CDAT and the LSCE extensions (e.g. [[https://github.com/PCMDI/pcmdi_metrics/issues/496|pcmdi-metrics]] in 2.8.0 and [[https://github.com/UV-CDAT/cdp/issues/21|cdp]] in 2.10)

We check if some of the installed files are missing read access for the //group// or //other//, and we manually change the permissions

<code>
> find /home/share/unix_files/cdat/miniconda3/envs \! -perm /g+r,o+r -ls
# Everything OK!
</code>
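If the ''find'' check does report unreadable files, adding the missing read bits is a one-liner. The sketch below demonstrates it on a scratch directory created with ''mktemp'' (substitute the real ''envs'' path on your system); ''chmod g+r,o+r'' only adds the two read bits and leaves everything else untouched:

```shell
# Sketch: restore group/other read permission under a conda envs tree.
# Demonstrated on a scratch directory; replace SCRATCH with the real path,
# e.g. /home/share/unix_files/cdat/miniconda3_21-02/envs
SCRATCH=$(mktemp -d)
touch "$SCRATCH/private.txt"
chmod 600 "$SCRATCH/private.txt"        # owner-only, like the problematic files
find "$SCRATCH" -type f \! -perm /g+r,o+r -exec chmod g+r,o+r {} +
ls -l "$SCRATCH/private.txt"            # now -rw-r--r--
rm -rf "$SCRATCH"
```

Note that ''-perm /g+r,o+r'' matches files where //any// of the listed bits is set, so the negated test selects exactly the files readable by nobody but their owner.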
===== Extra packages list =====

FIXME: need to clean the list below, and move the packages to the ordered list further down

  * [[http://unidata.github.io/netcdf4-python/|netcdf4]]: a Python interface to the netCDF C library
  * [[https://github.com/PCMDI/pcmdi_metrics|PCMDI metrics package]] (PMP): objectively compare results from climate models with observations using well-established statistical tests
  * [[https://xlsxwriter.readthedocs.io/|XlsxWriter]]: a Python module for creating Excel XLSX files
    * Note: this is a dependency of ''dreqPy''
  * [[https://earthsystemcog.org/projects/wip/CMIP6DataRequest|dreqPy]]: CMIP6 Data Request Python API
  * [[https://cmor.llnl.gov/|CMOR]]: CMOR (//Climate Model Output Rewriter//) is used to produce CF-compliant netCDF files
  * [[http://toblerity.org/shapely/project.html|shapely]]: a Python wrapper for GEOS for algebraic manipulation of geometry (manipulation and analysis of geometric objects in the Cartesian plane)
  * [[https://ajdawson.github.io/windspharm/latest/|windspharm]]: spherical harmonic wind analysis in Python
  * [[https://climaf.readthedocs.io/en/latest/|CliMAF]]: a Climate Model Assessment Framework

The extra packages below are (more or less) listed in package name alphabetical order. These are the extra packages that we explicitly installed, but there are many more packages installed (e.g. {{ :other:uvcdat:cdat_conda:cdatm19_nompi_py3_21-03-06.txt |}})! Initialize the environment and type ''conda list'' if you need an up-to-date list, including the packages' versions and where they came from (conda-forge for most packages)

  * [[https://github.com/AutoViML/AutoViz|AutoViz]]: the One-Line Automatic Data Visualization Library. Automatically visualize any dataset, of any size, with a single line of code
  * [[https://matplotlib.org/basemap/|basemap]]: a library for plotting 2D data on maps in Python
  * [[missing|basemap-data]] and [[https://github.com/conda-forge/basemap-data-hires-feedstock|basemap-data-hires]]: (high resolution) data for ''basemap''
  * [[http://scitools.org.uk/cartopy/|cartopy]]: a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses
    * see also ''iris''
  * [[https://github.com/Try2Code/cdo-bindings|python-cdo]]: Python scripting interface of [[https://code.mpimet.mpg.de/projects/cdo/wiki/Cdo#Documentation|cdo]]
    * ''python-cdo'' will install the ''cdo'' package (providing the ''cdo'' executable) as a dependency
    * see also [[https://code.mpimet.mpg.de/projects/cdo/wiki/Cdo%7Brbpy%7D|Using CDO from python or ruby]]
  * [[https://cds.climate.copernicus.eu/api-how-to|cdsapi]]: the Climate Data Store (CDS) Application Program Interface (API) is a service providing programmatic access to CDS data
    * CDS = Copernicus [[https://cds.climate.copernicus.eu/|Climate Data Store]]
    * Example: [[https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-pressure-levels?tab=form|ERA5 hourly data on pressure levels from 1959 to present]]
  * [[https://github.com/ecmwf/cfgrib|cfgrib]]: Python interface to map GRIB files to the Unidata Common Data Model v4 following the CF Conventions
    * see also ''eccodes'' and ''pygrib''
  * [[https://unidata.github.io/cftime/|cftime]]: Python library for decoding time units and variable values in a netCDF file conforming to the Climate and Forecasting (CF) netCDF conventions
    * Used with ''xarray''
  * [[https://cf-xarray.readthedocs.io/|cf_xarray]]: provides an accessor (''DataArray.cf'' or ''Dataset.cf'') that allows you to interpret //Climate and Forecast// metadata convention attributes present on ''xarray'' objects
    * Used with ''xarray''
  * [[https://clustergram.readthedocs.io/|clustergram]]: visualization and diagnostics for cluster analysis
  * [[https://www.fabiocrameri.ch/colourmaps/|cmcrameri]]: Crameri's Scientific colour maps ''[color]''
    * the colormaps are also available in [[https://jiffyclub.github.io/palettable/scientific/|palettable.scientific]]
  * [[http://matplotlib.org/cmocean/|cmocean]]: beautiful colormaps for oceanography ''[color]''
  * [[https://dash.plotly.com/|dash]] and [[https://github.com/plotly/jupyter-dash|jupyter-dash]]: the original low-code framework for rapidly building data apps in Python, R, Julia, and F#
    * see also ''plotly''
  * [[https://github.com/man-group/dtale|D-Tale]]: an easy way to view & analyze Pandas data structures. It integrates seamlessly with ipython notebooks & python/ipython terminals
    * Install with: ''conda install dtale -c conda-forge''
  * [[https://dynamictimewarping.github.io/|python-dtw]]: implementation of Dynamic Time Warping-type (DTW) algorithms
    * Note: older installations provided [[https://github.com/pierre-rouanet/dtw|dtw]]: DTW (Dynamic Time Warping) python module
  * [[https://confluence.ecmwf.int/display/ECC/|ecCodes]]: a package developed by ECMWF which provides an application programming interface and a set of tools for decoding and encoding GRIB and BUFR messages
    * need to install ''eccodes'' and ''python-eccodes'' ([[https://github.com/ecmwf/eccodes-python/issues/56|details]])
    * see also ''cfgrib'' and ''pygrib''
  * [[https://eigen.tuxfamily.org/|eigen]]: a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms
  * [[https://ajdawson.github.io/eofs/|eofs]]: a Python package for EOF analysis of spatial-temporal data
  * [[https://docs.esmvaltool.org/en/latest/|ESMValTool]]: Earth System Model Evaluation Tool
    * should install ''esmvaltool-python'' in order to reduce the size/number of dependencies, after checking that it does not require downgrading important packages
  * [[https://flox.readthedocs.io/|flox]]: fast & furious //GroupBy// reductions for dask.array
    * See also
      * [[https://xarray.dev/blog/flox|flox: Faster GroupBy reductions with Xarray]]
      * [[https://flox.readthedocs.io/en/latest/implementation.html|Parallel Algorithms]]
  * [[http://ferret.pmel.noaa.gov/Ferret/documentation/pyferret|pyferret]] and ''ferret_datasets'': Ferret encapsulated in Python
    * [[https://github.com/PBrockmann/ipython_ferretmagic|ipython_ferretmagic]]: IPython notebook extension for ferret
      * Install with: ''pip install ferretmagic''
  * OSGeo/[[http://www.gdal.org/|gdal]]: Geospatial Data Abstraction Library. GDAL is a translator library for raster and vector geospatial data formats
    * [[https://pcjericks.github.io/py-gdalogr-cookbook/|Python GDAL/OGR Cookbook]]
  * [[https://geopy.readthedocs.io/|GeoPy]]: a Python client for several popular geocoding web services
    * ''GeoPy'' makes it easy for Python developers to locate the coordinates of addresses, cities, countries, and landmarks across the globe using third-party geocoders and other data sources
  * [[https://github.com/piskvorky/gensim?tab=readme-ov-file|gensim]]: a Python library for topic modelling, document indexing and similarity retrieval with large corpora
    * See also //scikit-learn//
  * [[https://github.com/TEOS-10/GSW-python|gsw]]: Python implementation of the Thermodynamic Equation Of Seawater
    * see also //seawater//
  * [[https://noaa-orr-erd.github.io/gridded/index.html|gridded]]: a single API for accessing / working with gridded model results on multiple grid types
    * Supports the [[http://cfconventions.org/|CF]], [[https://ugrid-conventions.github.io/ugrid-conventions/|UGRID]] and [[http://sgrid.github.io/sgrid/|SGRID]] conventions
  * [[https://icclim.readthedocs.io/|icclim]]: icclim (Index Calculation for CLIMate) is a Python library to compute climate indices
  * [[https://intake.readthedocs.io/|intake]]: a lightweight package for finding, investigating, loading and disseminating data
    * [[https://intake-esm.readthedocs.io/|intake-esm]]: data cataloging utility built on top of intake, pandas, and xarray
  * [[https://ipyleaflet.readthedocs.io/en/latest/|ipyleaflet]]: interactive maps in the Jupyter notebook
  * [[https://ipywidgets.readthedocs.io/|ipywidgets]]: ipywidgets, also known as jupyter-widgets or simply widgets, are interactive HTML widgets for Jupyter notebooks and the IPython kernel
  * [[https://scitools-iris.readthedocs.io/en/stable/|iris]]: a powerful, format-agnostic, community-driven Python package for analysing and visualising Earth science data
    * see also ''cartopy''
    * install sample data with ''conda install -c conda-forge iris-sample-data''
  * [[https://joblib.readthedocs.io/en/latest/|joblib]]: running Python functions as pipeline jobs
  * [[https://jupyterlab.readthedocs.io/|jupyterlab]]: the next-generation web-based user interface for Project Jupyter
  * [[https://github.com/plotly/Kaleido|python-kaleido]]: a cross-platform library for generating static images (e.g. png, svg, pdf, etc.) for web-based visualization libraries
  * [[https://unidata.github.io/MetPy/latest/|MetPy]]: a collection of tools in Python for reading, visualizing, and performing calculations with weather data
  * [[https://mpltern.readthedocs.io/|mpltern]]: a Python plotting library based on Matplotlib specifically designed for ternary plots
  * [[https://nc-time-axis.readthedocs.io/en/stable/|nc-time-axis]]: a package that enables making plots in matplotlib with axes made up of ''cftime.datetime'' dates with any calendar type
  * [[https://gitlab.com/remikz/nccmp|nccmp]]: compare two NetCDF files bitwise, semantically or with a user defined tolerance (absolute or relative percentage)
    * can probably also be done with [[https://code.mpimet.mpg.de/projects/cdo/embedded/index.html#x1-580002.1.3|cdo -v diffn file1.nc file2.nc]]
  * [[https://opencv.org/|OpenCV]]: OpenCV (Open Source Computer Vision Library) is an open-source library that includes several hundred computer vision algorithms. See also [[https://www.geeksforgeeks.org/opencv-python-tutorial/|OpenCV Python Tutorial]] and **scikit-image**
  * [[https://openpyxl.readthedocs.io/|openpyxl]]: a Python library to read/write Excel 2010 ''xlsx''/''xlsm'' files
  * [[https://jiffyclub.github.io/palettable/|Palettable]]: color palettes for Python ''[color]''
  * [[http://pandas.pydata.org/|pandas]]: Python Data Analysis Library
    * [[https://geopandas.org/|geopandas]]: an open source project to make working with geospatial data in python easier
  * [[https://peakutils.readthedocs.io/|PeakUtils]]: utilities related to the detection of peaks on 1D data
  * [[https://python-pillow.org/|pillow]]: the friendly PIL (//Python Imaging Library//) fork
  * [[https://plotly.com/python/|plotly]]: Plotly's Python graphing library (sometimes referred to as //plotly.py//) makes interactive, publication-quality graphs
    * see also ''python-kaleido'' and ''dash''
  * [[https://www.fatiando.org/pooch/|pooch]]: a friend to fetch your data files (makes it easy to download a file, without messing with ''requests'' and ''urllib'')
  * [[https://proplot.readthedocs.io/en/latest/|proplot]]: a lightweight **matplotlib wrapper** for making beautiful, publication-quality graphics
  * [[https://psyplot.github.io/|psyplot]]: interactive data visualization from Python and GUIs
  * [[https://github.com/jswhit/pygrib|pygrib]]: high-level interface to the ECMWF ecCodes C library for reading GRIB files
    * see also ''eccodes'' and ''cfgrib''
  * [[https://pyleoclim-util.readthedocs.io/|pyleoclim]]: a Python package designed for the analysis of paleoclimate data
    * **Wait** till it can be installed with ''conda'' ([[https://github.com/LinkedEarth/Pyleoclim_util/discussions/205|Why I'm not installing Pyleoclim yet]])
  * [[https://requests.readthedocs.io/|requests]]: an elegant and simple HTTP library for Python, built for human beings
    * See also ''pooch''
  * [[https://rasterio.readthedocs.io/|rasterio]]: access to geospatial raster data
    * [[https://corteva.github.io/rioxarray/|rioxarray]]: ''rasterio'' xarray extension
  * [[https://rpy2.github.io/|rpy2]]: an interface to R running embedded in a Python process
  * [[http://scikit-image.org/|scikit-image]]: a collection of algorithms for image processing in Python
  * [[https://scikit-learn.org/|scikit-learn]]: an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection and evaluation, and many other utilities
  * [[https://seaborn.pydata.org/|seaborn]]: statistical data visualization
  * [[http://pythonhosted.org/seawater/|seawater]]: Python re-write of the CSIRO seawater toolbox
    * see also ''gsw''
  * [[https://www.statsmodels.org/|statsmodels]]: a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests and statistical data exploration
  * [[https://streamlit.io/|streamlit]]: Streamlit turns data scripts into shareable web apps in minutes
  * [[https://github.com/SatAgro/suntime|suntime]]: simple sunset and sunrise time calculation python library
    * **Warning**: not available in conda, use ''pip install suntime''
  * [[https://github.com/fbdesignpro/sweetviz|Sweetviz]]: a pandas-based Python library that generates beautiful, high-density visualizations to kickstart EDA (Exploratory Data Analysis) with just two lines of code
  * [[https://www.tensorflow.org/|tensorflow-mkl]]: an end-to-end open source machine learning platform
    * [[https://anaconda.org/anaconda/tensorflow-mkl|tensorflow-mkl]] will install the **CPU**-based (**not GPU**) version
  * [[https://github.com/tqdm|tqdm]]: make your loops show a smart progress meter
  * [[https://uxarray.readthedocs.io/|uxarray]]: provide xarray styled functionality for unstructured grid datasets following the [[https://ugrid-conventions.github.io/ugrid-conventions/|UGRID Conventions]]
  * [[https://docs.xarray.dev/en/stable/|xarray]]: Xarray makes working with labelled multi-dimensional arrays in Python simple, efficient, and fun!
    * See also: ''flox'', ''xcdat'', ''rioxarray'', ...
  * [[https://xcdat.readthedocs.io/|xcdat]]: Xarray extended with Climate Data Analysis Tools
  * [[https://xclim.readthedocs.io/|xclim]]: an operational Python library for climate services, providing numerous climate-related indicator tools with an extensible framework for constructing custom climate indicators, statistical downscaling and bias adjustment of climate model simulations, as well as climate model ensemble analysis tools
  * [[https://xesmf.readthedocs.io/|xESMF]]: Universal Regridder for Geospatial Data
  * [[https://xgrads.readthedocs.io/|xgrads]]: parse and read binary datasets described by a ''.ctl'' file commonly used by [[http://cola.gmu.edu/grads/|GrADS]] or [[http://www.opengrads.org/|openGrADS]]
  * [[https://xlrd.readthedocs.io/|xlrd]]: a library for reading data and formatting information from Excel files in the historical .xls format
  * [[https://xoa.readthedocs.io/en/latest/|xoa]]: xarray-based ocean analysis library
    * ''xoa'' is the successor of [[http://www.ifremer.fr/vacumm/|vacumm]] (vacumm does **not** support Python3)
  * [[https://xskillscore.readthedocs.io/|xskillscore]]: metrics for verifying forecasts
  * [[https://docs.profiling.ydata.ai/|ydata-profiling]]: a leading package for data profiling, that automates and standardizes the generation of detailed reports, complete with statistics and visualizations

==== Removed packages ====

  * NO removed packages!

==== Packages NOT supported in Python3 ====

  * [[https://github.com/stefraynaud/spanlib|spanlib]]: Spectral Analysis Library
  * [[http://www.ifremer.fr/vacumm/|vacumm]]: Validation, Analysis, Comparison - Utilities written in Python to validate and analyze Multi-Model outputs, and compare them to observations
    * Check the ''xoa'' replacement package

===== Environments summary =====

After following the steps above, we get the following environments. Use the ''conda env list'' command (same result as ''conda info %%--%%envs'') to get the up-to-date list of available environments.
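In scripts it can also be handy to check //which// environment is currently active before doing anything: ''conda activate'' exports the ''CONDA_DEFAULT_ENV'' and ''CONDA_PREFIX'' variables, so a sketch like the following (assuming a standard conda setup) works without calling conda at all:

```shell
# Sketch: report which conda environment is currently active, if any.
# CONDA_DEFAULT_ENV and CONDA_PREFIX are set by 'conda activate'.
if [ -n "${CONDA_DEFAULT_ENV:-}" ]; then
    echo "active env: $CONDA_DEFAULT_ENV ($CONDA_PREFIX)"
else
    echo "no conda environment is active"
fi
```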
Use ''conda list -n env_name'' to get a detailed list of which packages/modules are installed, or check the [[other:uvcdat:conda_notes#installation_history|conda Installation history]] section to get more details

^ Environment\\ name ^ Server ^ Summary ^
| cdat-8.2.1_nompi_py3 | LSCE | CDAT 8.2.1 & Python 3.8 (no MPI) |
| cdat-8.2.1_py3 | LSCE\\ ciclad | CDAT 8.2.1 & Python 3.8 (OpenMPI) |
| cdatm19_py3\\ or //cdatm_py3// | LSCE\\ ciclad | CDAT 8.2.1 & Python 3.8 //JYP version// |

/* standard page footer */
\\
\\
\\
----
[ [[pmip3:|PMIP3 Wiki Home]] ] - [ [[pmip3:wiki_help|Help!]] ] - [ [[wiki:syntax|Wiki syntax]] ]