====== Python distribution based on CDAT 8.2.1 notes ======
  
[ [[.:index|Back to all versions]] ]
===== Using JYP version =====
  
If you mostly want to use the **python distribution based on CDAT 8.2.1** installed by JYP, just use the following steps and then read the [[#what_should_i_do_now|What should I do now?]] section below
  
  * Initialize //conda// with:
    * python 3.x: ''conda activate cdatm_py3''
  
  * Type ''which python'' (or the ''wp'' alias) and make sure you get something looking like
    * ''[...]/miniconda3<possibly_some_version>/envs/<some_env_name>/bin/python''
    * e.g. ''cdatm_py3'' environment at LSCE:\\ ''/home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm_py3/bin/python''
  
  
==== You want to replicate this installation on your own server or desktop/laptop ====
  
Read the next sections of this web page.

There are some slight differences depending on whether you want to install this distribution:
  * just for you. This is the usual case; if you break something you can always remove everything and start from scratch
  * for sharing a stable version with several other people

The notes at the beginning of each section will tell you whether you can skip that section if you are just installing for yourself.
===== What's New in CDAT 8.2.1? =====
  
  * Full [[https://uvcdat.llnl.gov/changelog.html#8.2.1|Change log]]
  
Note: this particular CDAT installation at LSCE (and on other machines/servers) provides **many extra (non-CDAT) packages**. You can jump directly to the [[#extra_packages_list|Extra package list]] at the bottom of this page if you want to see what is available.
  
===== Installation with Miniconda3 =====
  
==== Installing Miniconda3 ====

If //Miniconda3// is not already installed, or if you want to start over with a brand new version, read the [[other:uvcdat:cdat_conda:miniconda3_install|JYP steps for installing Miniconda3]].
  
==== Installing CDAT 8.2.1 ====
The installation line below is based on information from the [[https://github.com/CDAT/cdat/wiki/install#for-linux-python|CDAT wiki page]]. See also [[https://github.com/CDAT/cdat/issues/2265|Some questions and notes about 8.2.1 installation]]
  
<code>
bash-4.2$ conda config --env --add channels conda-forge
bash-4.2$ conda config --set channel_priority strict
bash-4.2$ cat .condarc
channels:
  - conda-forge
  - defaults
$ conda config --describe channel_priority

$ conda create -n cdat-8.2.1_nompi_py3 -c conda-forge -c cdat/label/v8.2.1 cdat "libnetcdf=*=nompi_*" "mesalib=18.3.1" "python=3"</code>
  
<code>$ conda create -n cdat-8.2.1_py3 -c conda-forge -c cdat/label/v8.2.1 cdat "libnetcdf=*=mpi_openmpi_*" "mesalib=18.3.1" "python=3"</code>
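
As a side note, the third field in a spec such as ''"libnetcdf=*=mpi_openmpi_*"'' is a glob over the package //build string// (the spec format is ''name=version=build''). A small illustration of the matching logic, using Python's ''fnmatch'' as an approximation of conda's matcher; the build strings below are made-up examples:

```python
from fnmatch import fnmatch

# The build-string field of a conda match spec is matched with glob
# patterns; fnmatch approximates that behavior for illustration.
builds = ["nompi_h56d31a8_101", "mpi_openmpi_h0b8d4a7_101"]
for build in builds:
    print(build, "matches nompi_*:", fnmatch(build, "nompi_*"))
# nompi_h56d31a8_101 matches nompi_*: True
# mpi_openmpi_h0b8d4a7_101 matches nompi_*: False
```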
  
Resulting list of installed packages:
  * ''conda list -n cdat-8.2.1_nompi_py3 > cdat_8.2.1_nompi_21-03-05.txt''
  * no mpi: {{ :other:uvcdat:cdat_conda:cdat_8.2.1_nompi_21-03-05.txt |}}
  * with mpi: {{ :other:uvcdat:cdat_conda:cdat_8.2.1_installed_packages.txt |}}
  
[[https://github.com/CDAT/cdat/issues/2265|Some extra installation notes]]
===== Cloning the base CDAT environment before adding specific packages for LSCE =====
  
<WRAP center round alert 60%>
  
Notes:
  * Why //cloning//? The initial CDAT environment is strictly the one created at PCMDI and certified by PCMDI. Rather than making changes directly in there, we keep it as it is, clone it, and make changes in the cloned environment
  * Carefully working on different python environments (possibly cloned) is safer in a multi-user environment (i.e. you can have people use a specific environment, and make tests in another environment)
  * Cloning a full environment uses Linux hard links and requires less disk space than making a real copy
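
The disk-space point about hard links can be checked with plain Python on throwaway files (a sketch, not conda itself): two hard-linked names share one inode, so the file contents are stored only once.

```python
import os
import tempfile

# Two hard links to the same file share one inode: the "copy" costs only
# a directory entry, not a second copy of the data (conda clones rely on
# this when source and target are on the same filesystem).
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "package_file")
    with open(src, "w") as f:
        f.write("x" * 1024)
    clone = os.path.join(tmp, "cloned_file")
    os.link(src, clone)                                  # hard link, not a copy
    print(os.stat(src).st_ino == os.stat(clone).st_ino)  # True (same inode)
    print(os.stat(src).st_nlink)                         # 2 (two names, one file)
```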
  
  
  
===== Getting ready for a moving default CDAT environment =====
  
<WRAP center round alert 60%>
This step could probably be listed at the **end**, especially in a multi-user environment!
  
If there is already a ''cdatm_py3'' link (pointing to an older environment), make sure that the new CDAT environment is stable and working correctly before updating the ''cdatm_py3'' link.
</WRAP>
  
We create a **//cdatm_py3// symbolic link** in the ''envs'' directory, which has a //stable name// but can be moved to point to the latest default (and hopefully stable) CDAT environment. That way, most users can just activate this //cdatm_py3// environment and always get the latest stable version.
  
<code>$ cd /home/share/unix_files/cdat/miniconda3_21-02/envs
$ ln -s cdatm19_py3 cdatm_py3

$ conda env list
# conda environments:
#
base                  *  /home/share/unix_files/cdat/miniconda3_21-02
cdat-8.2.1_py3           /home/share/unix_files/cdat/miniconda3_21-02/envs/cdat-8.2.1_py3
cdatm19_py3              /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm19_py3
cdatm_py3                /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm_py3

$ ls -l /home/share/unix_files/cdat/miniconda3_21-02/envs/
drwxr-xr-x [...] cdat-8.2.1_py3/
drwxr-xr-x [...] cdatm19_py3/
lrwxrwxrwx [...] cdatm_py3 -> cdatm19_py3/
</code>
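
The way the stable name gets re-pointed to a newer environment can be sketched with Python's standard library (directory names are illustrative; ''cdatm20_py3'' is a hypothetical future environment):

```python
import os
import tempfile

# Sketch of re-pointing a "stable name" symlink to a newer environment.
# cdatm20_py3 is a hypothetical future environment name, used here only
# to show the link update.
with tempfile.TemporaryDirectory() as envs:
    os.mkdir(os.path.join(envs, "cdatm19_py3"))
    os.mkdir(os.path.join(envs, "cdatm20_py3"))
    link = os.path.join(envs, "cdatm_py3")

    os.symlink("cdatm19_py3", link)        # initial stable name
    print(os.readlink(link))               # cdatm19_py3

    # Later, once the new environment has been tested and validated:
    os.remove(link)                        # drop the old link...
    os.symlink("cdatm20_py3", link)        # ...and re-point the stable name
    print(os.readlink(link))               # cdatm20_py3
```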

===== Customizing CDAT for LSCE =====
  
==== A quick test of cdms2 and vcs ====
==== Downloading cdms2/vcs test data ====
  
You should download the test data (174M of data...) and use it in the example scripts that you want to distribute, and in the scripts you write for reporting any errors you find. The path to the downloaded data files is available in ''vcs.sample_data''
  
<code>$ conda activate cdatm19_py3
(cdatm19_py3) $ du -sh /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm19_py3/share/cdat/sample_data
174M    /home/share/unix_files/cdat/miniconda3_21-02/envs/cdatm19_py3/share/cdat/sample_data

$ python
>>> import os, vcs
>>> vcs.sample_data
'/home/jypeter/miniconda3_21-02/envs/cdat-8.2.1_py3/share/cdat/sample_data'

>>> os.listdir(vcs.sample_data)
['BlueMarble.ppm', 'clt.nc', 'geo.1deg.ctl', 'geo.1deg.gmp', 'geo.1deg.grb', ...]</code>
  
  
==== Packages that have no dependency problems ====
  
After [[#cloning_the_base_cdat_environment_before_adding_specific_packages_for_lsce|cloning]], we are ready to install some extra (i.e. not directly related to CDAT) packages that may be useful to LSCE users.
  
  * We first install together as many packages as possible that don't require channels other than //conda-forge//, and that don't request a significant downgrade of what is already installed
    * We use the //dry-run// option of conda to check the version (theoretically the latest) and origin (theoretically //conda-forge//) of the packages that would be installed. The result may also be that there are too many conflicts and the installation cannot proceed with the requested package combination
    * conda may take a very long time to report that there are too many conflicts, so we have started using [[https://github.com/mamba-org/mamba|mamba]], which uses different solving libraries and is much (**much!**) faster than conda
    * if the dry-run reports too many conflicts or requests too many downgrades, we try to remove some (not really important) packages and check if it works better
      * the last resort is to create a new environment with a smaller set of packages if a user really needs a conflicting package
  * We later install individual extra packages with ''conda install'', ''mamba install'' or ''pip install''
  
=== Pre-installation check with the dry-run option ===

<code>/usr/bin/time mamba install -n cdatm19_nompi_py3 --dry-run -c conda-forge basemap basemap-data-hires cartopy cmocean cmor ferret_datasets gdal geopandas glances gsw iris joblib netcdf4 palettable pandas pillow pyferret rpy2 scikit-image scikit-learn seaborn seawater shapely spyder statsmodels windspharm wrf-python xarray xlsxwriter

[... lots of information about what would be done, followed by some information about how long it took ...]

  Summary:

  Install: 167 packages
  Downgrade: 3 packages

  Total download: 474 MB

DryRunExit: Dry run. Exiting.

8.31user 0.51system 0:11.45elapsed 77%CPU</code>

=== Actual installation ===

<code>/usr/bin/time mamba install -n cdatm19_nompi_py3 -c conda-forge basemap basemap-data-hires cartopy cmocean cmor ferret_datasets gdal geopandas glances gsw iris joblib netcdf4 palettable pandas pillow pyferret rpy2 scikit-image scikit-learn seaborn seawater shapely spyder statsmodels windspharm wrf-python xarray xlsxwriter</code>

Resulting list of installed packages:
  * ''conda list -n cdatm19_nompi_py3 > cdatm19_nompi_py3_21-03-06.txt''
  * no mpi: {{ :other:uvcdat:cdat_conda:cdatm19_nompi_py3_21-03-06.txt |}}
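
A quick way to see what an installation step actually added is to diff two ''conda list'' exports. A minimal sketch (the parsing and the package lines below are simplified examples, not real exports):

```python
# Diff two "conda list" exports: each non-comment line starts with the
# package name and version. The input lines here are made-up examples.
def parse_conda_list(lines):
    packages = {}
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue
        name, version = line.split()[:2]
        packages[name] = version
    return packages

before = parse_conda_list([
    "# packages in environment at .../envs/cdatm19_nompi_py3:",
    "numpy      1.19.2    py38h...    conda-forge",
])
after = parse_conda_list([
    "numpy      1.19.2    py38h...    conda-forge",
    "pandas     1.2.3     py38h...    conda-forge",
])
print(sorted(set(after) - set(before)))    # ['pandas']
```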

=== Extra packages installed with pip ===
  
  * [[https://earthsystemcog.org/projects/wip/CMIP6DataRequest|dreqPy]]: CMIP6 Data Request Python API
    * ''conda activate cdatm19_py3''\\ ''pip install dreqPy''
    * ''conda activate cdatm19_py3''\\ ''pip install ferretmagic''
  
=== Packages with no dependency problems that were added (or updated) later ===
  
  * [[https://cmor.llnl.gov/|CMOR]]: CMOR (//Climate Model Output Rewriter//) is used to produce CF-compliant netCDF files
===== Extra packages list =====
  
FIXME: need to clean the list below, and move the packages to the ordered list further down
  * [[http://unidata.github.io/netcdf4-python/|netcdf4]]: a Python interface to the netCDF C library
  * [[https://github.com/PBrockmann/ipython_ferretmagic|ipython_ferretmagic]]: IPython notebook extension for ferret
  * [[https://github.com/conda-forge/basemap-data-hires-feedstock|basemap-data-hires]]: high resolution data for ''basemap''
  * [[http://toblerity.org/shapely/project.html|shapely]]: a Python wrapper for GEOS for algebraic manipulation of geometry (manipulation and analysis of geometric objects in the Cartesian plane)
  * [[http://scitools.org.uk/cartopy/|cartopy]]: a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses
  * [[https://github.com/stefraynaud/spanlib|spanlib]]: Spectral Analysis Library
  * [[https://ajdawson.github.io/windspharm/latest/|windspharm]]: spherical harmonic wind analysis in Python
  * [[https://climaf.readthedocs.io/en/latest/|CliMAF]]: a Climate Model Assessment Framework

<WRAP center round tip 60%>
The extra packages below are (more or less) listed in package name alphabetical order. These are the extra packages that we explicitly installed, but there are many more packages installed (e.g. {{ :other:uvcdat:cdat_conda:cdatm19_nompi_py3_21-03-06.txt |}})!

Initialize the environment and type ''conda list'' if you need an up-to-date list, including the packages' versions and where they came from (conda-forge for most packages).
</WRAP>

  * [[https://github.com/Try2Code/cdo-bindings|python-cdo]]: Python scripting interface of [[https://code.mpimet.mpg.de/projects/cdo/wiki/Cdo#Documentation|cdo]]
    * ''python-cdo'' will install the ''cdo'' package (providing the ''cdo'' executable) as a dependency
    * see also [[https://code.mpimet.mpg.de/projects/cdo/wiki/Cdo%7Brbpy%7D#Python|Using CDO from python or ruby]]
  * [[http://matplotlib.org/cmocean/|cmocean]]: beautiful colormaps for oceanography
  * [[http://ferret.pmel.noaa.gov/Ferret/documentation/pyferret|pyferret]] and ''ferret_datasets'': Ferret encapsulated in Python
  * OSGeo/[[http://www.gdal.org/|gdal]]: Geospatial Data Abstraction Library. GDAL is a translator library for raster and vector geospatial data formats
    * [[https://pcjericks.github.io/py-gdalogr-cookbook/|Python GDAL/OGR Cookbook]]
  * [[https://github.com/TEOS-10/GSW-python|gsw]]: Python implementation of the Thermodynamic Equation Of Seawater
    * see also //seawater//
  * [[https://joblib.readthedocs.io/en/latest/|joblib]]: running Python functions as pipeline jobs
  * [[https://opencv.org/|OpenCV]]: OpenCV (Open Source Computer Vision Library) is an open-source library that includes several hundred computer vision algorithms. See also [[https://www.geeksforgeeks.org/opencv-python-tutorial/|OpenCV Python Tutorial]] and **scikit-image**
  * [[https://jiffyclub.github.io/palettable/|Palettable]]: color palettes for Python
  * [[http://pandas.pydata.org/|pandas]]: Python Data Analysis Library
    * [[https://geopandas.org/|geopandas]]: an open source project to make working with geospatial data in python easier
  * [[https://peakutils.readthedocs.io/|PeakUtils]]: utilities related to the detection of peaks on 1D data
  * [[https://python-pillow.org/|pillow]]: the friendly PIL (//Python Imaging Library//) fork
  * [[https://rpy2.github.io/|rpy2]]: an interface to R running embedded in a Python process
  * [[http://scikit-image.org/|scikit-image]]: image processing in Python
  * [[https://scikit-learn.org/stable/index.html|scikit-learn]]: Machine Learning in Python
  * [[https://seaborn.pydata.org/|seaborn]]: statistical data visualization
  * [[http://pythonhosted.org/seawater/|seawater]]: Python re-write of the CSIRO seawater toolbox
    * see also //gsw//
  * [[http://statsmodels.sourceforge.net/|statsmodels]]: a Python module that allows users to explore data, estimate statistical models, and perform statistical tests
  * [[https://xoa.readthedocs.io/en/latest/|xoa]]: xarray-based ocean analysis library
    * ''xoa'' is the successor of [[http://www.ifremer.fr/vacumm/|vacumm]] (vacumm does **not** support Python3)
  
==== Removed packages ====
  * NO removed packages!
  
==== Packages NOT supported in Python3 ====

  * [[http://www.ifremer.fr/vacumm/|vacumm]]: Validation, Analysis, Comparison - Utilities written in Python to validate and analyze Multi-Model outputs, and compare them to observations
    * Check the ''xoa'' replacement package
===== Environments summary =====
  
other/uvcdat/cdat_conda/cdat_8_2_1.txt · Last modified: 2024/07/02 10:44 by jypeter