other:python:jyp_steps [2024/03/07 10:15] (current)
jypeter: Added a Protocol Buffers section to the file formats

===== Using NetCDF files with Python =====
  
==== What is NetCDF? ====

  * If you are working with climate model output data, there is a good chance that your input array data will be stored in a NetCDF file!

  * Read the [[other:newppl:starting#netcdf_and_related_conventions|NetCDF and related Conventions]] section for more information
  
  * There may be different ways of dealing with NetCDF files, depending on which [[other:python:starting#some_python_distributions|python distribution]] you have access to
  
==== CliMAF and C-ESM-EP ====

People using **//CMIPn// and model data on the IPSL servers** can easily search and process NetCDF files using:

  * the [[https://climaf.readthedocs.io/|Climate Model Assessment Framework (CliMAF)]] environment

  * and the [[https://github.com/jservonnat/C-ESM-EP/wiki|CliMAF Earth System Evaluation Platform (C-ESM-EP)]]
==== xarray ====
  
[[https://docs.xarray.dev/|xarray]] makes working with labelled multi-dimensional arrays in Python simple, efficient, and fun! [...] It is particularly tailored to working with netCDF files
  
=== Some xarray related resources ===
  
Note: more packages (than listed below) may be listed in the [[other:uvcdat:cdat_conda:cdat_8_2_1#extra_packages_list|Extra packages list]] page
  
  * [[https://docs.xarray.dev/en/stable/generated/xarray.tutorial.load_dataset.html|xarray test datasets]]

  * **[[https://xcdat.readthedocs.io/|xCDAT]]: ''xarray'' extended with Climate Data Analysis Tools**
  
  * [[https://xoa.readthedocs.io/en/latest/|xoa]]: xarray-based ocean analysis library
  
  * [[https://uxarray.readthedocs.io/|uxarray]]: provides xarray-styled functionality for unstructured grid datasets following the [[https://ugrid-conventions.github.io/ugrid-conventions/|UGRID Conventions]]
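A minimal sketch of what working with ''xarray'' feels like (the variable and coordinate names below are invented for the example; a real NetCDF file would be opened with ''xr.open_dataset()'' instead of being built in memory):

<code python>
import numpy as np
import xarray as xr

# Build a small in-memory Dataset; a real file would instead be opened
# with xr.open_dataset("myfile.nc")
temp = xr.DataArray(
    np.arange(12.0).reshape(3, 4),
    dims=("lat", "lon"),
    coords={"lat": [-45.0, 0.0, 45.0], "lon": [0.0, 90.0, 180.0, 270.0]},
    name="temp",
)
ds = temp.to_dataset()

# Label-based selection and reduction (no manual index bookkeeping)
equator = ds["temp"].sel(lat=0.0)        # the lat == 0 row
zonal_mean = ds["temp"].mean(dim="lon")  # average over the longitudes

print(zonal_mean.values)  # [1.5 5.5 9.5]
</code>

''ds.to_netcdf("out.nc")'' would write the result back to a NetCDF file.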
  
  
==== netCDF4 ====
  
[[http://unidata.github.io/netcdf4-python/|netCDF4]] is a Python interface to the netCDF C library
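A minimal write/read round-trip sketch, assuming the ''netCDF4'' package is installed (file and variable names are made up for the example):

<code python>
import os
import tempfile
import numpy as np
from netCDF4 import Dataset

path = os.path.join(tempfile.mkdtemp(), "demo.nc")

# Write a tiny NetCDF file with one dimension and one variable
with Dataset(path, "w") as nc:
    nc.createDimension("time", 4)
    tas = nc.createVariable("tas", "f8", ("time",))
    tas.units = "K"
    tas[:] = np.array([280.0, 281.5, 283.0, 284.5])

# Read the file back
with Dataset(path, "r") as nc:
    var = nc.variables["tas"]
    units = var.units
    mean_tas = float(var[:].mean())

print(units, mean_tas)  # K 282.25
</code>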
  
  
==== cdms2 ====

<note important>
  * ''cdms2'' is unfortunately not maintained anymore and is slowly being **phased out in favor of a combination of [[#xarray|xarray]] and [[https://xcdat.readthedocs.io/|xCDAT]]**

  * ''cdms2'' will [[https://github.com/CDAT/cdms/issues/449|not be compatible with numpy after numpy 1.23.5]] :-(
</note>

[[https://cdms.readthedocs.io/en/docstanya/|cdms2]] can read/write netCDF files (and read //grads// dat+ctl files) and provides a higher-level interface than netCDF4. ''cdms2'' is available in the [[other:python:starting#cdat|CDAT distribution]], and can theoretically be installed independently of CDAT (e.g. it will be installed when you install [[https://cmor.llnl.gov/mydoc_cmor3_conda/|CMOR in conda]]). When you can use cdms2, you also have access to //cdtime//, which is very useful for handling time axis data.

How to get started:
  - read [[http://www.lsce.ipsl.fr/Phocea/file.php?class=page&file=5/pythonCDAT_jyp_2sur2_070306.pdf|JYP's cdms tutorial]], starting at page 54
    - the tutorial is in French (soooorry!)
    - you have to replace //cdms// with **cdms2**, and //MV// with **MV2** (sooorry about that, the tutorial was written when CDAT was based on //Numeric// instead of //numpy// to handle array data)
  - read the [[http://cdms.readthedocs.io/en/docstanya/index.html|official cdms documentation]] (link may change)
  
===== Matplotlib =====
  
===== Data analysis =====

==== EDA (Exploratory Data Analysis) ? ====

<note tip>
The //EDA concept// seems to apply to **time series** (and tabular data), which is not exactly the case of full climate model output data
</note>

  * [[https://www.geeksforgeeks.org/what-is-exploratory-data-analysis/|What is Exploratory Data Analysis ?]]
    * //The method of studying and exploring record sets to apprehend their predominant traits, discover patterns, locate outliers, and identify relationships between variables. EDA is normally carried out as a preliminary step before undertaking more formal statistical analyses or modeling.//

  * [[https://medium.com/codex/automate-the-exploratory-data-analysis-eda-to-understand-the-data-faster-not-better-2ed6ff230eed|Automate the exploratory data analysis (EDA) to understand the data faster and easier]]: a nice comparison of some of the Python libraries listed below ([[#ydata_profiling|YData Profiling]], [[#d-tale|D-Tale]], [[#sweetviz|Sweetviz]], [[#autoviz|AutoViz]])

  * [[https://www.geeksforgeeks.org/exploratory-data-analysis-in-python/|EDA in Python]]


==== Easy to use datasets ====

If you need standard datasets for testing, examples, demos, ...

  * [[https://docs.xarray.dev/en/stable/generated/xarray.tutorial.load_dataset.html|Tutorial datasets]] from [[#xarray|xarray]] (requires internet)
    * Example: [[https://docs.xarray.dev/en/stable/examples/visualization_gallery.html|Using the 'air temperature' dataset]]

  * [[https://scikit-learn.org/stable/datasets.html|Toy, real-world and generated datasets]] from [[#scikit-learn]]
    * Example: [[https://lectures.scientific-python.org/packages/scikit-learn/index.html#a-simple-example-the-iris-dataset|using the 'iris' dataset]]

  * [[https://scikit-image.org/docs/stable/api/skimage.data.html|Test images and datasets]] from [[#scikit-image]]
    * Example: [[https://lectures.scientific-python.org/packages/scikit-image/index.html#data-types|Using the 'camera' dataset]]

  * [[https://esgf-node.ipsl.upmc.fr/search/cmip6-ipsl/|CMIP6 data]] on ESGF
    * Example: ''orog_fx_IPSL-CM6A-LR_piControl_r1i1p1f1_gr.nc'':
      * [[http://vesg.ipsl.upmc.fr/thredds/fileServer/cmip6/CMIP/IPSL/IPSL-CM6A-LR/piControl/r1i1p1f1/fx/orog/gr/v20200326/orog_fx_IPSL-CM6A-LR_piControl_r1i1p1f1_gr.nc|HTTP]] download link
      * [[http://vesg.ipsl.upmc.fr/thredds/dodsC/cmip6/CMIP/IPSL/IPSL-CM6A-LR/piControl/r1i1p1f1/fx/orog/gr/v20200326/orog_fx_IPSL-CM6A-LR_piControl_r1i1p1f1_gr.nc.dods|OpenDAP]] download link

  * [[https://github.com/xCDAT/xcdat/issues/277|xCDAT test data GH discussion]]
  
  
  * Some tutorials:
    * [[http://pandas.pydata.org/docs/user_guide/10min.html|10 minutes to pandas]]
    * The [[https://lectures.scientific-python.org/packages/statistics/index.html|Statistics in Python]] tutorial that combines Pandas, [[#statsmodels|statsmodels]] and [[http://seaborn.pydata.org/|Seaborn]]
    * More [[http://pandas.pydata.org/docs/getting_started/tutorials.html|Community tutorials]]...
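A small sketch of the kind of labelled time-series manipulation Pandas makes easy (the series content is invented for the example):

<code python>
import numpy as np
import pandas as pd

# A small daily time series indexed by dates
idx = pd.date_range("2000-01-01", periods=60, freq="D")
ts = pd.Series(280.0 + 0.1 * np.arange(60), index=idx, name="tas")

# Resample the daily values to monthly means ("MS" = month start)
monthly = ts.resample("MS").mean()

print(monthly.values)  # approximately [281.5 284.5] (Jan and Feb 2000 means)
</code>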
  
==== statsmodels ====

[[https://www.statsmodels.org/|statsmodels]] is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration.

Note: check the example in the [[https://lectures.scientific-python.org/packages/statistics/index.html|Statistics in Python]] tutorial
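A minimal sketch, assuming ''statsmodels'' is installed: fitting an ordinary least squares model on synthetic data (the data and coefficients are invented for the example):

<code python>
import numpy as np
import statsmodels.api as sm

# Synthetic data: y = 3 + 2*x + small gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 3.0 + 2.0 * x + rng.normal(scale=0.1, size=50)

X = sm.add_constant(x)          # add the intercept column
result = sm.OLS(y, X).fit()     # ordinary least squares fit
intercept, slope = result.params

print(round(intercept, 1), round(slope, 1))  # close to 3.0 and 2.0
</code>

''result.summary()'' prints the usual regression diagnostics (standard errors, p-values, R²).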
  
==== scikit-learn ====

[[http://scikit-learn.org/|scikit-learn]] is a Python library for machine learning, and is one of the most widely used tools for supervised and unsupervised machine learning. Scikit-learn provides an easy-to-use, consistent interface to a large collection of machine learning models, as well as tools for model evaluation and data preparation.

Note: check the example in [[https://lectures.scientific-python.org/packages/scikit-learn/index.html|scikit-learn: machine learning in Python]]
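A minimal supervised-learning sketch, assuming ''scikit-learn'' is installed, using the bundled '"iris"' dataset (no internet access needed):

<code python>
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Classify iris flowers from their petal/sepal measurements
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit on the training split, evaluate on the held-out split
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

print(f"test accuracy: {accuracy:.2f}")  # typically well above 0.9
</code>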
  
  
  
[[https://scikit-image.org/|scikit-image]] is a collection of algorithms for image processing in Python

Note: check the example in [[https://lectures.scientific-python.org/packages/scikit-image/index.html|scikit-image: image processing]]
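A minimal sketch, assuming ''scikit-image'' is installed: edge detection on the bundled ''camera'' test image (no download needed):

<code python>
from skimage import data, filters

# Load the bundled 'camera' test image (512x512 grayscale, uint8)
image = data.camera()

# Sobel filter: a non-negative, floating-point gradient magnitude map
edges = filters.sobel(image)

print(image.shape, edges.shape)  # (512, 512) (512, 512)
</code>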


==== YData Profiling ====

[[https://docs.profiling.ydata.ai/|YData Profiling]]: a leading package for data profiling, that automates and standardizes the generation of detailed reports, complete with statistics and visualizations.


==== D-Tale ====

[[https://github.com/man-group/dtale|D-Tale]] brings you an easy way to view & analyze Pandas data structures. It integrates seamlessly with ipython notebooks & python/ipython terminals.


==== Sweetviz ====

[[https://github.com/fbdesignpro/sweetviz|Sweetviz]] is a pandas-based Python library that generates beautiful, high-density visualizations to kickstart EDA (Exploratory Data Analysis) with just two lines of code.


==== AutoViz ====

[[https://github.com/AutoViML/AutoViz|AutoViz]]: the One-Line Automatic Data Visualization Library. Automatically visualize any dataset, of any size, with a single line of code.
  
  
===== Data file formats =====
  
  * We list below some resources about **non-NetCDF data formats** that can be useful

  * Otherwise, check the [[#using_netcdf_files_with_python|Using NetCDF files with Python]] section
  
==== The shelve package ====
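A ''shelve'' file behaves like a dictionary that persists on disk, whose values can be (almost) any picklable Python objects. A minimal sketch (the keys and values are invented for the example):

<code python>
import os
import shelve
import tempfile

# shelve.open() creates/opens a persistent, dictionary-like object
path = os.path.join(tempfile.mkdtemp(), "mydata")

with shelve.open(path) as db:
    db["source"] = {"model": "IPSL-CM6A-LR", "experiment": "piControl"}
    db["years"] = list(range(1850, 1855))

# Reopen the file later and read the stored objects back
with shelve.open(path) as db:
    keys = sorted(db.keys())
    model = db["source"]["model"]

print(keys, model)  # ['source', 'years'] IPSL-CM6A-LR
</code>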
  * [[https://github.com/LibraryOfCongress/bagger|Bagger]] (BagIt GUI)
  * [[https://github.com/LibraryOfCongress/bagit-python|bagit-python]]

==== Protocol Buffers ====

//Protocol Buffers are (Google's) language-neutral, platform-neutral extensible mechanisms for serializing structured data//

  * https://protobuf.dev/
  * [[https://protobuf.dev/getting-started/pythontutorial/|Protocol Buffer Basics: Python]]
    * ''mamba install protobuf''
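The workflow, sketched with a made-up message definition: you first describe your structured data in a ''.proto'' file, compile it with ''protoc'', and then use the generated Python module.

<code protobuf>
// station.proto -- a hypothetical message describing a measurement station
syntax = "proto3";

message Station {
  string name = 1;          // field numbers identify fields on the wire
  double lat = 2;
  double lon = 3;
  repeated double tas = 4;  // a repeated field holds a list of values
}
</code>

''protoc --python_out=. station.proto'' generates a ''station_pb2.py'' module; the generated ''Station'' class then provides the standard ''SerializeToString()'' and ''ParseFromString()'' methods for converting messages to and from compact binary strings.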
  
===== Quick Reference and cheat sheets =====
  
You can do a lot more with python! But if you have read at least a part of this page, you should be able to find and use the modules you need. Make sure you do not reinvent the wheel! Use existing packages when possible, and make sure to report bugs or errors in the documentation when you find some.


===== Out-of-date stuff =====


==== CDAT-related resources ====

Some links, in case they can't be found easily on the [[https://cdat.llnl.gov|CDAT]] web site...

  * [[https://cdat.llnl.gov/tutorials.html|Tutorials in ipython notebooks]]
  * [[http://cdat-vcs.readthedocs.io/en/latest/|VCS: Visualization Control System]]
    * [[https://github.com/CDAT/vcs/issues/238|Colormaps in vcs examples]]
  * [[https://github.com/CDAT/cdat-site/blob/master/eztemplate.md|EzTemplate Documentation]]
 +
  
/* standard page footer */
other/python/jyp_steps.1702550652.txt.gz · Last modified: 2023/12/14 10:44 by jypeter