As can be expected, there is a lot of online python documentation available, and it is easy to get lost. You can always use Google to find an answer to your problem, and you will probably end up looking at lots of answers on Stack Overflow or a similar site. But it is always better to know where you can find good documentation… and to spend some time actually reading it
This page lists some python-for-scientists resources, in a suggested reading order. Do not print anything (or at least not everything), but it is a good idea to download all the pdf files to the same place, so that you can easily open and search the documents
You can start using python by reading the Bien démarrer avec python tutorial (Getting started with Python, in French) that was used during a 2013 IPSL python class:
Once you have taken your first steps, you should read Plus loin avec Python (Going further with Python, in French; start at page 39, the previous pages are an older version of what was covered in Part 1 above)
Whenever possible, use python functions rather than external shell commands (e.g. use os.remove(file_name) instead of rm $file_name). You can also look at the Useful python stuff page
You do not need to read all the python documentation at this step, but it is really well made and you should at least have a look at it. The Tutorial is very good, and you should also have a look at the table of contents of the Python Standard Library: there is a lot in the default library that can make your life easier
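As a quick illustration (a minimal sketch, not taken from the tutorial itself; the directory is hypothetical), the built-in glob and os.path modules already cover a lot of everyday file handling:

>>> import glob, os.path
>>> nc_files = sorted(glob.glob('/data/*.nc'))
>>> for f in nc_files:
...     print(f, os.path.getsize(f))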
Summary: One document to learn numerics, science, and data with Python
Note: this used to be called Scipy Lecture Notes
This is a really nice and useful document that is regularly updated and used for the EuroScipy tutorials.
This document will teach you lots of things about python, numpy and matplotlib, debugging and optimizing scripts, and about using python for statistics, image processing, machine learning, washing dishes (this is just to check if you have read this page), etc…
Summary: Python provides ordered objects (e.g. lists, strings, basic arrays, …) and some math operators, but you can't do real heavy computation with these. Numpy makes it possible to work with multi-dimensional data arrays, and using array syntax and masks (instead of explicit nested loops and tests) with the appropriate numpy functions will give you performance similar to what you would get with a compiled program! Scipy adds more scientific functions
Where: html and pdf documentation
Do not forget that python indices start at 0, and that the last element of an array is at index -1! Try 'This document by JY is awesome!'[::-1] and 'This document by JY is awesome!'[slice(None, None, -1)]
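To illustrate the array-syntax-and-masks idea from the summary above, here is a minimal sketch (the data and the threshold are invented for the example): where a compiled language would need an explicit loop and test, numpy does it in one line.

>>> import numpy as np
>>> data = np.arange(10, dtype=float)
>>> np.where(data > 5, data * 2, data)
array([ 0.,  1.,  2.,  3.,  4.,  5., 12., 14., 16., 18.])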
Warning: some operations will give you a View of the original array's data rather than a copy (e.g. slicing, as in the example below). That is not a problem when you only read the values, but if you change the values of the View, you change the values of the original array (and vice-versa)! If that is not what you want, do not forget to make a copy of the data before working on it!
Views are a good thing most of the time, so only make a copy of your data when needed, because otherwise copying a big array will just be a waste of CPU and computer memory. Anyway, it is always better to understand what you are doing…
Check the example below and the copies and views part of the quickstart tutorial.
>>> import numpy as np
>>> a = np.arange(30).reshape((3, 10))
>>> a
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
>>> b = a[1, :]
>>> b
array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
>>> b[3:7] = 0
>>> b
array([10, 11, 12,  0,  0,  0,  0, 17, 18, 19])
>>> a
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [10, 11, 12,  0,  0,  0,  0, 17, 18, 19],
       [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
>>> a[:, 2:4] = -1
>>> a
array([[ 0,  1, -1, -1,  4,  5,  6,  7,  8,  9],
       [10, 11, -1, -1,  0,  0,  0, 17, 18, 19],
       [20, 21, -1, -1, 24, 25, 26, 27, 28, 29]])
>>> b
array([10, 11, -1, -1,  0,  0,  0, 17, 18, 19])
>>> c = a[1, :].copy()
>>> c
array([10, 11, -1, -1,  0,  0,  0, 17, 18, 19])
>>> c[:] = 9
>>> c
array([9, 9, 9, 9, 9, 9, 9, 9, 9, 9])
>>> b
array([10, 11, -1, -1,  0,  0,  0, 17, 18, 19])
>>> a
array([[ 0,  1, -1, -1,  4,  5,  6,  7,  8,  9],
       [10, 11, -1, -1,  0,  0,  0, 17, 18, 19],
       [20, 21, -1, -1, 24, 25, 26, 27, 28, 29]])
You can also check the numpy section of the Useful python stuff page
When you work with masked arrays, make sure you use the masked-array versions of the functions, i.e. np.ma.some_function() rather than just np.some_function()
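A minimal masked-array sketch (the values and the missing-data marker are invented for the example):

>>> import numpy as np
>>> x = np.ma.masked_values([1.0, -999.0, 3.0], -999.0)
>>> np.ma.mean(x)  # the masked value is ignored
2.0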
People using CMIPn and model data on the IPSL servers can easily search and process NetCDF files using:
xarray makes working with labelled multi-dimensional arrays in Python simple, efficient, and fun! […] It is particularly tailored to working with netCDF files
Note: more packages (than listed below) may be listed in the Extra packages list page
- xarray (see the sketch below)
- CDAT (python extended with Climate Data Analysis Tools)
- netCDF4, a Python interface to the netCDF C library
- cdms2, which can read/write netCDF files (and read grads dat+ctl files) and provides a higher level interface than netCDF4. cdms2 is available in the CDAT distribution, and can theoretically be installed independently of CDAT (e.g. it will be installed when you install CMOR in conda). When you can use cdms2, you also have access to cdtime, which is very useful for handling time axis data
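A minimal xarray sketch (the file name and the variable name are hypothetical):

>>> import xarray as xr
>>> ds = xr.open_dataset('tas_monthly.nc')
>>> ds['tas'].sel(time='2000-01').mean()  # label-based (not index-based) selection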
How to get started:
Note: Plotting maps with matplotlib+cartopy (examples provided by JYP)
Summary: there are lots of python libraries that you can use for plotting, but Matplotlib has become a de facto standard
Where: Matplotlib web site
Help on stack overflow: matplotlib help
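A minimal matplotlib sketch (the data is made up for the example):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x), label='sin(x)')  # line plot with a legend entry
plt.legend()
plt.savefig('sine.png')  # or plt.show() to display the figure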
colorspace software
Summary: Basemap is an extension of Matplotlib that you can use for plotting maps, using different projections
Where: Basemap web site
Help on stack overflow: basemap help
How to use basemap?
Summary: Cartopy is a Matplotlib-based package for drawing maps (it is the successor of Basemap); Iris is a package for analysing and visualising Earth science data
Where: Cartopy and Iris web sites
Examples:
Help on stack overflow: Cartopy help - Iris help
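A minimal Cartopy sketch (the output file name is made up): draw the coastlines on a PlateCarree projection.

import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ax = plt.axes(projection=ccrs.PlateCarree())  # create a map axis with the chosen projection
ax.coastlines()
plt.savefig('world_map.png')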
If you need standard datasets for testing, examples, demos, …
Summary: pandas is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool
Where: Pandas web site
JYP's comment: pandas is supposed to be quite good for loading, processing and plotting time series, without writing custom code. It is very convenient for processing tables in xlsx files (or csv, etc…). You should at least have a quick look at the pandas documentation (e.g. the 10 minutes to pandas introduction)
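A minimal pandas sketch (the csv file and the column names are hypothetical):

import pandas as pd

df = pd.read_csv('temperatures.csv', parse_dates=['date'], index_col='date')
monthly = df['tas'].resample('MS').mean()  # monthly means of the 'tas' column
monthly.plot()  # pandas plots time series through matplotlib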
statsmodels is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration.
Note: check the example in the Statistics in Python tutorial
scikit-learn is a Python library for machine learning, and is one of the most widely used tools for supervised and unsupervised machine learning. scikit-learn provides an easy-to-use, consistent interface to a large collection of machine learning models, as well as tools for model evaluation and data preparation
Note: check the example in scikit-learn: machine learning in Python
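A minimal scikit-learn sketch, using one of the toy datasets shipped with the library:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = KNeighborsClassifier().fit(X_train, y_train)  # supervised classification
print(model.score(X_test, y_test))  # accuracy on the held-out data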
scikit-image is a collection of algorithms for image processing in Python
Note: check the example in scikit-image: image processing
YData Profiling: a leading package for data profiling, that automates and standardizes the generation of detailed reports, complete with statistics and visualizations.
D-Tale brings you an easy way to view & analyze Pandas data structures. It integrates seamlessly with ipython notebooks & python/ipython terminals.
Sweetviz is a pandas-based Python library that generates beautiful, high-density visualizations to kickstart EDA (Exploratory Data Analysis) with just two lines of code.
AutoViz: the One-Line Automatic Data Visualization Library. Automatically Visualize any dataset, any size with a single line of code
The built-in shelve package can easily be used for storing data (python objects like lists, dictionaries, numpy arrays that are not too big, …) on disk and retrieving them later
Use case: a first script performs some heavy pre-processing of the data and stores the results in a shelve file; the plotting script then simply loads the pre-processed data from the shelve file, or updates the results stored in the shelve file. This way you don't have to wait for the pre-processing step to finish each time you want to improve your plot(s)
Warning: the shelve module does not support concurrent read/write access to shelved objects, so do not let several scripts update the same shelve file at the same time
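A minimal shelve sketch (the file name and keys are invented for the example):

import shelve

# pre-processing script: store the (possibly slow to compute) results
with shelve.open('preprocessed_data') as db:
    db['zonal_mean'] = [1.5, 2.3, 4.2]

# plotting script, possibly run much later: retrieve the results
with shelve.open('preprocessed_data') as db:
    zonal_mean = db['zonal_mean']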
More and more applications use json files as configuration files, or as a means of exchanging data through text files (serialization/deserialization).
json files basically look like a list of (nested) python dictionaries that would have been dumped to a text file
/home/users/jypeter/CDAT/Progs/Devel/beaugendre/nc2json.py
cat file.json | python -m json.tool | less
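A minimal sketch of the built-in json module (the file name and the dictionary content are made up):

import json

config = {'experiment': 'lgm', 'years': [1, 100], 'plot': {'dpi': 150}}

with open('config.json', 'w') as f:
    json.dump(config, f, indent=4)  # serialize to a human-readable text file

with open('config.json') as f:
    config2 = json.load(f)  # deserialize back to python objects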
Resources for Linked PaleoData:
BagIt, a set of hierarchical file layout conventions for storage and transfer of arbitrary digital content.
Protocol Buffers are (Google's) language-neutral, platform-neutral extensible mechanisms for serializing structured data
mamba install protobuf
Check the page about useful python stuff that has not been sorted yet
There is only so much you can do by staring at your code in your favorite text editor and adding print lines to your code (or using logging instead of print). The next step is to use the python debugger!
python -m pdb my_script.py
- run (or r) to go to the first line of the script
- continue (or c) to execute the script to the end, or till the first breakpoint or error is reached
- where (or w) to check the call stack that led to the current stop. Use up and down to navigate through the call stack and examine the values of the functions' parameters
- break NNN to stop at line NNN
- type(var) and p var (or print(var)) to check the type and values of variables. You can also change the variables' values on the fly!
- run (or r) to restart the script
- next and step to execute some parts of the script line by line. If a code line calls a function:
  - next (or n) will execute the function and stop on the next line
  - step (or s) will stop at the first line inside the function
- help in the debugger for using the built-in help
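You can also drop into the debugger at a precise place in your code instead of running the whole script under pdb; a minimal sketch (the function and values are invented):

def compute(values):
    total = 0
    for v in values:
        breakpoint()  # opens pdb here (Python >= 3.7); before that, use: import pdb; pdb.set_trace()
        total += v
    return total

print(compute([1, 2, 3]))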
Depending on the distribution, the editor and the programming environment you use, you may have access to a graphical version of the debugger. UV-CDAT users can use pydebug my_script.py
Misc notes, resources and links to organize later
IDE = Integrated Development Environment
There are lots of ways to use Python and develop scripts, from a lightweight approach (your favorite text editor with built-in python syntax highlighting, e.g. emacs, and python -i myscript.py) to a full-fledged IDE. You'll find below some IDE-related links
You can already get a very efficient script by checking the following:
If your script is still not fast enough, there is a lot you can do to improve it, without resorting to parallelization (which may introduce extra bugs rather than extra performance). See the sections below
Hint: before optimizing your script, you should spend some time profiling it, so that you only spend time improving the slow parts of your script
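The standard library has what you need for a first profiling pass: run python -m cProfile -s cumtime my_script.py, or profile from inside python. A minimal sketch (assuming your script has a main() entry point):

import cProfile
import pstats

cProfile.run('main()', 'profile.out')  # profile the call and save the raw data
pstats.Stats('profile.out').sort_stats('cumtime').print_stats(10)  # show the 10 slowest parts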
It is still safe to use Python 2.7, but you should consider upgrading to Python 3, unless some key modules you need are not compatible (yet) with Python 3
You should start writing code that will, when possible, work both in Python 2 and Python 3
Some interesting reading:
- print is now a function: use print('Hello')
- you can't use <> any longer! Use != instead
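If a script has to run under both Python 2.7 and Python 3, the __future__ imports are a common first step; a minimal sketch:

from __future__ import print_function, division  # must be at the top of the script

print('Hello')  # print is a function in both versions now
print(1 / 2)  # 0.5 in both versions (true division instead of integer division)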
You can do a lot more with python! But if you have read at least part of this page, you should be able to find and use the modules you need. Make sure you do not reinvent the wheel! Use existing packages when possible, and make sure to report bugs or errors in the documentation when you find some
Some links, in case they can't be found easily on the CDAT web site…