As can be expected, there is a lot of online python documentation available, and it is easy to get lost. You can always use google to find an answer to your problem, and you will probably end up looking at lots of answers on Stack Overflow or a similar site. But it is always better to know where you can find some good documentation… and to spend some time actually reading it
This page lists some python-for-the-scientist related resources, in a suggested reading order. Do not print anything (or at least not everything), but it is a good idea to download all the pdf files to the same place, so that you can easily open and search the documents
You can start using python by reading the Bien démarrer avec python ("Getting started with python") tutorial that was used during a 2013 IPSL python class:
Once you have done your first steps, you should read Plus loin avec Python (start at page 39, the previous pages are an old version of what was covered in Part 1 above)
You do not need to read all the python documentation at this step, but it is really well made and you should at least have a look at it. The Tutorial is very good, and you should have a look at the table of contents of the Python Standard Library. There is a lot in the default library that can make your life easier
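As a small illustration (a hypothetical example, not taken from the tutorial) of letting the standard library do the work for you, collections.Counter replaces a hand-written counting loop:

```python
# Count word occurrences with the built-in collections module,
# instead of writing the loop and dictionary handling by hand
from collections import Counter

words = "the quick brown fox jumps over the lazy dog the end".split()
counts = Counter(words)
print(counts.most_common(2))  # the two most frequent words, "the" first
```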
Summary: Python provides ordered objects (e.g. lists, strings, basic arrays, …) and some math operators, but you can't do real heavy computation with these. Numpy makes it possible to work with multi-dimensional data arrays, and using array syntax and masks (instead of explicit nested loops and tests) and the appropriate numpy functions will allow you to get performance similar to what you would get with a compiled program! Scipy adds more scientific functions
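A minimal sketch of what this means in practice (hypothetical data): the mask-based version below does the same work as the explicit loop, but numpy performs it in compiled code.

```python
import numpy as np

# Hypothetical task: replace all negative values of an array with zero
data = np.array([-2.0, 3.0, -1.0, 4.0])

# Explicit loop version: slow for big arrays
slow = data.copy()
for i in range(slow.size):
    if slow[i] < 0:
        slow[i] = 0.0

# Array syntax + mask version: no loop, no test, same result
fast = data.copy()
fast[fast < 0] = 0.0

print(fast)  # [0. 3. 0. 4.]
```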
Where: html and pdf documentation
Remember that indices start at 0 and that the last element of an array is at index -1. Slicing with a negative step even lets you reverse a sequence: 'This document by JY is awesome!'[::-1] and 'This document by JY is awesome!'[slice(None, None, -1)] both return the reversed string
That is not a problem when you only read the values, but if you change the values of the View, you change the values of the first array (and vice-versa)! If that is not what you want, do not forget to make a copy of the data before working on it!
Views are a good thing most of the time, so only make a copy of your data when needed, because otherwise copying a big array will just be a waste of CPU and computer memory. Anyway, it is always better to understand what you are doing…
Check the example below and the copies and views part of the quickstart tutorial.
>>> import numpy as np
>>> a = np.arange(30).reshape((3, 10))
>>> a
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
>>> b = a[1, :]
>>> b
array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
>>> b[3:7] = 0
>>> b
array([10, 11, 12,  0,  0,  0,  0, 17, 18, 19])
>>> a
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [10, 11, 12,  0,  0,  0,  0, 17, 18, 19],
       [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
>>> a[:, 2:4] = -1
>>> a
array([[ 0,  1, -1, -1,  4,  5,  6,  7,  8,  9],
       [10, 11, -1, -1,  0,  0,  0, 17, 18, 19],
       [20, 21, -1, -1, 24, 25, 26, 27, 28, 29]])
>>> b
array([10, 11, -1, -1,  0,  0,  0, 17, 18, 19])
>>> c = a[1, :].copy()
>>> c
array([10, 11, -1, -1,  0,  0,  0, 17, 18, 19])
>>> c[:] = 9
>>> c
array([9, 9, 9, 9, 9, 9, 9, 9, 9, 9])
>>> b
array([10, 11, -1, -1,  0,  0,  0, 17, 18, 19])
>>> a
array([[ 0,  1, -1, -1,  4,  5,  6,  7,  8,  9],
       [10, 11, -1, -1,  0,  0,  0, 17, 18, 19],
       [20, 21, -1, -1, 24, 25, 26, 27, 28, 29]])
You can also check the numpy section of the Useful python stuff page
When working on masked arrays, use np.ma.some_function() rather than just np.some_function(), otherwise the mask may be silently ignored
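A small illustration of the difference (hypothetical values, with 1e20 used as the missing-value marker): np.ma.median honors the mask, while the plain np.median may not.

```python
import numpy as np

# Mask the missing value (1e20) before computing statistics
a = np.ma.masked_values([1.0, 2.0, 1e20, 3.0], 1e20)

print(np.ma.median(a))  # 2.0, the masked value is ignored
print(np.median(a))     # may silently use the huge fill value, do not trust it
```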
Summary: cdms2 can read/write netCDF files (and read grads dat+ctl files) and provides a higher level interface than netCDF4. cdms2 is available in the CDAT distribution, and can theoretically be installed independently of CDAT (e.g. it will be installed when you install CMOR in conda). When you can use cdms2, you also have access to cdtime, which is very useful for handling time axis data.
How to get started:
Summary: xarray is an open source project and Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun! […] It is particularly tailored to working with netCDF files
Note: more packages (than listed below) may be listed in the Extra packages list
Summary: netCDF4 can read/write netCDF files and is available in most python distributions
Some links, in case they can't be found easily on the CDAT web site…
Note: Plotting maps with matplotlib+cartopy (examples provided by JYP)
Summary: there are lots of python libraries that you can use for plotting, but Matplotlib has become a de facto standard
Where: Matplotlib web site
Help on stack overflow: matplotlib help
Summary: Basemap is an extension of Matplotlib that you can use for plotting maps, using different projections
Where: Basemap web site
Help on stack overflow: basemap help
How to use basemap?
Where: Cartopy and Iris web sites
Help on stack overflow: Cartopy help - Iris help
We list here some resources about non-NetCDF data formats that can be useful
The built-in shelve package can easily be used for storing data (python objects like lists, dictionaries, numpy arrays that are not too big, …) on disk and retrieving them later
e.g. you can pre-process your data and store the results in a shelve, or update the results already stored in a shelve. This way you don't have to wait for the pre-processing step to finish each time you want to improve your plot(s)
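A minimal sketch of this store-then-reuse workflow (the keys, values and file name below are hypothetical):

```python
import os
import shelve
import tempfile

# A shelve behaves like a dictionary that lives on disk
path = os.path.join(tempfile.mkdtemp(), "preprocessed_demo")

# Pre-processing script: store the (expensive to compute) results
with shelve.open(path) as db:
    db["mean_temperature"] = 14.2
    db["stations"] = ["brest", "paris"]

# Plotting script, possibly run much later: just read the results back
with shelve.open(path) as db:
    mean_temperature = db["mean_temperature"]

print(mean_temperature)  # 14.2
```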
More and more applications use json files as configuration files or as a means of exchanging data through text files (serialization/deserialization).
json files look basically like a list of (nested) python dictionaries that would have been dumped to a text file
You can pretty-print a json file in a terminal with: cat file.json | python -m json.tool | less
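From a script, the standard json module does the serialization/deserialization; a minimal round-trip with a hypothetical configuration:

```python
import json

# Hypothetical configuration: nested dictionaries and lists, as in a json file
config = {"model": "demo", "years": [1850, 2005], "options": {"verbose": True}}

text = json.dumps(config, indent=4)  # serialize to a human-readable string
restored = json.loads(text)          # deserialize back to python objects

print(restored == config)  # True
```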
Resources for Linked PaleoData:
BagIt, a set of hierarchical file layout conventions for storage and transfer of arbitrary digital content.
Summary: pandas is a library providing high-performance, easy-to-use data structures and data analysis tools
Where: Pandas web site
JYP's comment: pandas is supposed to be quite good for loading, processing and plotting time series, without writing custom code. It is very convenient for processing tables in xlsx files (or csv, etc…). You should at least have a quick look at:
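As a small taste of the above (hypothetical csv content, with io.StringIO standing in for a real file), loading a table and computing a statistic takes only a couple of lines:

```python
import io

import pandas as pd

# Hypothetical time series in csv form
csv_text = "year,tas\n1850,13.9\n1851,14.1\n1852,14.0\n"
df = pd.read_csv(io.StringIO(csv_text), index_col="year")

print(round(df["tas"].mean(), 1))  # 14.0
```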
Summary: One document to learn numerics, science, and data with Python
This is a really nice and useful document that is regularly updated and used for the EuroScipy tutorials.
This document will teach you even more things about python, numpy and matplotlib, debugging and optimizing scripts, and about using python for statistics, image processing, machine learning, washing dishes (this is just to check if you have read this page), etc…
statsmodels is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration.
scikit-learn is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection and evaluation, and many other utilities.
scikit-image is a collection of algorithms for image processing in Python
Check the page about useful python stuff that has not been sorted yet
There is only so much you can do by staring at your code in your favorite text editor and adding print statements everywhere. The next step is to run your script in the python debugger:
python -m pdb my_script.py
run (or r) to go to the first line of the script
continue (or c) to execute the script to the end, or till the first breakpoint or error is reached
where (or w) to check the call stack that led to the current stop. Use up and down to navigate through the call stack and examine the values of the functions' parameters
break NNN to stop at line NNN
print var to check the type and values of variables. You can also change the variables' values on the fly!
run (or r) to restart the script
step to execute some parts of the script line by line. If a code line calls a function:
next (or n) will execute the function and stop on the next line
step (or s) will stop at the first line inside the function
help in the debugger for using the built-in help
Depending on the distribution, the editor and the programming environment you use, you may have access to a graphical version of the debugger. UV-CDAT users can use
Misc notes, resources and links to organize later
IDE = Integrated Development Environment
There are lots of ways to use Python and develop scripts, from using a lightweight approach (your favorite text editor with built-in python syntax highlighting, e.g. emacs, and
python -i myscript.py) to a full-fledged IDE. You'll find below some IDE-related links
You can already get a very efficient script by checking the following:
If your script is still not fast enough, there is a lot you can do to improve it, without resorting to parallelization (which may introduce extra bugs rather than extra performance). See the sections below
Hint: before optimizing your script, you should spend some time profiling it, in order to only spend time improving the slow parts of your script
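The standard cProfile module is one way to do that profiling; a minimal sketch, with a hypothetical slow function standing in for your real code:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Hypothetical hot spot: a pure-python loop
    total = 0
    for i in range(n):
        total += i * i
    return total

# Profile one call, then print the functions where time was spent
profiler = cProfile.Profile()
profiler.runcall(slow_sum, 100000)

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # slow_sum should dominate the report
```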
It is still safe to use Python 2.7, but you should consider upgrading to Python 3, unless some key modules you need are not compatible (yet) with Python 3
You should start writing code that will, when possible, work both in Python 2 and Python 3
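One simple habit that helps (a minimal sketch): import the Python 3 behavior from __future__ at the top of your scripts, so that they behave the same way under 2.7 and 3.

```python
# These imports are no-ops in Python 3, but give Python 2.7
# the Python 3 print function and true division
from __future__ import division, print_function

print(7 / 2)   # 3.5 in both versions (true division)
print(7 // 2)  # 3 in both versions (integer division)
```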
Some interesting reading:
Do not use the <> operator any longer! Use != instead
You can do a lot more with python! But if you have read at least a part of this page, you should be able to find and use the modules you need. Make sure you do not reinvent the wheel! Use existing packages when possible, and make sure to report bugs or errors in the documentations when you find some