Numerical Libraries

  • dask: “Dask is a flexible parallel computing library for analytic computing.”
  • DynTS: Time series analysis for econometrics.
  • Multipack: Package of numerical routines (e.g. nonlinear equation solvers, spline routines) written in Fortran and callable from Python.
  • Numba: “With a few annotations, array-oriented and math-heavy Python code can be just-in-time compiled to native machine instructions….”
  • NumPy: Python’s standard array manipulation package
  • Pandas (Python Data Analysis Library): Powerful tool for array manipulation. Has integrated indexing, data alignment and merging, missing-value handling, etc.
  • ParaView: Analysis and visualization package for very large datasets.
  • petsc4py: Python bindings to PETSc (Portable, Extensible Toolkit for Scientific Computation).
  • patsy: “Describing statistical models in Python using symbolic formulas”
  • properscoring: “Proper scoring rules for evaluating probabilistic forecasts in Python.”
  • PyClimate: Package of functions for netCDF operations, EOF and SVD analysis, and linear digital filtering.
  • PyMat: A Python-MATLAB interface.
  • PyMC: Markov chain Monte Carlo module
  • PypeR: Python interface to the R statistical programming language using pipes.
  • PyTables: Hierarchical datasets with Python
  • RPy: Python interface to the R statistical programming language.
  • SAGE (now SageMath): A free distribution of Python and numerous mathematical and scientific libraries.
  • ScientificPython: A collection of Python modules for scientific computing.
  • scikit-learn: Modules for data mining and machine learning
  • SciKits: Numerical, mathematical, statistical, engineering, etc. modules to complement SciPy
  • SciPy: A collection of scientific programming tools collected into a single package.
  • SymPy: Library for symbolic mathematics.
  • Transcendental: A collection of statistical functions
  • wmtsa-python: “Discrete wavelet methods for time series analysis using python.”
  • xarray: Provides “a pandas-like and pandas-compatible toolkit for analytics on multi-dimensional arrays” adapting “the Common Data Model for self-describing scientific data in widespread use in the Earth sciences.”
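
Many of the libraries above build on NumPy's n-dimensional array. As a minimal sketch of the kind of array manipulation NumPy provides (the array values here are invented for illustration):

```python
import numpy as np

# Build a 3x4 array and exercise basic manipulation:
# reshaping, reduction along an axis, and broadcasting.
a = np.arange(12).reshape(3, 4)   # rows 0..2, columns 0..3
col_means = a.mean(axis=0)        # mean of each column
centered = a - col_means          # broadcasting subtracts per column
print(centered.sum())             # centering makes the total 0.0
```

Broadcasting lets the 1-D vector of column means be subtracted from every row of the 2-D array without an explicit loop.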
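
The "data alignment and missing value handling" that pandas advertises can be sketched as follows (the index labels and values are made up for the example):

```python
import pandas as pd

# Two series with partially overlapping indexes.
s1 = pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"])
s2 = pd.Series([10.0, 20.0], index=["b", "c"])

# Arithmetic aligns on the index; "a" is absent from s2,
# so the result at "a" is NaN.
total = s1 + s2

# Missing-value handling: supply a fill value instead of
# propagating NaN through the sum.
filled = s1.add(s2, fill_value=0.0)
print(filled["a"])  # 1.0
```

The same alignment rules apply to DataFrame arithmetic and to merges, which is what makes pandas convenient for combining irregularly indexed datasets.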