Template Numerical Library


TNL is a collection of building blocks that facilitate the development of efficient numerical solvers. It is implemented in C++ using modern programming paradigms in order to provide a flexible and user-friendly interface. TNL provides native support for modern hardware architectures such as multicore CPUs, GPUs, and distributed systems, all of which can be managed via a unified interface.

Similarly to the STL, the features provided by TNL can be grouped into several modules:

  • Core concepts. The main concepts used in TNL are the memory space, which represents the part of memory where the data are allocated, and the execution model, which represents the way a given (typically parallel) algorithm is executed. For example, data can be allocated in the main system memory, in GPU memory, or in CUDA Unified Memory, which can be accessed from the host as well as from the GPU. Algorithms, in turn, can be executed by the host CPU or by an accelerator (GPU), and for each there are several ways to manage parallel execution. Memory spaces are abstracted with allocators and execution models are represented by devices; a small code sketch follows this list. See the Core concepts page for details.
  • Containers. TNL provides generic containers such as arrays, multidimensional arrays, and array views, which abstract data management and the execution of common operations on different hardware architectures.
  • Linear algebra. TNL provides generic data structures and algorithms for linear algebra, such as vectors, sparse matrices, Krylov solvers and preconditioners.
    • Sparse matrix formats: CSR, Ellpack, Sliced Ellpack, tridiagonal, multidiagonal
    • Krylov solvers: CG, BiCGstab, GMRES, CWYGMRES, TFQMR
    • Preconditioners: Jacobi, ILU(0) (CPU only), ILUT (CPU only)
  • Meshes. TNL provides data structures for the representation of structured or unstructured numerical meshes.
  • Solvers for differential equations. TNL provides a framework for the development of ODE or PDE solvers.
  • Image processing. TNL provides structures for the representation of image data. Imports and exports from several file formats such as DICOM, PNG, and JPEG are provided using external libraries (see below).
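
The following sketch shows how the device template parameter selects the memory space and execution model of a container. It is a minimal, indicative example: the headers, class names, and methods follow the TNL documentation, but exact signatures may differ between versions.

#include <iostream>
#include <TNL/Containers/Vector.h>
#include <TNL/Devices/Host.h>

using namespace TNL;

int main()
{
   // Vectors allocated in host memory; substituting Devices::Cuda for
   // Devices::Host would place the data in GPU memory instead.
   Containers::Vector< double, Devices::Host > x( 5 ), y( 5 ), z( 5 );
   x.setValue( 1.0 );   // fill with a constant
   y.setValue( 2.0 );
   // Vector expressions are evaluated on the vectors' device.
   z = 2.0 * x + y;
   std::cout << z << std::endl;
   return 0;
}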

See also Comparison with other libraries.

TNL also provides several optional components:

  • TNL header files in the src/TNL directory.
  • Various pre-processing and post-processing tools in the src/Tools directory.
  • Python bindings and scripts in the src/Python directory.
  • Examples of various numerical solvers in the src/Examples directory.
  • Benchmarks in the src/Benchmarks directory.

These components can be individually enabled or disabled and installed by a convenient install script. See the Installation section for details.

Installation

You can either download the stable version or directly clone the git repository via HTTPS:

git clone https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev.git

or via SSH:

git clone gitlab@mmg-gitlab.fjfi.cvut.cz:tnl/tnl-dev.git

Since TNL is a header-only library, no installation is necessary to actually use the library in your project. See the Usage section for details.

You may also want to install some optional dependencies and/or compile and install various tools and examples. See the following section for details.

Dependencies

In order to use TNL, you need to install a compatible compiler, a parallel computing platform, and (optionally) some libraries.

  • Supported compilers: You need a compiler which supports the C++14 standard, e.g. GCC 5.0 or later, or Clang 3.4 or later.
  • Parallel computing platforms: TNL can be used with one or more of the following platforms:
    • OpenMP – for computations on shared-memory multiprocessor platforms.
    • CUDA 9.0 or later – for computations on Nvidia GPUs.
    • MPI – TNL can use a library implementing the MPI-3 standard for distributed computing (e.g. Open MPI). For distributed CUDA computations, the MPI library must be CUDA-aware.
  • Libraries: Various libraries are needed to enable optional features or to enhance the functionality of some TNL components. Make sure that all relevant packages are installed and pass the appropriate flags when compiling and linking your project (an example command follows this list).

    | Library  | Affected components                | Compiler flags             | Notes |
    |----------|------------------------------------|----------------------------|-------|
    | zlib     | XML-based mesh readers and writers | -DHAVE_ZLIB -lz            |       |
    | TinyXML2 | XML-based mesh readers             | -DHAVE_TINYXML2 -ltinyxml2 | If TinyXML2 is not found as a system library, the install script will download, compile and install it along with TNL. |
    | Metis    | tnl-decompose-mesh                 |                            | Used only for the compilation of the tnl-decompose-mesh tool. |
    | libpng   | Image processing classes           | -DHAVE_PNG_H -lpng         |       |
    | libjpeg  | Image processing classes           | -DHAVE_JPEG_H -ljpeg       |       |
    | DCMTK    | Image processing classes           | -DHAVE_DCMTK_H -ldcm...    |       |
  • Other language toolchains/interpreters:
    • Python – an interpreter is needed to run the Python scripts shipped with TNL, and the corresponding development package (whose name depends on your operating system) is needed to build the Python bindings.
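
For example, a project using the XML-based mesh readers with zlib and TinyXML2 support could be compiled with a command along these lines (the file names are illustrative; include paths are explained in the Usage section below):

g++ -std=c++14 -O3 -DNDEBUG -DHAVE_ZLIB -DHAVE_TINYXML2 -I ~/.local/include -o example example.cpp -lz -ltinyxml2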

Optional components

TNL provides several optional components such as pre-processing and post-processing tools which can be compiled and installed by executing the install script:

./install

CMake 3.13 or later is required for the compilation.

The script compiles and/or installs all optional components into the ~/.local/ directory, and compiles and executes all unit tests from the src/UnitTests directory.

Individual components can be disabled and the installation prefix can be changed by passing command-line arguments to the install script. Run ./install --help for details.

Usage

TNL can be used with various build systems if you configure the compiler flags as explained below. See also an example project providing a simple Makefile.

C++ compiler flags

  • Enable the C++14 standard: -std=c++14
  • Configure the include path: -I /path/to/include
    • If you installed TNL with the install script, the include path is <prefix>/include, where <prefix> is the installation path (it is ~/.local by default).
    • If you want to include from the git repository directly, you need to specify two include paths: <git_repo>/src and <git_repo>/src/3rdparty, where <git_repo> is the path where you have cloned the TNL git repository.
    • Instead of using the -I flag, you can set the CPATH environment variable to a colon-delimited list of include paths. Note that this may affect the build systems of other projects as well. For example:
      export CPATH="$HOME/.local/include:$CPATH"
      
  • Enable optimizations: -O3 -DNDEBUG (you can also add -march=native -mtune=native to enable CPU-specific optimizations).
  • Of course, there are many other useful compiler flags. See, for example, our CMakeLists.txt file for the flags that we use when developing TNL (e.g. flags for suppressing some unhelpful compiler warnings).
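
Putting the flags above together, compiling a program against a TNL installation in ~/.local might look like this (the file names are illustrative):

g++ -std=c++14 -O3 -DNDEBUG -march=native -mtune=native -I ~/.local/include -o example example.cpp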

Compiler flags for parallel computing

To enable parallel computing platforms in TNL, additional compiler flags are needed. They can be enabled by defining a corresponding C preprocessor macro of the form HAVE_<PLATFORM>:

  • -D HAVE_OPENMP enables OpenMP (also -fopenmp is usually needed to enable OpenMP support in the compiler)
  • -D HAVE_CUDA enables CUDA (the compiler must actually support CUDA, use e.g. nvcc or clang++)
    • For nvcc, the following experimental flags are also required: --expt-relaxed-constexpr --expt-extended-lambda
  • -D HAVE_MPI enables MPI (use a compiler wrapper such as mpicxx or link manually against the MPI libraries)
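
For illustration, the corresponding compiler invocations might look as follows (file names are illustrative; the flags are those listed above):

g++ -std=c++14 -O3 -fopenmp -D HAVE_OPENMP -I ~/.local/include -o example example.cpp
nvcc -std=c++14 -O3 --expt-relaxed-constexpr --expt-extended-lambda -D HAVE_CUDA -I ~/.local/include -o example example.cu
mpicxx -std=c++14 -O3 -D HAVE_MPI -I ~/.local/include -o example example.cpp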

Environment variables

If you installed some TNL tools or examples using the install script, we recommend configuring several environment variables for convenience. Assuming the default installation path ~/.local/:

  • export PATH=$PATH:$HOME/.local/bin
  • If TinyXML2 was installed by the install script and not as a system package, also export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.local/lib

These commands can be added to the initialization scripts of your favourite shell, e.g. .bash_profile.