tnl-dev issues
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues

----

Issue #3: Delete TypeInfo.h (Jakub Klinkovský, 2018-09-19)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/3

The [TypeInfo.h](https://jlk.fjfi.cvut.cz/gitlab/mmg/tnl-dev/blob/develop/src/TNL/TypeInfo.h) file is completely useless; use [std::numeric_limits](http://en.cppreference.com/w/cpp/types/numeric_limits) instead.

Assignee: Tomáš Oberhuber

----

Issue #5: CuBLAS and Cusparse (Vít Hanousek, 2018-09-04)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/5

If the detection of cuBLAS does not find the library, compilation fails.
The detection of cuBLAS and cuSPARSE probably depends on /usr/local/cuda, which is extremely wrong.

----

Issue #6: Binary VTK output (Jakub Klinkovský, 2020-07-30)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/6

Found this in deal.II: https://github.com/dealii/dealii/blob/1cf12e90abedcf5272eb4c4d916a059c36c7e0c7/source/base/data_out_base.cc#L1269-L1279
```cpp
if (flags.data_binary)
  {
    stream.write(reinterpret_cast<const char *>(values.data()),
                 values.size() * sizeof(data));
  }
else
  {
    for (unsigned int i = 0; i < values.size(); ++i)
      stream << '\t' << values[i];
    stream << '\n';
  }
```

Assignee: Jakub Klinkovský

----

Issue #7: Fixed Bug in DistributedGridSynchronizer (Vít Hanousek, 2018-12-24)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/7

After Tomáš's refactoring in commit
https://jlk.fjfi.cvut.cz/gitlab/mmg/tnl-dev/commit/b373967c9f8d108cd12b61a39d502ecbad8403c1#4e57955fa29093af7da385587705802a7e1c7f81
the test GPUDistributedGridIOTest cannot be compiled (nvcc).
It was fixed by a partial revert of the refactoring in commit:
https://jlk.fjfi.cvut.cz/gitlab/mmg/tnl-dev/commit/68d3e703452944a5d08d1cb0dc8336eb02e90722
The task is to find out why nvcc has a problem with Tomáš's code in this test only.

----

Issue #9: double free in TNL::Config::ParameterContainer::~ParameterContainer (Jakub Klinkovský, 2019-04-06)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/9

The implementation of the [ParameterContainer](https://jlk.fjfi.cvut.cz/gitlab/mmg/tnl-dev/blob/develop/src/TNL/Config/ParameterContainer.h#L43-104) class is very poor: it has an implicit `operator=` and a data member involving raw pointers (`Containers::List< tnlParameterBase* >`), which may lead to a double free.
Test case:
```cpp
#include <TNL/Devices/Host.h>
#include <TNL/Config/ConfigDescription.h>
#include <TNL/Config/ParameterContainer.h>

using namespace TNL;

int
main( int argc, char* argv[] )
{
   Config::ParameterContainer parameters;
   Config::ConfigDescription conf_desc;

   Devices::Host::configSetup( conf_desc );
   conf_desc.addEntry< String >( "Dummy", "Dummy parameter.", "" );

   if( ! parseCommandLine( argc, argv, conf_desc, parameters ) ) {
      conf_desc.printUsage( argv[ 0 ] );
      return EXIT_FAILURE;
   }

   // create a copy of the parameter container
   Config::ParameterContainer parametersCopy( parameters );
   return EXIT_SUCCESS;
}
```
Compile and run:
```
$ clang++ -std=c++11 -O0 -g -I ~/.local/include -L ~/.local/lib -ltnl test_parameter_container.cpp -o test_parameter_container
$ ./test_parameter_container
double free or corruption (!prev)
Aborted (core dumped)
```
Backtrace:
```
(gdb) where
#0 0x00007f58ffc61d7f in raise () from /usr/lib/libc.so.6
#1 0x00007f58ffc4c672 in abort () from /usr/lib/libc.so.6
#2 0x00007f58ffca4878 in __libc_message () from /usr/lib/libc.so.6
#3 0x00007f58ffcab18a in malloc_printerr () from /usr/lib/libc.so.6
#4 0x00007f58ffcacc5c in _int_free () from /usr/lib/libc.so.6
#5 0x00007f5900165862 in TNL::Config::ParameterContainer::~ParameterContainer() ()
from /home/lahwaacz/.local/lib/libtnl.so.0.1
#6 0x00000000004025db in main (argc=1, argv=0x7fffff0ab188) at test_parameter_container.cpp:26
```

Assignee: Tomáš Oberhuber

----

Issue #10: MPI todo list (Jakub Klinkovský, 2020-03-28)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/10

General:
- [x] runtime detection of CUDA-aware MPI
- [x] detection of MPI in cmake using FindMPI (and make it work nice with CUDA...)
- [ ] check that MPI implementation is thread-safe: https://stackoverflow.com/a/11074516
- ~~build config tags for communicators?~~ (no, communicators will be eventually polymorphic types and their instances will be passed to the data structures)
JK:
- see how `DistributedMesh` is used - correctly it should have a `Communicator` template parameter, but it would propagate to `Grid` (due to its `distrGrid` pointer) and specializations involving grids
- `PDEProblem.h` has `CommunicatorType` and `DistributedMeshType` (they should be combined)

Assignee: Jakub Klinkovský

----

Issue #11: Refactor VectorFieldVTKWriter and VectorFieldGnuplotWriter (Jakub Klinkovský, 2019-11-08)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/11

The [VectorFieldVTKWriter](https://jlk.fjfi.cvut.cz/gitlab/mmg/tnl-dev/blob/develop/src/TNL/Functions/VectorFieldVTKWriter.h) and [VectorFieldGnuplotWriter](https://jlk.fjfi.cvut.cz/gitlab/mmg/tnl-dev/blob/develop/src/TNL/Functions/VectorFieldGnuplotWriter.h) classes should be refactored analogously to [MeshFunctionVTKWriter](https://jlk.fjfi.cvut.cz/gitlab/mmg/tnl-dev/blob/develop/src/TNL/Functions/MeshFunctionVTKWriter.h) and [MeshFunctionGnuplotWriter](https://jlk.fjfi.cvut.cz/gitlab/mmg/tnl-dev/blob/develop/src/TNL/Functions/MeshFunctionGnuplotWriter.h).

Assignee: Jakub Klinkovský

----

Issue #12: tnl-grid-setup does not check command-line arguments correctly (Jakub Klinkovský, 2020-03-02)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/12

Running `tnl-grid-setup --dimensions 2` aborts:
```
Setting real type to ... double
Setting index type to ... int
The program attempts to get unknown parameter size-x
Aborting the program.
Aborted (core dumped)
```

Assignee: Tomáš Oberhuber

----

Issue #13: Move folder DistributedContainers to Containers (Tomáš Oberhuber, 2018-12-26)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/13

Assignee: Jakub Klinkovský

----

Issue #14: Rename Timer::stop() to Timer::pause() (Tomáš Oberhuber, 2019-03-04)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/14

Changing the method name from stop() to pause() might be more intuitive.

----

Issue #15: Fix neighbors directions in distributed grid (Tomáš Oberhuber, 2019-01-25)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/15

Change the directions for referring to neighbors in the distributed grid as follows:
- x -> Left, Right
- y -> Bottom, Top
- z -> Rear, Front

Assignee: Vít Hanousek

----

Issue #18: Templated constructor of String should be deleted (Jakub Klinkovský, 2018-12-20)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/18

The following discussions from !10 should be addressed:
- [x] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/merge_requests/10#note_124): (+1 comment)
- [x] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/merge_requests/10#note_124): (+1 comment)
> This should be a free function, not a constructor (note that there is already `convertToString` which is overloaded for `bool`). At the very least the constructor should be explicit.
- [x] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/merge_requests/10#note_125): (+1 comment)
> The recursion should be avoided by deleting the templated constructor and using the free function `convertToString` explicitly instead. It can be renamed to `toString` or something like that to make it shorter, cf. [std::to_string](https://en.cppreference.com/w/cpp/string/basic_string/to_string).

Assignee: Jakub Klinkovský

----

Issue #19: Installation problem with RPATH (Matouš Fencl, 2018-12-13)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/19

An RPATH error occurred during installation of TNL after a software update:
```
...
-- Installing: /home/user/.local/bin/tnl-cuda-arch
CMake Error at src/Tools/cmake_install.cmake:50 (file):
file RPATH_CHANGE could not write new RPATH:
to the file:
/home/user/.local/bin/tnl-cuda-arch
The current RUNPATH is:
/home/user/.openmpi/lib
which does not contain:
/home/user/Documents/tnl-dev-develop/Release/lib:
as was expected.
Call Stack (most recent call first):
src/cmake_install.cmake:45 (include)
cmake_install.cmake:48 (include)
Makefile:73: recipe for target 'install' failed
make: *** [install] Error 1
```
Software after update:
Ubuntu 18.04
gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
cmake version 3.13.0-rc3
NVIDIA-SMI 415.13 Driver Version: 415.13 CUDA Version: 10.0
mpirun (Open MPI) 3.1.3

Assignee: Jakub Klinkovský

----

Issue #20: Using RealType (int, long, float, double, ...) when dividing (Lukáš Matthew Čejka, 2018-12-13)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/20

In [ChunkedEllpack_impl.h](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/blob/develop/src/TNL/Matrices/ChunkedEllpack_impl.h) on lines 177-186:
```cpp
/****
 * Compute the chunk size
 */
IndexType maxChunkInSlice( 0 );
for( IndexType i = sliceBegin; i < sliceEnd; i++ )
   maxChunkInSlice = max( maxChunkInSlice,
                          ceil( ( RealType ) rowLengths[ i ] /
                                ( RealType ) this->rowToChunkMapping[ i ] ) );
TNL_ASSERT( maxChunkInSlice > 0,
            std::cerr << " maxChunkInSlice = " << maxChunkInSlice << std::endl );
```
Casting to RealType is used inside the ceil function and then the variables are divided.
If RealType is int, this causes **integer division**, which can return 0 even though ceil() returns a double.
Should the variables be cast to double instead to avoid this?

----

Issue #21: tnl-benchmark-blas produces wrong log (Tomáš Oberhuber, 2018-12-14)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/21

The log file generated by tnl-benchmark-blas has a wrong format in the case of SpMV tests.

----

Issue #22: Update String documentation (Tomáš Oberhuber, 2019-03-04)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/22

I have added the flag skipEmpty to String::split; it needs to be mentioned in the documentation.

Assignee: Nina Džugasová

----

Issue #23: Probably useless parameter in benchmarkArrayOperations (Tomáš Oberhuber, 2018-12-20)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/23

The parameter `loops` in Benchmarks/array-operations.h:26
```cpp
benchmarkArrayOperations( Benchmark & benchmark,
                          const int & loops,
                          const long & size )
```
seems to be useless.

Assignee: Jakub Klinkovský

----

Issue #24: Add scalar multiplicator parameter to vectorProduct in other sparse matrix formats (Jakub Klinkovský, 2020-03-01)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/24

In https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/commit/30b25f639627af6c9b5055a901a5fc599e2b46b2, the `multiplicator` parameter was added to the `vectorProduct` method in the Ellpack format to be able to compute `outVector = multiplicator * matrix * inVector` in one step. The parameter should be added to the other sparse matrix formats as well.

Assignee: Lukáš Matthew Čejka

----

Issue #25: TYPED_TEST_CASE deprecated according to Google Test (Lukáš Matthew Čejka, 2019-03-28)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/25

Upon compilation of [Unit Tests](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/tree/develop/src/UnitTests/Matrices) that use TYPED_TEST_CASE, Google Test emits a series of warnings stating that TYPED_TEST_CASE is deprecated and should be replaced with TYPED_TEST_SUITE.
Should TYPED_TEST_CASE be replaced with TYPED_TEST_SUITE?

Assignee: Lukáš Matthew Čejka

----

Issue #26: Code revision (Tomáš Oberhuber, 2021-12-08)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/26

TODO list for the code revision:
* renaming _impl.h files to .hpp
* change boolean return type to exception throwing
- [x] fix exception catching by-reference: #29
- [x] use `ASSERT_NO_THROW( map.save( "multimap-test.tnl" ) );` instead of `ASSERT_TRUE( map.save( "multimap-test.tnl" ) );` etc. in tests (see https://github.com/google/googletest/blob/master/googletest/docs/advanced.md#exception-assertions)
* change boolean flag style parameters into enum class
* use operators << and >> instead of File.read and File.write where it makes sense
* [x] add exception `NotImplementedError` and use it instead of code like this:
```cpp
std::cerr << "Type conversion during saving is not implemented for MIC." << std::endl;
abort();
```
Or this:
```cpp
TNL_ASSERT( false, std::cerr << "TODO: implement" );
```
* [x] Documentation: indicate which functions/methods throw which exceptions
* Remove useless types from the public interface of classes:
- [x] `ThisType` - useless for the outside of the class, and the implementation of methods can simply use the class name
* [x] switch to C++14
----
- [x] String
- [x] Timer
- [x] Object
- [x] change bool save and load to void
- [x] File
- [x] Update examples, add data conversion.
- [x] use TransferBufferSize from Devices::Cuda
- [x] Remove `File::Mode` enum, use [std::ios_base::openmode](https://en.cppreference.com/w/cpp/io/ios_base/openmode) directly
- [x] Remove specific exceptions (`ArrayWrongSize`, `MeshFunctionDataMismatch`, `NotTNLFile`, `ObjectTypeDetectionFailure`, `ObjectTypeMismatch`) - all `save`/`load` methods should throw only `FileSerializationError`/`FileDeserializationError`
- [x] FileName
- [x] Array
- [x] Add method for setting array elements using lambda function
- [x] Copy constructor has to make a deep copy - DOES NOT WORK with MultiMap yet
- [x] Add copy constructors from `std::list`, `std::vector` and `std::initializer_list`
- [x] Replace bind methods with ArrayView - after refactoring MeshFunction
- [x] Avoid binding in `Array( const Array&, const Index begin, const Index size );`
- [x] Remove `boundLoad` (used only in `Array` and derived objects, loading via `ArrayView` can be used instead: `array.getView().load( file );`)
- [x] Use `operator<<` and `operator>>` instead of `save` and `load` methods
- [x] Delete `operator bool ()`
- [x] ArrayView
- [x] what about `__cuda_callable__` ArrayView assignment?
- [x] Use `operator<<` and `operator>>` instead of `save` and `load` methods
- [x] StaticVector
- [x] Replace all for loops with static loops, i.e. templated for
- [x] u * v should not be the dot product but element-wise multiplication
- [x] use (u,v) for dot product
- [x] Vector
- [x] `Vector` should have the same serialization type as `Array` so that arrays can be loaded into vectors and vice versa
- [x] Delete methods for vector operations which are used in linear solvers - the replacement in the solvers must be tested carefully
- [x] Implement DistributedVectorExpressions and DistributedVectorViewExpressions
- [x] In some places the method getView() is used because DistributedVectorExpressions are not implemented yet (mainly linear solvers and the BLAS benchmark) - all occurrences can be deleted later
- [x] Add constructor from expression template
- [x] Remove MultiVector and MultiArray - after refactoring Cameo - summer 2019
- [x] Parallel reduction
- [x] Use auto in lambdas to avoid volatileReduction - it does not work with CUDA 10.0
- [x] Rewrite multi-reduction with lambdas
- [x] ConfigDescription
- [x] ParameterContainer
- allow any types (currently only `int`, `double`, `bool`, `String`)
- allow easy instantiation without the command-line parser (e.g. to pass a dict from python)
- exceptions when accessing unknown parameters
- [x] Logger
- moved to https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/98
- [x] Pointers
- moved to https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/97
- [x] Matrices
- [x] fix getSerializationType the same way as in Array
- [x] rename the method setCompressedRowLengths to setRowCapacities
- [x] implement sparse matrices using Segments
- [x] implement CSR matrix and Ellpack matrix using the Segments, use existing unit tests for debugging
- [ ] implement set/addRow using VectorView and lambda functions
- [x] implement "Lambda matrix" - the element values are given by a lambda function
- [x] implement constructor from initializer list to DenseMatrix
- [x] implement method for setting elements from initializer list to sparse matrix
- [x] implement constructor and method for setting elements from std::map similar to initializer list to sparse matrix
- [ ] update dense matrix multiplication with the new dense matrix implementation
- [ ] update dense matrix transposition with the new dense matrix implementation
- [x] move methods for matrix coloring outside the matrices
- [ ] finish DenseMatrixView::getRowVectorProduct
- [x] implement operator == for matrices
- [x] Add constructor of matrix views from vector views
- [x] Allocators
- moved to !33

----

Issue #27: Fix getType for all matrix formats to print in a consistent form (Lukáš Matthew Čejka, 2019-03-05)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/27

`getType()` is **inconsistent** across different matrix formats.
For example:
* **CSR** `getType()` gives: `Matrices::CSR< double, Devices::Host >`
* **Ellpack** `getType()` gives: `Matrices::Ellpack< double, Devices::Host, int >`
* **Sliced Ellpack** `getType()` gives: `Matrices::SlicedEllpack< double, Devices::Host >`
Is the correct form
`Matrices::FORMAT_NAME< RealType, Devices::DEVICE_TYPE, IndexType >`?

Assignee: Lukáš Matthew Čejka

----

Issue #28: Saving data from MeshFunction CPU MPI doesn't work for bigger dimension (Matouš Fencl, 2019-09-23)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/28

A 3D MeshFunctionPointer on the CPU with MpiIO does not save the right data for a big mesh (256^3).
Everything works on the CPU for meshes 16^3, 32^3, ..., 128^3: the calculation is fine and the saved data match what the program (hamilton-jacobi branch) calculates. Only the biggest mesh causes trouble. The values of the 256^3 calculation look fine in the console, but they are not saved to the file. MPI divides the original mesh in the "z" direction by default and I use 4 processes for the calculation (2 cores on a laptop).
Both the calculation and the saving of values work on the GPU, with and without MPI!

Assignee: Tomáš Oberhuber

----

Issue #29: Exceptions should be caught by reference (Jakub Klinkovský, 2019-04-19)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/29

All exceptions should be caught by reference, not by value. Bug in https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/commit/5fb2f09ff3baf7ea433a586e779a5abd731a9320:
```
In file included from ../src/Tools/tnl-diff.cpp:11:
../src/Tools/tnl-diff.h: In instantiation of ‘bool processFiles(const TNL::Config::ParameterContainer&) [with Mesh = TNL::Meshes::Grid<1, float, TNL::Devices::{anonymous}::Host, int>]’:
../src/Tools/tnl-diff.h:653:75: required from ‘bool resolveGridIndexType(const std::vector<TNL::String>&, const TNL::Config::ParameterContainer&) [with int Dim = 1; Real = float]’
../src/Tools/tnl-diff.h:665:48: required from ‘bool resolveGridRealType(const std::vector<TNL::String>&, const TNL::Config::ParameterContainer&) [with int Dim = 1]’
../src/Tools/tnl-diff.cpp:70:69: required from here
../src/Tools/tnl-diff.h:630:4: warning: catching polymorphic type ‘class std::ios_base::failure’ by value [-Wcatch-value=]
catch( std::ios_base::failure exception )
^~~~~
../src/Tools/tnl-diff.h: In instantiation of ‘bool processFiles(const TNL::Config::ParameterContainer&) [with Mesh = TNL::Meshes::Grid<1, float, TNL::Devices::{anonymous}::Host, long int>]’:
../src/Tools/tnl-diff.h:655:80: required from ‘bool resolveGridIndexType(const std::vector<TNL::String>&, const TNL::Config::ParameterContainer&) [with int Dim = 1; Real = float]’
../src/Tools/tnl-diff.h:665:48: required from ‘bool resolveGridRealType(const std::vector<TNL::String>&, const TNL::Config::ParameterContainer&) [with int Dim = 1]’
../src/Tools/tnl-diff.cpp:70:69: required from here
../src/Tools/tnl-diff.h:630:4: warning: catching polymorphic type ‘class std::ios_base::failure’ by value [-Wcatch-value=]
../src/Tools/tnl-diff.h: In instantiation of ‘bool processFiles(const TNL::Config::ParameterContainer&) [with Mesh = TNL::Meshes::Grid<1, double, TNL::Devices::{anonymous}::Host, int>]’:
../src/Tools/tnl-diff.h:653:75: required from ‘bool resolveGridIndexType(const std::vector<TNL::String>&, const TNL::Config::ParameterContainer&) [with int Dim = 1; Real = double]’
../src/Tools/tnl-diff.h:667:49: required from ‘bool resolveGridRealType(const std::vector<TNL::String>&, const TNL::Config::ParameterContainer&) [with int Dim = 1]’
../src/Tools/tnl-diff.cpp:70:69: required from here
```
Other places should be checked and revised as well.

Assignee: Tomáš Oberhuber

----

Issue #30: CUDA version installation (Matouš Fencl, 2019-03-29)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/30

TNL installation failure.
CUDA: 10.1
NVIDIA driver: 418.43
cmake: 3.14.0
gcc: 7.3.0
error:
```
/home/maty/Documents/tnl/src/TNL/Atomic.h: In instantiation of ‘T TNL::Atomic<T, TNL::Devices::Cuda>::load() const [with T = int]’:
/home/maty/Documents/tnl/src/TNL/Atomic.h:160:12: required from ‘TNL::Atomic<T, TNL::Devices::Cuda>::operator T() const [with T = int]’
/home/maty/Documents/tnl/src/TNL/Matrices/DistributedSpMV.h:117:25: required from ‘void TNL::Matrices::DistributedSpMV<Matrix, Communicator>::updateCommunicationPattern(const MatrixType&, TNL::Matrices$
/tmp/tmpxft_000024e2_00000000-5_tnl-benchmark-distributed-spmv.cudafe1.stub.c:34:531: required from here
/home/maty/Documents/tnl/src/TNL/Atomic.h:154:17: error: passing ‘const TNL::Atomic<int, TNL::Devices::Cuda>’ as ‘this’ argument discards qualifiers [-fpermissive]
return ((Atomic*)this)->fetch_add( 0 );
~~~~~~~~~^~~
/home/maty/Documents/tnl/src/TNL/Atomic.h:202:3: note: in call to ‘T TNL::Atomic<T, TNL::Devices::Cuda>::fetch_add(T) [with T = int]’
T fetch_add( T arg )
^~~~~~~~~
CMake Error at tnl-benchmark-distributed-spmv-cuda_generated_tnl-benchmark-distributed-spmv.cu.o.Debug.cmake:279 (message):
Error generating file
/home/maty/Documents/tnl/Debug/src/Benchmarks/DistSpMV/CMakeFiles/tnl-benchmark-distributed-spmv-cuda.dir//./tnl-benchmark-distributed-spmv-cuda_generated_tnl-benchmark-distributed-spmv.cu.o
src/Benchmarks/DistSpMV/CMakeFiles/tnl-benchmark-distributed-spmv-cuda.dir/build.make:543: recipe for target 'src/Benchmarks/DistSpMV/CMakeFiles/tnl-benchmark-distributed-spmv-cuda.dir/tnl-benchmark-dist$
make[2]: *** [src/Benchmarks/DistSpMV/CMakeFiles/tnl-benchmark-distributed-spmv-cuda.dir/tnl-benchmark-distributed-spmv-cuda_generated_tnl-benchmark-distributed-spmv.cu.o] Error 1
CMakeFiles/Makefile2:2676: recipe for target 'src/Benchmarks/DistSpMV/CMakeFiles/tnl-benchmark-distributed-spmv-cuda.dir/all' failed
make[1]: *** [src/Benchmarks/DistSpMV/CMakeFiles/tnl-benchmark-distributed-spmv-cuda.dir/all] Error 2
```
Originally, Atomic.h:153 produced the same error with the code
`return const_cast<Atomic*>(this)->fetch_add( 0 );`

Assignee: Tomáš Oberhuber

----

Issue #32: CUDA reduction uses the same buffer as input and output array (Tomáš Oberhuber, 2019-04-15)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/32

In `TNL/Containers/Algorithms/Reduction_impl.h:139`, when calling `CudaReductionKernelLauncher`, we use the same buffer `deviceAux1` as both the input and output buffer for the reduction. If the CUDA block with index 0 is not the first one to finish its work, its data can be overwritten by other CUDA blocks. We want to avoid allocating two buffers. A solution might be to increase the size of `deviceAux1` and split it into two buffers. We need to check performance when fixing this!

Assignee: Tomáš Oberhuber

----

Issue #34: Fix comparison operator of ArrayView/VectorView in gtest (Tomáš Oberhuber, 2019-07-16)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/34

Gtest does not accept (cannot compile) `operator==` for `ArrayView` or `VectorView`. `EXPECT_EQ( u, v )` must be replaced with `EXPECT_TRUE( u == v )`. See for example `ArrayViewTest.h`, the `assignmentOperator` test.

Assignee: Tomáš Oberhuber

----

Issue #35: Overriding of RealType in vertical expressions (Tomáš Oberhuber, 2019-08-10)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/35

The original implementation of vector operations allowed one to do this:
```cpp
using VectorType = Containers::Vector< bool, Devices::Host >;
VectorType v( 100 );
v.setValue( true );
auto a = v.sum< int >();
```
By default, the summation would be performed in bool, which would not give the correct result; we could, however, simply change it to `int`. In the new implementation, we write
```cpp
a = sum( v );
```
instead. I would like to be able to write `a = sum< int >( v )`, but I did not find any way to do it.
A solution might be `a = sum( ( int ) v )`.

----

Issue #36: addEntryEnum does not work for list entry (Tomáš Oberhuber, 2019-07-14)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/36

ConfigDescription crashes when one tries to define entry enum values using `addEntryEnum` for a list entry, for example like this:
```
configDescription.addList< String >( "string-list" );
configDescription.addEntryEnum< String >( "entry" );
```
The problem seems to be at ConfigDescription.h:139, where the entry is cast to EntryType, which is std::vector< String > rather than String.

Assignee: Tomáš Oberhuber

----

Issue #37: Add method ParameterContainer::getList (Tomáš Oberhuber, 2019-07-14)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/37

Fetching a list of parameters from the ParameterContainer currently works as follows:
```
const auto& list = parameters.getParameter< std::vector< String > >( "list" );
```
It should be replaced with
```
const auto& list = parameters.getList< String >( "list" );
```Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/39Add variadic template parameter to Reduction for user defined arguments2019-08-11T18:45:06ZTomáš OberhuberAdd variadic template parameter to Reduction for user defined argumentsAdd a variadic template parameter to Reduction for user-defined arguments. The arguments would be passed to the fetcher.Add a variadic template parameter to Reduction for user-defined arguments. The arguments would be passed to the fetcher.Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/40Optimize scalar product (reduction) on CPU2019-10-12T10:24:07ZTomáš OberhuberOptimize scalar product (reduction) on CPUBenchmarks show that our implementation of the scalar product on CPU is very slow.
```
scalar product 400000 CPU 5.4616 0.00109134 N/A
scalar product 400000 CPU ET 4...Benchmarks show that our implementation of scalar product on CPU is very slow.
```
scalar product 400000 CPU 5.4616 0.00109134 N/A
scalar product 400000 CPU ET 4.96865 0.00119962 0.909742
scalar product 400000 CPU BLAS 17.7799 0.000335237 3.25543
```
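One ingredient of fast BLAS dot products is breaking the single sequential dependency chain of `sum += x[i] * y[i]` into several independent accumulators (besides explicit SIMD, which is omitted here). A minimal sketch of the idea follows; this is not TNL code and `dotUnrolled` is a hypothetical name:

```cpp
#include <cstddef>
#include <vector>

// Dot product with four independent partial sums: the accumulators do not
// depend on each other, so the CPU can overlap the additions and the
// compiler can vectorize the loop more easily.
double dotUnrolled( const std::vector< double >& x, const std::vector< double >& y )
{
   double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
   const std::size_t n = x.size();
   std::size_t i = 0;
   for( ; i + 4 <= n; i += 4 ) {
      s0 += x[ i     ] * y[ i     ];
      s1 += x[ i + 1 ] * y[ i + 1 ];
      s2 += x[ i + 2 ] * y[ i + 2 ];
      s3 += x[ i + 3 ] * y[ i + 3 ];
   }
   for( ; i < n; i++ )   // remainder elements
      s0 += x[ i ] * y[ i ];
   return ( s0 + s1 ) + ( s2 + s3 );
}
```

Note that multiple accumulators change the order of the floating-point additions, so the result may differ from the sequential sum in the last bits; optimized BLAS implementations accept this trade-off.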
Since the ET (expression templates) and non-ET versions behave almost the same, it seems that even the original implementation, before switching to ET, was not optimal. We should check the implementation of the scalar product in BLAS and improve ours.https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/41Implement argMin and argMax for DistributedVector and DistributedVectorView2019-08-10T16:30:18ZJakub KlinkovskýImplement argMin and argMax for DistributedVector and DistributedVectorView`argMin` and `argMax` are the only operations which are not yet implemented for `DistributedVector` and `DistributedVectorView`.
The relevant tests are disabled, see [VectorUnaryOperationsTest.h:518](https://mmg-gitlab.fjfi.cvut.cz/gitl...`argMin` and `argMax` are the only operations which are not yet implemented for `DistributedVector` and `DistributedVectorView`.
The relevant tests are disabled, see [VectorUnaryOperationsTest.h:518](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/commit/f9f3959e3b406f714f98d22e8cc27c924ab26e14#a43e13a5e5f5b4ee5190413c9fa76ccb0907496e_401_518).Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/42Implement parallel prefix-sum with OpenMP2019-08-17T18:35:46ZJakub KlinkovskýImplement parallel prefix-sum with OpenMPThe specialization of `PrefixSum` for host is only sequential; any parallelization is missing.The specialization of `PrefixSum` for host is only sequential; any parallelization is missing.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/43Implement distributed prefix-sum with MPI2019-08-17T18:35:47ZJakub KlinkovskýImplement distributed prefix-sum with MPIAny implementation for `DistributedVector` and `DistributedVectorView` is missing.Any implementation for `DistributedVector` and `DistributedVectorView` is missing.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/46Remove getType() methods2019-11-08T14:49:39ZJakub KlinkovskýRemove getType() methodsThey are not necessary, because `std::type_info::name` either returns a human-readable string (MSVC, IBM, Oracle) or can be demangled (GCC, Clang). See https://en.cppreference.com/w/cpp/types/type_info/nameThey are not necessary, because `std::type_info::name` either returns a human-readable string (MSVC, IBM, Oracle) or can be demangled (GCC, Clang). See https://en.cppreference.com/w/cpp/types/type_info/nameJakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/47Replace StaticArray::setValue with StaticArray::operator=2019-09-01T19:26:50ZTomáš OberhuberReplace StaticArray::setValue with StaticArray::operator=`StaticArray::setValue` should be replaced with `operator=`, as it is in `Array`.
I have created a static array assignment class in `Containers/Algorithms/StaticArrayAssignment.h`, however, `IsStaticArrayType< T >` does not work for...`StaticArray::setValue` should be replaced with `operator=`, as it is in `Array`. I have created a static array assignment class in `Containers/Algorithms/StaticArrayAssignment.h`, however, `IsStaticArrayType< T >` does not work for integral types since they do not have `T::getSize` (`TypeTraits.h:146`). This should be fixed first.Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/48Delete setValue in Matrices::Dense::setDimensions.2020-03-03T16:27:01ZTomáš OberhuberDelete setValue in Matrices::Dense::setDimensions.`this->values.setValue(0.0)` should not be there, as it just decreases performance. Check other matrix formats for the same issue.`this->values.setValue(0.0)` should not be there, as it just decreases performance. Check other matrix formats for the same issue.Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/49Rename prefix sum methods in vectors to scan2019-11-08T14:49:39ZJakub KlinkovskýRename prefix sum methods in vectors to scanThe former `PrefixSum` class was renamed to `Scan`, but the methods in vector classes (`Vector`, `VectorView`, `DistributedVector` and `DistributedVectorView`) stayed as `prefixSum` and `segmentedPrefixSum` - see e.g. https://mmg-gitlab....The former `PrefixSum` class was renamed to `Scan`, but the methods in vector classes (`Vector`, `VectorView`, `DistributedVector` and `DistributedVectorView`) stayed as `prefixSum` and `segmentedPrefixSum` - see e.g. https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/blob/develop/src/TNL/Containers/Vector.h#L269
While we're renaming things, `exclusiveScan` method should be added as a shortcut for `vector.template scan< TNL::Containers::Algorithms::ScanType::Exclusive >()`. Both methods should be used in the [tutorial](https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/tutorial_03_reduction.html#flexible_scan).Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/50MpiCommunicator::selectGPU does not work reliably when multiple nodes are sel...2021-06-15T08:12:25ZJakub KlinkovskýMpiCommunicator::selectGPU does not work reliably when multiple nodes are selectedThe computation of "node rank" is problematic - maybe `MPI_Get_processor_name` does not work reliably. Sometimes multiple ranks are assigned the same GPU.The computation of "node rank" is problematic - maybe `MPI_Get_processor_name` does not work reliably. Sometimes multiple ranks are assigned the same GPU.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/52Meshes todo list2020-07-30T12:54:42ZJakub KlinkovskýMeshes todo list- [x] write tests for `Mesh` traverser
- [x] rewrite `Traverser` using `ParallelFor`
- [x] writer for the XML-based VTK format: see #6
- [x] big mesh refactoring
- mesh pointer added to `MeshEntity`, all topology accessors redirected...- [x] write tests for `Mesh` traverser
- [x] rewrite `Traverser` using `ParallelFor`
- [x] writer for the XML-based VTK format: see #6
- [x] big mesh refactoring
- mesh pointer added to `MeshEntity`, all topology accessors redirected through it to the mesh
- storage of points and subentity orientations moved from `MeshEntity` to `Mesh`
- removed `MeshEntityIndex` - `MeshEntity` always stores its index as the `GlobalIndexType`
- removed entity storage from Mesh - mesh entities can be generated on the fly
- some minor related simplifications, mainly in the mesh initializer
- remove `entityStorage` from the mesh configuration
- [x] `TypeResolver`: `resolveMeshType`, `loadMesh`, `decomposeMesh`:
- [x] better detection of the format (currently based on the file name suffix) (solved by the manual override option)
- [x] file detection is done twice: once for `resolveMeshType`, then again for `loadMesh`
- [x] refactor `MeshFunction` - implement `MeshFunctionView`
- [x] implement `tnl-decompose-mesh` (wrapper tool using Metis)
- [x] output `.pvtu` file
- [x] generate overlaps (ghost cells) for use in `DistributedMesh`
- [x] mesh entity tags
- we need to store at least boundary and ghost entity tags - use bitfield array similar to VTK
- if possible, implement a general interface for user tags (could be stored in the same array, but users would have to respect the boundary and ghost bits)
- [x] implement `PVTUReader`
- [x] implement `DistributedMesh`
- wrapper around `Mesh` - includes local and ghost entities, which are differentiated by a tag
- mapping from local to global indices is easy - separate index arrays (or just for ghost entities, if they are ordered after the local entities - then rank offsets are sufficient for local entities)
- the local mesh should be worked with using local indices (`getEntity`, `getSubentityIndex`, `getSuperentityIndex` etc.), because mapping from global indices to local indices is not easy (ghost entities may be discontinuous, mapping by rank offsets is impossible)
- local index mappings (index arrays for efficient iteration):
- `ghosts`
- `local`
- `ghostNeighbours` (entities which are ghosts on other ranks - for synchronization)
- `localInternal` (not ghosts on any ranks)
- `boundary` (boundary of the local mesh (including overlaps))
- `interior` (interior of the local mesh (including overlaps))
- local mesh decompositions: `all = local + ghosts`, `local = ghostNeighbours + localInternal`, `all = boundary + interior`
- [x] write tests
- [x] implement `DistributedMeshSynchronizer`
- [x] write tests
- [x] implement `forAll`, `forBoundary`, `forInterior` methods like in `NDArray`; also `forLocal`, `forGhost` (lambda function takes just the index, mesh access has to be handled manually)Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/54How to compile tutorials.2019-11-28T13:27:39ZTomáš OberhuberHow to compile tutorials.Add information on how to compile the tutorials. For example, `-DHAVE_CUDA` is important.Add information on how to compile the tutorials. For example, `-DHAVE_CUDA` is important.Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/55Implement nD ParallelFor2021-11-23T19:05:01ZTomáš OberhuberImplement nD ParallelForImplement nD `ParallelFor` using `StaticArray< Dim, Index >` as:
```
template< typename Device, typename Mode >
struct ParallelFor{
template< int Dim,
typename Index,
typename Function,
typen...Implement nD `ParallelFor` using `StaticArray< Dim, Index >` as:
```
template< typename Device, typename Mode >
struct ParallelFor{
template< int Dim,
typename Index,
typename Function,
typename... FunctionArgs >
static void exec( const StaticArray< Dim, Index >& start, const StaticArray< Dim, Index >& end, Function F, FunctionArgs... args );
};
```https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/57Bug in VTK export of Grid2020-02-05T09:41:02ZTomáš OberhuberBug in VTK export of GridFor 3D grids, the VTK export does not work. The Z dimension of the exported mesh seems to be wrong.For 3D grids, the VTK export does not work. The Z dimension of the exported mesh seems to be wrong.https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/58macro __STRING2020-03-02T13:28:28ZTomáš Jakubecmacro __STRINGUsage of the non-standard macro `__STRING`. This macro is defined in cdefs.h, which is not part of the standard library.
The definition is really simple:
```c++
#define __STRING(x) #x
```Usage of the non-standard macro `__STRING`. This macro is defined in cdefs.h, which is not part of the standard library.
The definition is really simple:
```c++
#define __STRING(x) #x
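// Note: the stringification operator `#` itself is standard C/C++, so a
// conforming replacement is a one-line macro (hypothetical name TNL_STRINGIFY):
#define TNL_STRINGIFY(x) #x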
```Tomáš JakubecTomáš Jakubechttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/59CMake: skip -march=native -mtune=native for Python bindings2020-01-17T20:36:09ZJakub KlinkovskýCMake: skip -march=native -mtune=native for Python bindingsThis is needed on heterogeneous clusters (e.g. Helios) where several nodes have different CPU architectures.
See https://stackoverflow.com/questions/59733496/cmake-exclude-compile-options-for-one-targetThis is needed on heterogeneous clusters (e.g. Helios) where several nodes have different CPU architectures.
See https://stackoverflow.com/questions/59733496/cmake-exclude-compile-options-for-one-targetJakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/60Addition of vector of static vectors and its RealType static vector2020-07-10T13:05:46ZTomáš JakubecAddition of vector of static vectors and its RealType static vectorThis code does not compile. The compiler tries to use `StaticExpressionTemplate`, thus it fails.
```c++
TNL::Containers::Vector<TNL::Containers::StaticVector<3,double>, TNL::Devices::Host, size_t> Tvec;
Tvec.setSize(1);
Tvec...This code does not compile. The compiler tries to use `StaticExpressionTemplate`, thus it fails.
```c++
TNL::Containers::Vector<TNL::Containers::StaticVector<3,double>, TNL::Devices::Host, size_t> Tvec;
Tvec.setSize(1);
Tvec = TNL::Containers::StaticVector<3,double>{1,2,3};
TNL::Containers::StaticVector<3,double> svec{5,6,7};
Tvec = (2*Tvec) + svec;
```Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/61absolute value of vector of vectors2020-07-10T13:05:47ZTomáš Jakubecabsolute value of vector of vectorsThis code does not compile.
```c++
TNL::Containers::Vector<TNL::Containers::StaticVector<3,int>, TNL::Devices::Host, size_t> vec;
vec.setSize(2);
vec = TNL::Containers::StaticVector<3,int>{1,-2,3};
TNL::abs(vec); // error
...This code does not compile.
```c++
TNL::Containers::Vector<TNL::Containers::StaticVector<3,int>, TNL::Devices::Host, size_t> vec;
vec.setSize(2);
vec = TNL::Containers::StaticVector<3,int>{1,-2,3};
TNL::abs(vec); // error
```
The compiler does not see this function from the context of the function `Abs::evaluate`:
```c++
////
// Abs
template< int Size, typename Real >
__cuda_callable__
auto
abs( const Containers::StaticVector< Size, Real >& a )
{
return Containers::Expressions::StaticUnaryExpressionTemplate< Containers::StaticVector< Size, Real >, Containers::Expressions::Abs >( a );
}
```
and the only visible function from the context is this one.
```c++
template< class T,
std::enable_if_t< ! std::is_unsigned<T>::value, bool > = true>
__cuda_callable__ inline
T abs( const T& n );
```
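The lookup behaviour can be illustrated with a standalone sketch (hypothetical namespaces, not TNL code: `Lib` stands in for `TNL`, `Outer` for the namespace of the argument's type): an unqualified call considers both ordinary lookup and ADL, while a qualified call like `TNL::abs( a )` considers only overloads visible in that namespace.

```cpp
namespace Outer {
   struct Vec { int value; };

   // overload living in the same namespace as the argument's type,
   // found only through argument-dependent lookup
   inline int abs( const Vec& v ) { return v.value < 0 ? -v.value : v.value; }
}

namespace Lib {
   // generic fallback, analogous to the generic TNL::abs shown above
   template< class T >
   T abs( const T& n ) { return n < 0 ? -n : n; }

   template< class T >
   auto evaluate( const T& a )
   {
      // Unqualified call: ordinary lookup finds Lib::abs and ADL additionally
      // finds Outer::abs, so overload resolution picks the exact match.
      // Writing `Lib::abs( a )` instead would disable ADL and leave only the
      // generic template, which does not compile for Vec (no operator<).
      return abs( a );
   }
}
```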
It does not apply the right function because the function `abs( const Containers::StaticVector< Size, Real >& a )` is in the namespace TNL instead of TNL::Containers, where its argument is. Moreover, `Abs::evaluate` calls `TNL::abs`, which does not trigger ADL (argument-dependent lookup); see [this](https://stackoverflow.com/questions/12530174/template-function-lookup). Incidentally, that is the reason why the operators `+`, `-`, etc. work properly. The same problem applies to other expression functions.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/62Recording errors into logs2021-11-17T16:39:52ZLukáš Matthew ČejkaRecording errors into logsCurrently, any input into the logs consists of hardware information or benchmark results.
However, if a Matrix fails to be allocated, there needs to be some sort of way to record this in the logs.Currently, any input into the logs consists of hardware information or benchmark results.
However, if a Matrix fails to be allocated, there needs to be some sort of way to record this in the logs.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/64SparseMatrix attributes are not protected2020-06-17T05:59:38ZJakub KlinkovskýSparseMatrix attributes are not protectedSee https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Matrices/SparseMatrix.h#L228-229See https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Matrices/SparseMatrix.h#L228-229Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/65SparseMatrixView::rowVectorProduct is not implemented2020-05-09T10:05:31ZJakub KlinkovskýSparseMatrixView::rowVectorProduct is not implementedSee https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Matrices/SparseMatrixView.hpp#L345-358See https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Matrices/SparseMatrixView.hpp#L345-358Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/66Matrix::operator== does not work2020-05-09T10:08:11ZJakub KlinkovskýMatrix::operator== does not workThe [operator==](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices/src/TNL/Matrices/Matrix.hpp#L144-159) in `Matrix` uses `getElement` which is not defined in `Matrix`:
```cpp
../src/TNL/Matrices/Matrix.hpp:156:20: e...The [operator==](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices/src/TNL/Matrices/Matrix.hpp#L144-159) in `Matrix` uses `getElement` which is not defined in `Matrix`:
```cpp
../src/TNL/Matrices/Matrix.hpp:156:20: error: ‘const class TNL::Matrices::Matrix<bool, TNL::Devices::Host, int, std::allocator<bool> >’ has no member named ‘getElement’
156 | if( this->getElement( row, column ) != matrix.getElement( row, column ) )
| ~~~~~~^~~~~~~~~~
```Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/67Missing segmentsCount attribute in BiEllpack and ChunkedEllpack2020-05-09T10:05:31ZJakub KlinkovskýMissing segmentsCount attribute in BiEllpack and ChunkedEllpackSee https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices/src/TNL/Containers/Segments/BiEllpack.hpp#L351-360 and https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices/src/TNL/Containers/Segments/ChunkedEll...See https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices/src/TNL/Containers/Segments/BiEllpack.hpp#L351-360 and https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices/src/TNL/Containers/Segments/ChunkedEllpack.hpp#L304-312Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/69Implement parsing of ASCII arrays in the XMLVTK class2021-05-13T20:51:05ZJakub KlinkovskýImplement parsing of ASCII arrays in the XMLVTK classThis is not finished in !65.This is not finished in !65.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/70Implement parsing of binary data in VTKReader2021-05-13T20:51:06ZJakub KlinkovskýImplement parsing of binary data in VTKReaderThe `VTKReader` (for the legacy VTK format) can read only ASCII data, but `VTKWriter` can write ASCII or binary.The `VTKReader` (for the legacy VTK format) can read only ASCII data, but `VTKWriter` can write ASCII or binary.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/71Overload functions l2Norm and lpNorm for 1D StaticVector2021-01-08T17:27:33ZJakub KlinkovskýOverload functions l2Norm and lpNorm for 1D StaticVectorFor 1D vectors, the `l2Norm` and `lpNorm` functions are equivalent to taking the absolute value of the 0-th component, so there should be overloads avoiding the expensive calls to `TNL::sqrt` and `TNL::pow`.
Then remove `getVectorLength...For 1D vectors, the `l2Norm` and `lpNorm` functions are equivalent to taking the absolute value of the 0-th component, so there should be overloads avoiding the expensive calls to `TNL::sqrt` and `TNL::pow`.
Then remove `getVectorLength` from [getEntityMeasure.h](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Meshes/Geometry/getEntityMeasure.h#L53-68).Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/72Simplify interface of loadDistributedMesh and distributeMesh2021-07-01T06:55:30ZJakub KlinkovskýSimplify interface of loadDistributedMesh and distributeMeshThe `loadDistributedMesh` and `distributeMesh` functions were written specifically for the distributed grid, but they don't match the interface of the general distributed mesh.
- only the distributed mesh should be passed to `loadDistri...The `loadDistributedMesh` and `distributeMesh` functions were written specifically for the distributed grid, but they don't match the interface of the general distributed mesh.
- only the distributed mesh should be passed to `loadDistributedMesh`; the local mesh can be obtained with `mesh.getLocalMesh()` (at least for the distributed mesh; the distributed grid has an inverse relation with the local grid)
- `distributeMesh` should not be used at all with a general distributed mesh (decomposition is done by `tnl-decompose-mesh`, then it is loaded with `PVTUReader`). Try to merge `distributeMesh` for grids with the overload of `loadDistributedMesh` for grids.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/73Generalize DistributedMeshSynchronizer for faces2020-07-30T12:54:43ZJakub KlinkovskýGeneralize DistributedMeshSynchronizer for facesIf possible, for any subentity topology.
First, faces have to be assigned an owner subdomain. A hypothesis is that this can be done by checking if all subvertices are local (thanks to the way we assign an owner subdomain to vertices).If possible, for any subentity topology.
First, faces have to be assigned an owner subdomain. A hypothesis is that this can be done by checking if all subvertices are local (thanks to the way we assign an owner subdomain to vertices).Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/78CUDA reduction does not work with result type whose alignment is not 8, 16, 3...2020-07-10T13:05:48ZJakub KlinkovskýCUDA reduction does not work with result type whose alignment is not 8, 16, 32 or 64 bitsThe implementation relies on `extern __shared__` variables, which are very restricted - see the comment in [SharedMemory.h](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/aaccf135a29513f270c7f34de3a2bdeeaaf3cfc5/src/TNL/Cuda/S...The implementation relies on `extern __shared__` variables, which are very restricted - see the comment in [SharedMemory.h](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/aaccf135a29513f270c7f34de3a2bdeeaaf3cfc5/src/TNL/Cuda/SharedMemory.h). There are specializations only for 8, 16, 32 and 64 bit types and it is not possible to make it general for any type, so CUDA reduction does not work for types such as `StaticVector< 5, double >` or general `struct`s whose size may not even be power of 2.
It would be much easier to use static size arrays for the shared memory (i.e. without `extern`). It is possible, since we only launch reduction kernels with a constant block size.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/83getOrganization() methods in segments have wrong return type2021-11-04T17:35:40ZJakub KlinkovskýgetOrganization() methods in segments have wrong return typeAll `getOrganization()` methods in segments return `bool` instead of `ElementsOrganization`, see e.g. [Ellpack.h](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Algorithms/Segments/Ellpack.h#L33)All `getOrganization()` methods in segments return `bool` instead of `ElementsOrganization`, see e.g. [Ellpack.h](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Algorithms/Segments/Ellpack.h#L33)Tomáš OberhuberTomáš Oberhuberhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/86PyTNL works with legacy matrix implementation2021-10-01T12:20:07ZTomáš OberhuberPyTNL works with legacy matrix implementationJakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/87Scan only allows vector type2021-08-11T17:13:50ZXuan Thang NguyenScan only allows vector typeScan< Devices::Cuda, Type >::perform only allows vector types but not array.
The main issue seems to be a mismatch of type aliases: in scan, `using RealType = typename Vector::RealType;` is declared.
vector uses the alias `using RealType = ...Scan< Devices::Cuda, Type >::perform only allows vector types but not array.
The main issue seems to be a mismatch of type aliases: in scan, `using RealType = typename Vector::RealType;` is declared.
vector uses the alias `using RealType = Real;`
but
array uses the alias `using ValueType = Value;`
For usage, both mean the same thing but are named differently.
Refactoring of the internal type aliases should solve the problem.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/93Refactor parameters for linear solvers2021-12-09T02:00:34ZJakub KlinkovskýRefactor parameters for linear solvers- It is not sufficient to create an empty `ParameterContainer`, set a few parameters and pass it to some class -- the user also needs to create a `ConfigDescription` and call `configSetup` on all classes where they want to pass the conta...- It is not sufficient to create an empty `ParameterContainer`, set a few parameters and pass it to some class -- the user also needs to create a `ConfigDescription` and call `configSetup` on all classes where they want to pass the container.
- The user should not be forced to deal with default values. Default values should be taken implicitly when the parameter is missing in the `ParameterContainer`. There are functions like `checkParameter`, `checkParameters` and `checkParameterType`, but they are sparsely used. Most often `getParameter` is called and the program crashes if the parameter is missing.
- The `configSetup` functions are not intuitive, they cannot be used to build a hierarchy. For example, `GMRES::configSetup` does not call `configSetup` from its base class, and even the base class does not call `configSetup` from its base-base-class, so the user needs to call `configSetup` from 3 levels manually, even if they just want to configure _one_ linear solver.
- The parameter `convergence-residue` is set to 0 by default, so the linear solvers will practically never converge by default.Jakub KlinkovskýJakub Klinkovskýhttps://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/96Log benchmarks metadata with the JSON format2021-11-17T16:39:52ZJakub KlinkovskýLog benchmarks metadata with the JSON formatThe current `JsonLogging` class writes the metadata only to `std::cout`, but not to the log file.The current `JsonLogging` class writes the metadata only to `std::cout`, but not to the log file.Jakub KlinkovskýJakub Klinkovský
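The intended behaviour for the metadata logging can be sketched with a hypothetical helper (this is not the actual `JsonLogging` interface): every metadata record is written to both the console stream and the log-file stream.

```cpp
#include <ostream>
#include <string>

// Hypothetical sketch: a metadata writer that mirrors each record to two
// streams, e.g. std::cout and an std::ofstream opened for the log file.
struct MetadataWriter
{
   std::ostream& console;
   std::ostream& logFile;

   void writeMetadata( const std::string& key, const std::string& value )
   {
      const std::string record = "{ \"" + key + "\": \"" + value + "\" }\n";
      console << record;   // current behaviour: the record goes only to stdout
      logFile << record;   // the missing part: mirror the record into the log file
   }
};
```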