tnl-dev issues (https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues, feed updated 2022-02-24)

Issue #100: JSON log transform script not working (Lukáš Matthew Čejka, 2022-02-24)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/100

Benchmark logs produced by the [run-tnl-benchmark-spmv](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/Benchmarks/scripts/run-tnl-benchmark-spmv) script fail to be parsed by the JSON parser script [tnl-spmv-benchmark-make-tables-json.py](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/Benchmarks/scripts/tnl-spmv-benchmark-make-tables-json.py) with the following error:
```
Parsing input file....
Traceback (most recent call last):
File "tnl-spmv-benchmark-make-tables-json.py", line 956, in <module>
d = json.load(f)
File "/usr/lib/python3.8/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.8/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 444)
```
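For context, a minimal reproduction of this error, independent of the TNL scripts: `json.loads` (and `json.load`) raises exactly this "Extra data" error whenever the input contains more than one top-level JSON document:

```python
import json

# Two JSON documents in one string, as in a line-oriented benchmark log.
try:
    json.loads('{"a": 1}\n{"b": 2}\n')
except json.JSONDecodeError as e:
    print(e.msg)  # Extra data
```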
**How to reproduce:**
1. If you don't have any matrices set up in the script directory, then you can run the [get-matrices](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/Benchmarks/scripts/get-matrices) script to download some into the folder `scripts/mtx_matrices`.
2. Run spmv benchmarks using the [run-tnl-benchmark-spmv](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/Benchmarks/scripts/run-tnl-benchmark-spmv) script.
3. Convert the benchmark JSON logs using the [tnl-spmv-benchmark-make-tables-json.py](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/Benchmarks/scripts/tnl-spmv-benchmark-make-tables-json.py) script.
**Expected behaviour:**
- The Python script converts the log file containing the JSON benchmark results to an HTML file.
**Actual behaviour:**
- The Python script fails because the log is not valid JSON as a whole; rather, every line is a valid JSON document on its own (source: @klinkovsky).
**Notes:**
- Loading the logs as one JSON document won't work; each line has to be parsed separately.
For example:
```python
import json

data = []
with open("sparse-matrix-benchmark.log") as f:
    for line in f:
        data.append(json.loads(line))
```
- When working with tables in Python, @klinkovsky recommends using the Pandas library. Specifically, to load the logs into a Pandas DataFrame, the following function can be used: [link to file](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/Python/BenchmarkLogs.py#L40-54).
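A sketch of the direct pandas route (this is not the linked `BenchmarkLogs.py` helper, which may do more post-processing; the record keys here are hypothetical). `pandas.read_json` with `lines=True` parses one JSON document per line, which matches the log format described above:

```python
import io
import pandas as pd

# Two hypothetical benchmark records in JSON Lines form.
log = io.StringIO('{"matrix": "A", "time": 1.5}\n'
                  '{"matrix": "B", "time": 2.0}\n')

# lines=True makes pandas parse one JSON document per line.
df = pd.read_json(log, lines=True)
print(len(df), sorted(df.columns))  # 2 ['matrix', 'time']
```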
- Example log file: [sparse-matrix-benchmark.log](/uploads/e1eb86b965fc9a7692d8c3bdd4cf7402/sparse-matrix-benchmark.log).

Issue #95: Fix getSerializationType() methods in segments (Jakub Klinkovský, 2021-10-28)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/95

The following discussions from !105 should be addressed:
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1902):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1903):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1904):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1905):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1906):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1907):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1908):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1909):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1910):
> FIXME
- [ ] @klinkovsky started a [discussion](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/merge_requests/105#note_1911):
> FIXME

Issue #91: Segments: "compute" parameter is not checked always (Jakub Klinkovský, 2021-09-28)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/91

- BiEllpack: `compute` seems to be checked correctly
- ChunkedEllpack: `compute` seems to be checked correctly
- Ellpack: `compute` is checked only in the general cases ([1](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/EllpackView.hpp#L410), [2](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/EllpackView.hpp#L427)), but not in the CUDA specializations ([3](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/EllpackView.hpp#L44-46), [4](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/EllpackView.hpp#L79-81))
- SlicedEllpack: `compute` is not checked at all: [1](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/SlicedEllpackView.hpp#L347-349), [2](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/SlicedEllpackView.hpp#L364-366)
- CSR:
- Adaptive: `compute` is not checked: [1](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/Kernels/CSRAdaptiveKernelView.hpp#L83-125)
- Hybrid: `compute` is checked in the [multivector kernel](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/Kernels/CSRHybridKernel.hpp#L112), but not in the [hybrid kernel](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/Kernels/CSRHybridKernel.hpp#L52-55)
- Light: `compute` is checked in the [multivector kernel](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/Kernels/CSRLightKernel.hpp#L316-320), but not in the [other kernels](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/Kernels/CSRLightKernel.hpp#L50-252)
- Scalar: `compute` seems to be checked correctly
- Vector: `compute` is not checked: [1](https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/TO/matrices-adaptive-csr/src/TNL/Algorithms/Segments/Kernels/CSRVectorKernel.hpp#L57-61)
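As a simplified model of what "checking `compute`" means (plain Python with hypothetical names, not the TNL kernel code): the fetch callback can clear a flag to request early termination of the segment traversal, and the kernels listed above as unchecked ignore that flag and always traverse the whole segment:

```python
# Hypothetical, simplified segment reduction. `fetch(i, compute)` returns a
# value and may clear compute[0]; the mutable one-element list stands in for
# the C++ `bool& compute` reference.
def reduce_segment(indices, fetch, reduce, identity):
    result = identity
    compute = [True]
    for i in indices:
        result = reduce(result, fetch(i, compute))
        if not compute[0]:
            break  # this is the check that some kernels are missing
    return result

# Fetch values 1, 2, 3, ... but request termination at the third element.
def fetch(i, compute):
    if i >= 2:
        compute[0] = False
    return i + 1

print(reduce_segment(range(10), fetch, lambda a, b: a + b, 0))  # 1+2+3 = 6
```

Without the `if not compute[0]: break` check, the loop would reduce all ten values, which is why the unchecked kernels lose this optimization.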
Obviously we don't have any tests for this feature. But do we have a benchmark which proves that this optimization helps in some cases?

Assignee: Tomáš Oberhuber

Issue #88: SegmentsPrinter::print does not work on GPU (Tomáš Oberhuber, 2021-06-06)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/88

The lambda function `fetch` in `SegmentsPrinter::print` causes a CUDA kernel crash when it is called (`SegmentsPrinting.h:76`); it is probably not handled properly by the `SegmentsPrinter`. The same lambda function works well in the function `printSegments` (`SegmentsPrinting.h:121`). This can be tested, for example, using `Examples/Algorithms/Segments/SegmentsExample_General.cu` by replacing (line 39)
```
printSegments( segments, fetch, std::cout )
```
with
```
std::cout << segments.print( fetch ) << std::endl;
```

Assignee: Tomáš Oberhuber

Issue #85: Assignment of symmetric and general sparse matrices does not work (Tomáš Oberhuber, 2021-02-04)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/85

Only the lower part and the diagonal of the symmetric matrix are assigned to the general one.

Assignee: Tomáš Oberhuber

Issue #82: VectorOfStaticVectorsTestCuda: unspecified launch failure in the CUDA reduction kernel with StaticVector values (Jakub Klinkovský, 2020-10-31)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/82

There are compiler warnings like
```
[292/351] Building NVCC (Device) object src/UnitTests/Containers/CMakeFiles/VectorOfStaticVectorsTestCuda.dir/VectorOfStaticVectorsTestCuda_generated_VectorOfStaticVectorsTestCuda.cu.o
/tmp/rexe_klinkovsky/tnl/src/TNL/Algorithms/CudaReductionKernel.h(60): warning #3126-D: calling a __host__ function from a __host__ __device__ function is not allowed
detected during:
instantiation of "auto TNL::Algorithms::CudaReductionFunctorWrapper(Reduction &&, Arg1 &&, Arg2 &&) [with Reduction=const std::plus<void> &, Arg1=TNL::Containers::Expressions::RemoveET<std::decay_t<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::Addition, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>>> &, Arg2=TNL::Containers::Expressions::RemoveET<std::decay_t<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::Addition, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>>>]"
(95): here
instantiation of "void TNL::Algorithms::CudaReductionKernel<blockSize,Result,DataFetcher,Reduction,Index>(Result, DataFetcher, Reduction, Index, Index, Result *) [with blockSize=256, Result=TNL::Containers::Expressions::RemoveET<std::decay_t<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::Addition, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>>>, DataFetcher=lambda [](int)->TNL::Containers::Expressions::RemoveET<std::decay_t<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::Addition, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>>>, Reduction=std::plus<void>, Index=int]"
(512): here
instantiation of "int TNL::Algorithms::CudaReductionKernelLauncher<Index, Result>::launch(Index, Index, const Reduction &, DataFetcher &, const Result &, Result *) [with Index=int, Result=TNL::Containers::Expressions::RemoveET<std::decay_t<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::Addition, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>>>, DataFetcher=lambda [](int)->TNL::Containers::Expressions::RemoveET<std::decay_t<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::Addition, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>>>, Reduction=std::plus<void>]"
(378): here
instantiation of "Result TNL::Algorithms::CudaReductionKernelLauncher<Index, Result>::finish(const Reduction &, const Result &) [with Index=int, Result=TNL::Containers::Expressions::RemoveET<std::decay_t<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::Addition, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>>>, Reduction=std::plus<void>]"
/tmp/rexe_klinkovsky/tnl/src/TNL/Algorithms/Reduction.hpp(368): here
instantiation of "Result TNL::Algorithms::Reduction<TNL::Devices::Cuda>::reduce(Index, Index, const ReductionOperation &, DataFetcher &, const Result &) [with Index=int, Result=TNL::Containers::Expressions::RemoveET<std::decay_t<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>, TNL::Containers::Expressions::Addition, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>>>, ReductionOperation=std::plus<void>, DataFetcher=lambda [](IndexType)->TNL::Containers::Expressions::StaticBinaryExpressionTemplate<TNL::Containers::StaticVector<3, int>, TNL::Containers::StaticVector<3, short>, TNL::Containers::Expressions::Multiplication, TNL::Containers::Expressions::VectorExpressionVariable, TNL::Containers::Expressions::VectorExpressionVariable>]"
/tmp/rexe_klinkovsky/tnl/src/TNL/Containers/Expressions/VerticalOperations.h(122): here
[ 7 instantiation contexts not shown ]
implicit generation of "testing::internal::TestFactoryImpl<TestClass>::~TestFactoryImpl() [with TestClass=binary_tests::VectorBinaryOperationsTest_scalarProduct_Test<binary_tests::Pair<TNL::Containers::Vector<TNL::Containers::StaticVector<3, int>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, int>>>, TNL::Containers::Vector<TNL::Containers::StaticVector<3, short>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, short>>>>>]"
/tmp/rexe_klinkovsky/tnl/Release/googletest-src/googletest/include/gtest/internal/gtest-internal.h(742): here
instantiation of class "testing::internal::TestFactoryImpl<TestClass> [with TestClass=binary_tests::VectorBinaryOperationsTest_scalarProduct_Test<binary_tests::Pair<TNL::Containers::Vector<TNL::Containers::StaticVector<3, int>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, int>>>, TNL::Containers::Vector<TNL::Containers::StaticVector<3, short>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, short>>>>>]"
/tmp/rexe_klinkovsky/tnl/Release/googletest-src/googletest/include/gtest/internal/gtest-internal.h(742): here
implicit generation of "testing::internal::TestFactoryImpl<TestClass>::TestFactoryImpl() [with TestClass=binary_tests::VectorBinaryOperationsTest_scalarProduct_Test<binary_tests::Pair<TNL::Containers::Vector<TNL::Containers::StaticVector<3, int>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, int>>>, TNL::Containers::Vector<TNL::Containers::StaticVector<3, short>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, short>>>>>]"
/tmp/rexe_klinkovsky/tnl/Release/googletest-src/googletest/include/gtest/internal/gtest-internal.h(742): here
instantiation of class "testing::internal::TestFactoryImpl<TestClass> [with TestClass=binary_tests::VectorBinaryOperationsTest_scalarProduct_Test<binary_tests::Pair<TNL::Containers::Vector<TNL::Containers::StaticVector<3, int>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, int>>>, TNL::Containers::Vector<TNL::Containers::StaticVector<3, short>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, short>>>>>]"
/tmp/rexe_klinkovsky/tnl/Release/googletest-src/googletest/include/gtest/internal/gtest-internal.h(742): here
instantiation of "__nv_bool testing::internal::TypeParameterizedTest<Fixture, TestSel, Types>::Register(const char *, const testing::internal::CodeLocation &, const char *, const char *, int, const std::vector<std::string, std::allocator<std::string>> &) [with Fixture=binary_tests::VectorBinaryOperationsTest, TestSel=testing::internal::TemplateSel<binary_tests::VectorBinaryOperationsTest_scalarProduct_Test>, Types=binary_tests::gtest_type_params_VectorBinaryOperationsTest_]"
/tmp/rexe_klinkovsky/tnl/src/UnitTests/Containers/VectorBinaryOperationsTest.h(618): here
```
And when the test is executed, it fails with
```
1/95 Test #32: VectorOfStaticVectorsTestCuda .................Child aborted***Exception: 4.78 sec
[==========] Running 130 tests from 8 test suites.
[----------] Global test environment set-up.
[----------] 19 tests from VectorBinaryOperationsTest/0, where TypeParam = binary_tests::Pair<TNL::Containers::Vector<TNL::Containers::StaticVector<3, int>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, int> > >, TNL::Containers::Vector<TNL::Containers::StaticVector<3, short>, TNL::Devices::Cuda, int, TNL::Allocators::Cuda<TNL::Containers::StaticVector<3, short> > > >
[ RUN ] VectorBinaryOperationsTest/0.EQ
[ OK ] VectorBinaryOperationsTest/0.EQ (4029 ms)
[ RUN ] VectorBinaryOperationsTest/0.NE
[ OK ] VectorBinaryOperationsTest/0.NE (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.LT
[ OK ] VectorBinaryOperationsTest/0.LT (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.GT
[ OK ] VectorBinaryOperationsTest/0.GT (1 ms)
[ RUN ] VectorBinaryOperationsTest/0.LE
[ OK ] VectorBinaryOperationsTest/0.LE (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.GE
[ OK ] VectorBinaryOperationsTest/0.GE (1 ms)
[ RUN ] VectorBinaryOperationsTest/0.addition
[ OK ] VectorBinaryOperationsTest/0.addition (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.subtraction
[ OK ] VectorBinaryOperationsTest/0.subtraction (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.multiplication
[ OK ] VectorBinaryOperationsTest/0.multiplication (1 ms)
[ RUN ] VectorBinaryOperationsTest/0.division
[ OK ] VectorBinaryOperationsTest/0.division (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.assignment
[ OK ] VectorBinaryOperationsTest/0.assignment (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.add_assignment
[ OK ] VectorBinaryOperationsTest/0.add_assignment (1 ms)
[ RUN ] VectorBinaryOperationsTest/0.subtract_assignment
[ OK ] VectorBinaryOperationsTest/0.subtract_assignment (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.multiply_assignment
[ OK ] VectorBinaryOperationsTest/0.multiply_assignment (1 ms)
[ RUN ] VectorBinaryOperationsTest/0.divide_assignment
[ OK ] VectorBinaryOperationsTest/0.divide_assignment (0 ms)
[ RUN ] VectorBinaryOperationsTest/0.scalarProduct
terminate called after throwing an instance of 'TNL::Exceptions::CudaRuntimeError'
what(): CUDA ERROR 719 (cudaErrorLaunchFailure): unspecified launch failure.
Source: line 81 in /tmp/rexe_klinkovsky/tnl/src/TNL/Allocators/Cuda.h: unspecified launch failure
```

Assignee: Jakub Klinkovský

Issue #81: Fix reorderEntities for DistributedMesh (Jakub Klinkovský, 2020-07-29)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/81

The current naïve implementation cannot work: `DistributedMeshSynchronizer` assumes that global indices of local entities are sorted, so we should update the global indices too and exchange the new global indices for ghost entities.

Assignee: Jakub Klinkovský

Issue #79: StaticComparison bug (Tomáš Jakubec, 2021-12-09)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/79

Testing the following code:
```c++
#include <iostream>
#include <GTMesh/Debug/Debug.h>
#include <TNL/Containers/StaticVector.h>
#include <TNL/Containers/Vector.h>
using namespace std;

template<int Dim, typename Real>
struct std::numeric_limits<TNL::Containers::StaticVector<Dim, Real>> {
    static constexpr bool is_specialized = true;
    static TNL::Containers::StaticVector<Dim, Real> max() {
        TNL::Containers::StaticVector<Dim, Real> res;
        res = std::numeric_limits<Real>::max();
        return res;
    }
    static TNL::Containers::StaticVector<Dim, Real> lowest() {
        TNL::Containers::StaticVector<Dim, Real> res;
        res = std::numeric_limits<Real>::lowest();
        return res;
    }
};

using namespace TNL;

int main()
{
    TNL::Containers::Vector<TNL::Containers::StaticVector<3,int>, TNL::Devices::Host, size_t> a;
    a.setSize(2);
    a[0] = TNL::Containers::StaticVector<3,int>{5, -3, 6};
    a[1] = TNL::Containers::StaticVector<3,int>{8, 1, -5};
    TNL::Containers::StaticVector<3,int> a1 = a[0];
    TNL::Containers::StaticVector<3,int> a2 = a[1];
    DBGVAR(a);                      // == ..\lookup_problem\main.cpp << 36 >> [[ a ]] ==> [ [ 5, -3, 6 ], [ 8, 1, -5 ] ]
    DBGVAR((a1 < a2), (a2 < a1));   // == ..\lookup_problem\main.cpp << 37 >> [[ (a1 < a2) ]] ==> false
                                    // == ..\lookup_problem\main.cpp << 37 >> [[ (a2 < a1) ]] ==> false
    DBGVAR(min(a));                 // == ..\lookup_problem\main.cpp << 39 >> [[ min(a) ]] ==> [ 5, -3, -5 ]
    DBGVAR(min(a1,a2), min(a2,a1)); // == ..\lookup_problem\main.cpp << 40 >> [[ min(a1,a2) ]] ==> [ 5, -3, 6 ]
                                    // == ..\lookup_problem\main.cpp << 40 >> [[ min(a2,a1) ]] ==> [ 8, 1, -5 ]
    DBGVAR(TNL::min(a1, a2));       // == ..\lookup_problem\main.cpp << 42 >> [[ TNL::min(a1, a2) ]] ==> [ 5, -3, -5 ]
    return 0;
}
```
The first problem is the comparison of `a1` and `a2`. If `(a1 < a2)` is false, then `(a2 < a1)` must be true (for a strict total order on distinct values). However, in both cases the result is false, which is incorrect. The comparison utilizes `StaticCompare::LT`, defined as:
```c++
__cuda_callable__
static bool LT( const T1& a, const T2& b )
{
   TNL_ASSERT_EQ( a.getSize(), b.getSize(), "Sizes of expressions to be compared do not fit." );
   for( int i = 0; i < a.getSize(); i++ )
      if( ! (a[ i ] < b[ i ]) )
         return false;
   return true;
}
```
This function does not implement a suitable comparison, e.g., a lexicographical one.
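The defect can be illustrated in plain Python (using tuples in place of `StaticVector`): an elementwise "all components less" test is not a strict total order, so trichotomy fails, while lexicographical comparison gives exactly one of `a1 < a2` and `a2 < a1` for distinct values:

```python
a1 = (5, -3, 6)
a2 = (8, 1, -5)

# Elementwise "all components less", mirroring StaticCompare::LT:
def lt_elementwise(a, b):
    return all(x < y for x, y in zip(a, b))

# Neither vector is elementwise-less than the other: trichotomy fails.
print(lt_elementwise(a1, a2), lt_elementwise(a2, a1))  # False False

# Python tuples compare lexicographically, which is a strict total order:
print(a1 < a2, a2 < a1)  # True False
```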
Secondly, there is a difference between the calls of `min`. Both `min(a)` and `TNL::min(a1, a2)` use `StaticBinaryExpressionTemplate< ET1, ET2, Min >`, which returns a vector with the minimum of each element separately (which is awesome). However, for `min(a1, a2)` the `min` from the STL is called (`std::min` is prioritized over `TNL::Containers::Expressions::min`), and it employs `StaticCompare::LT` through `operator<`. This problem is solved by removing `using namespace std` (which is partially my mistake, but worth mentioning). The incorrect implementation of `StaticCompare::LT` makes the result depend on the order of the arguments of `min`.
The macro `DBGVAR` is defined in the [GTMesh library](https://mmg-gitlab.fjfi.cvut.cz/gitlab/jakubec/GTMesh).

Assignee: Jakub Klinkovský

Issue #63: Loading of the matrix circuit5M (Lukáš Matthew Čejka, 2020-06-20)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/63

The matrix circuit5M.mtx from the Florida Matrix Market takes days to load for some reason. Further investigation is needed.

Assignee: Lukáš Matthew Čejka

Issue #56: computeCompressedRowLengthsFromMtxFile( ... ) doesn't take account symmetric format (Matouš Fencl, 2021-04-13)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/56

The function computeCompressedRowLengthsFromMtxFile( ... ) in matrixReader.h does not take the symmetric format into account. The computed row lengths are too big for ellpackSymmetric.
Example:
<pre>
/ 1 1 1 1 1 \
| 1 0 0 0 0 |
| 1 0 0 0 0 |
| 1 0 0 0 0 |
\ 1 0 0 0 0 /
</pre>
non-symmetric rowLengths = 5;
symmetric rowLengths = 1;
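As a sketch of the intended counting (plain Python with hypothetical names, not the TNL reader): counting over all stored entries versus only over the lower triangle including the diagonal reproduces the two numbers above:

```python
# The example matrix from the issue, as a dense list of rows.
matrix = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
]

def row_lengths(m, symmetric=False):
    lengths = []
    for i, row in enumerate(m):
        # In the symmetric case, only columns j <= i (lower triangle
        # plus diagonal) are counted.
        cols = range(i + 1) if symmetric else range(len(row))
        lengths.append(sum(1 for j in cols if row[j] != 0))
    return lengths

print(row_lengths(matrix))                  # [5, 1, 1, 1, 1]
print(row_lengths(matrix, symmetric=True))  # [1, 1, 1, 1, 1]
```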
The function should compute only the under-diagonal row lengths.

Issue #53: Bug in analytic functions (Matouš Fencl, 2019-11-07)
https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/issues/53

Analytic functions as an input MeshFunction in the Hamilton-Jacobi solver with MPI on GPU give bad values.
The generated function from *tnl-init* is passed into *tnl-direct-eikonal-solver*. TNL divides the input MeshFunction into blocks, the number of which depends on the number of processes. The divided MeshFunction that we get in the file *tnlDirectEikonalProblem_impl.h* in the function *Solve()* has invalid values.
The starting script is attached:
[tnl-run-dir-eik-solver](/uploads/f3a60b9e323f9bbecedbf3f23f3f1f9e/tnl-run-dir-eik-solver)

Assignee: Tomáš Oberhuber