
Repository graph

Recent commits, newest first:

Deleted useless old troubleshooting cout statements.
Deleted out-of-date TODO that wasn't for developmental purposes.
Deleted useless comments on a solved issue.
Fixed detection of changes in .gitlab-ci.yml
Removed 'using namespace std;' from documentation examples
Documentation: load MathJax via https
Documentation: enable MathJax in Doxyfile
Merge branch 'JK/execution' into 'develop'
Moved skipping of synchronization directly into the synchronizeSmartPointersOnDevice function
Fixed handling of Cuda::getTransferBufferSize() in memory operations
Fixed internal linkage of the getHardwareMetadata function in benchmarks
Added missing __cuda_callable__ to StaticArray and StaticVector methods
Reimplemented mesh traverser using ParallelFor
Added MeshTraverserTest
Swapped template parameters for methods in Meshes::Traverser so that UserData can be deduced
Updated documentation in README.md
Renamed prefixSum methods to scan
Removed HostType and CudaType aliases in containers, matrices and grids
Removed useless typedefs such as ThisType
Removed Containers::List because it has no benefits over std::list
Fixed handling of --build parameter in the install script
Enforce builds without (more or less) any warnings
Added Devices::Sequential and corresponding specializations in TNL::Algorithms
Serialization in TNL::File: File::save and File::load are specialized by Allocator instead of Device
Moved algorithms from TNL/Containers/Algorithms/ to just TNL/Algorithms/
Split ArrayOperations into MemoryOperations and MultiDeviceMemoryOperations
ArrayOperations: using more parallel algorithms and suitable sequential fallbacks
ArrayOperations: added missing methods for the static/sequential specialization
Benchmarks: added benchmarks for array copy and compare using memcpy and memcmp
Moved SystemInfo class out of the Devices namespace
Cleaned up Devices::Cuda
Removed duplicate TransferBufferSize constants
Moved atomicAdd function from Devices/Cuda.h into Atomic.h
Moved synchronization of smart pointers from Devices::Cuda into TNL::Pointers namespace as free functions
Moved (most of) static methods from TNL::Devices::Cuda as free functions into separate namespace TNL::Cuda
Added default stream synchronizations after kernel launches in CudaReductionKernel.h
Fixed parseCommandLine after refactoring the getType function
Reimplemented getType() function using typeid operator and removed useless getType() methods
Removed custom implementation of std::make_unique which is available in STL since C++14
Removed useless operator<< for TNL::String
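One of the commits above adds the missing __cuda_callable__ annotation to StaticArray and StaticVector methods. The sketch below illustrates the usual pattern behind such an annotation: a macro that expands to __host__ __device__ under a CUDA compiler and to nothing otherwise, so the same accessor can be called from both host code and CUDA kernels. The macro definition and the StaticArraySketch type here are illustrative assumptions, not TNL's actual headers.

// Minimal sketch of the __cuda_callable__ pattern (not copied from TNL).
#ifdef __CUDACC__
   // Compiled by nvcc: the method is callable on both host and device.
   #define __cuda_callable__ __host__ __device__
#else
   // Host-only build: the annotation compiles away.
   #define __cuda_callable__
#endif

// Hypothetical static array showing why element accessors need the annotation:
// without it, a CUDA kernel could not call operator[] on the array.
template< typename Value, int Size >
struct StaticArraySketch
{
   __cuda_callable__ Value& operator[]( int i ) { return data[ i ]; }
   __cuda_callable__ const Value& operator[]( int i ) const { return data[ i ]; }

   Value data[ Size ];
};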