TODO  +12 −14

- add execution policy https://github.com/harrism/hemi/blob/master/hemi/execution_policy.h
- rename Assert to TNL_ASSERT and extend the assertions similarly to GTest
- remove the lazy parameter from the smart pointers

TODO:
- when processing MRI data, the 2D images are usually too small for optimal GPU utilization. If several
  computations were run concurrently using CUDA streams, a much better speedup could be achieved
  (a rough sketch follows after this diff)

TODO:
- the NeighborEntities object could also return the local index of the given neighbor entity, which is needed
  for correct insertion of matrix elements; currently these indices are filled in manually based on knowledge
  of the grid indexing. Since neighbor entities may know the neighborhood type / stencil of the numerical
  scheme, they could adapt to different patterns as well. That would then also solve composing operators with
  different patterns.

TODO:
- add execution policy https://github.com/harrism/hemi/blob/master/hemi/execution_policy.h
  (see the execution-policy sketch after this diff)
- remove the lazy parameter from the smart pointers

TODO:
- implement tnlMixedGridBoundaryConditions, where a different, special type would be defined for each side of
  the grid

@@ -15,11 +24,6 @@ TODO:
- the data would be regrouped on the host into a contiguous block, which would then be transferred in one copy

TODO:

TODO:
- introduce namespaces

TODO: CUDA unified memory
- overload operator new with cudaMallocManaged; it would then be possible to create CUDA objects accessible
  from both the host and the device (a sketch of the idea follows after this diff)
- in the TNL solver it would then essentially suffice to create the objects with new

@@ -28,11 +32,6 @@ TODO: CUDA unified memory
  ... work with them in the old way
- it would be good to wrap this in unique pointers so that the deallocation does not have to be done manually

TODO: shared pointers
- we could use them to remove the Shared objects
- it would probably be better to put the counter from the shared pointer directly into the array and allocate
  this counter only after the data are shared for the first time
- thanks to that, the array could also be created on the GPU without dynamic allocation; only bind would not
  be possible (or only some simplified version of it)

TODO: Mesh
* try to move all traits into a single MeshTraits, i.e. almost MeshConfigTraits but named MeshTraits
* restrict tnlDimensionsTag - it probably will not be possible, though

@@ -45,7 +44,6 @@ TODO: implementation of matrix solvers
* Gaussian elimination
* SOR method
* Jacobi method
* TFQMR method
* IDR methods

TODO: Replace the template parameter for the dimension of a mesh entity with the entity type. Then one could
try, for example for the grid, ...
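A rough illustration of the CUDA-streams idea from the MRI item above. Everything here (the processSlice
kernel, the slice count, the sizes) is made up for the sketch and is not existing TNL code; it only shows how
several small 2D computations could overlap on separate streams.

#include <cuda_runtime.h>

// Stand-in for the real per-slice computation on one small 2D image.
__global__ void processSlice( float* slice, int width, int height )
{
   int i = blockIdx.x * blockDim.x + threadIdx.x;
   int j = blockIdx.y * blockDim.y + threadIdx.y;
   if( i < width && j < height )
      slice[ j * width + i ] *= 2.0f;
}

int main()
{
   const int numSlices = 8;            // e.g. slices of one MRI volume
   const int width = 128, height = 128;
   const int sliceSize = width * height;

   float* slices = nullptr;
   cudaMalloc( (void**) &slices, numSlices * sliceSize * sizeof( float ) );

   cudaStream_t streams[ numSlices ];
   for( int s = 0; s < numSlices; s++ )
      cudaStreamCreate( &streams[ s ] );

   // Each slice alone is too small to saturate the GPU, so the kernels are
   // launched on separate streams and may execute concurrently.
   dim3 block( 16, 16 );
   dim3 grid( ( width + block.x - 1 ) / block.x, ( height + block.y - 1 ) / block.y );
   for( int s = 0; s < numSlices; s++ )
      processSlice<<< grid, block, 0, streams[ s ] >>>( slices + s * sliceSize, width, height );
   cudaDeviceSynchronize();

   for( int s = 0; s < numSlices; s++ )
      cudaStreamDestroy( streams[ s ] );
   cudaFree( slices );
   return 0;
}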
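The execution-policy items point to hemi's execution_policy.h. A TNL-flavoured equivalent might be a small
struct carrying the launch configuration next to the work itself; the names ExecutionPolicy, launch and
scaleKernel below are purely illustrative assumptions, not existing TNL or hemi API.

#include <cstddef>
#include <cuda_runtime.h>

// A hemi-style execution policy: a plain struct that carries the launch
// configuration separately from the algorithm being launched.
struct ExecutionPolicy
{
   dim3 gridSize{ 0 };                 // 0 = let the launcher derive the grid from the problem size
   dim3 blockSize{ 256 };
   std::size_t sharedMemBytes = 0;
   cudaStream_t stream = 0;            // default stream unless the caller sets one
};

// Generic launcher: the policy decides how the kernel runs, the caller decides what runs.
template< typename Kernel, typename... Args >
void launch( const ExecutionPolicy& policy, Kernel kernel, int n, Args... args )
{
   dim3 block = policy.blockSize;
   dim3 grid = policy.gridSize;
   if( grid.x == 0 )
      grid = dim3( ( n + block.x - 1 ) / block.x );
   kernel<<< grid, block, policy.sharedMemBytes, policy.stream >>>( n, args... );
}

__global__ void scaleKernel( int n, float* data, float factor )
{
   int i = blockIdx.x * blockDim.x + threadIdx.x;
   if( i < n )
      data[ i ] *= factor;
}

// usage: launch( ExecutionPolicy{}, scaleKernel, n, deviceData, 2.0f );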
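A minimal sketch of the unified-memory item: overloading operator new/delete with cudaMallocManaged so that
objects created with plain new are reachable from host and device code. The ManagedObject and Parameters
classes are hypothetical and not part of TNL.

#include <cstddef>
#include <new>
#include <cuda_runtime.h>

// Hypothetical base class: objects derived from it live in CUDA unified memory,
// so the same pointer can be dereferenced on the host and inside kernels.
struct ManagedObject
{
   static void* operator new( std::size_t size )
   {
      void* ptr = nullptr;
      if( cudaMallocManaged( &ptr, size ) != cudaSuccess )
         throw std::bad_alloc();
      return ptr;
   }

   static void operator delete( void* ptr )
   {
      // make sure no kernel still uses the object before it is freed
      cudaDeviceSynchronize();
      cudaFree( ptr );
   }
};

// Illustrative solver-side object: created with plain new, as the TODO suggests,
// yet accessible from device code through the managed allocation.
struct Parameters : public ManagedObject
{
   double timeStep = 0.0;
   int gridSize = 0;
};

// usage: Parameters* params = new Parameters();  // usable on the host and in kernels
//        delete params;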
src/TNL/Assert.h  +1 −1

@@ -288,7 +288,7 @@ TNL_IMPL_CMP_HELPER_( GT, > );
 } // namespace TNL

 // Internal macro wrapping the __PRETTY_FUNCTION__ "magic".
-#if defined( __NVCC__ ) && ( __CUDACC_VER__ < 80000 )
+#if defined( __NVCC__ ) && ( __CUDACC_VER_MAJOR__ < 8 )
    #define __TNL_PRETTY_FUNCTION "(not known in CUDA 7.5 or older)"
 #else
    #define __TNL_PRETTY_FUNCTION __PRETTY_FUNCTION__
src/TNL/CMakeLists.txt  +1 −0

@@ -20,6 +20,7 @@ SET( CURRENT_DIR ${CMAKE_SOURCE_DIR}/src/TNL )
 set( headers Assert.h
              Constants.h
              CudaSharedMemory.h
+             CudaStreamPool.h
              Curve.h
              DevicePointer.h
src/TNL/Config/ConfigDescription.h  +2 −2

@@ -128,7 +128,7 @@ class ConfigDescription
             return ( ( ConfigEntry< T > * ) entries[ i ] ) -> default_value;
          else
             return NULL;
       }
-      std::cerr << "Asking for the default value of uknown parameter." << std::endl;
+      std::cerr << "Asking for the default value of unknown parameter." << std::endl;
       return NULL;
    }

@@ -144,7 +144,7 @@ class ConfigDescription
             return ( ( ConfigEntry< T > * ) entries[ i ] ) -> default_value;
          else
             return NULL;
       }
-      std::cerr << "Asking for the default value of uknown parameter." << std::endl;
+      std::cerr << "Asking for the default value of unknown parameter." << std::endl;
       return NULL;
    }
src/TNL/Config/ParameterContainer.cpp  +1 −1

@@ -106,7 +106,7 @@ ParameterContainer::
       tnlParameterBase* param = parameters[ i ];
       param -> type. MPIBcast( root, MPI_COMM_WORLD );
       param -> name. MPIBcast( root, MPI_COMM_WORLD );
-      if( param -> type == "mString" )
+      if( param -> type == "String" )
       {
          ( ( tnlParameter< String >* ) param ) -> value. MPIBcast( root, mpi_comm );
       }