This is an archived project. Repository and other project resources are read-only.
Commit 87bf3605, authored 5 years ago by Jakub Klinkovský
Updated documentation in README.md
parent afba52d9

1 merge request: !42 Refactoring for execution policies

Showing 1 changed file: README.md (15 additions, 5 deletions)
```diff
@@ -12,13 +12,20 @@ Similarly to the STL, features provided by the TNL can be grouped into
 several modules:

 - _Core concepts_.
-  The main concept used in the TNL is the `Device` type which is used in most of
-  the other parts of the library. For data structures such as `Array` it
-  specifies where the data should be allocated, whereas for algorithms such as
-  `ParallelFor` it specifies how the algorithm should be executed.
+  The main concepts used in TNL are the _memory space_, which represents the
+  part of memory where given data is allocated, and the _execution model_,
+  which represents the way how given (typically parallel) algorithm is executed.
+  For example, data can be allocated in the main system memory, in the GPU
+  memory, or using the CUDA Unified Memory which can be accessed from the host
+  as well as from the GPU. On the other hand, algorithms can be executed using
+  either the host CPU or an accelerator (GPU), and for each there are many ways
+  to manage parallel execution. The usage of memory spaces is abstracted with
+  [allocators][allocators] and the execution model is represented by
+  [devices][devices]. See the [Core concepts][core concepts] page for details.
 - _[Containers][containers]_.
   TNL provides generic containers such as array, multidimensional array or array
-  views, which abstract data management on different hardware architectures.
+  views, which abstract data management and execution of common operations on
+  different hardware architectures.
 - _Linear algebra._
   TNL provides generic data structures and algorithms for linear algebra, such
   as [vectors][vectors], [sparse matrices][matrices],
@@ -39,6 +46,9 @@ several modules:
   [libpng](http://www.libpng.org/pub/png/libpng.html) for PNG files, or
   [libjpeg](http://libjpeg.sourceforge.net/) for JPEG files.

+[allocators]: https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/namespaceTNL_1_1Allocators.html
+[devices]: https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/namespaceTNL_1_1Devices.html
+[core concepts]: https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/core_concepts.html
 [containers]: https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/namespaceTNL_1_1Containers.html
 [vectors]: https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/classTNL_1_1Containers_1_1Vector.html
 [matrices]: https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/namespaceTNL_1_1Matrices.html
```