The main login node is __gp3.fjfi.cvut.cz__, which has the largest disk array.
- Local storage:
    - `/`: 960 GB SSD (KINGSTON SA400S37960G)
Note that the `gp14` node has only one GPU, while the other nodes have two.
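When it matters which node a job landed on, the number of visible GPUs can be checked directly. A minimal sketch using `nvidia-smi` (shipped with the NVIDIA driver; it prints a notice on machines where the driver is not installed):

```shell
# List the GPUs visible on the current node, one line per device.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L
else
    echo "nvidia-smi not available on this node"
fi
```

On `gp14` this would list a single GPU; on the other compute nodes, two.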
## Network
All compute nodes together with the login node are connected to the 10 Gbit Ethernet switch ([TP-Link T1700X-16TS](https://www.tp-link.com/us/business-networking/smart-switch/t1700x-16ts/)).
The compute nodes are not accessible from the outside network, they must be accessed from the login node.
Internet access from the compute nodes is provided via [NAT](https://en.wikipedia.org/wiki/Network_address_translation) on the login node.
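Because the compute nodes are reachable only through the login node, SSH's `ProxyJump` option avoids a manual two-step login. A sketch for `~/.ssh/config`, assuming a hypothetical username `user` and the compute node `gp14` (substitute your own):

```
Host gp-login
    HostName gp3.fjfi.cvut.cz
    User user

# Reach a compute node in one hop via the login node:
Host gp14
    User user
    ProxyJump gp-login
```

With this in place, `ssh gp14` tunnels transparently through the login node; the equivalent one-off command is `ssh -J user@gp3.fjfi.cvut.cz user@gp14`.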
## Storage
The `/mnt/gp3/` file system is __not backed up__, and because it is a RAID 0 array, even __a single drive failure would mean destruction of all data__.
Users are therefore advised not to keep valuable data here, or to make their own backups if needed.
The `/mnt/gp3/` storage is shared with the compute nodes over the network.
## Other nodes
Other nodes (gp{1,2,4,5,6}) are not connected to the 10 Gbit switch and cannot be used for distributed computations.
Their hardware specifications also vary from node to node; see [http://mmg.fjfi.cvut.cz/mmg/gpu](http://mmg.fjfi.cvut.cz/mmg/gpu) for details.