Client Configuration
Copy the repository public key md.singleparticle.net.pub from your home directory to /etc/cvmfs/keys/md.singleparticle.net/:
sudo mkdir -p /etc/cvmfs/keys/md.singleparticle.net/
sudo cp ~/md.singleparticle.net.pub /etc/cvmfs/keys/md.singleparticle.net/
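Optionally, you can check that the key is in place; CVMFS repository public keys are plain-text PEM files, so the first line of the .pub file should typically read -----BEGIN PUBLIC KEY-----:
ls -l /etc/cvmfs/keys/md.singleparticle.net/
head -1 /etc/cvmfs/keys/md.singleparticle.net/md.singleparticle.net.pub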
Create the client configuration file /etc/cvmfs/default.d/60-md-singleparticle-net.conf and add the following lines:
CVMFS_CONFIG_REPOSITORY=md.singleparticle.net
CVMFS_DEFAULT_DOMAIN=singleparticle.net
CVMFS_SERVER_URL="http://106.15.9.90/cvmfs/md.singleparticle.net"
CVMFS_KEYS_DIR="/etc/cvmfs/keys/md.singleparticle.net/"
CVMFS_HTTP_PROXY='DIRECT'
Create another configuration file /etc/cvmfs/config.d/md.singleparticle.net.conf and add the following lines:
CVMFS_SERVER_URL="http://106.15.9.90/cvmfs/md.singleparticle.net"
CVMFS_KEYS_DIR="/etc/cvmfs/keys/md.singleparticle.net/"
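Optionally, you can sanity-check the client configuration with the cvmfs_config tool that ships with the CVMFS client; it should report OK if the configuration files and keys are readable:
sudo cvmfs_config chksetup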
Mount the repository (this step may take ten seconds or so, depending on network speed):
sudo mkdir -p /cvmfs/md.singleparticle.net
sudo mount -t cvmfs md.singleparticle.net /cvmfs/md.singleparticle.net
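If the mount succeeds, the repository contents are visible immediately. A quick way to verify is to list the mount point and, if you like, query the client's view of the repository:
ls /cvmfs/md.singleparticle.net
cvmfs_config stat -v md.singleparticle.net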
To mount the repository automatically at boot, add the following line to /etc/fstab:
md.singleparticle.net /cvmfs/md.singleparticle.net cvmfs defaults,_netdev,nodev 0 0
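To test the fstab entry without rebooting, you can unmount the repository and let mount re-read fstab:
sudo umount /cvmfs/md.singleparticle.net
sudo mount -a
df -h /cvmfs/md.singleparticle.net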
Lmod Configuration
Create a new file /etc/profile.d/singleparticle.sh and add the following lines:
source /cvmfs/md.singleparticle.net/lmod/etc/conda/activate.d/lmod-activate.sh
module use /cvmfs/md.singleparticle.net/Repo/modules/all
Then run:
source /etc/profile
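If the repository might not always be mounted when a shell starts (for example, before the network is up), a slightly more defensive variant of singleparticle.sh can guard the same two lines behind a directory check; this is only a sketch:
if [ -d /cvmfs/md.singleparticle.net/lmod ]; then
    source /cvmfs/md.singleparticle.net/lmod/etc/conda/activate.d/lmod-activate.sh
    module use /cvmfs/md.singleparticle.net/Repo/modules/all
fi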
Using the Software
List all available packages:
module avail
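To look for a specific package instead of listing everything, the usual Lmod search commands also work, for example:
module avail OpenMM
module spider GROMACS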
OpenMM
module load OpenMM/8.0.0-foss-2022a-CUDA-11.5.2
PySCF
module load PySCF/2.1.1-foss-2022a
PSI4
module load PSI4/1.7-foss-2022a
AmberTools
module load AmberTools/22.3-foss-2022a
GROMACS
module load GROMACS/2022.5-foss-2022a-CUDA-11.7.0-PLUMED-2.8.2
PLUMED
module load PLUMED/2.8.2-foss-2022a
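After loading a module, you can confirm what is active with module list. For packages that ship Python bindings, a quick import makes an easy smoke test; the OpenMM self-test below assumes the loaded module places its Python environment on your PATH:
module list
python -m openmm.testInstallation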
Workstation Configurations

Developer Workstation (recommended for developers and researchers with a light load)
Dual Intel Xeon Scalable (10 to 16 cores)
128GB ECC DDR4 (up to 4TB)
2x NVIDIA RTX 4090 Graphics Cards with 24GB GDDR6X
480GB SSD for boot
2TB Enterprise SSD
2x 16TB/18TB/22TB Enterprise HDD

Mainstream Workstation (recommended for cryo-EM researchers for routine use)
Dual Intel Xeon Scalable (16 to 24 cores)
256GB ECC DDR4 (up to 4TB)
4x NVIDIA RTX 4090 Graphics Cards with 24GB GDDR6X
1TB SSD for boot
4TB Enterprise SSD
6x 16TB/18TB/22TB Enterprise HDD

Performance Workstation (recommended for power users working on large complexes)
Dual Intel Xeon Scalable Gold (24 to 26 cores)
384GB ECC DDR4 (32 DIMM slots)
8x NVIDIA RTX 4090 Graphics Cards with 24GB GDDR6X
1TB SSD for boot
2x 4TB Enterprise SSD
6x 16TB/18TB/22TB Enterprise HDD

Performance Cluster Node (optimized for rackmount applications; storage is separate)
Dual Intel Xeon Scalable Gold (24 to 26 cores)
256GB ECC DDR4 (up to 4TB)
2x or 4x GPUs in 2U, or 8x or 10x GPUs in 4U
2x 480GB Enterprise SSD
InfiniBand 100-200Gb
Update Log
Stay informed about PsiStack’s ongoing development. We are continuously enhancing PsiStack based on user feedback and evolving research needs.
October 16, 2024: PsiStack repository officially released.
August 23, 2024: Public testing began.
August 22, 2024: PSI4, PySCF, and xTB online.
August 17, 2024: OpenMM online.
August 16, 2024: GROMACS and LAMMPS online.
August 15, 2024: OpenBabel and AutoDock online.
August 12, 2024: PLUMED online.
August 11, 2024: AmberTools online.
July 27, 2024: Search path scheme for shared libraries updated.
July 11, 2024: Internal testing began.
July 10, 2024: Mathematics libraries online.
July 1, 2024: Toolchain compilation completed.