Getting started



On June 26 we will update the GPU compute nodes to a new version of IBM Watson Machine Learning Accelerator. This will change the way you access deep learning packages like Tensorflow and Pytorch. Instead of "activating" these packages, you will be able to install new versions directly in your anaconda environment.

We are actively updating the wiki documentation to explain the new method of accessing deep learning packages. Please bear with us during these updates, as some documentation may still refer to the old method of "activating" deep learning packages.

1. Logging on

DeepSense has two login nodes, and . You can access these through SSH with your username and password from any computer on campus.

For example, if your userid is user1, you can connect to DeepSense by typing ssh , just like logging on to any other networked computer.
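As a sketch, the connection command looks like the following. The hostname below is hypothetical, used only for illustration; substitute one of the actual login node names given above.

```shell
# Connect to a DeepSense login node over SSH.
# NOTE: "login1.deepsense.ca" is a hypothetical hostname for illustration;
# replace it with the real login node name.
ssh user1@login1.deepsense.ca
```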

Note: The login nodes are intended for testing and compiling code. Please don’t run long or intensive computation on these nodes. Keep reading for instructions on how to submit compute jobs to dedicated compute nodes.

1.1 VPN

To connect to the DeepSense platform from outside of the Dalhousie Campus, you'll need to use a VPN. If you are a student, staff, or faculty member, you can use the Dalhousie VPN (

If you are not a Dalhousie staff, student, or faculty member but require offsite access and cannot use the Dalhousie VPN, contact your project leader or ( to make other arrangements.

For more info, see VPN Setup.

2. Transfer data

For more information, see Transferring Data.

DeepSense has two protocol nodes, and . You can connect to these using the SAMBA transfer protocol, e.g. smb:// with your username and password. Please contact your project leader or if you need help transferring large amounts of data.

Data transferred through the protocol nodes will be located in the shared /data directory.
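From a Linux command line, you can browse a protocol node with smbclient. The hostname below is hypothetical, used only for illustration; substitute one of the actual protocol node names given above.

```shell
# List the shares available on a DeepSense protocol node.
# NOTE: "protocol1.deepsense.ca" is a hypothetical hostname for
# illustration; replace it with a real protocol node name.
smbclient -L //protocol1.deepsense.ca -U user1

# From a graphical file manager, an address of the form
#   smb://protocol1.deepsense.ca/data
# connects to the shared /data directory.
```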

See Storage policies for more information about the available shared file systems, storage policies, and backup policies.

3. Configure your environment

DeepSense compute and management nodes are IBM Power8 computers (ppc64le) running Redhat Enterprise Linux. See Resources for more details on the available nodes.

3.1 Loading a python environment

You have two options for using python on DeepSense. You can use the systemwide python install, managed by DeepSense administrators. This is recommended for users new to Linux. You will need to contact DeepSense support to have additional software packages installed in the systemwide python.

Alternatively, you can install an Anaconda python environment or other software in your home directory. This allows you to install or update packages or software without requesting and waiting for DeepSense staff.

Systemwide python (managed by DeepSense)

DeepSense nodes have anaconda2 python installed in /opt/anaconda2. To use this systemwide python, add the following line to the .bashrc file in your home directory:

echo ". /opt/anaconda2/etc/profile.d/" >> ~/.bashrc

Then source your .bashrc file: source ~/.bashrc

To load the python2 environment, run conda activate

To use python3 you can activate the py36 environment: conda activate py36

You can add either line to your .bashrc file to automatically load the desired environment when you log in.
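For example, to have the py36 environment load automatically at login, you could append the activation command to your .bashrc. This is a sketch; it assumes the systemwide anaconda2 setup described above is already in your .bashrc, and that py36 is the environment you want.

```shell
# Load the python3 (py36) environment automatically on each login.
echo "conda activate py36" >> ~/.bashrc
```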

Local python install (managed by individual user)

See Installing local software for more information.

4. Running compute jobs

DeepSense has two different methods of submitting compute jobs.

4.1 Load Sharing Facility (LSF)

LSF is a set of command line tools for submitting compute jobs. You may be familiar with other similar software such as Sun Grid Engine or SLURM.

LSF jobs are submitted using the bsub command.

You can examine the progress of your currently running jobs with the bjobs command.

You can examine the available compute nodes and their available resources with the bhosts command.

For more information about using LSF see LSF.
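As an illustration, a minimal LSF batch script might look like the following. The job name, resource options, and script name are hypothetical; adjust them to your workload, and see the LSF page for the options available on DeepSense.

```shell
#!/bin/bash
#BSUB -J myjob            # job name (hypothetical)
#BSUB -o myjob.%J.out     # stdout file; %J expands to the job ID
#BSUB -e myjob.%J.err     # stderr file
#BSUB -n 4                # number of cores to request

# The workload itself (train.py is a placeholder for your own program)
python train.py
```

Submit the script with `bsub < myjob.lsf`, then monitor it with `bjobs`.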

4.2 Conductor with Spark (CWS)

CWS is an IBM web-based graphical interface for creating and running Apache Spark compute jobs.

To use CWS, connect to the IBM Spectrum Computing Cluster Management Console at . Log in with your username and password.

Note that you currently need to accept a self-signed web certificate; this will be fixed in the future.

For more information about using CWS see CWS.

5. Deep Learning packages and other available software

DeepSense has a variety of Deep Learning packages available as part of IBM Watson Machine Learning Accelerator, including Tensorflow, Caffe, and PyTorch. These packages can be installed from the anaconda repository.
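For example, installing a deep learning package into your own environment might look like the following. This is a sketch: the environment name is arbitrary, and the exact package and channel names available for ppc64le may differ, so check with `conda search` or DeepSense support first.

```shell
# Create a fresh environment (the name "dl" is arbitrary)
conda create -n dl python=3.6
conda activate dl

# Install a deep learning package from the anaconda repository
# (package availability on ppc64le may vary; verify with `conda search`)
conda install tensorflow
```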

These packages were formerly installed in /opt/DL/ on each compute node and needed to be activated before use, e.g. source /opt/DL/tensorflow/bin/tensorflow-activate.

Deep Learning packages are typically used on the GPU nodes but some deep learning packages can also be used on the login nodes and CPU-only nodes. This can be useful for testing your code or running CPU-bound workloads. Note that some deep learning packages may fail if run without a GPU, e.g. Caffe currently requires a GPU.
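Before running on a login or CPU-only node, it can help to check whether a GPU is actually visible. A minimal sketch using PyTorch (it does not assume torch is installed; it simply reports False when PyTorch is missing or no CUDA GPU is present):

```python
def gpu_available():
    """Return True only if PyTorch is installed and can see a CUDA GPU."""
    try:
        import torch
    except ImportError:
        # PyTorch is not installed in this environment
        return False
    return torch.cuda.is_available()

print(gpu_available())
```

Guarding GPU-only code behind a check like this lets the same script run for testing on the login nodes and for real workloads on the GPU nodes.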

For a brief tutorial including running Caffe and Tensorflow in a Jupyter notebook see Getting started with Deep Learning.

See Available software for the current list of installed software. If you require additional software you are welcome to install it locally in your home directory or contact DeepSense support.

6. Technical and research support

DeepSense has a dedicated support team of research scientists ready to help you with technical questions, installing software, or even research questions.

If you can't find the answer to your question on this wiki or need more extensive help then send an email to .

See Technical support for more information about the support available.