Participating in The Lattice Project

Volunteering

Anyone interested in volunteering their computing resources to The Lattice Project may do so by signing up for The Lattice BOINC Project.

To learn more about BOINC, visit boinc.berkeley.edu.


Running Applications on the Grid

To run an application on the grid, a grid service must first be created for it. We have worked hard to streamline this process, and we already provide grid services for a number of popular bioinformatics software packages. Jobs can be submitted, monitored, and their results retrieved through web pages or a Grid Brick; you may set up a Grid Brick of your own, or apply for an account on an existing one. In addition, we are developing a gateway that allows jobs to be submitted through a web interface. The GARLI web service on molecularevolution.org is an example of such an interface.


Joining the Grid

If you would like to become an institutional partner, please read on.


Overview

The Lattice Project has grown to comprise a number of computing resources within various departments at the University of Maryland, including CBCB, UMIACS, CMNS, PSLA, and OIT. We have also integrated computational resources of the following partner institutions: Bowie State University, Coppin State University, and the Smithsonian National Museum of Natural History. See our grid resources page for up-to-the-minute information about our computing resources.


Benefits of Joining the Grid

Institutions that include their computing resources in the grid system derive a number of benefits. First and foremost, a participating institution becomes eligible to use all grid resources, giving its researchers access to a significant amount of computing capability, including the large pool of public computing clients available through the Lattice BOINC Project. Joining the grid may therefore obviate the need for some future hardware purchases (e.g., a new cluster).

Other organizations have the opposite problem: rather than a surplus of demand, they have purchased a cluster that is underutilized and would like to increase its use. Contributing such a resource to the grid system makes it available to many more scientific researchers.

In addition, including a computational resource in the grid system can increase the efficiency of its use, due to the sophistication of the grid meta-scheduler. For example, jobs with large memory requirements can be sent to clusters with large memory nodes, and tightly-coupled jobs (e.g., MPI jobs) can be sent to clusters with fast interconnects. Strictly high-throughput jobs can be sent to Condor pools or to BOINC, thus reserving clusters for jobs that need a fast interconnect. Not only are jobs of various types matched up with appropriate resources, but the scheduler load-balances these resources, and also tries to use the fastest resources first, thus maximizing overall throughput.
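For illustration only, the following sketch shows the general idea behind this kind of matchmaking; it is not the actual Lattice meta-scheduler, and the resource names, fields, and weights are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        mem_per_node_gb: int      # largest memory available on a single node
        fast_interconnect: bool   # e.g., InfiniBand, suitable for MPI jobs
        relative_speed: float     # higher is faster
        load: float               # fraction of the resource currently busy

    @dataclass
    class Job:
        name: str
        mem_gb: int
        tightly_coupled: bool     # e.g., an MPI job
        high_throughput: bool     # many independent tasks

    def eligible(job, res):
        """A resource qualifies only if it meets the job's hard requirements."""
        if job.mem_gb > res.mem_per_node_gb:
            return False
        if job.tightly_coupled and not res.fast_interconnect:
            return False
        return True

    def score(job, res):
        """Prefer faster, less-loaded resources, and steer high-throughput
        work away from fast interconnects so those stay free for MPI jobs."""
        s = res.relative_speed * (1.0 - res.load)
        if job.high_throughput and res.fast_interconnect:
            s *= 0.5
        return s

    def schedule(job, resources):
        candidates = [r for r in resources if eligible(job, r)]
        return max(candidates, key=lambda r: score(job, r), default=None)

    pool = [
        Resource("condor_pool", 4, False, 1.0, 0.2),
        Resource("mpi_cluster", 32, True, 2.0, 0.6),
    ]
    print(schedule(Job("throughput_run", 2, False, True), pool).name)   # condor_pool
    print(schedule(Job("mpi_simulation", 16, True, False), pool).name)  # mpi_cluster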


Integrating a Computational Resource

Computational resources can be divided into two broad classes: dedicated resources (e.g., clusters, networks of workstations) and non-dedicated resources (e.g., University desktops, personal computers). In either case, some type of local scheduling software must tie these resources together and make them addressable. Common schedulers for dedicated resources include PBS and SGE. For non-dedicated resources, Condor is a popular choice. Hence, the first step in the grid resource integration process is to install a local scheduler on the resource.

Once a local scheduling framework is established and works satisfactorily, the next step is to install the Globus Toolkit on one of the resource nodes. The node needs to run a UNIX-based operating system, and should have the capability to submit jobs to the computing resource using the local scheduling software.


Globus

The Globus installation integrates your computational resource with the grid system, allowing jobs submitted from the grid to run on your cluster or Condor pool. These jobs need not interfere with local use of the resource: we recommend a configuration in which grid jobs run at low priority and are preemptable by local users' jobs.

The Globus Toolkit has many components, and not all of them are currently used by The Lattice Project. In addition, many of the components require customization.

Detailed instructions for creating a Lattice-compatible Globus installation can be found here: Globus 4.2.1.
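Once the toolkit is installed, one way to verify that jobs can reach your local scheduler is to submit a trivial test job with globusrun-ws, first to the default Fork factory and then to the factory type that matches your scheduler (PBS, Condor, or SGE). The host name below is a placeholder, and the exact commands may vary with your Globus version:

    globusrun-ws -submit -c /bin/true
    globusrun-ws -submit -Ft PBS -c /bin/hostname
    globusrun-ws -submit -F https://gridnode.example.edu:8443/wsrf/services/ManagedJobFactoryService -Ft Condor -c /bin/hostname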


Condor

We have been encouraging groups to install Condor on computers that will participate in the grid. These computers are often departmental desktops, laptops, and computing labs. Setting up a Condor pool in your administrative domain is an easy way to provide distributed computing to your constituents, and to contribute resources to the larger grid. As part of this process, one machine should be designated to function as the Condor central manager, which will perform matchmaking (i.e., scheduling) between Condor submit nodes and execute nodes. Detailed instructions for installing Condor can be found on the Condor web site.
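As a rough guide (consult the Condor documentation for the settings appropriate to your version), the role each machine plays in the pool is determined by the daemons it runs, configured via DAEMON_LIST in its condor_config; the host name below is a placeholder:

    ## On every machine in the pool (placeholder host name)
    CONDOR_HOST = condor-manager.example.edu

    ## Central manager: runs the collector and the matchmaking negotiator
    DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR

    ## Submit node: runs the schedd that queues jobs for users
    DAEMON_LIST = MASTER, SCHEDD

    ## Execute node: runs the startd that advertises the machine and runs jobs
    DAEMON_LIST = MASTER, STARTD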

Here is a sample Condor submit file.
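The example below is a minimal vanilla-universe submit file; the executable, argument, and file names are placeholders that you would replace with your own:

    universe                = vanilla
    executable              = my_program
    arguments               = input.dat
    transfer_input_files    = input.dat
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    output                  = my_program.out
    error                   = my_program.err
    log                     = my_program.log
    queue

Submitting the file with condor_submit and checking the queue with condor_q is a quick way to confirm that the pool is accepting jobs.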