CAS: Installation Guide

Prerequisites

The CAS service needs to be deployed in a hosting environment. GT3 Core includes a standalone hosting environment that may be used. As an alternative, the Jakarta Tomcat server may be used to host the service, in which case both GT3 Core and the CAS service need to be deployed into it. Once the CAS service has been deployed and the hosting environment has been started, the service is available for use.

Typically, a privileged user, say casadmin, owns the database, installs GT3 Core, deploys the CAS service, bootstraps it with data, and starts up the service. The URL of the CAS service instance is then published for users to contact the service. This document describes the instructions for these steps.

The GT3 install location is referred to as GLOBUS_LOCATION in this document.

There are two ways the CAS distribution can be obtained and installed: from source (described in Section 2) or using an installer.

The instructions from Section 3 onwards are common to both types of install.

Contents:

  1. GT3 core install
  2. CAS install from source
  3. Database install and configuration
  4. CAS Configuration
  5. Starting Hosting Environment
  6. Testing CAS install
  7. CAS database bootstrap

1. GT3 Core Install

Instructions for downloading, installing and configuring GT3 Core can be found here. More detailed instructions, including required tools and their installation instructions, may be found here.

Listed below are the commands to build and run the container from the GT3 Core source distribution. These may have changed; the documentation in GT3 Core is the authoritative reference.

To build GT3 core,

  casadmin$ cd ogsa/impl/java
  casadmin$ ant
  casadmin$ ant setup

2. CAS Install (from source)

To check out the source code for CAS from CVS:
  1. To log in,
     cvs -d :pserver:anonymous@cvs.globus.org:/home/globdev/CVS/gridservices login
  2. At the password prompt, press Enter.
  3. To check out the trunk code:
     cvs -d :pserver:anonymous@cvs.globus.org:/home/globdev/CVS/gridservices co cas
The top-level directory of this checkout is referred to as CAS_HOME in this document. The CAS distribution contains code for the service, command line clients to access the service, tests for the backend database access code and the frontend service, and sample properties files. This section gives instructions to build and deploy the CAS server into a GT3 container.

2.1 Deploying CAS into GT3 container

The CAS_HOME/build.properties file contains the properties relevant to deployment; it needs to be modified to suit the local install, or the properties need to be overridden with each ant command.

The following command, run in CAS_HOME, builds the service and deploys the server side of CAS into GT3 Core. Note that if the container is already running, it needs to be restarted after the deploy for the service to be visible.

For the tests also to be deployed, JUnit needs to be installed. The build script requires that "junit.jar" be on the classpath for the tests to be compiled and deployed. Typically this is placed in the "lib" directory of the Ant install.

  casadmin$ ant deployAll 

The above target deploys the server, client and tests. It also generates all client scripts and places them in $GLOBUS_LOCATION/bin. This directory needs to be added to the PATH so the scripts can be executed from other locations, as shown below.
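For example, in a Bourne-style shell, the directory can be prepended to PATH as follows (this assumes GLOBUS_LOCATION is already set in the environment):

  casadmin$ export PATH=$GLOBUS_LOCATION/bin:$PATH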

To deploy only the client side of CAS, from CAS_HOME

  casadmin$ ant deployClient

To deploy only the server, from CAS_HOME

  casadmin$ ant deployGar

2.2 Obtaining credentials for CAS server

The CAS service can run with its own set of credentials. Instructions to obtain service credentials may be found here.

The standard administrator clients that come with the distribution can be used to perform host authorization; they expect the CAS service to have credentials with the service name "cas". The command in the above-mentioned webpage may be altered as follows,

  casadmin$ grid-cert-request -service cas -host FQDN

In this document, the locations of the certificate and key files are referred to as CAS_CERT_FILE and CAS_KEY_FILE respectively.

3. Database Install and Configuration

CAS uses a backend database to store all user data. Any JDBC-compliant database may be used. This section briefly describes the installation of such a database and the creation of a database using the schema required for the CAS backend.

3.1 Installing Database

Any JDBC-compliant database may be used. PostgreSQL has been used for development and testing, and its drivers are included in the distribution. If a different database is used, the corresponding driver should be added to GLOBUS_LOCATION/lib.

Brief instructions to install a JDBC-compliant database, specifically PostgreSQL, can be found here. For more detailed instructions, please refer to the specific database documentation.

3.2 Creating CAS database

The schema of the database that needs to be created for CAS can be found at:

GLOBUS_LOCATION/etc/databaseSchema/cas_database_schema.sql

To create a database, for example casDatabase, on a PostgreSQL install on the local machine,

  casadmin$ createdb casDatabase
  casadmin$ psql -U casadmin -d casDatabase -f GLOBUS_LOCATION/etc/databaseSchema/cas_database_schema.sql

On running the above command, a list of notices is printed to the screen. Unless any of them say "ERROR", these are informational output only.
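As an optional sanity check (standard psql usage, not part of the original instructions), the tables created by the schema can be listed with:

  casadmin$ psql -U casadmin -d casDatabase -c '\dt'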

3.3 Database configuration file

The CAS server needs to be configured with a properties file containing database configuration information. A sample file is provided in the distribution at GLOBUS_LOCATION/etc/casDBProperties. The values need to be modified to suit the particular database install. Note: since this file contains the database access username and password, set appropriate permissions to protect it, for example as shown below.
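A minimal sketch of restricting access so only the file's owner (casadmin here) can read and write it, using standard Unix permissions:

  casadmin$ chmod 600 $GLOBUS_LOCATION/etc/casDBProperties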

4. CAS Configuration

The deployed CAS service has default values set. Prior to using the service, these values need to be altered to suit the specific install. The properties described below need to be altered in GLOBUS_LOCATION/server-config.wsdd.

CAS configuration properties:

GT3 Core configuration properties: GT3 Core supports quite a few features for service configuration. Two that are relevant to this service are listed below; they are used to configure the service with credentials of its own, as described in Section 2.2. If these two properties are removed from the configuration, default credentials are used.

Refer to GT3 Security documentation for more details.

5. Starting Hosting Environment

Now that the CAS service has been deployed and configured, the hosting environment needs to be started up. If the GT3 standalone container is used, instructions to start it are given below. These may have changed; the documentation in GT3 Core is the authoritative reference. If Tomcat or any other hosting environment is used, refer to its specific documentation.

Prior to running these commands, the environment variable GLOBUS_LOCATION needs to be set to point to the GT3 install location, for example as shown below.
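For example, in a Bourne-style shell (the install path shown is illustrative):

  casadmin$ export GLOBUS_LOCATION=/usr/local/gt3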

To start a container on the local machine on, say, port number 8888, the following command may be used in GLOBUS_LOCATION.

  casadmin$ bin/globus-start-container -p 8888

If the -p option is not used, the container is started on port 8080.

When the container needs to be stopped, the following command may be used, assuming the container is running on the local machine on port number 8888. The "hard" option will forcefully shut down the container even if there are errors. For more options, use "-help".

  casadmin$ bin/globus-stop-container -secure hard http://localhost:8888

Now, on starting up the container into which the deployment was done, the CAS service becomes available. If the GT3 standalone container is used, it displays a list of deployed services at startup. If the CAS service was deployed correctly, the CAS URL will be displayed as described below.

If GT3Host and GT3Port stand for the host and port on which the container is running, then the URL looks like

  http://GT3Host:GT3Port/ogsa/services/base/cas/CASService

As an example, if the container is running on localhost and port 8888, then the instance URL will look like

  http://localhost:8888/ogsa/services/base/cas/CASService

This instance URL needs to be published for the users to be able to contact the CAS service.

6. Testing CAS install

This is an optional step; if it is skipped, proceed to Section 7, Bootstrap.

The CAS distribution has two sets of tests built using JUnit. For these tests to have been compiled and deployed, JUnit needs to be installed. If you installed using the source distribution, refer to Section 2 of this guide. If you used an installer, ensure that "junit.jar" was on your classpath at install time.

6.1 Tests for the backend database access functionality

This test does not need any service setup; it only requires a database and a file with the database configuration. The target used to run the tests picks up the path to the database configuration file from the property cas.db.properties. If not overridden with the -Dcas.db.properties option while running the test, the value defaults to GLOBUS_LOCATION/etc/casDBProperties.

The database needs to be empty for this test. To delete any existing entries in the database, a script is provided in the distribution. For a PostgreSQL install, the command would be

  casadmin$ psql -U casadmin -d casDatabase -f GLOBUS_LOCATION/share/cas/database_delete.sql

The following command runs the test:

  casadmin$ ant -f GLOBUS_LOCATION/etc/cas-build.xml runDatabaseTests

There might be some lines of output on the screen that start with "ERROR", but they are not an indication of test failure. At the end of the run, a summary message will be printed indicating the number of tests run and the number that failed. A test report will be generated in GLOBUS_LOCATION/cas-test-reports, in a file named TEST-org.globus.ogsa.impl.base.cas.server.databaseAccess.test.PackageTests.xml.
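If the database configuration file lives in a non-default location, the same target can be pointed at it using the override described above (the path shown is illustrative):

  casadmin$ ant -f GLOBUS_LOCATION/etc/cas-build.xml runDatabaseTests -Dcas.db.properties=/path/to/myDBProperties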

6.2 Tests for the CAS service frontend

These tests are targeted at testing the frontend CAS server capability and can also be used to simulate a multi-user scenario.

Test properties: It is required that the CAS service be deployed and a GT3 container started up. The tests pick up the host and port of the GT3 container from the properties GT3Host and GT3Port. If, in any of the Ant commands below, you are accessing a container not running on the default host/port, use the -D flag in Ant to override the properties as shown:

  ant <targetName> -DGT3Host="your host name" -DGT3Port="your port number"

There are two test targets that have been set up, which can be run with different user proxies. The first test target tests all self operations and sets up the database for the second user. The second test target, run with another user's proxy, then ensures that the setup was done correctly. While the first test only requires that the CAS database has been bootstrapped with implicit objects only, the second test requires that the first test has run successfully.

The steps outlined below also describe how two sets of proxies (one being an independent proxy) can be generated from one set of credentials.

Other than the database configuration file described for the previous test, this test also uses a test properties file that is picked up by the target through the property cas.test.properties. If not overridden with -Dcas.test.properties, it defaults to GLOBUS_LOCATION/etc/casTestProperties. The properties user1SubjectDN and user2SubjectDN need to be set in that file, as described in the steps below.
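A sketch of the relevant entries, assuming the same key=value format as the other sample properties files in the distribution (the DNs are illustrative placeholders; the actual values come from the ProxyInfo and CertInfo commands in the steps below):

  user1SubjectDN=/O=Grid/O=Globus/OU=something/CN=independent proxy identity
  user2SubjectDN=/O=Grid/O=Globus/OU=something/CN=someone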

Steps to test the CAS install

Ensure cog-jglobus.jar is on the classpath. This jar can be found in GLOBUS_LOCATION/lib. It can be placed on the classpath by using the GLOBUS_LOCATION/etc/globus-devel-env.csh script (or the bat script in the case of Windows):

  casadmin$ source $GLOBUS_LOCATION/etc/globus-devel-env.csh
  1. In the test properties file, set user2SubjectDN to be the subject in your regular proxy. The following command returns the appropriate string:
      casadmin$ java org.globus.tools.CertInfo -subject -globus
  2. Generate an independent proxy using the following command:
      casadmin$ java org.globus.tools.ProxyInit -independent
  3. Set the identity in the proxy generated in the above step as user1SubjectDN in the test properties file. The following command returns the relevant string:
      casadmin$ java org.globus.tools.ProxyInfo -subject -globus
  4. Delete all data from the database. The following command does this for a PostgreSQL install with database name casDatabase and database username casadmin:
     casadmin$ psql -U casadmin -d casDatabase -f GLOBUS_LOCATION/share/cas/database_delete.sql
  5. Now the database needs to be populated with the CAS server's implicit data for these tests to run. This may be done using the following command. The -d option takes the path to a file with the database configuration, set up as described here. (If the prior steps were done, the file $GLOBUS_LOCATION/etc/casDBProperties should already be set with the required values.)
      casadmin$ $GLOBUS_LOCATION/bin/cas-server-bootstrap -d $GLOBUS_LOCATION/etc/casDBProperties -implicit 
  6. Start a GT3 container as described here.
  7. The following command runs tests for self permissions and sets up the database for the user whose subject DN is user2SubjectDN. If you are not running the container on the default host/port, set the properties as shown here.
      casadmin$ ant -f GLOBUS_LOCATION/etc/cas-build.xml serverTestsUser1 
    Test report will be generated in GLOBUS_LOCATION/cas-test-reports. The file will be named TEST-org.globus.ogsa.impl.base.cas.server.test.PackageTests.xml
  8. To test as the second user, generate a proxy for the subject DN specified for the second user.
      casadmin$ java org.globus.tools.ProxyInit 
  9. To test as the second user, run the following. If you are not running the container on the default host/port, set the properties as shown here.
      casadmin$ ant -f GLOBUS_LOCATION/etc/cas-build.xml serverTestsUser2
    Test report will be generated in GLOBUS_LOCATION/cas-test-reports. The file will be named TEST-org.globus.ogsa.impl.base.cas.server.test.PostPackageTests.xml
After these tests, the CAS database needs to be reset. The following command will delete all entries from the database.

  casadmin$ psql -U casadmin -d casDatabase -f GLOBUS_LOCATION/share/cas/database_delete.sql

7. CAS Server Bootstrap

A client to bootstrap the CAS backend database is in GLOBUS_LOCATION/bin. Its usage is:

  casadmin$ cas-server-bootstrap [options] -d dbPropFile [ -implicit | -b bootstrapFile]

where

  -d dbPropFile     gives the path to the database configuration file
  -implicit         populates the database with the CAS server's implicit data
  -b bootstrapFile  populates the database with the data in the given bootstrap file

Steps to bootstrap the database

  1. Copy GLOBUS_LOCATION/share/cas/bootstrapSample to bootstrap
  2. Edit the file bootstrap. This file is used to grant super user permissions to a CAS user and provide details about the user.

    For example, say a user with nickname "superUser", subject DN "/O=Grid/O=Globus/OU=something/CN=someone", and in a user group called "suGroup" needs to be added. Moreover, say the user's trust anchor is "someTrustAnchor" with "X509" as the authentication method and DN "/C=US/O=something/CN=Some CA". The bootstrap file should then look like,

    ta-name=someTrustAnchor
    ta-authMethod=X509
    ta-authData=/C=US/O=something/CN=Some CA
    user-name=superUser
    user-subject=/O=Grid/O=Globus/OU=something/CN=someone
    userGroupname=suGroup
    
  3. The following command, run from GLOBUS_LOCATION/bin, populates the database with the data in the bootstrap file and the implicit data, provided casDBProperties is configured as described here.
      casadmin$ cas-server-bootstrap -d GLOBUS_LOCATION/etc/casDBProperties -implicit -b bootstrap
Now the CAS service has been successfully installed and can be used. For users to be able to contact the CAS service, the instance URL needs to be made available. If the hosting environment in which the installation was done is up and running on localhost port 8888, the instance URL will be

  http://localhost:8888/ogsa/services/base/cas/CASService
See also: CAS Administrator's guide, CAS User's guide

Updated: 02-02-2004