The CAS service needs to be deployed in a hosting environment. The GT3 Core provides a standalone hosting environment that may be used. Alternatively, the Jakarta Tomcat server may be used to host the service, in which case both the GT3 Core and the CAS service need to be deployed into it. Once the CAS service has been deployed and the hosting environment has been started, the service is available for use.
Typically, a privileged user, say casadmin, owns the database, installs the GT3 Core, deploys the CAS service, bootstraps it with data, and starts the service. The URL of the CAS service instance is then published so that users can contact the service. This document describes how to perform these steps.
The GT3 install location is referred to as GLOBUS_LOCATION in this document.
There are two ways the CAS distribution can be obtained and installed.
Listed below are commands to build and run the container from the GT3 Core source distribution. These may have changed, so the documentation in the GT3 Core is the best reference for them.
To build GT3 core,
casadmin$ cd ogsa/impl/java
casadmin$ ant
casadmin$ ant setup
cvs -d :pserver:anonymous@cvs.globus.org:/home/globdev/CVS/gridservices login
cvs -d :pserver:anonymous@cvs.globus.org:/home/globdev/CVS/gridservices co cas
The CAS_HOME/build.properties file needs to be modified, or the properties need to be overridden on each ant command line. The properties relevant to deployment are:
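Any of these properties can also be passed on the ant command line with the -D option instead of editing build.properties. A minimal sketch, where deploy.dir is only an illustrative property name (substitute the actual property and target):
casadmin$ ant -Ddeploy.dir=$GLOBUS_LOCATION <targetName>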
The following command, run in CAS_HOME, builds the service and deploys the server side of CAS into the GT3 Core. Note that if the container is already running, it needs to be restarted after the deploy for the service to become visible.
For the tests to be deployed as well, JUnit needs to be installed. The build script requires that "junit.jar" be on the classpath for the tests to be compiled and deployed. Typically this jar is placed in the "lib" directory of the Ant install.
casadmin$ ant deployAll
The above target deploys the server, the client, and the tests. It also generates all client scripts and places them in $GLOBUS_LOCATION/bin. This directory needs to be added to the PATH to be able to execute these scripts from other locations.
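For example, with a Bourne-style shell the directory can be added to the PATH as follows:
casadmin$ export PATH=$GLOBUS_LOCATION/bin:$PATH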
To deploy only the client side of CAS, from CAS_HOME
casadmin$ ant deployClient
To deploy only the server, from CAS_HOME
casadmin$ ant deployGar
The CAS service can run with its own set of credentials. Instructions to obtain service credentials may be found here.
The standard administrator clients that come with the distribution can be used to perform host authorization; they expect the CAS service to have credentials with the service name "cas". The command in the above-mentioned webpage may be altered as follows:
casadmin$ grid-cert-request -service cas -host FQDN
In this document, the location of certificate and key files is referred to as CAS_CERT_FILE and CAS_KEY_FILE respectively.
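As an illustration only (the actual paths depend on how grid-cert-request was run and where the certificate was installed), a service certificate requested with -service cas is typically written under /etc/grid-security/cas/, so the values might be:
CAS_CERT_FILE=/etc/grid-security/cas/cascert.pem
CAS_KEY_FILE=/etc/grid-security/cas/caskey.pem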
Brief instructions to install a JDBC-compliant database, and specifically PostgreSQL, can be found here. For more detailed instructions, please refer to the specific database documentation.
The schema of the database that needs to be created for CAS can be found at:
GLOBUS_LOCATION/etc/databaseSchema/cas_database_schema.sql
To create a database, for example casDatabase, on a PostgreSQL install on the local machine:
casadmin$ createdb casDatabase
casadmin$ psql -U casadmin -d casDatabase -f GLOBUS_LOCATION/etc/databaseSchema/cas_database_schema.sql
On running the above command, a list of notices will be printed on the screen. Unless any of them say "ERROR", these are informational output only.
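To verify that the schema was loaded, the tables in the database can be listed with the psql client, for example:
casadmin$ psql -U casadmin -d casDatabase -c "\dt"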
CAS configuration properties:
GT3 Core configuration properties: There are quite a few features supported by the GT3 Core for service configuration. Two that are relevant to this service are listed below; they are used to configure the service with credentials of its own as described in Section 2.2. If these two properties are removed from the configuration, default credentials are used.
Now that the CAS service has been deployed and configured, the hosting environment needs to be started up. If the GT3 standalone container is used, instructions to start it are given below. These may have changed, so the documentation in the GT3 Core is the best reference for them. If Tomcat or any other hosting environment is used, refer to its documentation to start it up.
Prior to running these commands, the environment variable GLOBUS_LOCATION needs to be set to point to the GT3 install location.
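For example, in a Bourne-style shell (the install path shown is only an example):
casadmin$ export GLOBUS_LOCATION=/usr/local/globus
casadmin$ cd $GLOBUS_LOCATION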
To start a container on the local machine on, say, port 8888, the following command may be used in GLOBUS_LOCATION.
casadmin$ bin/globus-start-container -p 8888
If the port option (-p) is not specified, the container is started on port 8080.
When the container needs to be stopped, the following command may be used in GLOBUS_LOCATION (assuming the container is running on the local machine on port 8888). The "hard" option forcefully shuts down the container even if there are errors. For more options, use "-help".
casadmin$ bin/globus-stop-container -secure hard http://localhost:8888
Now, on starting up the container into which the deployment was done, the CAS service becomes available. If the GT3 standalone container is used, it displays a list of the deployed services at startup. If the CAS service was deployed correctly, the CAS URL will be displayed as described below.
If GT3Host and GT3Port stand for the host and port on which the container is running, then the URL looks like
http://GT3Host:GT3Port/ogsa/services/base/cas/CASService
As an example, if the container is running on localhost and port 8888, then the instance URL will look like
http://localhost:8888/ogsa/services/base/cas/CASService
This instance URL needs to be published for the users to be able to contact the CAS service.
The CAS distribution has two sets of tests built using JUnit. For these tests to have been compiled and deployed, JUnit needs to be installed. If you installed from the source distribution, refer to Section 2 of the install instructions. If you used an installer, ensure that "junit.jar" is on your classpath at install time.
The database needs to be empty for this test. To delete any existing entries in the database, a script is provided in the distribution. For a PostgreSQL install, the command would be
casadmin$ psql -U casadmin -d casDatabase -f GLOBUS_LOCATION/share/cas/database_delete.sql
The following command runs the test:
casadmin$ ant -f GLOBUS_LOCATION/etc/cas-build.xml runDatabaseTests
There might be some lines of output on the screen that start with "ERROR", but they are not an indication of test failure. At the end of the run, a summary message will be printed indicating the number of tests run and the number that failed. The test report will be generated in GLOBUS_LOCATION/cas-test-reports. The file will be named TEST-org.globus.ogsa.impl.base.cas.server.databaseAccess.test.PackageTests.xml
ant <targetName> -DGT3Host="your host name" -DGT3Port="your port number"
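For example, to run the database tests against a container on localhost and port 8888 (adjust the host and port to your own container):
casadmin$ ant -f GLOBUS_LOCATION/etc/cas-build.xml runDatabaseTests -DGT3Host=localhost -DGT3Port=8888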
There are two test targets that can be run with different user proxies. The first test target tests all self operations and sets up the database for the second user. The second test target, run with another user's proxy, then ensures that the setup was done correctly. While the first test only requires that the CAS database be bootstrapped with implicit objects, the second test requires that the first test has run successfully.
The steps outlined below also describe how two sets of proxies (one being an independent proxy) can be generated from one set of credentials.
In addition to the database configuration file described for the previous test, this test also uses a test properties file that is picked up by the target through the property cas.test.properties. If not overridden with -Dcas.test.properties, it defaults to GLOBUS_LOCATION/etc/casTestProperties. The following properties need to be set in that file:
Steps to test the CAS install
Ensure cog-jglobus.jar is on the classpath. This jar can be found in GLOBUS_LOCATION/lib. It can be placed on the classpath by using the GLOBUS_LOCATION/etc/globus-devel-env.csh script (or the bat script in the case of Windows).
casadmin$ source $GLOBUS_LOCATION/etc/globus-devel-env.csh
casadmin$ java org.globus.tools.CertInfo -subject -globus
casadmin$ java org.globus.tools.ProxyInit -independent
casadmin$ java org.globus.tools.ProxyInfo -subject -globus
casadmin$ psql -U casadmin -d casDatabase -f GLOBUS_LOCATION/share/cas/database_delete.sql
casadmin$ $GLOBUS_LOCATION/bin/cas-server-bootstrap -d $GLOBUS_LOCATION/etc/casDBProperties -implicit
casadmin$ ant -f GLOBUS_LOCATION/etc/cas-build.xml serverTestsUser1
The test report will be generated in GLOBUS_LOCATION/cas-test-reports. The file will be named TEST-org.globus.ogsa.impl.base.cas.server.test.PackageTests.xml
casadmin$ java org.globus.tools.ProxyInit
casadmin$ ant -f GLOBUS_LOCATION/etc/cas-build.xml serverTestsUser2
The test report will be generated in GLOBUS_LOCATION/cas-test-reports. The file will be named TEST-org.globus.ogsa.impl.base.cas.server.test.PostPackageTests.xml
casadmin$ psql -U casadmin -d casDatabase -f GLOBUS_LOCATION/share/cas/database_delete.sql
casadmin$ cas-server-bootstrap [options] -d dbPropFile [ -implicit | -b bootstrapFile]
where
Steps to bootstrap the database
For example, say a user with nickname "superUser", subject DN "/O=Grid/O=Globus/OU=something/CN=someone", and membership in a user group called "suGroup" needs to be added. Moreover, say the user's trust anchor is "someTrustAnchor" with "X509" as the authentication method and a DN of "/C=US/O=something/CN=Some CA". The bootstrap file should look like:
ta-name=someTrustAnchor
ta-authMethod=X509
ta-authData=/C=US/O=something/CN=Some CA
user-name=superUser
user-subject=/O=Grid/O=Globus/OU=something/CN=someone
userGroupname=suGroup
casadmin$ cas-server-bootstrap -d GLOBUS_LOCATION/etc/casDBProperties -implicit -b bootstrap