ARSC system news for pacman

Contents for pacman

News Items

"CENTER Old File Removal" on pacman

Last Updated: Tue, 17 Dec 2013 -
Machines: linuxws pacman fish
CENTER Old File Removal Begins 01/08/2014
On January 08, 2014 ARSC will begin automatically deleting old files
residing on the $CENTER filesystem.  The automatic tool will run
weekly and will target files older than 30 days.  The complete
policy describing this old file removal is available online:

In preparation for the activation of the automated file
removal tool, files targeted for removal will be listed in a
/center/w/purgeList/username directory and viewable by the individual
file owners. This file listing is an estimation only - files may be
deleted despite failing to appear in this listing.
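The 30-day criterion above can be checked by hand with find(1). The sketch below is self-contained and runs against a throwaway directory standing in for $CENTER; the purge tool's exact matching rules may differ, and GNU touch/find options are assumed.

```shell
# Sketch: list files not modified in more than 30 days, the same age
# criterion the automated removal tool targets.  A temporary directory
# stands in for $CENTER; GNU touch/find assumed.
dir=$(mktemp -d)
touch -d '45 days ago' "$dir/old.dat"   # stand-in for a stale file
touch "$dir/new.dat"                    # recently modified file
find "$dir" -type f -mtime +30          # lists only old.dat
```

On the real system the equivalent check would be run against your own files under $CENTER.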

Note: Modification of file timestamp information, data, or metadata
for the sole purpose of bypassing the automated file removal tool
is prohibited.

Users are encouraged to move important but infrequently used
data to the intermediate and long term $ARCHIVE storage
filesystem. Recommendations for optimizing $ARCHIVE file
storage and retrieval are available on the ARSC website:
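A common way to prepare data for a tape-backed filesystem such as $ARCHIVE is to aggregate many small files into a single tar archive before copying. The sketch below uses illustrative paths and a temporary directory in place of real data; whether ARSC's own recommendations call out tar specifically is an assumption here.

```shell
# Sketch: bundle a results directory into one tar file before moving
# it to $ARCHIVE.  All paths are illustrative.
work=$(mktemp -d)
mkdir "$work/results"
touch "$work/results/a.dat" "$work/results/b.dat"
tar -C "$work" -cf "$work/results.tar" results
tar -tf "$work/results.tar"             # verify contents first
# cp "$work/results.tar" $ARCHIVE/      # then move the single archive
```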

Please contact the ARSC Help Desk with questions regarding the
automated deletion of old files in $CENTER.

"LDAP Passwords" on pacman

Last Updated: Mon, 20 May 2013 -
Machines: linuxws pacman bigdipper fish
How to update your LDAP password 

User authentication and login to ARSC systems uses University 
of Alaska (UA) passwords and follows the LDAP protocol to connect to
the University's Enterprise Directory.  Because of this, users must
change their passwords using the UA Enterprise tools.

While logging into ARSC systems, if you see the following message,
please change your password using the UA Enterprise tools:

  You are required to change your LDAP password immediately.
  Enter login(LDAP) password:

Attempts to change your password on ARSC systems will fail.

Please contact the ARSC Help Desk if you are unable to log in to change
your password.


"New Default PrgEnv-pgi" on pacman

Last Updated: Wed, 26 Jun 2013 -
Machines: pacman
Updated Default PrgEnv-pgi module to 13.4
In response to noticeable cases in which the PGI 12.10 compiler failed
to generate a working executable, we will be moving the pacman default
PrgEnv-pgi module from PrgEnv-pgi/12.10 to PrgEnv-pgi/13.4.

This will affect users who run the "module load PrgEnv-pgi" command
instead of specifying a particular module version, e.g. "module load
PrgEnv-pgi/13.4", for program compilation or in their job submission
scripts.

If you are currently compiling and running successfully with
PrgEnv-pgi/12.10, you are welcome to continue using that version.
Make sure you review your ~/.profile or ~/.cshrc files and explicitly
load the PrgEnv-pgi/12.10 module instead of "module load PrgEnv-pgi".
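For sh-family shells, pinning the version might look like the following ~/.profile fragment; the "command -v" guard is an addition of ours for safety on hosts without the modules package, not part of any ARSC template.

```shell
# Sketch of a ~/.profile addition: pin the PGI environment to 12.10
# instead of taking the system default.  The guard is an assumption,
# added so the file stays harmless where 'module' is unavailable.
if command -v module >/dev/null 2>&1; then
    module load PrgEnv-pgi/12.10
fi
```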

If your code is failing to compile or run properly with
PrgEnv-pgi/12.10 (the current system default), we encourage you to
try again using the PrgEnv-pgi/13.4 environment instead. The
"module swap PrgEnv-pgi/12.10 PrgEnv-pgi/13.4" command will switch
versions of the PGI compiler for you.

Please forward any questions regarding this change or any issues with
compiling or running your program on the pacman system to the ARSC
Help Desk.


"PrgEnv" on pacman

Last Updated: Wed, 22 Oct 2008 -
Machines: pacman
Programming Environments on pacman
Compiler and MPI Library versions on pacman are controlled via
the modules package.  New accounts load the "PrgEnv-pgi" module by
default.  This module adds the PGI compilers and the OpenMPI stack 
to the PATH.  

Should you experience problems with a compiler or library, in
many cases a newer programming environment may be available.

Below is a description of available Programming Environments:

Module Name      Description
===============  ==============================================
PrgEnv-pgi       Programming environment using PGI
                 compilers and MPI stack (default version).

PrgEnv-gcc       Programming environment using GNU compilers 
                 and MPI stack.

For a list of the latest available Programming Environments, run:

   pacman1 748% module avail PrgEnv-pgi
   ------------------- /usr/local/pkg/modulefiles -------------------
   PrgEnv-pgi/10.5           PrgEnv-pgi/11.2           

If no version is specified when the module is loaded, the "default"
version will be selected.

Programming Environment Changes
The following is a table of recent additions and changes to the
Programming Environment on pacman.

Updates on 1/9/2013

Default Module Updates
The default modules for the following packages will be updated on 1/9/2013.

  module name          new default        previous default
  ===================  =================  ================
  abaqus               6.11               6.10
  comsol               4.2a               4.3a
  grads                1.9b4              2.0.2
  idl                  8.2                6.4
  matlab               R2011b             R2010a
  ncl                  6.0.0              5.1.1
  nco                  4.1.0              3.9.9
  OpenFoam             2.1.0              1.7.1
  petsc                3.3-p3.pgi.opt     3.1-p2.pgi.debug
  pgi                  12.5               9.0.4
  PrgEnv-pgi           12.5               9.0.4              
  python               2.7.2              2.6.5
  r                    2.15.2             2.11.1
  totalview            8.10.0-0           8.8.0-1

Retired Modules
The following module files will be retired on 1/9/2013.

* PrgEnv-gnu/prep0
* PrgEnv-gnu/prep1
* PrgEnv-gnu/prep2
* PrgEnv-gnu/prep3
* PrgEnv-pgi/prep0
* PrgEnv-pgi/prep1
* PrgEnv-pgi/prep2
* PrgEnv-pgi/prep3

Known Issues:
* Some users have reported seg faults for applications compiled with
  PrgEnv-pgi/12.5 when CPU affinity is enabled (e.g. --bind-to-core
  or  --mca mpi_paffinity_alone 1).  Applications compiled with 
  PrgEnv-pgi/12.10 do not appear to have this issue.


"modules" on pacman

Last Updated: Mon, 28 Dec 2009 -
Machines: linuxws pacman
Using the Modules Package

The modules package is used to prepare the environment for various 
applications before they are run.  Loading a module will set the 
environment variables required for a program to execute properly.  
Conversely, unloading a module will unset all environment variables 
that had been previously set.  This functionality is ideal for 
switching between different versions of the same application, keeping 
differences in file paths transparent to the user.

The following modules commands are available:

module avail                - list all available modules
module load {pkg}           - load a module file into the environment
module unload {pkg}         - unload a module file from the environment
module list                 - display modules currently loaded
module switch {old} {new}   - replace module <old> with module <new>
module purge                - unload all modules

Before the modules package can be used in a script, its init file may
need to be sourced.

On pacman, to do this using tcsh or csh, type:

   source /etc/profile.d/modules.csh

On pacman, to do this using bash, ksh, or sh, type:

   . /etc/profile.d/
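Put together, an sh/bash batch script might initialize modules like the sketch below. The modules.sh file name is an assumption, mirroring the modules.csh name shown above for csh/tcsh; the [ -f ] guard keeps the script safe where that file is absent.

```shell
#!/bin/sh
# Sketch: make the module command usable inside an sh/bash script.
# The init file name mirrors the csh one above and is an assumption.
if [ -f /etc/profile.d/modules.sh ]; then
    . /etc/profile.d/modules.sh
    module load PrgEnv-pgi
    module list
fi
```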

Known Issues:

2009-09-24  Accounts using bash that were created before 9/24/2009
            are missing the default ~/.bashrc file.  This may cause
            the module command to be unavailable in some instances.

            Should you experience this issue run the following:

            # copy the template .bashrc to your account.
            [ ! -f ~/.bashrc ] && cp /etc/skel/.bashrc ~

            If you continue to experience issues, please contact the 
            ARSC Help Desk.

"queues" on pacman

Last Updated: Wed, 17 Dec 2008 -
Machines: pacman
Pacman Queues

The queue configuration is as described below.  It is subject to
review and further updates.

   Login Nodes Use:
   The pacman1 and pacman2 login nodes are a shared resource and are 
   not intended for computationally or memory intensive work.  Processes 
   using more than 30 minutes of CPU time on login nodes may be killed 
   by ARSC without warning.  Please use compute nodes or pacman3 through
   pacman9 for computationally or memory intensive work.

   Specify one of the following queues in your Torque/Moab qsub script
   (e.g., "#PBS -q standard"):

     Queue Name     Purpose of queue
     -------------  ------------------------------
     standard       General use routing queue, routes to standard_16 queue.
     standard_4     General use by all allocated users. Uses 4-core nodes.
     standard_12    General use by all allocated users. Uses 12-core nodes.
     standard_16    General use by all allocated users. Uses 16-core nodes.
     bigmem         Usable by all allocated users requiring large memory 
                    resources. Jobs that do not require very large memory 
                    should consider the standard queues.  
                    Uses 32-core large memory nodes.
     debug          Quick turnaround queue for debugging work.  Uses 12-core 
                    and 16-core nodes.
     background     For projects with little or no remaining allocation. 
                    This queue has the lowest priority, however projects
                    running jobs in this queue do not have allocation    
                    deducted. The number of running jobs or processors 
                    available to this queue may be altered based on system load.
                    Uses 16-core nodes.
     shared         Queue which allows more than one job to be placed on a
                    node.  Jobs will be charged for the portion of the 
                    cores used by the job.  MPI, OpenMP and memory intensive
                    serial work should consider using the standard queue 
                    instead.   Uses 4-core nodes.
     transfer       For data transfer to and from $ARCHIVE.  Be sure to 
                    bring all $ARCHIVE files online using batch_stage 
                    prior to the file copy.  

   See 'qstat -q' for a complete list of system queues.  Note, some 
   queues are not available for general use.
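As a sketch of the transfer-queue workflow described in the table above, a job script might look like the following; the walltime, path, and file names are illustrative only, and batch_stage usage follows the description given for the transfer queue.

```shell
#PBS -q transfer
#PBS -l nodes=1:ppn=1
#PBS -l walltime=12:00:00
#PBS -j oe

# Bring the archived file online first (per the transfer queue notes),
# then copy it.  Path and file name are illustrative.
batch_stage $ARCHIVE/run42/output.tar
cp $ARCHIVE/run42/output.tar $CENTER/$USER/
```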

   Maximum Walltimes:
   The maximum allowed walltime for a job is dependent on the number of 
   processors requested.  The table below describes maximum walltimes for 
   each queue.

   Queue             Min   Max     Max       
                    Nodes Nodes  Walltime Notes
   ---------------  ----- ----- --------- ------------
   standard_4           1   128 240:00:00 10-day max walltime.  
   standard_12          1     6 240:00:00 10-day max walltime.    
   standard_16          1    32  48:00:00 
   debug                1     6  01:00:00 Only runs on 12 & 16 core nodes.
   shared               1     1  48:00:00  
   transfer             1     1  60:00:00
   bigmem               1     4 240:00:00     
   background           1    11  08:00:00 Only runs on 16 core nodes.     

   * Feb 7, 2013    - The gpu queue and nodes were retired from the compute
                      node pool.  Fish is available for applications requiring
                      GPUs.
   * Oct 1, 2012    - Max walltime for transfer increased to 60 hours.
   * Sept 18, 2012  - Removed references to $WORKDIR and $LUSTRE
   * March 2, 2012  - standard_4 was added to the available queues.
                      The $LUSTRE filesystem should be used with the
                      standard_4 queue.  Accessing files in $WORKDIR
                      from the standard_4 queue may result in significant
                      performance degradation.
   * March 14, 2012 - shared queue was moved from 12 core nodes to 4 
                      core nodes.    

   PBS Commands:
   Below is a list of common PBS commands.  Additional information is
   available in the man pages for each command.

   Command         Purpose
   --------------  -----------------------------------------
   qsub            submit jobs to a queue
   qdel            delete a job from the queue   
   qsig            send a signal to a running job

   Running a Job:
   To run a batch job, create a qsub script which, in addition to
   running your commands, specifies the processor resources and time
   required.  Submit the job to PBS with the following command.   (For
   more PBS directives, type "man qsub".)

     qsub <script file>

   Sample PBS scripts:
   ## Beginning of MPI Example Script  ############
   #PBS -q standard_12          
   #PBS -l walltime=96:00:00 
   #PBS -l nodes=4:ppn=12
   #PBS -j oe

   mpirun ./myprog

   ## Beginning of OpenMP Example Script  ############

   #PBS -q standard_16
   #PBS -l nodes=1:ppn=16
   #PBS -l walltime=8:00:00
   #PBS -j oe

   export OMP_NUM_THREADS=16

   ./myprog

   #### End of Sample Script  ##################

   Resource Limits:
   The only resource limits users should specify are walltime and the
   "nodes" and "ppn" limits.  The "nodes" statement requests that a job be
   allocated a number of chunks of the given "ppn" size.

   Tracking Your Job:
   To see which jobs are queued and/or running, execute this command:

     qstat -a

   Current Queue Limits:
   Queue limits are subject to change and this news item is not always
   updated immediately.  For a current list of all queues, execute:

     qstat -Q

   For all limits on a particular queue:

     qstat -Q -f <queue-name>

   Scheduled maintenance activities on Pacman use the Reservation 
   functionality of Torque/Moab to reserve all available nodes on the system.  
   This reservation keeps Torque/Moab from scheduling jobs which would still 
   be running during maintenance.  This allows the queues to be left running
   until maintenance.  Because walltime is used to determine whether or not a
   job will complete prior to maintenance, using a shorter walltime in your 
   job script may allow your job to begin running sooner.  

   If maintenance begins at 10AM and it is currently 8AM, jobs specifying
   walltimes of 2 hours or less will start if there are available nodes.
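The arithmetic in the example above can be scripted. The sketch below computes the largest walltime that still finishes before a maintenance window; the timestamps are the 8AM/10AM example from the text, and GNU date is assumed.

```shell
# Sketch: largest walltime (hh:mm:ss) that completes before maintenance.
# Times below are the example from the text; GNU date assumed.
now=$(date -d '2013-02-07 08:00' +%s)
maint=$(date -d '2013-02-07 10:00' +%s)
secs=$(( maint - now ))
printf '%02d:%02d:%02d\n' $((secs / 3600)) $((secs % 3600 / 60)) $((secs % 60))
# prints 02:00:00
```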

   CPU Usage
   Only one job may run per node for most queues on pacman (i.e. jobs may 
   not share nodes). 
   If your job uses fewer than the number of available processors on a node,
   the job will be charged for all processors on the node unless you use the
   "shared" queue.

   Utilization for all other queues is charged for the entire node regardless
   of the number of tasks using that node:

   * standard_4 - 4 CPU hours per node per hour
   * standard_12 - 12 CPU hours per node per hour
   * standard_16, debug, background - 16 CPU hours per node per hour
   * bigmem - 32 CPU hours per node per hour
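For example, the charge for a 4-node, 2-hour standard_12 job works out as follows; the job size and duration are illustrative.

```shell
# Sketch: CPU-hour charge = nodes x per-node rate x wallclock hours.
nodes=4
rate=12      # standard_12: 12 CPU hours per node per hour
hours=2
echo "$(( nodes * rate * hours )) CPU hours"   # prints "96 CPU hours"
```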

"samples_home" on pacman

Last Updated: Wed, 31 Mar 2010 -
Machines: pacman
Sample Code Repository

Filename:       INDEX.txt 

Description:    This file contains the name, location, and brief
                explanation of "samples" included in this Sample
                Code Repository.  There are several subdirectories within
                this code repository containing frequently-used procedures,
                routines, scripts, and code used on this allocated system,
                pacman.  This sample code repository can be accessed from
                pacman by changing directories to $SAMPLES_HOME, or to the
                following location: /usr/local/pkg/samples.

                This particular file can be viewed from the internet at:


Contents:       applications

Directory:      applications

Description:    This directory contains sample scripts used to run
                applications installed on pacman.

Contents:       abaqus

Directory:      bio

Description:    This directory contains sample scripts used to run
                BioInformatics applications installed on pacman.

Contents:       mrbayes

Directory:      config

Description:    This directory contains configuration files for applications
                which require some customization to run on pacman.

Contents:       cesm_1_0_4

Directory:      debugging

Description:    This directory contains basic information on how to start up
                and use the available debuggers on pacman.

Contents:       core_files

Directory:      jobSubmission

Description:    This directory contains sample PBS batch scripts
                and helpful commands for monitoring job progress.
                Examples include options to submit jobs, such as
                declaring which group membership you belong to
                (for allocation accounting), how to request a particular
                software license, etc.

Contents:       MPI_OpenMP_scripts 

Directory:      parallelEnvironment

Description:    This directory contains sample code and scripts containing 
                compiler options for common parallel programming practices
                including code profiling.  

Contents:       hello_world_mpi

Directory:      training

Description:    This directory contains sample exercises from ARSC training.

Contents:       introToLinux  

