ARSC HPC Users' Newsletter 386, May 9, 2008

Little Alaska Weather Symposium, May 12-13

You are invited to attend the 2008 Little Alaska Weather Symposium. The Symposium provides a forum for the exchange of operational and research information related to weather in the Alaska environment. Participation from academic, research, government, military and private sectors is encouraged. Primary areas of focus are:

  • Operational forecasting and observational sources in Alaska
  • Polar aspects of weather
  • Air Quality and Chemistry Transport Modeling

This event takes place in the Elvey Auditorium on the UAF campus. More information, including a link to the agenda, is available at:

http://weather.arsc.edu/Events/LAWS08/

Faculty Camp Applications Due May 21

ARSC's annual three-week HPC Faculty Camp will be held Aug. 4-22.

Faculty Camp is an excellent opportunity for ARSC users to learn how to manage large volumes of data, run longer simulations and visualize data in 3-D, while receiving plenty of one-on-one attention on individual projects from ARSC specialists.

Applications are due May 21. For registration and more information, see:

http://www.arsc.edu/support/training/FacultyCampApp2008.html

Cray User Group (CUG) News

The CUG 2008 meeting concluded today in Helsinki.

We're happy to report that Liam Forbes, of ARSC, was elected to the CUG Board as a Director-at-Large.

CUG 2009 will be held May 4-7, 2009, in Atlanta, hosted by ORNL. For more on CUG, see:

http://www.cug.org/

MPI Task Affinity on Midnight, Default Setting Change, May 18

On May 18th, task affinity will be enabled by default for MPI jobs on Midnight. Task affinity ties each MPI task to a particular processor. In many cases this decreases variability in application run times and/or reduces overall run time. You can experiment with and take advantage of the task affinity settings immediately by adding "-aff" to your mpirun command.

Explicitly enable task affinity with "-aff", e.g.:


  mpirun -np 32 -aff ./a.out

Since the task affinity settings are designed for codes running one task per core, hybrid codes (e.g., codes using MPI and OpenMP together) will likely see a significant decrease in performance with task affinity enabled. If you are running a hybrid code, you can experiment with task affinity now, as shown above. After May 18th, when it becomes the default, you can disable it with the "-noaff" flag.

Explicitly disable task affinity with "-noaff", e.g.:


  mpirun -np 8 -noaff OMP_NUM_THREADS=4 ./a.out

There are methods to enable task affinity for hybrid codes; however, these settings are not currently available through the mpirun command. If you are running a hybrid code and would like assistance developing an MPI parameter file for it, please contact consult@arsc.edu.

Quick-Tip Q & A



A:[[ I often use a driver script to set up and run FORTRAN executables
  [[ since each is better at doing particular things.  Usually the script
  [[ will just execute things before and after the executable, but
  [[ sometimes there's something I need done in the middle of the FORTRAN
  [[ execution that is more easily done by a script, so I'll use the "call
  [[ system" syntax.  On one occasion, I needed the FORTRAN to do some I/O
  [[ through both standard methods and system calls, and it needed to be
  [[ done in a particular order.  However, the compiled executable
  [[ insisted on doing all the system call read/writes before all of the
  [[ standard ones (or vice versa, it's been long enough that I don't
  [[ remember) regardless of the order they were written, or any stall
  [[ tactics I inserted to try and let these statements complete (dummy do
  [[ loops, sleep commands, etc.).  Is there a way to enforce a desired
  [[ order to these things?



#
# Editor's response
#

This looks like it might be a buffering problem.  I was able to
reproduce this behavior with the following simple Fortran code:

  mg56 % more sample.f90
  program io_problem
      implicit none
      integer*4 :: ii
      do ii=0,100
          print *,"hello world", ii
      enddo
      call system('cat sample.f90')
      do ii=0,100
          print *,"hello mars", ii
      enddo
  end program

When an explicit flush of stdout (i.e., "call flush(6)") is added
before the "call system" statement, the output is produced in the
correct order (at least for this simple code!).
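
For reference, here is the same sample with that flush added.  Unit 6
is assumed to be preconnected to stdout, as it is with most compilers,
and "call flush" is a common compiler extension rather than standard
Fortran 90, so the exact spelling may vary by compiler:

  program io_fixed
      implicit none
      integer*4 :: ii
      do ii=0,100
          print *,"hello world", ii
      enddo
      ! flush buffered stdout before launching the external command
      call flush(6)
      call system('cat sample.f90')
      do ii=0,100
          print *,"hello mars", ii
      enddo
  end program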

There might also be buffering issues for any open files that will be
used by the "system" call as input.  Either a flush or a close should
ensure that any buffers are written to the file before the system call
is performed.
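
As a minimal sketch of the same idea for file I/O (the unit number,
file name, and command here are hypothetical), closing the file before
the "call system" statement ensures the external command sees
everything that was written:

  program file_buffer_example
      implicit none
      ! hypothetical file and unit number, for illustration only
      open(unit=10, file='data.txt', status='replace')
      write(10,*) 'some data the external command needs'
      ! close (or flush) so buffered output reaches the file on disk
      close(10)
      call system('wc -l data.txt')
  end program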

The same problem occurs in C.  There, you can get around it by
calling "fflush(F)", where F is a FILE pointer (e.g., F = stdout).




Q: ARSC's data archive system works best when directory trees are
   stored as one tar file rather than many constituent files.  However,
   that makes comparing individual files in my working directory with
   my archived copy a pain.  Is there any way to easily synchronize
   my working directory with my archive directory when my archive is
   stored as a tar file?

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.