ARSC HPC Users' Newsletter 418, February 4, 2011
Supercomputing in Plain English Workshop Series
From our friends at the University of Oklahoma, a free HPC workshop series:
This semester, we’re again going to run our *FREE* "Supercomputing in Plain English" (SiPE) workshop series:
When: Tuesdays 2:00-3:00pm Central starting Tue Jan 25, *FREE* (4:00pm Atlantic, 3:00pm Eastern, 1:00pm Mountain, 12:00noon Pacific)
Available live in person, live via videoconferencing worldwide, and via streaming recorded video (details coming shortly).
For those who aren’t familiar, SiPE is a *FREE* workshop series on High Performance Computing.
We’ll be providing the workshops not only live in person at OU (in SRTC), but also live via videoconferencing (see below), and we’ll also record the sessions and make the recordings available as streaming video.
We’ll also provide exercises after most of the presentations, which you can run on OSCER’s big cluster supercomputer.
Check out the website for details on the schedule of topics, recommended prerequisites and registration procedure. It’s free, BTW.
New aprun ’-B’ Flag Available on chugach.arsc.edu
[By Oralee Nudson]
There is a new aprun ’-B’ flag which eliminates the need to include the -n, -d, -N, and -m aprun flags for arranging and distributing tasks among multiple nodes for parallel jobs submitted to the PBS batch scheduler on chugach.arsc.edu. Instead, inclusion of this single flag in the aprun statement instructs ALPS (the application launcher for the Cray XE6) to follow your job submission script #PBS directives for task and thread placement.
The following chugach.arsc.edu job submission script demonstrates the use of this new ’-B’ flag. In this example, 32 MPI tasks will be launched. Instead of reserving all 16 cores on two nodes, the ’#PBS -l mppnppn=4’ directive assigns 4 MPI tasks to each of 8 nodes. The ’#PBS -l mppdepth=2’ directive allows each individual MPI task to spawn up to two threads on neighboring cores. This is an effective approach for running programs which require large memory resources per task.
  #!/bin/bash
  #PBS -q standard
  #PBS -l mppwidth=32
  #PBS -l mppnppn=4
  #PBS -l mppdepth=2
  #PBS -l walltime=8:00:00
  #PBS -j oe

  cd $PBS_O_WORKDIR
  aprun -B ./helloParallelWorld
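For comparison (a sketch based only on the flags listed above; consult the aprun man page before relying on it), the same placement could be requested without the ’-B’ flag by spelling out the values on the aprun line itself:

```shell
# Equivalent explicit form: 32 tasks in total (-n), 4 tasks
# per node (-N), and a depth of 2 cores per task (-d):
aprun -n 32 -N 4 -d 2 ./helloParallelWorld
```

With ’-B’, these values are taken from the #PBS directives instead, so the script has a single point of truth for its task layout.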
For more #PBS directives, browse the PBSPro User Guide available from www.pbsworks.com. More information about the aprun flags is available in the aprun man page on chugach.arsc.edu.
If you are interested in gaining access to chugach.arsc.edu, contact the High Performance Computing Modernization Program’s Consolidated Customer Assistance Center at email@example.com.
Over One Million Served
ARSC’s trusty Sun cluster, Midnight, is still going strong after being transferred from DoD service to an academic-only resource in the middle of last year. Recently, ARSC Research Professor Don Morton had the honor of running Midnight’s 1,000,000th job. The job was a weather forecast for the Fairbanks region using the WRF weather model. No word on whether the forecast was correct, but Don wins a one-year extension to his HPC Newsletter subscription anyway.
Congratulations, Don! Ditto to the ARSC staff who have kept Midnight a productive resource over its lifetime. Midnight arrived in Fairbanks on November 3, 2006 and went into production on March 6, 2007.
Internet Assigned Numbers Authority (IANA) IPv4 Warning
This past week we were forwarded a reminder of the harsh reality that the IPv4 address space is finite and filling quickly. Are you getting switched over to IPv6?
> Since it doesn’t seem to have hit this list yet:
>
>> <http://www.apnic.net/publications/news/2011/delegation>
>>
>> Dear Colleagues
>>
>> The information in this announcement is to enable the Internet
>> community to update network configurations, such as routing
>> filters, where required.
>>
>> APNIC received the following IPv4 address blocks from IANA in
>> February 2011 and will be making allocations from these ranges
>> in the near future:
>>
>>   - 39/8
>>   - 106/8
>>
>> Reachability and routability testing of the new prefixes will
>> commence soon. The daily report will be published on the RIPE
>> NCC Routing Information Service.
>>
>> Please be aware, this will be the final allocation made by
>> IANA under the current framework and will trigger the final
>> distribution of five /8 blocks, one to each RIR under the agreed
>> "Global policy for the allocation of the remaining IPv4 address
>> space".
>>
>> After these final allocations, each RIR will continue to make
>> allocations according to their own established policies.
>>
>> APNIC expects normal allocations to continue for a further
>> three to six months. After this time, APNIC will continue to
>> make small allocations from the last /8 block, guided by section
>> 9.10 in "Policies for IPv4 address space management in the Asia
>> Pacific region". This policy ensures that IPv4 address space
>> is available for IPv6 transition.
>>
>> It is expected that these allocations will continue for at
>> least another five years.
>>
>> APNIC reiterates that IPv6 is the only means available for the
>> sustained ongoing growth of the Internet, and urges all Members
>> of the Internet industry to move quickly towards its deployment.
Farewell, Mary Haley

We thought we would take the opportunity of saying goodbye to ARSC Publications Specialist Mary Haley, who is taking a new position with UAF’s Intellectual Property and Commercialization, to reveal some of the inner workings of the mighty ARSC web presence.
Apparently Mary thought that Webmeister Tom Baring’s new and improved "404 page", the page displayed when a web server is unable to find a requested page, lacked, shall we say, sparkle. Her sentiments were captured in an email that was acquired by your editors under the "Freedom of Creativity Act" and is reproduced here:
> From: Mary Haley <firstname.lastname@example.org>
> To: Thomas Baring <email@example.com>
> Subject: Re: wwwdev: /404.inc please review
>
> Can’t we do something more like:
>
> Once upon a midnight dreary, while I websurfed, weak and weary,
> ...Over many a strange and spurious site of processors and cores,
> ...While I clicked my fav’rite bookmark, suddenly there came a warning,
> ...And my heart was filled with mourning, mourning for my dear amour.
> ..."’Tis not possible," I muttered, "Where is ARSC to explore?" --
> Quoth the server, "404".
>
> Or if Poe ain’t your thing, maybe we could go all Zen.....
>
> You step in the stream
> But the water has moved on.
> Page not found.
>
> Mary
Thanks, Mary, for bringing your creative touch to so many ARSC articles, signs, and (last minute!) posters. We wish you the very best.
Quick-Tip Q & A
A.

[[ Building Makefiles always seems like a work of art. Half the time I
[[ see a Makefile define everything inside of it for the various includes
[[ and libraries that an application needs.
[[
[[ For example:
[[
[[ INCLUDE = -I. -I/usr/include/ -I/usr/include/X11/ -I/usr/local/include/GL
[[ INCH5 = -I/import/home/u1/uaf/webb/hdf5/include
[[ INCOSG = -I/usr/local/pkg/osg/osg-2.8.0/include
[[
[[ LDFLAGS = -L. -L/usr/lib64 -L/usr/X11R6/lib64 -L/usr/local/lib64
[[ LDH5 = -L/import/home/u1/uaf/webb/hdf5/lib
[[ LDOSG = -L/usr/local/pkg/osg/osg-2.8.0/lib64 -losg -losgViewer -losgSim
[[
[[ Other times, a Makefile will rely on outside environment variables
[[ to provide paths to these resources. When building your Makefiles,
[[ which style do you use and when? Is there a powerful compelling reason
[[ to use one over the other?
[[
We received some solid advice from readers Scott Kajihara and Jed Brown and editor Kate Hedstrom.
#
# From Kate Hedstrom
#
I would say "it depends". If it’s a little project that only I will use, then I’ll put all that stuff in the makefile. If it’s a bigger project, I’d consider finding a better way. I still haven’t found a perfect way. The autotools are the default way to go for C projects on Unix, but I think they fall short for my needs with Fortran. For our ocean model with hundreds of users, NetCDF continues to be a source of adventure for the build process. Here’s why:
1. We now use the Fortran 90 "use netcdf". This means that when compiling my code, the compiler needs to query a file called "netcdf.mod" for insight into the netcdf function calls and constants. The NetCDF install puts this into the directory with include files. It must be consistent with the compiler being used, even down to the version number for some compilers. Compilers are not consistent on the flag for pointing to module directories, so we have decided to simply copy the netcdf.mod file to the compile directory.
2. NetCDF has more than one version. One change from version 3 to version 4 is the requirement to link against the HDF5 libraries as well as the NetCDF library. Another is that the build can now produce both a netcdf and a netcdff library, depending on how it was configured. A configure script would need to know to look for that second library if it doesn’t find the Fortran functions in the first.
So what are we doing? We need pointers to both the NetCDF include directory and the library directory. Gnu make allows you to set variables conditionally:
  NETCDF_INCDIR ?= /some/long/path/include
  NETCDF_LIBDIR ?= /some/long/path/lib
This goes into the system-dependent part of the makefile, but it allows you to set them from an environment variable or from a build script that invokes make for you. If the environment variable is set, then it will be used, otherwise the value in the makefile will be used. We have other symbols such as "USE_NETCDF4" which brings in the HDF5 libraries, "USE_MPI" which invokes the MPI version of the compiler, etc. These need to be set by the user at compile time somehow.
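Putting those pieces together, the system-dependent fragment Kate describes might look something like the following sketch (the paths are placeholders, and the exact library list for NetCDF 4 depends on how it was built):

```make
# Set only if not already defined in the environment
# or on the make command line:
NETCDF_INCDIR ?= /some/long/path/include
NETCDF_LIBDIR ?= /some/long/path/lib

FFLAGS += -I$(NETCDF_INCDIR)
LIBS   += -L$(NETCDF_LIBDIR)

ifdef USE_NETCDF4
  # NetCDF 4 may split the Fortran interface into its own
  # library (netcdff) and requires HDF5 at link time:
  LIBS += -lnetcdff -lnetcdf -lhdf5_hl -lhdf5
else
  LIBS += -lnetcdf
endif
```

A build script can then drive everything with, e.g., "USE_NETCDF4=yes NETCDF_INCDIR=... make", leaving the makefile itself untouched.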
#
# From Scott Kajihara
#
So, to state a not-necessarily obvious point, make(1) defines a programming language with variables and conditional execution. Note: although GNU Make (which is the _make_ on Linux) introduces additional conditional constructs, I would opt for GNU AutoTools (autoconf, automake, libtool) over the rather error-prone GNU Make constructs.
A combination of environment variables/command-line arguments and Makefile macros is probably the best compromise. In the rules themselves, use only Makefile macros so the rules stay as generic as possible; define those macros, in turn, from other macros and common options or filenames.
The remaining macros (directories and options) should have some reasonable defaults that can be overridden by the environment variables or command-line options.
== Makefile ============================================================
X11_DEFINES  = -DX11_ENABLED
X11_INCLUDES = -I/usr/X11R6/include
X11_LIBDIRS  = -L/usr/X11R6/lib
X11_LIBS     = -lX11

DEFINES  = $(X11_DEFINES)
INCLUDES = $(X11_INCLUDES)
LIBDIRS  = $(X11_LIBDIRS)
LIBS     = $(X11_LIBS) -lm

CFLAGS  = $(INCLUDES) $(DEFINES)
LDFLAGS = $(LIBDIRS) $(LIBS)

OBJS = foobar.o
EXE  = snafu

.c.o:
	$(CC) $(CFLAGS) -c $*.c

$(EXE): $(OBJS)
	$(CC) -o $@ $(OBJS) $(LDFLAGS)
========================================================================
The example may be more complicated than the questioner expected. The advantage is that although defaults are defined within the description file (Makefile), the macros X11_DEFINES, X11_INCLUDES, X11_LIBDIRS, and X11_LIBS can be redefined by environment variables or command-line options without editing the description file. As these are the parts most likely to vary, this method (with proper maintenance) strikes a balance between hardcoding every definable object and increasing the complexity of the description file. Although it adds yet another level of complexity, for managing description files on multiple architectures I would use AutoTools, which has a number of built-in queries for generating the _configure_ scripts that end-users run without knowing the specifics of the system.
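To make the override mechanism concrete, here is a small self-contained demonstration (the file name Makefile.demo and the paths are made up for illustration). A macro defined with "=" in the description file is replaced by a definition given on the make command line:

```shell
# Write a throwaway Makefile with a default macro value.
# printf expands \t to the literal tab that make recipes require.
printf 'X11_INCLUDES = -I/usr/X11R6/include\nCFLAGS = $(X11_INCLUDES)\n\nshow:\n\t@echo $(CFLAGS)\n' > Makefile.demo

make -f Makefile.demo show                                  # uses the default
make -f Makefile.demo show X11_INCLUDES=-I/opt/X11/include  # overrides it

rm -f Makefile.demo
```

The first invocation prints the default -I/usr/X11R6/include; the second prints -I/opt/X11/include, all without touching the description file.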
#
# And Jed Brown agrees pretty strongly with Scott’s last point:
#
Relying on environment variables is horrible since it’s almost impossible to reproduce, especially if you use that abomination called "module" and tend to forget what you have loaded. Having your users put magic stuff into makefiles is the road to madness; it doesn’t scale well as complexity (number/size of dependencies) increases. In my opinion, you are spitting in your users’ faces if you have a nontrivially sized project and don’t provide a decent configuration system.
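For readers who have not set up such a configuration system before, a minimal Autoconf input file might look like the sketch below (the project name, library, and header checks are placeholders, and a matching Makefile.in is assumed):

```
# configure.ac -- run autoconf here to produce the ./configure script
AC_INIT([snafu], [1.0])
AC_PROG_CC
AC_CHECK_HEADERS([X11/Xlib.h])
AC_CHECK_LIB([X11], [XOpenDisplay])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

End-users then build with the familiar "./configure && make", and the generated configure script records every probed path and flag for reproducibility.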
Q: After I added an "echo" statement to the .bashrc of a certain machine, I was no longer able to scp to it. When I tried to scp to that machine, I would see the echo and nothing else, and the prompt returned without any indication of failure. Why did an echo statement in my .bashrc prevent scp from working?
[[ Answers, Questions, and Tips Graciously Accepted ]]
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678

Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Subscribe to (or unsubscribe from) the e-mail edition of the
ARSC HPC Users' Newsletter.
Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.