ARSC HPC Users' Newsletter 302, October 22, 2004
ARSC Announces Open Research Systems Status
ARSC is pleased to announce a change in the center's account application process.
All High Performance Computing Modernization Program (HPCMP) resources at ARSC are now "open" research systems. This includes "klondike," the Cray X1, and "iceberg," the IBM Power4 cluster.
With this change, potential users are no longer required to have a National Agency Check (NAC) when applying for an account. In particular, this change is expected to simplify the application process for UAF researchers and foreign nationals. Users are required to provide proof of citizenship (and current visa, if applicable) and consent to routine standard background checks in lieu of a NAC.
Fall Training: Hands-On Sessions + On-Call Experts
In all the excitement about 10-year anniversaries and 300th issues, I forgot to tell you something of actual importance. ARSC's Fall Training is already under way, with a new format:
- Tuesdays: Hands-on session and lecture on specific skills
- Thursdays: "Working Thursdays": ARSC Expert will be on-call in a lab
Next two skills sessions:
Location: West Ridge Research Building (WRRB), Room 009
Time: 1:00 pm - 2:00 pm
- Tuesday, Oct 26: "Cray Quick-Start"
- Tuesday, Nov 2: "IBM Quick-Start"
Next "Working Thursday":
ARSC Classroom (WRRB 009); 1:00 pm - 5:00 pm
- Thursday, Oct. 28
X1 Default PrgEnv Updated
On 10/21/2004 at 6pm Alaska time we updated the default programming environment, "PrgEnv," from the previous PE 5.2 release to a newer PE 5.2 release. This makes some bug fixes and performance enhancements available to those who use the default programming environment.
The programming environments are now configured as follows:
- PrgEnv.old : Unchanged: points to PE 5.1
- PrgEnv.52.first_set : Unchanged: points to the previous default PrgEnv (the first PE 5.2 release)
- PrgEnv : The current default: the updated PE 5.2 release
- PrgEnv.new : Unchanged: points to a later PE release
To switch your environment to PrgEnv.old, you would issue the command:

  module switch PrgEnv PrgEnv.old
For more on programming environments and "module" commands read "news prgenv", "man module", or contact email@example.com.
Book Review: Cogwheels of the Mind: The Story of Venn Diagrams
[ Another thank-you to Guy Robinson, who just had a big birthday! ]
Cogwheels of the Mind: The Story of Venn Diagrams, A.W.F. Edwards, Johns Hopkins University Press, ISBN 0801874343.
Many folks may remember Venn diagrams from their school mathematics classes as a way to present information about sets or classes and how they intersect. "Cogwheels of the Mind" is the story of these curious objects and their creator, 19th century logician John Venn. It expands on the idea with some recent contributions by the author, A.W.F. Edwards.
One problem with Venn diagrams comes when trying to display the relationships of more than a few sets. After five sets it becomes increasingly difficult to draw simple Venn diagrams with only one region for each relationship.
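The combinatorial pressure is easy to see: n sets have 2^n possible membership patterns, so a diagram must show 2^n distinct regions (counting the region outside every set). A quick back-of-the-envelope sketch (ours, not the book's):

```python
# Count the regions a Venn diagram of n sets must show: one region for
# every possible membership pattern, i.e. every subset of the n sets.
def venn_regions(n):
    return 2 ** n  # includes the "outside" region (member of no set)

# The count doubles with each added set, which is why simple closed
# curves run out of room so quickly.
for n in range(1, 8):
    print(n, venn_regions(n))
```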
The book reviews the history of Venn diagrams, many of which show that science and art can come together. Edwards has devised an elegant solution, the "cogwheels" of the title, which permits the continued use of Venn-style diagrams for higher numbers of sets in a systematic manner. Sadly, a simple text review cannot capture the ease and beauty of the various Venn diagrams within the book.
Anybody faced with the display of the relationship of complex data will find ideas within this book useful.
Loupe Performance of the Cray X1: Part II
[ Thanks to Lee Higbie for another chapter... ]
We have been cocksure of many things that are not so. -- Oliver Wendell Holmes
The hardly Holmesian detective Lytton, alias Snoopy, has been investigating the performance of Klondike. While conspiracy might improve the narrative, the vagaries of hardware offer more plausible explanations. The vic, who was sometimes called X1, had a data memory hierarchy with three levels: several sets of registers, a pound of cache, and a stere (kiloliter) of memory. How these are used or exploited has a major impact on system performance and forms the crux of this part of the mystery he was trying to solve.
The performance on the simple set of loops shown in Newsletter 301 could be slowed to about 3% of their good speed by injudiciously using the memory system. Some might call this a crime of conspiracy, or incompetence, but another interpretation seems more likely--a crime of blindness. The performance of any real program is difficult to analyze. Many forensic geekologists have been trained to look at the compiler's information. By compiling with "ftn -rm" and "cc -h list=m", Snoopy was able to dump a loopmark listing to a .lst file. There the legend told him that a "C" meant the loop had been Collapsed, "M" meant Multistreamed, "V" meant Vectorized, and "w" meant unwound. "Aha. You're lucky, Lytton," he said to himself, "You're onto the nexus now."
In this performance investigation, the compiler said it "M'd" the outside loop in both cases. The middle loop was "C'd" in the fast case. The inner loop was "V'd" for the fast loops and "Vw'd" for the three-percenters.
With such a wide disparity, it's tempting to think that Vectorizing is good and Vector-unWinding is bad. "I guess the compiler isn't trying to heal loops," he mumbled. But there were loops with intermediate performance that were only "Vw'd" on the inner loop (nothing on the outer or middle ones). Snoopy figured that Multistreaming and Collapsing were red herrings, also. The difference seems to be due to the efficient use of the memory in the fast case, which Snoopy called the "Case of the Klondike Rush," and its poor use in the slow case, the "Case of the Chilkoot Trudge." But any Cheechako could see that vectorization was necessary, even if not sufficient.
Aha, "The Cheechako Crawl" is when there is no multistreaming or vectorization and the memory system is poorly used. And the slowest crawling comes when the loops are coded in the C language. It's becoming clearer: C is not the culprit, but it is certainly not a good guy. Like a hostile witness, C has to be pushed in the right direction. Left to its own devices, C would sometimes crawl along at a fraction of 1% of the speed of the fast Fortran loops. Now Snoopy Lytton was in a position to formulate the moral for this chapter.
Short Strides are Swift. Fortran for Fast. C can crawl.
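The "short strides" moral can be illustrated with a small sketch (ours, in Python for brevity; the X1's real behavior involves caches and vector loads that no scalar model captures). Fortran stores arrays column-major, so the leftmost index varies fastest in memory; loop nests that honor that order touch consecutive addresses:

```python
# Illustrative sketch, not ARSC's benchmark: the linear element offsets
# touched when traversing a 2-D array stored column-major (Fortran's
# rule). Looping the leftmost index innermost gives stride-1 access;
# swapping the loops gives stride-nrows access, which wastes cache
# lines and memory bandwidth.
def addresses(nrows, ncols, rows_inner=True):
    """Return the column-major element offsets in visit order."""
    if rows_inner:   # do j ... do i: unit stride
        return [i + j * nrows for j in range(ncols) for i in range(nrows)]
    else:            # do i ... do j: stride of nrows
        return [i + j * nrows for i in range(nrows) for j in range(ncols)]

fast = addresses(4, 3, rows_inner=True)
slow = addresses(4, 3, rows_inner=False)
print(fast[:5])   # [0, 1, 2, 3, 4] -- consecutive, stride 1
print(slow[:5])   # [0, 4, 8, 1, 5] -- jumps of nrows
```

Both orders visit every element exactly once; only the stride differs, and on vector machines like the X1 that difference is the whole story.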
To Google, or not to Google...
[ Thanks to Shawn Houston. ]
By way of introduction, and to explain why I am writing this article: I am an ARSC User Consultant. So what does that have to do with Google?
ARSC's public web, www.arsc.edu, hosts all of the information an ARSC user, possibly you, needs to access and best use our systems. As a User Consultant, it is important to me that this information is readily available. Additionally I am the ARSC Webmaster, making me more than a little responsible if you can't find what you need.
It has been brought to my attention that people are using Google to search ARSC's web site. I have nothing against Google and I happen to use the Google search service on a daily basis to do my job, or to find deals, or for a thousand other things that I just need to know. Google works, as do many other search services out there.
Now my point. ARSC's public web site has its very own search facility.
At the bottom of every page is a link that will take you to this simple tool for searching our site. Now, Google is anything but simple and has made a fortune selling its deserved image as the best search engine on the planet. But we have an advantage over Google. The ARSC search tool was built with ARSC in mind. Let's take a look at some features:
- Database updated within one hour of website update
- Word-based search
- Partial word-based search (where 'cat' matches both cats and catalog)
- String search (matches string exactly to text in web page)
- Case sensitive search option (sometimes case does matter)
- Advanced boolean search option
We beat Google on speed and freshness. Google is fast, and caches our web site on a regular basis, but less often than once an hour.
Here's a plunge into the details of the advanced boolean search option. (If you'd like to experiment as you read, the search tool is at: http://www.arsc.edu/cgi/search.cgi .)
Literal strings are entered by using double quotes to surround your search terms. This flags the input as literal, spaces and all. This is the least likely search to succeed, but there are times when this is exactly what you want. Boolean search uses the following logical operators to sort your search terms:
- "and" is represented by an ampersand, '&', or the word 'AND'
- "or" is represented by a plus sign, '+', or the word 'OR'
- "not" is represented by an exclamation point, '!', or the word 'NOT'
(The "and," "or," and "not" operators can be given in upper or lower-case.)
To use the boolean search facility you probably want to stick with whole word searches. As an example, suppose I am looking for fortran compiling information for the IBM P655+/P690+ cluster. I do not want to see any Cray system pages, nor do I want to see information about the P690 system, iceflyer, even though it might help. I am only interested in iceberg. Try this search string:
fortran AND compiler AND NOT klondike AND NOT yukon AND NOT Cray AND NOT iceflyer OR iceberg
This yields 22 pages, of which the first is the introduction to the IBM P655+/P690+ cluster, iceberg.
Let's take the search input apart. The default search is not case sensitive and is word based, so we are looking for whole words only, not partial word matches. The first two key words are what we are searching for, "fortran compiler." We add the key word AND between the words to override the default of OR. (Alternatively, the default search could be switched using the "Match All Search Terms" check box.) Note that the search tool ranks matches based on the OR-separated search terms, working left to right. There are two OR-separated terms in this search. The first six words are ANDed together; the second term provides the hint that I'm most interested in the fortran pages that include the word "iceberg." The bulk of the first search term is exclusionary: I do not want any page with klondike, cray, yukon, or iceflyer.
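As a toy model of the semantics just described (this is illustrative only, not the actual ARSC search code):

```python
# Toy boolean matcher: terms between ORs form groups; within a group,
# words joined by AND must all appear in the page, and NOT-prefixed
# words must be absent. Any satisfied OR group makes the page a match.
def matches(query, page_words):
    words = set(w.lower() for w in page_words)
    groups = [g.strip() for g in query.upper().split(" OR ")]
    for group in groups:
        ok = True
        negate = False
        for tok in group.split():
            if tok == "AND":
                continue
            if tok == "NOT":
                negate = True
                continue
            present = tok.lower() in words
            if present == negate:   # absent-but-wanted, or present-but-excluded
                ok = False
            negate = False
        if ok:
            return True
    return False

page = "introduction to the ibm cluster iceberg fortran compiler".split()
print(matches("fortran AND compiler AND NOT klondike", page))  # True
print(matches("klondike OR iceberg", page))                    # True
print(matches("NOT iceberg", page))                            # False
```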
Here's another example:
iceberg fortran compiler program example compile

The ARSC search tool returns 461 pages, but the number one hit is a T3E Newsletter from the year 1999. Why did this get the top spot? The page is ranked on the number of times each search term is found in each page, and also using a left-to-right precedence of the terms. The first link has the word "fortran" so many times that it outranks the pages that have the more important word, "iceberg." The second link is, however, the introduction to iceberg that made the top of the last search.
Let's contrast with Google:
Google ranks pages based on the number of other pages on the internet that link to it, along with some very complex algorithms for content. The Google page rank is sort of democracy in action, and for the wilderness of the entire internet, it works great. But how many people in the world link to pages designed specifically for ARSC users, on ARSC's web site?
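The link-counting idea can be sketched in a few lines (a much-simplified power iteration of our own; damping aside, this is not Google's actual algorithm, which layers many content signals on top):

```python
# Simplified PageRank-style sketch: a page's rank is the share of rank
# flowing to it from the pages that link to it, plus a small uniform
# "teleport" term controlled by the damping factor d.
def pagerank(links, iters=50, d=0.85):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                continue            # dangling pages ignored in this sketch
            share = d * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# Three pages: A and C both link to B, so B ranks highest -- more
# "voters" means more rank, exactly the democracy described above.
web = {"A": ["B"], "B": ["A"], "C": ["B"]}
r = pagerank(web)
print(max(r, key=r.get))   # B
```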
If you want the most popular ARSC web page, based on a limited number of "voters," use Google, and force Google to stay in our domain by adding the search term "site:arsc.edu".
If you want the most current search data with a simple but powerful search system, use the ARSC tool, linked at the bottom of every ARSC web page.
I am always looking for input on how to make this search tool, as well as every part of our web site, the best it can be. Please contact me (firstname.lastname@example.org) with suggestions, gripes, or to just say "hello."
Addendum for Newsletter readers: you may restrict your search to the ARSC Newsletter Archives using this interface: http://www.arsc.edu/support/news/HPCnewsSearch.shtml
Click the check box(es) to choose the T3D, T3E, and/or HPC Newsletters, and try a search for valuable information, like this, maybe:
cucaracha pie and mud
Quick-Tip Q & A
A:[[ I know all about redirecting files in Unix. Like, I can do this:
  [[
  [[   % cat f.newinfo >> my.big.file
  [[
  [[ which puts the contents of "f.newinfo" at the end of "my.big.file".
  [[ What I really want, though, is to put the contents of "f.newinfo" at
  [[ the TOP, not the bottom, of "my.big.file". I tried this:
  [[
  [[   % cat f.newinfo ^^ my.big.file
  [[
  [[ but it didn't work. How do you prepend files?

  #
  # Dale Clark
  #

  There are lots of ways to do this using multiple commands (copying,
  moving, etc.), but the challenge for me was to do this with a single
  command:

    % cat bar
    bar
    % cat foo
    foo
    % ex -sc "0 r bar|wq" foo    ### Single command prepends bar to foo.
    % cat foo
    bar
    foo

  The -s just suppresses interaction (silent). The -c specifies an ex
  command, which in this case is a compound command, with '|' as a
  separator. So, we tell ex to open foo, read in bar at position 0,
  then write-quit.

  #
  # Robert Osinski, Hank Happ, Liam Forbes, and Ed Kornkven's first
  # solution played variations on a theme. Here are Ed's three
  # solutions, including the dubious number 3:
  #

  1) Simple shell answer:

       cat f.newinfo my.big.file > temp ; mv temp my.big.file

  2) If you're going to be editing my.big.file anyway, it's just a
     little harder in vi using the vi command ":r f.newinfo". The
     complication is that this is going to insert f.newinfo AFTER the
     current line being edited. To do the prepend then, one must insert
     a new line at the top of the file:

       % vi my.big.file
       # insert at the beginning of the line (that's a capital letter "I")
       I <CR><ESC>
       # go to that top line
       1G
       # read in the new file
       :r f.newinfo
       # delete the inserted line
       1Gdd
       # save and exit
       :wq

  3) I like the creative approach of the questioner. And it's
     surprisingly close to working, so let's fix it up:

       % cat f.newinfo ^^ my.big.file > temp ; mv temp my.big.file

  #
  # Greg Newby
  #

  There's no way I know of to do this that doesn't involve rewriting
  the old file. This is due to the way files are stored on the Unix
  filesystems I am aware of: as a sequence of bytes, starting at an
  address on a disk. Since the file starts at the beginning of the
  address, inserting at the start of the file involves moving all of
  the rest of the file.

  It might be that some of the new database-based filesystems will get
  around this, and simply let you make a link within a file from the
  new prepended content to the old. But this is not available
  currently.

  So, my recommendation is to instead "cat" or "cp" to a new file,
  append the old, then rename. It sounds as though you already know
  how to do this, but here are some sample sequences:

    % mv my.big.file my.big.file.old
    % cp f.newinfo my.big.file
    % cat my.big.file.old >> my.big.file
    % rm -i my.big.file.old

  or:

    % cat f.newinfo my.big.file > temp
    % mv -i temp my.big.file

Q: Almost every program I compile these days requires some
   pre-processing. The IBM XLF compiler runs the pre-processor if the
   source file has the suffix ".F". For files with other typical
   Fortran suffixes, is there any way, other than renaming the file,
   to get XLF to run the pre-processor?
[[ Answers, Questions, and Tips Graciously Accepted ]]
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678

Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Subscribe to (or unsubscribe from) the e-mail edition of the
ARSC HPC Users' Newsletter.
Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.