ARSC T3D Users' Newsletter 55, October 6, 1995
T3D/E Software Information at the Alaska CUG
One of the most important parts of the CUG meetings is the chance to hear about software plans from some of the developers themselves. From the way they talk about their product, a user might gauge whether a fix or a feature is a major or minor effort, whether it will be on time or not, whether it's what the user wanted or not, ... Of course, the answers from a salesman are that everything will be on time, it's exactly what you wanted, and it was a snap to implement.
For the T3D user, the software plans must have been a disappointment. CRI is racing to complete software for the T3E, and the T3D user is shown what will not be implemented on his machine. With the T3E not available until March 1996, even the software planned for the T3E seems remote to this T3D user. This summary concentrates on T3D/E software plans.
From the Totalview Talk by Dennis Moen
CRI is moving to have TotalView as the common debugger on PVP, MPP, and Sparc systems. Stability is their number one concern, and they request that core dumps from the current product be submitted with any SPR. The newest version, 2.0, will be released in the 4th quarter of this year, and the 2.1 version in the 1st quarter of 1996. A list of features for the 2.0 version:
- a line mode version
- simplified windows
- additional ease of use buttons and sliders and boxes
- ability to pipe data into user programs
From the Message Passing Toolkit Talk by Peter Rigsbee
CRI plans to organize PVM, MPI, and SHMEM under one unbundled product called the "Message Passing Toolkit" (MPT). This will provide one common interface for all three products, so that support for each product can be uniform. The schedule looks like:
  MPT version 1.0   4Q95   PVM, MPI, SHMEM for PVP
              1.1   1Q96   PVM for T3E
              1.2   3Q96   MPI for T3E

There will be no MPT release for the T3D. Peter offered two interesting facts about message passing:
- There have been 250,000 downloads of the public domain version of PVM from Oak Ridge National Laboratory since PVM was made available in 1991. (Being free probably had a lot to do with that!)
- The work on the MPI-2 standard will include some form of "one-sided message passing", like CRI's SHMEM.
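To make that distinction concrete, here is a toy Python sketch of two-sided message passing (PVM/MPI-style matched send/receive) versus SHMEM-style one-sided puts. This is an illustration only: the PE class and the send, recv, and put helpers are invented for this example and are not Cray, PVM, MPI, or SHMEM code.

```python
# Toy model: each "processing element" (PE) owns a small local memory.
class PE:
    def __init__(self, pe_num, size=8):
        self.pe_num = pe_num
        self.mem = [0] * size    # local (symmetric) memory
        self.inbox = []          # queue for two-sided messages

# Two-sided: the sender posts a message, and the data does not land in
# the receiver's memory until the receiver makes a matching recv call.
def send(dest, data):
    dest.inbox.append(list(data))

def recv(pe, offset):
    data = pe.inbox.pop(0)
    pe.mem[offset:offset + len(data)] = data

# One-sided: the initiating PE writes directly into the target PE's
# memory; the target takes no action at all (the idea behind a "put").
def put(target, offset, data):
    target.mem[offset:offset + len(data)] = list(data)

pe0, pe1 = PE(0), PE(1)
send(pe1, [1, 2, 3])      # two-sided: both calls are required...
recv(pe1, 0)              # ...before pe1's memory is updated
put(pe1, 4, [7, 8, 9])    # one-sided: only the origin PE acts
```

The appeal of the one-sided style is visible even in the toy: a single call moves the data, with no matching receive to schedule on the other side.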
From the CRI Hardware and Software Report by Irene Qualters
The current version of MAX is 18.104.22.168, made available 8/28/95. This version of MAX has the rollout/rollin capability for T3D job management. The next version will be 1.3, out in October. (ARSC will move to this version as part of the Unicos upgrade. [editor])
The 1.3 Programming Environment will be out in 4Q95 and it will include:
- Craylibs 2.0
- Craytools 2.0

Also in 4Q95 will be the 2.0 version of CF90. This release will include the T3D.
(I think that all of these releases for the 4th quarter will be out late in the quarter, rather than this month [editor]).
Irene reports that there has been no progress in the industry on HPF and that CRI continues to believe that CRAFT Fortran is the better product for performance.
From Bill Harrod's Talk on the MPP Scientific Library
With the 1.2.2 version of Craylibs_m (which ARSC is running now), most of ScaLAPACK is now available. ScaLAPACK is the "scalable" version of LAPACK, based on the BLACS, BLAS, and PBLAS, from Oak Ridge National Laboratory.
The 2.0 version of Craylibs_m will include:
- a complete implementation of ScaLAPACK
- all of LAPACK, including the eigenvalue routines and the singular value decomposition routines
- 3D FFTs
CRI is continuing its work on distributed sparse solvers.
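As a reminder of what the eigenvalue routines compute, here is a minimal pure-Python power-iteration sketch. This is an illustration only, not libsci or LAPACK code, and the function names are invented for this example; real library routines use far more robust algorithms.

```python
# Power iteration: repeatedly apply the matrix and renormalize; the
# iterate converges to the dominant eigenvector, and the normalization
# factor converges to the magnitude of the dominant eigenvalue.

def matvec(a, x):
    """Multiply matrix a (list of rows) by vector x."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in a]

def power_iteration(a, iters=200):
    x = [1.0] * len(a)
    lam = 0.0
    for _ in range(iters):
        y = matvec(a, x)
        lam = max(abs(v) for v in y)   # infinity-norm eigenvalue estimate
        x = [v / lam for v in y]
    return lam, x

# The dominant eigenvalue of [[2, 1], [1, 2]] is 3, with eigenvector (1, 1).
lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

Power iteration finds only the dominant eigenpair; the library routines mentioned above compute full spectra and singular value decompositions.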
PATP Scientific Conference
JPL sponsored a conference of T3D PATP sites and has captured those presentations in a binder of the presenters' overhead transparencies. A complete list of the presenters and their talks was given in ARSC T3D newsletter #50 (09/01/95); below is the list of presenters' transparencies available in the binder. I will duplicate and mail out or fax requested sets of slides.
- Ocean Modeling on the Cray T3D, Yi Chao, JPL
- High Performance CFD Applications, Steve Taylor, Caltech
- Lawrence Livermore National Laboratory (LLNL)
- The High Performance Parallel Processing Project and MPP Access Program, Alice Koniges, LLNL
- Ab Initio Material Simulations for a Massively Parallel Environment, Lin Yang, LLNL
- Structures and Acoustics, Rich Procassini, LLNL
- Los Alamos National Laboratory (LANL)
- High Performance Parallel Processor Project at LANL, Bruce Wienke, LANL
- Deterministic Neutral Particle Calculations for Well Logging on the T3D, Randy Baker, LANL
- Oil Reservoir Models, Olaf Lubeck, LANL
- North Carolina State University
- Ab Initio Simulations of Advanced Materials, Jerry Bernholc, North Carolina State University
- Pittsburgh Supercomputer Center (PSC)
- Science on the CRI T3D System: An Overview, Sergiu Sanielevici, PSC
- Swiss Federal Institute of Technology Lausanne (EPFL)
- Modeling Materials with Ab Initio Molecular Dynamics, Roberto Car, EPFL
- Parallel CFD on the Cray T3D: Programming Models, Methods and Applications, Mark Sawley, EPFL
- Parallel Implementation and Interactive Optimization of Video Data Compression Techniques, T. Ebrahimi, EPFL
- Kinetic Modeling of Fusion Relevant Plasmas, Kurt Appert, EPFL
- T3D Research Centers
- Arctic Region Supercomputing Center, Michael Ess, ARSC
- Edinburgh Parallel Computing Center, Arthur Trew, EPCC, U of Edinburgh
MPPMON and/or MPPVIEW
It was about a year ago that a very useful CRI utility for the T3D, mppmon, was installed at ARSC. This tool provides a snapshot of what is running on the T3D in one screen and, together with "qstat -a", is great for monitoring the T3D. At ARSC the utility is called mppmon, but at other T3D sites it may be called mppview. Below is a sample of its output and its man page. One undocumented flag captures the current single screen with the command:
  mppmon -L > now.screen

Sample screen:
   \ governat governat governat governat ess      ess      ess      ess      \
    \ governat governat governat governat ess      ess      ess      ess      \
     \________________________________________________________________________\
  _________________________________________________________________________
   \ governat governat governat governat ess      ess      ess      ess      \
    \ governat governat governat governat ess      ess      ess      ess      \
     \________________________________________________________________________\
  _________________________________________________________________________
   \ governat governat governat governat .        .        .        .        \
    \ governat governat governat governat .        .        .        .        \
     \________________________________________________________________________\
  _________________________________________________________________________
   \ governat governat governat governat .        .        .        .        \
    \ governat governat governat governat .        .        .        .        \
     \________________________________________________________________________\

 Part User      PID    Program  State  Flags  Shape- XYZ (base)      Elapsed
 ---- -------- ------ -------- ------ ------ ----------------------- ---------
   39 governat  12081 pkdgrav. Active B       64=  8x 4x 2 (0x000)    0:41:52
   40 ess       13527 main     Active B       32=  8x 2x 2 (0x008)    0:00:21

man page for mppview/mppmon:
MPPVIEW(8)

NAME
     mppview - Displays massively parallel processing (MPP) system activity

SYNOPSIS
     /usr/bin/mppview [-h host] [-i interval]

IMPLEMENTATION
     All Cray Research systems with a Cray MPP system

DESCRIPTION
     The mppview command displays a map of active partitions running on the
     Cray MPP system.  It uses the curses(3) library to drive the terminal
     display so that many terminal types may be supported.

     To specify the system to be monitored, use the host option.  If you do
     not specify host, the local system is monitored.  The screen refreshes
     every interval seconds until you type q to quit mppview.

     The mppview command communicates through rpc(3C) with the system
     activity monitoring (sam) server, samdaemon(8), running on the host
     system to obtain the information that you request.

     The mppview command accepts the following options:

     -h host      Specifies the network name of the host to be monitored.
                  The default host is the local system.

     -i interval  Sets the refresh rate to interval seconds.  The default
                  interval is set by the server.

  Interactive Input
     After you enter the mppview command, an interactive screen appears.  At
     the top of the screen, the following option menu is displayed:

          help  summary  torus  page  refresh  clear  quit

     To move between the options, use the <TAB> key.  When the desired
     option is highlighted, press the <RETURN> key to execute it.  For the
     help option, first you must press the <?> key to enable help mode.  For
     the other options, you can type just the first letter to execute the
     option directly.

     Option    Description
     help      Displays help information.  You first must press the <?> key
               to enable help mode; then, other menu options can bring up
               their help displays.
     summary   Displays summary information for all of the MPP partitions.
     torus     Graphically displays each node of the Cray MPP system.  To
               select the information displayed in a node, use the torus
               submenu options:
                    user   User name of the program on that node
                    group  Group ID of the program
                    pid    Process ID of the program
                    prog   Program name
     page      Moves you to other pages.  If the display does not fit on
               one screen page, enter one of the following commands to
               select a page:
                    +  Forward one page
                    -  Backward one page
                    #  Select a page number
     refresh   Sets the rate at which the display is refreshed.  The
               refresh is expressed in tenths of a second.  Refresh rates
               smaller than the rate at which the samdaemon(8) is
               collecting data do not take effect.
     clear     Clears the screen and repaints the entire display.
     quit      Quits the program.

     After the csam(8) utility is running on your terminal, you can use the
     following keys to change displays:

     Key   Description
     TAB   Changes the selected (highlighted) option.
     ?     Enables help mode, in which other menu options can bring up
           their help displays.

NOTES
     If mppview is not compiled on a system with UNICOS MAX software, you
     will get the message "No MPP system present" when you try to execute
     the mppview command.

SEE ALSO
     csam(8) for more information on displaying system activity data on a
     terminal

     samdaemon(8) for information on the system activity data daemon

     xsam(8) for information on graphically displaying data about system
     activity

     UNICOS System Administration, publication SG-2113

     UNICOS Administrator Commands Reference Manual, publication SR-2022,
     for the printed version of this man page.

USMID @(#)man/man8/mppview.8 90.1 01/06/95 14:17:37
List of Differences Between T3D and Y-MP
The current list of differences between the T3D and the Y-MP is:
- Data type sizes are not the same (Newsletter #5)
- Uninitialized variables are different (Newsletter #6)
- The effect of the -a static compiler switch (Newsletter #7)
- There is no GETENV on the T3D (Newsletter #8)
- Missing routine SMACH on T3D (Newsletter #9)
- Different Arithmetics (Newsletter #9)
- Different clock granularities for gettimeofday (Newsletter #11)
- Restrictions on record length for direct I/O files (Newsletter #19)
- Implied DO loop is not "vectorized" on the T3D (Newsletter #20)
- Missing Linpack and Eispack routines in libsci (Newsletter #25)
- F90 manual for Y-MP, no manual for T3D (Newsletter #31)
- RANF() and its manpage differ between machines (Newsletter #37)
- CRAY2IEG is available only on the Y-MP (Newsletter #40)
- Missing sort routines on the T3D (Newsletter #41)
- Missing compiler allocation flags (Newsletter #52)
- Missing compiler listing flags (Newsletter #53)
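Differences like the gettimeofday clock granularity item can be probed empirically on any system. Here is a small Python sketch of the idea (Python's time.time stands in for C's gettimeofday here, and the timer_granularity helper is invented for this illustration):

```python
import time

def timer_granularity(samples=200):
    """Return the smallest observed step of time.time() over `samples`
    observed changes, as a rough estimate of the clock's granularity."""
    steps = []
    prev = time.time()
    while len(steps) < samples:
        now = time.time()
        if now != prev:           # spin until the clock value changes
            steps.append(now - prev)
            prev = now
    return min(steps)

g = timer_granularity()
```

Running the equivalent C loop against gettimeofday on two machines is a quick way to see the kind of granularity difference the newsletter item refers to.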
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678

Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks, AK 99775-6020
Subscribe to (or unsubscribe from) the e-mail edition of the
ARSC HPC Users' Newsletter.
Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.