ARSC T3D Users' Newsletter 16, December 22, 1994
Reduced Problem Size and Software Upgrades
MPPs are often used as vehicles for accessing large cheap memory for large problems. Therefore researchers on MPPs often publish statistics on the largest problem that will fit into a given MPP configuration. (The "linpeak" part of the Linpack benchmark is an illustration of this.)
Each software upgrade adds new functionality, and this functionality usually comes at the expense of memory: at least in the code area, but often in the data area as well, with increased buffer sizes and more system and library variables. What the system software and libraries now use comes out of what was previously available to the user. The recent upgrade of the T3D software at ARSC brought out some of these issues.
One bug reported by ARSC, fixed in the latest upgrade, had to do with these programs that expand to the largest problem that fits in memory. Previously, such a program ran and hung the system. Now mppexec correctly reports that the problem is too large to run.
However, some of my own test problems, and those of another user, are no longer runnable because the executables are now too big. So the amount of memory available to the user is not fixed between upgrades, and users wishing to have some continuity between upgrades shouldn't expect the largest possible problem to still be runnable after a software upgrade.
This is a good time to say that ARSC will be upgrading the memory on each PE from 2MW to 8MW sometime in February. We'll have more about this in the New Year.
PVM Timings Between T3D Processors

In the last newsletter we looked at some of the PVM timings between the Y-MP and a processor of the T3D, using an example program that comes with the public domain distribution of PVM. This example used the Y-MP as a master and the T3D as a slave, and measured on the Y-MP the time to send messages of varying size back and forth to the T3D. In this section we look at the same timings between T3D processors.
For the T3D timings, we combine the master and slave programs into one program for the T3D partition. The master runs on PE0 and the slave code runs on the other processors. The master on PE0 successively sends and times the same set of messages to PE1 through PE7 as was sent between the Y-MP and PE0 in last week's newsletter. The table below summarizes the results:
                     Y-MP     PE0     PE0     PE0     PE0     PE0     PE0     PE0
                   to PE0  to PE1  to PE2  to PE3  to PE4  to PE5  to PE6  to PE7

 round trip time
 (microseconds)     14069    2146    2238    2233    2387    2243    2324    2265

 speed (MB/s) for
 message size
    100 bytes        .014    .047    .046    .046    .045    .042    .044    .045
   1000 bytes        .123    .457    .448    .433    .431    .442    .425    .438
  10000 bytes        .706   4.383   4.243   4.148   4.161   4.207   3.979   4.224
 100000 bytes       1.205  32.744  32.494  31.636  30.918  30.960  30.836  31.245
1000000 bytes       1.280

Because of the variability in timings, the timings reported here are the average times for 20 messages of each size. The 1 Mbyte message is missing from the T3D to T3D timings because the PVM buffers were too small. The programs are available on denali:
Y-MP to T3D programs in /usr/local/examples/mpp/timers/ympt3d
T3D to T3D programs in /usr/local/examples/mpp/timers/t3dt3d

If it's possible to put the master of a master/slave program pair onto a single T3D processor, there are at least two major benefits:
- Reduced system overhead (Newsletter #7)
- Better communication timings because of:
  - No conversion of data types (PvmDataRaw)
  - Faster hardware connection
Future Newsletters

The next newsletter will be January 5th.
Upgrade on ARSC T3D Software

ARSC upgraded the T3D software to CrayLib_M 126.96.36.199, MAX 188.8.131.52, and SCC_M 184.108.40.206 on December 11th. There have been no problems detected by ARSC testing or reported by users. Users with problems to report should contact Mike Ess.
PE Limits

On December 19th, PE limits were changed for some users to the default allocations:
 8 PEs for a maximum of 1 hour in interactive mode
32 PEs for a maximum of 24 hours in batch mode

Users should contact Mike Ess if they have any questions about this or want their allocation changed.
Phase II I/O on the T3D

ARSC is evaluating the effort of moving from the current Phase I I/O to Phase II I/O on the T3D. In future newsletters I can summarize the differences, but for now I would like to ask: are any ARSC users interested in this upgrade, or would any want to be part of the evaluation?
List of Differences Between T3D and Y-MP

The current list of differences between the T3D and the Y-MP is:
- Data type sizes are not the same (Newsletter #5)
- Uninitialized variables are different (Newsletter #6)
- The effect of the -a static compiler switch (Newsletter #7)
- There is no GETENV on the T3D (Newsletter #8)
- Missing routine SMACH on T3D (Newsletter #9)
- Different Arithmetics (Newsletter #9)
- Different clock granularities for gettimeofday (Newsletter #11)
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678

Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020