
Sounds like SLURM for a 990



Thanks for sharing, that is most interesting.

If you are referring to SLURM, the resource manager for supercomputers, it does sound like a precursor.

 

Coincidentally, the TI-99/4a got me into IT and scientific computing.

I normally don't talk about my day job, but for the last 5 years I've been responsible for resource scheduling on our shop's supercomputer.

We're using PBS (Portable Batch System) as the scheduler, which also originated at NASA.

 

Seems we're going full circle here 👍

 

 

 

Edited by retroclouds

Looking over this and other reports and papers on the FEM, I'm wondering what influence the design of the nodes and the overall architecture might have had on the Transputer, another memory-to-memory architecture, and its serial interconnect links. Or vice versa; they're roughly contemporaneous. Wish I could find a pic of the FEM I/O boards. Wikipedia does have a pic of the array and one of the CPU boards.

 

https://en.wikipedia.org/wiki/Finite_element_machine

Edited by jbdigriz

Found the FACS User's Guide: https://ntrs.nasa.gov/citations/19830026340 and the PASLIB manual: https://ntrs.nasa.gov/citations/19850012417. The NODAL EXEC (OS) programming manual is eluding me. It would be nice to find this stuff on a DX10 disk or tape somewhere. NODAL and the node part of PASLIB could be dumped from the ROMs on one of the node CPUs, if one could be found. Pics and/or schematics would of course be useful.

 

 

19830026340.pdf 19850012417.pdf

Edited by jbdigriz

@dhe

Thanks for posting this! It was a really good read. 
 

What struck me about it was how familiar and natural the architecture felt. I learned parallel computing on a Cray T3D, which had an array of 256 nodes.  (Alpha 21164, each with 8MB of RAM.) 

 

I think it shows that the Nodal architecture gave rise to tools and algorithms for programmers.

My wild guess is that the then-new hardware, in turn, was built to support those algorithms.

Similarities:

 

0. Nodes are just CPUs with memory, and must share everything else through the host. 
 

1. The N nodes are provided with interconnects to M = log2(N) neighbors, which is sufficient to run many good algorithms. (The FEM has one more.) See the sketch after this list.

 

FEM: N=16, M = 4 (or 5?)

T3D: N=256, M = 8

 

2. One host communication bus snakes around all N nodes; the host attaches this way.
 

3. All coding is done on the host computer. 
FEM: TI 990

T3D: Cray C90 with Unix

 

4. The host uploads one program to all the nodes, handles the I/O, and schedules users' jobs.

FEM: one user gets all nodes

T3D: you CAN divide in half, and jobs run, if they fit. 
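As an illustration of item 1, here is a minimal Python sketch (my own, not from the FEM or T3D documentation) of that "flip one address bit" wiring: flipping each of the log2(N) address bits of a node gives its M directly linked neighbors. The node counts are just the two examples from this list, and the real FEM also had its extra link and global bus.

```python
# Hypothetical sketch of the flip-one-address-bit interconnect pattern.
from math import log2

def neighbors(x: int, n: int) -> list[int]:
    """Nodes directly linked to node x in an n-node hypercube-style array:
    one neighbor per address bit, obtained by flipping that bit."""
    m = int(log2(n))                     # M = log2(N) links per node
    return [x ^ (1 << i) for i in range(m)]

for n in (16, 256):                      # FEM-sized and T3D-sized examples
    m = int(log2(n))
    assert all(len(neighbors(x, n)) == m for x in range(n))
    print(f"N={n}: every node has M={m} links, e.g. node 5 -> {neighbors(5, n)}")
```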
 

An example algorithm for such a parallel machine is:

 

Find the top 1% of a large number of computed values across N nodes. No node can store the entire data set, only its own slice.


Algorithm:


On every node X (0 to N-1)

 

For I=0 to M-1, each node compares data with its neighbor at XOR(X, 2^I)

 

Afterward, every node has the full answer. 
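Below is a minimal single-process Python simulation of that exchange, purely my own sketch of the idea as described; the node count, slice size, and variable names are illustrative assumptions, not values from the FEM or T3D. Each simulated node keeps only its current top-K candidates, and in round I swaps lists with its neighbor at XOR(X, 2^I).

```python
# Minimal simulation of the top-1% exchange described above (assumed sizes).
import random

N = 256                                  # number of nodes (T3D-sized example)
PER_NODE = 1000                          # values computed on each node
K = (N * PER_NODE) // 100                # size of the global "top 1%"

random.seed(0)
data = [[random.random() for _ in range(PER_NODE)] for _ in range(N)]

# Step 0: every node sorts its own slice and keeps its local top-K candidates;
# a value below a node's own top K can never be in the global top K.
local = [sorted(node_slice)[-K:] for node_slice in data]

# Rounds I = 0..M-1: node X swaps candidate lists with node X XOR 2^I,
# merges the two lists, and keeps the top K of the union.  The two lists
# always cover disjoint groups of original slices, so nothing is counted twice.
M = N.bit_length() - 1                   # M = log2(N) rounds
for i in range(M):
    local = [sorted(local[x] + local[x ^ (1 << i)])[-K:] for x in range(N)]

# Afterward, every node holds the same list: the global top K values.
expected = sorted(v for node_slice in data for v in node_slice)[-K:]
assert all(node == expected for node in local)
print(f"all {N} nodes agree on the top {K} values")
```

On a real machine each step would be a pairwise send/receive between the two nodes rather than a list index, but the data movement pattern is the same.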
 

Explanation:

First every  node sorts its own values. 
At I=0, these nodes compare values:

0,1

2,3

4,5

...

254,255

 

At I=1,

0,2 (so the data from nodes 0 through 3 comes together)

1,3

4,6

...

253,255

 

At I=2,

0,4

...

251,255


Finally, at I=7,

0,128 and at last the two halves of the problem are joined.

1,129 and all the other pairs complete it at the same time. In fact, every node now has the full answer. (The short sketch below enumerates these pairings round by round.)
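For completeness, a tiny Python snippet (again just my own illustration, assuming N = 256 as in the T3D example) that prints the first few and the last pairing in each round; it reproduces the 0,1 / 2,3 / ... pattern walked through above.

```python
# Enumerate which nodes pair up in each round for a 256-node array.
N = 256
M = N.bit_length() - 1                   # 8 rounds for 256 nodes

for i in range(M):
    pairs = sorted({tuple(sorted((x, x ^ (1 << i)))) for x in range(N)})
    first = ", ".join(f"{a},{b}" for a, b in pairs[:3])
    a, b = pairs[-1]
    print(f"I={i}: {first}, ... {a},{b}")
```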

