I cut my teeth porting Fortran programs from an old 60-bit-word CDC Cyber mainframe to its new 64-bit-word successor. Lots of fun was had figuring out the various tricks programmers had used to manipulate text packed ten 6-bit characters per word, once it moved to the successor's eight 8-bit characters per word. I think it also went from CDC's 6-bit display code to ASCII, just to make it more, shall we say, fun.
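To give a flavor of the repacking involved, here is a minimal C sketch of unpacking one 60-bit word into ASCII. The translation table is truncated and illustrative only, not the real display-code chart, and the function name is mine:

    #include <stdint.h>

    /* Illustrative sketch: unpack ten 6-bit characters from a 60-bit
     * CDC word (held in the low 60 bits of a uint64_t) and translate
     * them to 8-bit ASCII. The table is truncated; the real display
     * code chart has 64 entries, and code 0 varied between character
     * set variants. */
    static const char display_to_ascii[64] = {
        ':', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I',
        'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S',
        'T', 'U', 'V', 'W', 'X', 'Y', 'Z',
        /* ...digits and punctuation in the remaining entries... */
    };

    void unpack_word(uint64_t word, char out[11])
    {
        for (int i = 0; i < 10; i++) {
            /* First character lives in bits 59..54, last in bits 5..0. */
            unsigned code = (unsigned)((word >> (6 * (9 - i))) & 0x3F);
            out[i] = display_to_ascii[code];
        }
        out[10] = '\0';   /* the ASCII side gets a NUL terminator */
    }

The fun part, of course, was that the old code rarely did anything this tidy; it leaned on word-size arithmetic and octal constants that silently assumed ten characters per word.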
Our computer room also had its own UPS, which consisted of a large motor driving a generator via a large flywheel. Cutting edge stuff. It worked well to give the operators enough time to shut things down gracefully in the event of a blackout, at least until a hapless electrician somehow managed to accidentally bridge the generator output with his screwdriver.
Both machines were Control Data? I love their hardware: they were the fastest of the BUNCH (the non-IBM mainframe manufacturers: Burroughs, Univac (which would later merge with Burroughs to form Unisys), NCR, Control Data itself, and Honeywell) in terms of performance, and were, I think, faster than the System/360 in terms of raw clock speed.
Unfortunately I know very little about any of the BUNCH mainframe environments other than Unisys systems, specifically those running the Burroughs MCP operating system, which was ahead of its time.
I would be very interested in any system-level insights you could provide on the CDC machines: what their OS and interfaces were like compared to other mainframes, and whether you had console access and access to the "Dinosaur Pen," or were just working via punch cards and the array of secretaries and other bureaucracy that provided a kind of human task scheduling on closed-shop mainframes.
By the way, a few places still use motor-generators. For example, much high-end computer gear, even including blade server chassis, really wants three-phase power, and it is a positive requirement for running IBM System z mainframes, along with a raised floor for cable management purposes. Mainframes and supercomputers are the last bits of hardware that require a raised floor; nearly everything else is compatible with the overhead cooling preferred in telecom-standard datacenters, and one gets better efficiency with overhead cold-air ducts and hot-air exhaust ducts in a datacenter with full cold/hot aisle separation, including the use of blanking panels in the equipment cabinets to prevent cold air from being wastefully blasted into the hot aisle.
One datacenter in LA whose design I was involved in was built to exploit the cool temperatures in downtown LA, which for most of the year do not exceed 65 degrees Fahrenheit, by sucking in cool outside air on one side of the relatively small retrofitted 15-story datacenter building next door to One Wilshire, and dumping the exhaust out the other side. Water-cooled computer room air conditioner (CRAC) units made by Liebert were attached to the cold-air intake plenum and were configured to activate automatically if required, but normally the facility operated on intake fans alone: positive pressure of filtered cold air in the plenum, and negative pressure from suction fans on the hot aisle, which evacuated the rising hot air from the servers and, combined with the fans in the servers themselves, drew cold outside air into the datacenter. To maintain air quality, the intake air passed through either the CRAC units or HEPA filters on the intake fans, and both the intake and the hot-air venting ports could be sealed off automatically via electric motors when outside temperatures ran high.

So, depending on how hot it got outside, the automatic control system for the facility could initially just augment the cold air by energizing one or more CRAC units; but since doing so increased electrical power consumption considerably, we would wait until the interior temperature exceeded 74 degrees F. Once the Liebert CRAC units were turned on, the next step, on a really hot day, was to close the shutters on both the intake and hot-air venting ports, so that warm air from the hot aisles recirculated through the plenum back into the CRAC units. This was advantageous whenever the outside air temperature exceeded the temperature of the air extracted off the hot aisles, which could occur in July, August and September (not so much in June, due to the unique Southern California phenomenon of "June Gloom"), and occasionally in May; but even in July there would be many days on which the doors could be left open.
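The staging logic was conceptually quite simple. A minimal C sketch of it follows; the names and structure are illustrative rather than the actual building-management code (though the 74-degree threshold is real), and the real controller also had to sequence shutters, fans and individual CRAC units:

    /* Illustrative only: the three cooling stages described above,
     * selected from temperature readings. */
    typedef enum {
        FREE_AIR,     /* intake fans only: filtered outside air      */
        CRAC_ASSIST,  /* CRACs energized to augment the cold plenum  */
        FULL_RECIRC   /* shutters closed, hot-aisle air recirculated */
    } cooling_mode;

    cooling_mode select_mode(double interior_f, double outside_f,
                             double hot_aisle_f)
    {
        /* Recirculate once outside air is hotter than the hot-aisle
         * exhaust: past that point, outside air no longer helps. */
        if (outside_f > hot_aisle_f)
            return FULL_RECIRC;

        /* Energize CRACs only after free cooling fails to hold the
         * interior below 74 F, since they cost serious power. */
        if (interior_f > 74.0)
            return CRAC_ASSIST;

        return FREE_AIR;
    }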
The placement of the building relative to other skyscrapers, and our elevation above street level, also helped increase the supply of cool air. And unlike most datacenters in legacy Los Angeles buildings, we put our windows to good use rather than simply covering them up on the inside with drywall (except for windows that could be used for emergency evacuation via the vintage-1920s fire escape), which is what the other datacenters in the building did, aside from the Meet-Me Room and certain other telecom spaces; those did not run large numbers of servers, only a small amount of routing and switching equipment, which tends not to produce as much heat as servers.
Telecom equipment, by the way, really wants DC power, and tends to use old-fashioned lead-acid batteries as a UPS, which requires that meet-me rooms be equipped with hydrogen detectors, for reasons any sailor on a WWII-era or other diesel military submarine would be familiar with. For a datacenter space that mainly accommodates servers, however, the preference of everyone involved is uninterruptible power supply (UPS) units with lithium-ion batteries. These UPS units are put in between the servers and an automatic transfer switch, which, in the event of a loss of mains power, transfers the load to backup power while the UPS batteries carry the servers through the switchover. (Mains power in LA comes from the city-owned LADWP, which at the time offered very good rates to datacenters, making it extremely cost-effective to run a datacenter in the City of Los Angeles rather than in the burbs, where power would be supplied by Southern California Edison, or by San Diego Gas and Electric in the far southern suburbs.)*
*Of course no mention of the unpleasant reality of California utility companies would be complete without mentioning PG&E, which 30 years ago was well run and well respected, but whose incompetence has since become legendary, with exploding pipelines, arcing power lines and failing spillways on dams. Their lethal incompetence managed to destroy the town of Paradise, where I spent much of my childhood, along with the beautiful covered bridge on Honey Run Road between Paradise and nearby Chico; and a year before that, a structural problem on the Oroville Dam came terrifyingly close to annihilating the nearby cities of Oroville, Marysville and Yuba City in a man-made tsunami. (Marysville/Yuba City has a very large Sikh population from the Punjab, which has saved the city from falling as severely into the methamphetamine abuse that has affected Redding to the north or various cities in Arizona, Nevada, New Mexico and so on. Likewise, while Fresno remains a meth haven, the Assyrian and the Eastern and Oriental Orthodox Christians in the Central Valley have through their pious conduct had a positive impact on the health of communities there and in the area between Sacramento and the Bay Area, which is home to a large Russian American diaspora. The area traditionally has also had a large number of Basque Americans and Scandinavian Americans, with Kingsburg at one time having been something of a Swedish American Medina to the Danish American Mecca that is Solvang in Central California.)