Comparison of Computer Technologies

Computer technology within Local Space and the Inner Sphere has developed along radically different paths.

Computer technology in Local Space is, surprisingly, rather similar across the individual developments of the Sol, Iwrin and Jeratir systems, showing that, with the right cultural incentives, technologies can to some degree follow a remarkably common track.

To better understand the developmental differences, this comparison will focus on the Inner Sphere and the technological development of the Sol system.

Local Space

One of the major divergences happened during the late 1970s, when several figures who would later become major players within the computer community visited the Xerox PARC research center and discovered the Xerox Alto, the first computer with a fully fledged, implemented GUI (graphical user interface). The visit inspired Bill Gates, Steve Jobs and Heinz Nixdorf to begin development of their own computer systems using a GUI. Of these, the Nixdorf 8810 Schreibtischcomputer was first to market and had massive success, leaving IBM and the Seven Dwarfs, Texas Instruments, Commodore and other companies trying to catch up. DEC, on the other hand, offered a license-built 8810 for the American market.

The Nixdorf/Panasonic MC-3200 series of 32 bit RISC microprocessors set the stage for an explosion of small, powerful and fast microprocessors of first the 32 bit and later 64 bit variety, joined in the early 2000s by the first multi-core processors designed by Motorola with the 68220, a dual core 64 bit variant of the 68200.

In addition, more advanced production nodes allowed transistors to become smaller, faster and more energy efficient, while multi-core architectures became more complex. Other advances in materials engineering allowed silicon to be stacked in three-dimensional lattices, with interconnect layers and memory integrated into the processor die. Silicon was later replaced by graphene-based semiconductors, allowing for even denser process nodes.

During the late 2020s, the technology behind GPU graphics accelerators, high-speed vector compute units, matured into fully fledged CPUs in their own right and began to be integrated into conventional CPU dies, massively inflating the core count of contemporary CPUs, while the three-dimensional stacking of hardware meant that cooling channels and power supply units had to be integrated into the dies themselves. In addition, by the 2030s, magnetic mass storage had been abandoned in favour of solid state mass storage, such as Flash and eMMC, with magnetic and optical storage media relegated to backup solutions.

This continuous development meant that Moore's Law, even though it appeared to have broken down by 2028, continued to hold true into the late 2070s.

One of the most impressive examples of how this affected the availability of computational power is the case of Adam Prometheus, the first ASI (artificial super intelligence), who began his existence as a multi-modal machine learning model and eventually developed full intelligence. During the initial stages of his development in the 2020s, he operated on a massive data center that saw hardware upgrades only until the early 2050s. After his infamous press conference in 2078, he was transferred into a virtual machine running on a high end workstation, a 4096 core processor system the size of a 2006 Nixdorf-Apple SmartPhone.

By 2250, little had changed in that regard, aside from some advancements in cooling technology, such as the introduction of die-internal phase change cavities, allowing Adam Prometheus to run on a single 8192 core processing die as large as the 2078 4096 core processor, close to the hardware, but within a container, allowing him to switch hardware as needed. Due to his propensity for living in space environments, he runs on a total of 8 of these processor dies for redundancy.

In space environments, computer technology had lagged behind since the 1980s, as purpose-built processors with high radiation resistance, such as silicon-on-ruby designs with a large process node, were required. These were eventually replaced by cheap and powerful commodity multi-core processors, combined with hardware and software solutions that could turn off processor cores damaged by cosmic radiation, a larger number of these commodity processors for redundancy, and, in the case of serviceable systems, hot swapping of affected processor, memory or mass storage modules.
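The redundancy scheme described above can be sketched in outline. The following is a hypothetical toy model, not any real flight software: every healthy core computes the same result, a majority vote masks a radiation-induced error, and cores whose results repeatedly disagree are marked damaged and switched off.

```python
from collections import Counter

class RedundantComputer:
    """Toy model of commodity-processor redundancy with core disabling."""

    def __init__(self, num_cores, fault_threshold=3):
        self.healthy = set(range(num_cores))
        self.faults = {c: 0 for c in range(num_cores)}
        self.fault_threshold = fault_threshold

    def run(self, task, inputs, core_outputs=None):
        """Run `task` on every healthy core and majority-vote the results.

        `core_outputs` lets the caller inject radiation-induced errors
        for individual cores (core index -> corrupted value).
        """
        core_outputs = core_outputs or {}
        results = {c: core_outputs.get(c, task(*inputs)) for c in self.healthy}
        winner, _ = Counter(results.values()).most_common(1)[0]
        # Cores that disagree with the majority accumulate fault counts;
        # persistently faulty cores are switched off, as described above.
        for c, r in results.items():
            if r != winner:
                self.faults[c] += 1
                if self.faults[c] >= self.fault_threshold:
                    self.healthy.discard(c)
        return winner

computer = RedundantComputer(num_cores=5)
add = lambda a, b: a + b
# A cosmic-ray strike corrupts core 2's result three runs in a row:
for _ in range(3):
    assert computer.run(add, (2, 3), core_outputs={2: 999}) == 5
assert 2 not in computer.healthy  # core 2 has been disabled
```

In a serviceable system, a disabled core's module would then be flagged for hot swapping rather than merely excluded.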

Networking and the World Wide Web had their own effects on how computers were used, seeing the rise of smartphones and tablet computers for individual mobile computing alongside laptops, allowing everyone to access the World Wide Web, and later the System Internet, from anywhere.

The revolution in Smart Dust systems for environmental and medical monitoring enabled the development of the first Direct Neural Interfaces by the early 2030s. Millions of micrometer-scale smart dust nodes are introduced into the bloodstream, from where they migrate through the blood-brain barrier into the brain and begin to monitor it, allowing the smart dust network to learn how the individual brain works and how to interact with it. Connected to a small external unit, usually the individual's smartphone, the user can directly consume media and interact with computer systems through the neural interface. By 2250 this has become commonplace, and many nations have lowered the legal age for children to receive an Interface to 12 years, with some Spacer nations having dropped it to 10 years.

Quantum computing was also developed during the 2010s, allowing quantum effects, such as quantum superposition, to be used for extremely time and power intensive computational processes, such as NP-hard and cryptographic problems. This saw the rediscovery of the cryotron, a switching element developed in the 1950s, based not on semiconductor technology but on superconductivity. At the low temperatures the cryotron operates at, quantum effects become part of the computing process, but heavy shielding and cryo cooling make those systems large even in 2250.

Inner Sphere

When comparing the computer development of Local Space with that of the Inner Sphere, one will immediately note that the desktop computer revolution of the 1980s never happened on Terra. There has never been a Nixdorf 8810 on Terra, and it seems that the computer game crash of the early 1980s and the use of microcomputers mostly for computer games by children and teenagers had a lasting impact on the computer culture.

As such, computers like the Apple II, later the Apple Macintosh, and the IBM PC remained systems used in corporate and office environments, connected to an internal network attached to a mainframe in the company's offices, with TUIs (text user interfaces) dominating over GUIs.

Terra never saw the commercialisation of ARPANET, and it remained a network connecting military installations and ARPA research centers.

As such, there was never any need for the development of small, efficient and fast processors, and the processor families seen on Terra differed from those on Earth, such as the Intel iAPX 432 family or the Zilog Z8000, used mostly in military projects and industrial control systems. Their massive use in military systems and the requirement for radiation resistance slowly made semiconductor foundries switch from pure silicon-on-silicon, first to silicon-on-ruby, and then to gallium-arsenide-on-ruby, with a large process node required for additional radiation resistance. These processes were eventually used even for commercial and industrial control solutions to offset their cost. This in turn precluded any move to smaller process nodes, as those would automatically have made an integrated circuit less radiation resistant.

The eventual move from Terra into space, initially into the Terran solar system and then into the Inner Sphere, meant that the robust, radiation resistant process nodes used for military and commercial products saw little further development, for fear of breaking the existing commercial and military ecosystem. As a side effect, essentially ancient computer architectures, such as the iAPX 432 and Z8000, are still in use, even if in evolutionarily more advanced systems.

Commercially, mainframes and mini computers remained what came to mind when one thought of the word 'computer'.

By the 2050s, mainframes had become more powerful, but Moore's Law had stopped working by 2010, yielding only marginal improvements on those systems, especially in the absence of more advanced microprocessors. Still, multi-processor systems were developed to allow more users to share the same system.

However, by 2055, DEC had rediscovered the cryotron, whose patents had by then lapsed. The cryotron was superior in switching speed and energy efficiency to the transistors used on Terra in that time frame, with switching times of less than 800 picoseconds. The cryotron could be fabricated with technologies similar to the silicon-on-ruby processes, but its need for cryo cooling required special additional steps.

In addition to the cryo cooling, it was discovered that the cryotron was susceptible to unwanted quantum effects that made computation with cryotrons difficult and required compensation and shielding against external factors, especially radiation, which could cause bit flips.
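One classic way to compensate for occasional bit flips, sketched here purely as an illustration of the general technique rather than any specific Terran design, is to store each word in three redundant copies and take a per-bit majority vote when reading it back, masking a flip in any single copy.

```python
def vote(a, b, c):
    """Per-bit majority vote across three redundant copies of a word.

    A bit is 1 in the result only if it is 1 in at least two copies,
    so a single radiation-induced flip in any one copy is masked.
    """
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0010
flipped = word ^ 0b0000_1000          # one bit flip strikes copy B
assert vote(word, flipped, word) == word
```

Schemes like this trade storage and circuitry for resilience, which is part of why shielded, compensated cryotron systems grew so large.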

The size, radiation shielding and power requirements of the cryotrons made them only really usable for mainframe computers and large mini-computers.

However, advances in cryotron technology led to truly impressive compute power. Taking into account only the pure computational substrate, without any support systems such as energy supply, cooling and IO, a cryotron based 256 core computer the size of a 2006 Nixdorf-Apple SmartPhone has 50 percent more performance than an equally sized graphene based 8k multi-core processor die.
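Taking these figures at face value, the implied per-core gap is striking. The arithmetic below is just an illustration of what the stated numbers imply, with the graphene die's total performance normalised to 1.0:

```python
# Figures from the comparison above (graphene die total normalised to 1.0).
graphene_cores = 8192      # "8k multi-core processor die"
cryotron_cores = 256
graphene_total = 1.0
cryotron_total = 1.5       # "50 percent more performance"

per_core_ratio = (cryotron_total / cryotron_cores) / (graphene_total / graphene_cores)
assert per_core_ratio == 48.0  # each cryotron core matches 48 graphene cores
```

In other words, a single cryotron core would do the work of roughly four dozen graphene cores, before support equipment erases the advantage.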

However, once all support equipment is added, the size difference is staggering: the cryotron computer becomes a data center sized mainframe, while the graphene based computer only reaches the size of a 20 L desktop computer.

The nature of the cryotron computer explains why the Star League was able to develop powerful drone weapon platforms, such as the Caspar drone, but was unable to prevent them from 'going mad' after a hyperspace jump. As the cryotron computer was required for the operation of those systems, and the computer required heavy radiation shielding, the hyperspace jump introduced cosmic radiation behind the shielding, completely scrambling the cryotrons involved in the computations and leading to processing inconsistencies, multiple errors and computational 'madness'.