A Forever Computer is one that, as the name suggests, lasts forever. In a way.
The buildings at Ise Jingū in Japan were constructed in 2013 for the 62nd time - every twenty years they are dismantled and new ones built. Wood and computers succumb to entropy alike; the people of Ise point to a way to make the latter also last for millennia.
The Long Now Foundation came to the same conclusion while in the planning stages of their 10,000 year clock: renewal is essential to extreme longevity.
In both cases, exacting specifications are kept as physical records on-site: instructions sent to future generations detailing exactly how this renewal is to happen, so that the function of the structure is preserved.
As renewal is essential to longevity, specification is essential to renewal.
This of course raises the ship of Theseus question (or its folksier variant: if you replace both the axe's head and its handle, is it still the same axe?), but for the purposes of the Forever Computer Project we have a practical test: if two black boxes, given the same inputs, always produce identical outputs (the same bits within the same nanosecond), then they are the same computer.
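A toy sketch of that black-box test, with two hypothetical stand-in "computers" modeled as deterministic functions (the timing half of the test - same bits in the same nanosecond - is omitted here):

```python
# Two stand-in "black boxes" that claim to be the same computer.
# Both are hypothetical; any deterministic transformation works
# for illustration.

def box_a(word: int) -> int:
    # machine A: some deterministic 32-bit transformation
    return (word * 2654435761) & 0xFFFFFFFF

def box_b(word: int) -> int:
    # machine B: claims to be functionally identical to A
    return (word * 2654435761) & 0xFFFFFFFF

def same_computer(inputs) -> bool:
    # The practical test: identical bits out for identical bits in,
    # for every input we try.
    return all(box_a(i) == box_b(i) for i in inputs)

print(same_computer(range(10000)))  # True for these stand-ins
```

Of course, no finite set of inputs can prove identity - which is exactly why the written specification, not testing, is what carries the guarantee forward.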
And so we come to an initial definition of a Forever Computer: A computer with a recorded specification from which a future person can create a functionally identical computer.
However, the Commodore 64 meets this definition: its microchips have been decapped (i.e. the silicon exposed), and each etched component painstakingly reverse-engineered to a degree that, combined with other written specifications, it can be perfectly recreated. But this is not what is meant by a Forever Computer.
So let me add a clause to the above definition: ...that is capable of evolving to remain useful to each new generation for their everyday tasks.
Because the intention is not to create museum pieces; it's to have the means for computers to be a foundational part of our civilization on the timescale of generations.
Which leads to the question of "why?".
Computers have improved our lives so much over the last few decades that we've overlooked something: we're only hotel guests.
Some people love living out of hotels, and understandably so. You can check out on a whim, there are housekeepers, and you could visit a new city every week if you'd like.
But most of us want to have a home. To arrange our things in the ways that make us feel comfortable, that enable efficiency through habits, to have a place we become attached to.
It used to be OK when computers played such a small role in our lives, but now that they are so integrated we should expect them to behave the way the other things in our homes do. Imagine 100 years ago, a young novelist buys a sturdy typewriter: it stays on their desk, right where they put it, decade after decade, ready to be used. Stories they typed sixty years ago would be just as readable as the day they typed them; none of its keys would disappear or move locations overnight. That typewriter becomes a fixture in their story, a thread woven nearly the whole way through their life: comforting, reliable, theirs.
Then computers gave us word processors, and even just being able to backspace was a wonderful improvement. But it came at a cost. It's not foolishness that some writers hoard diminishing supplies of 30-year-old computers just to keep using WordStar - they're holding onto a real human need.
The move to the cloud has especially revealed us all to be hotel guests. Within a generation, nearly all of today's cloud-based apps will be gone. Not only will you lose whatever capability an app gave you, but also all of the recorded moments of your life it was involved in - your conversations, photographs and videos, and other creations.
Computers can regain the good parts of what they replaced. The heady days of change are largely over for the most basic tasks that they help us perform - like setting reminders for ourselves, collecting memories, or sending a note to a distant friend.
We don't need new ways of doing such simple jobs; we need permanence and ownership of how we do them. We need a slower pace, in places, that allows every last bug to be squashed out of a system - and for it to stay that way. And we need to know that if we invest the effort of making alterations - of fitting an application to suit us perfectly - it won't later become inoperative for no other reason than age.
The "why?" of Forever Computers is to give us back what we lost when we first heard the words "no longer supported".
Now on to the "how?".
The conflicting promise of both permanence and relevance goes by a pedestrian name: backwards compatibility. It's a difficult game - often promised, always desired, and usually only gained at great cost.
It's typically done using layers of software like operating systems and compilers. Where a hardware incompatibility exists - say, one computer uses an x86 CPU and another an ARM - the kernel, system libraries, and C compiler all provide abstraction layers such that you can reasonably expect an application to run the same on both computers, at least from the user's perspective.
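A tiny illustration of what that abstraction buys you - here in Python rather than C, but the principle is the same: the language defines its semantics above the hardware, so identical source produces identical results even though the CPUs underneath differ:

```python
import sys

# The host's native byte order varies by CPU family (x86 is
# little-endian; some architectures are big-endian), but the
# language runtime hides this, just as the kernel, libraries,
# and compiler do for a C application.
print(sys.byteorder)         # "little" or "big", depending on the host

value = (0x0A << 24) | 0x0D  # identical result on any host
print(hex(value))            # 0xa00000d
```

The catch, as the next paragraph notes, is that this guarantee only holds while the runtime itself is maintained.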
However, among the flaws of this abstraction is that it only covers a narrow window of time. Bit rot sets in as OSes are superseded, and compilers and libraries necessarily evolve to keep up with the times - your applications break in places until they stop working entirely.
The Forever Computer Project's solution is to move the compatibility layer much further down from the OS. Nearly every foundational aspect of computing - e.g. the number of bits in a byte, or the ordering of multi-byte values - has changed several times since the first computers; the one constant since the very beginning is the layer below the processor: binary logic. And this is likely to continue because it's right about where technology becomes mathematics.
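The byte-ordering point can be seen directly. The same 32-bit value is laid out as two different byte sequences under the two common conventions - yet under either convention the bits round-trip exactly, because the binary logic beneath the convention is the constant layer:

```python
import struct

value = 0x0A0B0C0D

# The same 32-bit value, serialized under the two historical
# multi-byte orderings.
big    = struct.pack(">I", value)  # big-endian
little = struct.pack("<I", value)  # little-endian

print(big.hex())     # 0a0b0c0d
print(little.hex())  # 0d0c0b0a

# Either convention recovers the value perfectly: the logic below
# the ordering convention never changed.
assert struct.unpack(">I", big)[0] == value
assert struct.unpack("<I", little)[0] == value
```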
The buildings at Ise Jingū are rebuilt from scratch every twenty years; our Forever Computer will be rebuilt each time an application is run. All of what makes a computer unique - the CPU, the core logic chipset, the peripherals for things like graphics and sound - will be minted anew from a digital specification just before the application is loaded. When the app wakes up it will be (from its perspective) running on the exact same computer as it was written on, even if it's been centuries since then.
This is possible because of a class of microchips called Field Programmable Gate Arrays (FPGAs). The "field programmable" part means that its binary logic is constructed every time the chip is powered up. It's a mimic: provided the right specification, an FPGA can become any other digital microchip.
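The launch sequence described above can be sketched in miniature. Everything here is invented for illustration - the spec entries stand in for bitstreams synthesized from a hardware description, and the `FPGA` class simply records what it "became":

```python
# Hypothetical sketch of the run-time renewal flow: every launch
# reconstructs the computer from its recorded specification before
# the application ever sees it. All names here are invented.

HARDWARE_SPEC = {
    "cpu": "spec-cpu-v1",      # stand-in for a synthesized bitstream
    "video": "spec-video-v1",
    "sound": "spec-sound-v1",
}

class FPGA:
    """A mimic chip: becomes whatever the spec describes at power-up."""
    def __init__(self):
        self.configured = None

    def configure(self, spec):
        # Real hardware would load a bitstream here; we just record
        # what the chip has been configured to be.
        self.configured = dict(spec)

def run_application(app, spec):
    fpga = FPGA()
    fpga.configure(spec)   # mint the computer anew from its spec...
    return app(fpga)       # ...then hand it to the application

def word_processor(machine):
    # From the app's perspective, the machine is always identical,
    # no matter how many decades separate two launches.
    return machine.configured["cpu"]

print(run_application(word_processor, HARDWARE_SPEC))
```

The essential property is that the application never depends on whatever physical chip happens to exist that century - only on the recorded specification.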
Of course, there are always trade-offs. FPGAs are slower and more power-hungry than purpose-built chips, and they take much more die space to perform the same tasks (i.e. they can only mimic chips that are much smaller than themselves).
But they are very useful, and will quite likely exist in one form or another into the distant future, even if we move beyond silicon. So they are the perfect foundation on which to build a Forever Computer - the trade-offs are worth it for our purpose.
In the beginning, using a Forever Computer will (ironically) be like going back in time in some ways. The first such computer will have power similar to those from the 1990s. Enough to run a WordStar, certainly, but not a YouTube app. From there, though, as more powerful FPGAs become accessible, and FPGA technology itself improves, Forever Computers will grow in power enough to handle all but the most demanding special-purpose applications. Like a bow: you first draw the arrow back to send it forward.
But even at its most humble beginning, those who can look past its chunky pixels will see in a Forever Computer something that not even the highest DPI device can display: the future.