Keep Up


The inspiration, process, and reasoning behind Keep Up.

I began this book as a thought experiment, and later, after the results surprised me, continued working on it to explore a subtle point.

Technology is outpacing us. Just a decade ago, it was far behind us in the rear-view mirror. Today, in many respects, it exceeds the capacity of humans at a wide range of economically valuable tasks. And now one of those tasks is language.

Most of the machine learning world held that language would be one of the last problems we would solve. It is highly complex, rich in nuance, and one of the main differentiators between humans and animals. But it turns out that language and reasoning can be approximated by modern word-prediction algorithms, and that has profound implications for our future economy and for living a meaningful life.

Many, in an effort to dismiss the tremendous progress of recent machine learning, will claim that this book is not very long, and that the algorithm must therefore not be very noteworthy. They are certainly correct on the first count. Clocking in at roughly 70 pages, or under 17,000 words, it just barely meets most people's notion of how long a book must be before it counts as a 'read'. This was purposeful: had it been made much longer, I suspect few people would have finished it. And I wanted people to finish it.

On the second claim, though, I must insist that they are incorrect. The technology that produced this book is likely one of the most revolutionary ever created. It is, in a sense, a small mind – one that can run thousands of times faster than a human mind, and take advantage of algorithmic shortcuts that we cannot. This book was produced not in weeks, months, or years, but in just a few hours.

Needless to say, the ability to write a book in a few hours is a skill that not very many humans can reasonably claim to have. Most people would probably have taken several weeks to accomplish this task. Of course, it’s certainly possible that the quality of a human’s end-result might still exceed the quality of the writing in this book – I do not claim that GPT-3 is a Hemingway, after all. But considering that this is just the third iteration of an algorithm that was released a few years ago, I have no doubt that a future GPT-4, GPT-5, or GPT-6 will not only write faster, but better as well.

The content of this book spans a number of subjects, including philosophy, capitalism, environmentalism, and science fiction. To achieve this, I simply provided a few-sentence prompt on each of these subjects, had GPT-3 generate approximately 2,000 words of text for each, and later (in most cases) removed the supplied prompt to keep the result as human-free as possible.

It should be noted that, at times, I had to include 'pivot' phrases (e.g., 'therefore', 'in summary', 'perhaps', and so on) before a new paragraph. Current networks appear to drift off-track after roughly 1,000 words, so this was necessary to keep longer generated sequences of text consistent. The final result was also copy-edited for spelling mistakes and punctuation (yes, AI makes mistakes!).
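The workflow described above – prompt, generate, strip the prompt, and occasionally splice in a pivot phrase – can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual code used for the book: the `generate` function below is a stand-in stub for a call to the GPT-3 API, and all names here are my own inventions.

```python
PIVOT_PHRASES = ["Therefore", "In summary", "Perhaps"]

def generate(prompt: str) -> str:
    """Stub for a language-model call; a real version would query the GPT-3 API.

    Completions begin by continuing from the prompt, so the prompt text is
    echoed here to mimic that behavior.
    """
    return prompt + " " + "lorem " * 10  # placeholder continuation

def compose_section(prompt: str, pivot: str = "") -> str:
    """Generate text from a short prompt, then remove the supplied prompt
    so the result stays as human-free as possible. An optional pivot
    phrase is prepended to steer a new paragraph back on track."""
    raw = generate(prompt)
    body = raw[len(prompt):].lstrip()  # strip the human-written prompt
    if pivot:
        # Splice the pivot phrase onto the front of the generated text.
        body = f"{pivot}, {body[0].lower()}{body[1:]}"
    return body

section = compose_section("A few sentences about capitalism.", pivot="Perhaps")
```

The pivot phrase is applied per paragraph rather than per book, matching the observation that generations drift after roughly 1,000 words.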

Everything else that you will read – the logic, the reasoning, and the conclusions – was generated entirely by GPT-3.