The value of a computer does not stem from its inherent capacity to do things well. It stems from an operator's capacity to elucidate and structure what we want from the computer, and from a quiet conviction that this machine can perform umpteen mathematical calculations faster, better, and with more accuracy than I could ever imagine.

That very notion shows that what we can do with computers is often limited by the depths to which we can think creatively, transcribe those thoughts into coherent, articulated steps and instructions, and then share that work with others. Because of this, tools that leverage the power of collaborative creation hold immeasurable power to architect the way we communicate, articulate ideas, and translate them into forms of knowledge understandable to completely different stakeholders. This thesis explores newer paradigms of interacting with programs that leverage GPT-3's articulate text-generation capabilities to craft a semblance of human intuition for software: an experimental simulation that seeks to answer a fundamental question, "What if software could infer what the operator wants to do, and present to them the tools to get it done, without the operator's explicit intervention?"

<aside> 🚧 Note that this thesis was authored in early June 2021, when the latest model from OpenAI was GPT-3. Function calling and structured JSON output arrived much later, with GPT-3.5 and above in mid-2023. Most of the approaches and workarounds you'll find below are unnecessary in the face of those updates to GPT.

</aside>

Early tools

In the mid-1950s, computers were still room-sized execution machines to be fed on a singular diet of punch cards. Human operators broke down a problem into tasks and fed them into the machine in the precise way that the machines understood, and the computer would perform the desired operation.

Even later, with the advent of computers as personal terminals, the singular gateway to the machine was still typing specific commands into the terminal, which executed them to open up the vast underbellies of specific programs. The needle moved from punch cards and 0s and 1s to a layer above in abstraction. But the way we interface with machines is still a far cry from how we interface with humans.

Alan Turing once argued with his colleagues and critics over whether a machine could ever have human-level intelligence.

To prove his point, Turing proposed a game. In it, an interrogator asks questions of a human and a computer through a text-only chat interface. If the interrogator cannot tell their responses apart and the computer successfully projects itself as a real human, then it passes the test. Turing called it the imitation game.

Thereafter, between 1964 and 1967, Joseph Weizenbaum developed an early natural language processing computer program called ELIZA. Created to explore communication between humans and machines, ELIZA simulated conversation using a pattern-matching and substitution methodology that gave users an illusion of understanding on the part of the program, though it had no representation that could be considered "understanding" of what was being said by either party. The ELIZA program itself was originally written in MAD-SLIP; the pattern-matching directives that contained most of its language capability were provided in separate "scripts", written in a Lisp-like representation. The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school (in which the therapist often reflects the patient's words back to the patient) and used rules, dictated in the script, to respond to user inputs with non-directional questions. Methods notwithstanding, ELIZA was one of the first computer programs, an early chatbot, that could attempt Turing's imitation game.

Joy is a chatbot I wrote in early 2013 ⎯ modelled after ELIZA with a kinda cheeky sense of humour. You can find the project here ↗️

While ELIZA was eligible to appear for Turing's imitation game (later formalized in the computational sciences as the Turing Test), the technology underneath it was very basic.

Essentially, you type a sentence; the program breaks it down and looks for keywords in that sentence, then passes the keywords through pre-programmed modifiers and response templates, producing a human-like response.


Sometimes, when it does not understand what the user is saying, it simply repeats their words back to them.
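The keyword-and-template pipeline described above can be sketched in a few lines. This is an illustrative toy, not Weizenbaum's original MAD-SLIP implementation; the specific rules and pronoun "reflections" here are invented for the example.

```python
import re
import random

# Each rule pairs a keyword pattern with response templates.
# "{0}" is filled with the phrase captured after the keyword.
RULES = [
    (re.compile(r"\bi need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["Tell me more about your {0}."]),
]

# Simple pronoun swaps so echoed phrases read naturally
# ("my mother" becomes "your mother").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(sentence: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    # Fallback: no keyword matched, so echo the user's words back.
    return f"You said: {sentence}"
```

The entire "intelligence" lives in the rule table: add more patterns and the program appears more fluent, but it never builds any representation of meaning.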

A snippet of the exchange with ELIZA that Weizenbaum included in his original paper.