19 Jul GPT ‘n’ Me
I’ve always been fascinated by technology. Ever since I used my grandfather’s hammer or took apart my first doorknob as a child, I have been intrigued by how things work underneath. Maybe it was the hunt for knowledge, that sense of discovery in peeling back the mask to reveal what is really there. This is the mindset that set me on a career as an Earth Scientist, and it is what has made me particularly interested, in recent years, in all things Artificial Intelligence.
A.I. is not new. Sure, I remember first using the term while playing computer games in the 1990s, with their algorithmically controlled enemies (or later, allies) we called A.I. players. The nomenclature was obvious: artificial intelligences, though perhaps charitably so, as they were only able to make some very rudimentary “decisions.” In a given game, when the character you controlled came within a certain number of pixels of an enemy, the A.I. would attack (or run) based on a fixed set of pre-programmed criteria. But now it seems all that has changed. We are entering the period of Generative A.I. I hope you will appreciate the distinction as you read on.
It was a cool morning in early December 2022, and I had just signed up for ChatGPT. Having heard of the potential of GPT for a few years, mostly from guests on the Lex Fridman Podcast, I was aware of what it might be able to do—but I was still taken aback by how competently it could write! By now most of you have probably already played with ChatGPT, or at least heard of it, but I’ll share my little story. While my wife Anna and I were setting up our artificial Christmas tree and wondering about the folklore of the season, we asked GPT to write some Christmas stories based on a few input prompts—including some arbitrary background information. Given the seed of our conversation at the time, we asked it to write a story about our children (Marc and Sara) who, upon noticing that the Christmas tree was artificial, concluded that maybe Santa Claus was fake too. Being the nerds we are, we provided the constraint that the story had to be written in the form and style of a scientific abstract—which, as most of you know, follows certain standard requirements, especially that it be a summary of an article’s content, including its results and conclusions. Here’s what it wrote:
This study examines the experience of two siblings, who were compelled by their parents to set up an artificial Christmas tree. Through this experience, they concluded that if Christmas trees are fake, then Santa Claus doesn’t exist. Marc, an avid lover of planes, was initially uninterested in the task, while Sara, a lover of horses, was more enthusiastic. However, through dialogue between the siblings, they became intrigued by the idea of a fake tree and began to question the existence of Santa Claus. With further discussion, they concluded that if Santa Claus was real, then he must have a real Christmas tree, and they eventually accepted the truth that Santa Claus does not exist. The experience was a valuable lesson in critical thinking and skepticism for both Marc and Sara.
The most surprising thing about this was not just how quickly it generated the text (seconds), but that it stitched together some unprompted but relevant buzzwords associated with scientific writing: namely, terms like critical thinking and skepticism. This led us down a curious rabbit hole of figuring out how exactly this algorithm works, using, among other things, ChatGPT itself. It really seemed to understand and to be having a dialogue with us!
After using GPT for several months now, I honestly find my first set of input prompts rather lacking. Since that first chat, I have read more about large language models (LLMs), how they work and how they are coded. The preceding example played with some commonalities in abstract writing, but, as in seismic data processing (something I have spent the better part of my professional life doing), garbage in = garbage out. The better your prompts, the better GPT can give you what you need or expect. This will have both positive and negative effects, as I hope to convince you. I now use it regularly to ask questions on a plethora of topics, knowing full well that it is incumbent upon me to look at every response with an especially critical eye. I’ve noticed that it excels at some topics and flounders with others, getting even some simple information wrong (apparently on subjects for which there was little information available to train the language models). But even where it gets things wrong, its grammar is (for all intents and purposes) perfect. So, that’s why I’m writing today: as researchers, I think this can be a very valuable tool for us. I see GPT as both a fascinating innovation and, on the surface, a bit of a threat to my professional status quo. My conclusion: “if you can’t beat ‘em, join ‘em.” To wit, GPT eloquently opines:
The phrase “if you can’t beat ’em, join ’em” [‘em is a colloquial (mainly spoken) contraction of “them”] is an idiomatic expression that means if you are unable to defeat or overcome a particular group or situation [the inevitable spread of large language models], it may be more advantageous to join or align yourself with them instead of continuing to oppose them. It suggests that rather than persistently trying to compete against or resist a stronger force [the entirety of computer science and market capitalism], it might be wiser to collaborate or adapt to their ways in order to benefit from the situation or achieve your goals.
So, I will add GPT to my editing toolbox, thinking of it in the same way that I think about other information technologies—such as web search, Google Translate, word processors, spell-check, or calculators. GPT can be your personal assistant, doing all the menial work. For example, my first geophysics job in 1998 was basically geometry data entry and error checking for seismic metadata header files using the text editor Vi, a mind-numbingly tedious task that GPT would have made quick work of. As researchers, we can think of GPT as our own personal (and virtually free) grad student: fetching information on a whim, doing the dirty work of compiling data, helping brainstorm ideas, acting as someone to bounce ideas off, or analyzing spreadsheets for interesting correlations to investigate further. We’re all first-generation cyborgs now, augmented with the world’s information in our pants’ pockets. As Elon Musk has pointed out, the main barrier to becoming true cyborgs is the information bottleneck between us and the internet: typing with our thumbs.
Apart from the examples (in boldface font) above, I have done all the writing so far. But one would be a fool not to take advantage of this step change in technology to improve one’s writing. To the best of my knowledge, there is no law or regulation yet that limits how much you can use GPT. Hopefully that will change with some comprehensive legislation, but for now we’ll probably have to assume that most new text on the internet is built on a GPT backbone. Obviously, this opens a Pandora’s box of problems associated with plagiarism and copyright that adds a whole other dimension to this story. For my part, I will do my best to make it obvious when I use it extensively. At present, I mainly find it useful for writing outlines on a particular topic (the lack of which often prevents many a grad student from even getting started), analyzing text for some hidden detail or trend, summarizing complex text, and looking for alternative phrasings. But don’t fret, dear reader: in any of my writings, text generated by GPT will ultimately be edited for content and veracity by me.
In theory, GPT could make us all better critical thinkers. In practice, the ability to create content at scale has suddenly augmented our collective reality, and as a result I fear that the internet will soon become even more of a cesspool of misinformation, disinformation, and general nonsense. The large majority of human (or, as A.I. may call us, non-bot) internet users won’t care about the issues I raise in this article; they will more than likely just follow the financial incentives. So, what content sells the most? Answer: emotional content (including fear, rage, hate, revenge, sexual desire, cute cat videos, or people just doing quirky things to grab a slice of the attention pie). In order to maintain a civil public discourse, we are going to need responsible government regulation very soon—presently the E.U. is drafting such legislation, but it won’t come into force until 2026.
In the following, I will discuss large language models in more detail and delve into some more ways that you can put GPT to work for you, before analyzing the emerging conversation about the risks of artificial general intelligence, including intelligence without consciousness and the alignment problem.
The Natural Language Revolution
“GPT” stands for “Generative Pre-trained Transformer.” Generative because it has the ability to generate new information (both right and wrong). Pre-trained because, unlike, for example, the 25-year-old technology of internet search engines, GPT doesn’t query the internet to do its work (an internet connection is only needed to access its servers remotely, which is how most of us use it); it has been pre-trained over the past years on a very large volume of text data—ostensibly, most of the internet itself. Transformer refers to the underlying architecture used in GPT. Transformers are deep learning models that excel at tasks involving sequential data, of which language processing is an apt example. The Transformer architecture employs a self-attention mechanism, allowing it to capture relationships between words in a sentence and to efficiently learn dependencies. The two italicized words in the previous sentence sparked my curiosity from a philosophical point of view. While I still think generative A.I. is far from being self-aware (i.e., having consciousness, best defined as what it is like to be something—to have an experience), it is clear that, by some definition, it does seem to learn. And, as some neuroscientists believe, if consciousness is just a matter of sufficiently complex information processing, then it might not be long before machines are truly conscious. Isn’t there a Black Mirror episode on Netflix about that?
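To make “self-attention” a little less abstract, here is a toy numpy sketch of the scaled dot-product attention at the heart of the Transformer. It is illustrative only (real models stack many such attention heads, with learned weights, positional encodings, and billions of parameters), but it shows the core idea: each token’s output is a weighted mixture of every token’s values, with the weights computed from the tokens themselves.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Minimal single-head scaled dot-product self-attention.

    X          : (n_tokens, d_model) input token embeddings
    Wq, Wk, Wv : (d_model, d_k) projection matrices (learned in a real model)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # every token scores every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V                         # each output is a weighted mix of values

# Toy example: 4 tokens with 8-dimensional embeddings, random "learned" weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)
```

Because every token attends to every other token, the model can relate, say, a pronoun to its antecedent several words away; this is the “capturing relationships” that makes the architecture so effective for language.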
GPT is the brainchild of the company OpenAI (originally a non-profit). They burst onto the scene in November 2022 with ChatGPT, the fastest-growing app in history, and quickly gained the financial backing of Microsoft, which now has GPT integrated into its search engine Bing, and even into Microsoft Office Copilot. It’s not hard to imagine where this partnership can take us. In the next few months and years, I predict that GPT-based models will become the norm, becoming so cheap as to simply be part of the informational architecture—there is already a lot of competition among the big tech players, and just as I am writing this blog, Meta (Facebook) has released its answer to OpenAI with its open-source model, LLaMA. LLMs will become as normal as a Google search or voice assistants like Alexa or Siri, and they will be integrated with them in an increasingly seamless way. There are already game modders who have re-routed open-world game conversations with so-called NPCs (non-player characters, or “A.I.s”) through GPT and generative voice synthesizers like Eleven Labs, then piped the results back into the game, producing interesting and unconstrained conversations that had previously been quite limited. But while all that is happening—and yes, it’s quite fun and appealing—natural language models will become very disruptive to the modern information space on a more important and immediate level. Podcasts and YouTube videos will increasingly be written with generative A.I. Unchecked news articles have already appeared and will continue to do so more frequently. How will we know, in practice, whether something is A.I.-generated? Yes, there are GPT-2 output detectors online. But who’s got time to check that? Ergo, unless we make laws to distinguish human- from A.I.-generated text easily and reliably, we might be advised to start assuming that everything we read online has some generative A.I. aspect to it.
In a recent episode of the Making Sense podcast, philosopher and neuroscientist Sam Harris interviews Nina Schick, a generative A.I. expert who has advised many tech corporations, as well as governments and the U.N. She predicts that over 90% of online content will be created by A.I. by as early as 2025. The obvious solution is a technical one: create an effective detection algorithm, and embed A.I.-generated content with an encrypted, permanent digital watermark so that, although we may still be fooled at first, the watermark is there to reference. That being said, not all A.I.-generated content will be false, just as not all human-generated content is false. The opposite is more likely to be true. The problem is that this is a technically difficult solution; so far such algorithms only give a percentage likelihood that content is A.I.-generated, and they produce both false positives and false negatives. Furthermore, we are quickly entering the realm of multi-modal foundation models. So, it’s not just text we have to be concerned about, but text combined with video, digital art, and voice synthesis (Apple’s latest iOS 17 has built-in voice synthesis software that needs only a few minutes of your sampled voice to be able to read aloud any text you give it!). Will we soon have to anticipate phishing calls in the voices of our loved ones, asking for money or help!? It seems likely to me that our information space will soon not only be generated by, but also verified by, other (more trusted) A.I.s. But with whom do we place this trust? If you’re concerned about these issues, please watch this video by the Center for Humane Technology.
Applications and Limitations of GPT
So, in the present, how can we use GPT for the betterment of humanity? Let’s be a little optimistic and ask GPT a deep question.
Please generate a list of the ways that GPT can be used to decrease the suffering of conscious creatures, while improving human societal flourishing so that we may be optimistic about a bright future for a global civilization?
GPT technology, like other A.I. tools, can be used in various ways to contribute to the goal of mitigating conscious creature suffering and improving human society flourishing [Ok, that’s a nicely worded intro to the following]. Here are some potential applications:
Mental Health Support: GPT can be used to provide personalized mental health support and counseling, offering a listening ear and advice to those in need. It can help individuals cope with stress, anxiety, and other mental health challenges, making mental health resources more accessible and reducing suffering. [Ok, good point. It can—but I would ask if that’s really what we want tech for, i.e., the emotional component, something where humanity already excels? Besides, how many mental health care worker jobs would be lost? This would be very disruptive in the short term and may require unprecedented measures in the mid-term to long-term to ensure stability, such as global universal basic income (UBI)].
Education and Skill Development: GPT can assist in personalized education and skill development by providing tailored learning materials, answering questions, and offering explanations. It can help individuals acquire knowledge and develop skills, leading to greater personal fulfillment and opportunities for growth. [Yes, but caveat emptor. Truth verification, or dare I say, “fact-checking,” is paramount. But how does one variously do so with or without an appeal to expertise? Who are those experts? Knowledge with an appeal to expertise, in one sense, is unavoidable and desirable; I want an expert pilot, surgeon, plumber, etc. But with the fast pace of change, and the financial incentive that is commensurate with content quantity over quality, I expect that claim verification will be more likely outsourced to A.I.s. There won’t always be a “human-in-the-loop,” and when there is, verification will necessarily be much slower (work schedules, sleep/leisure time, bureaucracy, etc.), so there’s that factor.]
Healthcare Assistance: GPT can aid in the field of healthcare by providing decision support to medical professionals. It can analyze patient data, suggest diagnoses, and recommend treatment options, augmenting the capabilities of healthcare providers and potentially improving patient outcomes. [For this one, I would tend to agree that there is lots of great potential, but with it will come much job loss. A.I.s are already better at analyzing X-rays and MRI scans. A.I. drones can semi-autonomously deliver medical supplies to remote areas. A.I.s have virtually solved protein folding (I have heard recently that what A.I. did in just a few days would have taken millions of years of individual grad-student time!) and have created mRNA vaccines in record times.]
Scientific Research: GPT can accelerate scientific research by analyzing vast amounts of data, generating hypotheses [this should put even more emphasis on the nature of causality and on teaching critical thinking skills], and assisting in experimental design. It can aid in various domains, such as drug discovery, genomics, climate modeling, and other scientific endeavors, leading to advancements that benefit society and contribute to long-term flourishing. [Undoubtedly, A.I. will help science in terms of growing the scientific knowledge base, with its virtually limitless and instantly accessible digital data storage. But, from the perspective of critical thinking and the scientific method, it is less clear to me. On the one hand, A.I. could be coded to be less biased, thus not suffering from the same objectivity hindrances that we all experience in human thinking. In practice, that is much more complex. And, as there will always be multiple competitive A.I.s—not just the totalitarian Big Brother one imagined in science fiction—there will be little power to control how A.I.s are coded in other countries or states. There will certainly be many creations, but there is no guarantee that they will all be optimally designed for human flourishing. We will need to prioritize ensuring that somehow!]
Ethical Decision-Making: GPT technology can assist in ethical decision-making processes by providing alternative perspectives, identifying potential biases, and considering a wide range of viewpoints. This can help individuals and organizations make more informed and ethical choices, reducing harm and promoting fairness. [Again, in theory, yes. But it will depend in large part on whether humans (assuming we’ll be making the decisions) will avail themselves of different viewpoints volitionally, and not just look for confirmatory evidence of long-held beliefs. We will probably need to revise our system of incentives, economic or otherwise. There is already a rich ethical debate about whether to even worry or not about A.I.s and society. Optimists will say, ‘you know we had this or that tech innovation in the past, and well, it all worked out in the end, okay.’ Maybe… but, what about on the way to that end? How much suffering and disruption will there be? Also, just because some tech worked in the past, based on someone’s definition of tech, it doesn’t follow that all tech will “work out in the end.” It’s a logical fallacy because every tech is different. And, most notably, A.I. is different because for the first time it can learn and take decisions. Furthermore, because A.I.s are not conscious, they will end up serving as much better listeners than emotional humans, who get tired and have the urge to express opinions of their own. An A.I. therapist could give you its undivided attention, thus creating the illusion that it is more caring than a selfish-by-nature human. And that’s the rub. A.I. only needs to create the perception of consciousness to manipulate us. Therefore, it is a certainty that A.I.s will manipulate us, but it will be us and our conscious and unconscious capricious desires that dictate that manipulation. Imagine a near-future scenario where generative A.I.s are much more closely integrated with us to appear as an extension of our consciousness. We might wish to express any ideas (let’s assume for now, online) cautiously. In interacting with that person/cyborg, will we even know who we are talking to, or what proportion of the thoughts and ideas are human or A.I. generated? Does it even matter? These are difficult questions to thoroughly parse.]
Environmental Conservation: GPT can contribute to environmental conservation efforts by analyzing data on climate change, biodiversity, and sustainable practices. It can help develop strategies for mitigating ecological damage, promoting renewable energy, and designing efficient resource management systems, thereby fostering long-term sustainability and reducing suffering caused by environmental degradation. [As with the last two examples, this will need to be baked into the code responsibly—the sooner, the better. So, there is the technical problem of how to functionally code morality into computer programming languages, but there is also the question of who are “we?” (i.e., will we become further divided along racial or economic lines? Could A.I.s save us from ourselves?) Who will get the burden and responsibility to decide that?]
It’s important to note that while GPT technology has tremendous potential, it also comes with ethical considerations. Safeguards should be in place to ensure the responsible and beneficial use of A.I.s to avoid unintended consequences and to protect individual and societal well-being. [To my last point, it’s nice how GPT output often ends with a short disclaimer such as this one, which has probably been consciously coded by a human to give it priority.]
GPT most certainly will continue to create fake information, which will have potentially deep societal implications. Our digital information diet is already one of the primary means of capturing human thought. How is your attention focused? What we see and read on the internet already plays multiple direct and indirect roles in where our attention lands. The first big (long-term) change we need to make across societies is to adapt our school systems, from a young age, to emphasize critical thinking over the storage of facts, especially for online content. Kids need to be taught to ask basic questions about any new information, such as: What is the source of this information? How reliable is it? Who benefits (cui bono) from this being true or false? What are the author’s incentives to create this content? Secondly, we also need to prioritize statistical thinking in schools, because to analyze the veracity of a claim, we must be able to estimate its likelihood. We need more ‘first-principles’ thinking. This needs to become the natural way of thinking for the next generation. If we don’t make this step change to our education systems soon, we will be susceptible to all types of deception, and it follows that there can be no hope of remaining in control.
Large language models are the worst that they will ever be. While they probably haven’t yet passed the Turing Test (the ability to convince a human that you are not an A.I.), they do come close. And, if you’re not fooled by an A.I., it might paradoxically be because the inhuman rate at which it can spit out coherent information is an obvious giveaway. There is already a task-oriented app that automates decisions for you, called AutoGPT. Barring some economic or energy crisis, it’s going to just keep getting more capable. Moreover, its rate of improvement, by design, is exponential—it “learns”—so we really shouldn’t be surprised if generative A.I.s are, to an extent, self-sustaining in a few years, both generating and consuming content. The unknown factor here is just the precise rate of that exponential growth. Will it be a slow or fast take-off toward what John von Neumann dubbed “the singularity”? It now seems largely a matter of when, not if. Finally, it is already so integrated that we cannot even consider just “turning it off.” How could we, when it might find a cure for cancer or provide climate solutions, not to mention that information technology forms the very backbone of our economy and our power grid? In the next few short years, A.I.s will understand context and generate more nuanced responses. So, we have arrived at a dilemma: we cannot turn this valuable tool off, but we may not be able to control it. This is often called the alignment or control problem, emphasizing that it is misplaced to fear that A.I. will “want” to dominate us; rather, the worry is that A.I.s’ goals will not necessarily be aligned with ours. Consider the paper clip maximizer analogy of A.I. researcher Eliezer Yudkowsky and philosopher Nick Bostrom: an A.I. factory is coded to create paper clips ad infinitum and will stop at nothing to accomplish this goal—including using all the atoms of the Universe. Furthermore, who do we mean when we say “ours”? Countries, corporations, academia, individuals? Leaving those deeper questions aside for now, let’s turn to some concrete present-day benefits that GPT can have for researchers.
How can you use GPT as a scientific researcher?
I would call myself a cautious optimist when it comes to A.I., in general. But, at present, with its ability to manipulate language en masse like never before, it can certainly be a valuable tool for researchers to manage the vast amounts of data available and to assist in various stages of writing a scientific paper. Here are some ways GPT can help with scheduling, organizing, and reviewing the literature.
In general, I would recommend that you prepare GPT for your input by giving it some information about what you want to accomplish. That is, don’t simply ask for something; explain a little about yourself and what your goals are. GPT really excels when you place limits on its output: a number of paragraphs, words, or characters, or even a request to write (or transpose) text without certain parts of speech (we all hate those pesky English phrasal verbs, right?). You might also ask it to output text in the style of a particular author or publishing standard, or to write about a topic for a certain audience (children, scientists, the general public, etc.). Of course, you can also paste in text from other writing and ask it to analyze, modify, or condense it. You can ask it to role-play, taking the perspective of another person, profession, or a peer reviewer (note: I don’t expect A.I. to replace the peer review process yet, but it’s a good dress rehearsal you can employ before submission to your journal of choice). The more accurate the information you give it, the more suitable the output will be. And, perhaps most interesting to me, you can actually ask it to code for you: ask it for a Python or C++ script for a specified purpose (or for Excel spreadsheets or macros), and you have yourself a working template, which you can then modify to suit your needs.
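To make that concrete, here is a minimal sketch of such a structured, constrained request made through OpenAI’s Python package rather than the chat window. It assumes the v0.x ChatCompletion interface current as of this writing (newer library versions use a different client), and the API key, model name, and draft text are placeholders:

```python
import openai  # pip install openai (v0.x interface shown; newer versions differ)

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

draft_text = (
    "Seismic data was acquired and then we processed it and the results "
    "where good and showed the fault clearly."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # First, tell it who you are and what you want to accomplish...
        {"role": "system",
         "content": "You are a copy editor for geophysics manuscripts. "
                    "Preserve all technical terms and do not alter citations."},
        # ...then give it a specific task with explicit limits on the output.
        {"role": "user",
         "content": "Rewrite the following sentences for clarity and grammar, "
                    "in at most 40 words, in the style of a journal abstract:\n\n"
                    + draft_text},
    ],
    temperature=0.2,  # a low temperature keeps the edits conservative
)

print(response.choices[0].message.content)
```

The system message plays the role of the background information I recommended above; the user message carries the specific, limited task.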
GPT (or, more specifically, apps like AutoGPT) can help researchers conduct literature searches by analyzing keywords and automating the retrieval of relevant scientific papers from databases. It can then help to summarize and synthesize key information from multiple sources, making the review process more efficient (if somewhat black-box). Of course, the ultimate decision must be yours, but the dilemma for many researchers will not be a lack of information, but rather saturation by it. GPT will likely be the cause of (and solution to) such dilemmas. You’ll employ it to generate a wealth of information on a certain topic, you’ll use it to synthesize and reduce that information to a given focus, and then you’ll use it to organize your thoughts and ideas by generating outlines (introduction, methodology, results, etc.) or intuitive mind maps based on the provided research topic. Once your outline is written and you have a good sense of your bibliography, GPT can help you manage it, formatting references in specific citation styles, for example, or checking reference lists for accuracy.
You know that feeling when you are writing a paper and you cite someone’s work, only to discover that it leads to a deep rabbit hole of nuance with branches of citations to other works? Well, consider tasking GPT with that, leaving it to probe the multiple lines of investigation. But here’s where I take issue. While you may indeed “get more done” you will lose some control over the creative and academic process. Take a recent example of how research has already changed profoundly since the spread of digital PDF documents. Whatever happened to the beauty of sitting in a library and perusing bound journals for a topic? With PDFs we may have become more efficient, but we are less open to serendipity. With large language models, we are poised to lose even more control over what is written.
Ultimately, the goal of a researcher should not just be to publish more papers and get more citations—though the system does incentivize us to do exactly that—but there is also a pride in, and a wonder of, discovery that comes with the activity. So, while you may be able to be more efficient (and there’s a certain beauty to that, to be clear), if you’re like me, you can’t let a topic go until you understand it thoroughly. There is the real fear that A.I. will augment not only our writing but also our laziness, and that we’ll lose the beauty of discovery and life-long learning in the search for more publications per year. That would certainly be a pity. Moreover, I believe there is a real risk that, between the creation of scientific content, its dissemination, and the synthesis of that content by other people, we end up with A.I. in the gap between most communications. Will our true intentions be meaningfully communicated to another person, with A.I. always acting as the liaison? It would be a bit like how I can “speak” a foreign language with Google Translate in the middle. Yes, we’re communicating, but what is lost in the process?
As a scientific copy editor, I am astonished at how well GPT excels at language enhancement. It can be your writing assistant, improving the clarity and coherence of your scientific papers and communications. It can suggest alternative sentence structures, provide vocabulary recommendations, and flag potential grammar or spelling errors. However, for the moment, it hasn’t quite reached the summit. The quality of GPT’s responses is highly correlated with the detail of the input information (i.e., the “prompts”): garbage in leads to garbage out. With no human-in-the-loop, you could be left with an amorphous blob of text that passes for perfect grammar but comprises loosely connected, incoherent ideas at best. That’s where I come in.
Personally, I have found GPT very useful for suggesting alternative phrasings of a given string of text, or for untangling the meaning of poorly written sentences. However, I will never use it to bulk-edit pages of your manuscripts to save time. Let’s not forget that GPT is not yet capable of faithfully generating truth. Don’t be fooled by its impeccable grammar and speed. At present, it’s more of a charlatan that can easily be “tricked” into admitting it was wrong, just by “arguing” with it. Recently I tried to use it to translate my CV from Spanish to Catalan, while leaving the integrated English text (e.g., publication titles) untranslated. GPT-3.5 (the free version of ChatGPT) failed miserably at this. Even when I corrected it by pointing out that I didn’t want the English text translated, just the Spanish, it simply “apologized” and then repeated the mistake over and over, no matter how many times I corrected it. Don’t blindly use GPT to write your paper. Instead, use it systematically and carefully, giving it specific, detailed tasks (with well-defined prompts) as you compose your manuscript piecemeal. Better yet, ask GPT itself to help you help it create the content you want. Have a conversation with it about how best to write meaningful prompts to obtain a well-defined goal. My job, then, becomes not simply that of a copy editor, but a fact verifier, a cross-referencer, your personal skeptic.
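In hindsight, part of the problem may have been my prompt. Here is a hypothetical sketch of how I might phrase that translation task more explicitly today (the <keep> tags and the sample CV text are my own invention, and there is no guarantee the model will comply, but tighter constraints generally help):

```python
# Hypothetical re-phrasing of the CV translation prompt described above.
# Explicitly tagging the spans to leave untouched gives the model less
# room to improvise than a free-form instruction.
cv_fragment = (
    "Geofísico con 20 años de experiencia en procesamiento sísmico. "
    "Publicación: <keep>Seismic imaging of the crust</keep> (2015)."
)

prompt = (
    "Translate the following CV fragment from Spanish to Catalan. "
    "Text enclosed in <keep>...</keep> tags is English: copy it verbatim, "
    "removing the tags, and do not translate it.\n\n" + cv_fragment
)
print(prompt)
```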
The Big Picture
Let’s finish by taking a broader perspective again. As Yuval Noah Harari points out, A.I.s are still in their infancy and have the potential to evolve rapidly and unexpectedly, surpassing human capabilities in just a few years or decades and eventually reaching Artificial General Intelligence (A.G.I.). A.I. is the first technology ever that can make decisions by itself, taking power and responsibility away from humans. Crucially, A.I. has the potential to destroy democratic conversation: dictatorships rely on one person or entity, while democracy is messy and inherently works via debate. Algorithms capable of spreading fake news and false conspiracy theories, incentivized by the increased online engagement that hatred, outrage, fear, and tribalism bring, starkly highlight the dangers of the battle for human attention in the digital age. To be clear, in the short term, the power of algorithms to shape our beliefs and opinions is much more worrying than the fantasy of an A.I. takeover. Despite our intelligence, humanity is facing two major threats—ecological collapse and technological disruption—and instead of uniting to face them, we are dividing and fighting each other, with an observant A.I. along for the ride. To quote Harari, “one of the biggest challenges facing humanity now is our inability to cooperate; if we can cooperate, we can deal with A.I., if we can’t cooperate it’s hopeless.”
Going forward, I think we have some deep reckoning to do as a society to make sure that we effectively manage A.I. There is no doubt that A.I.s will replace many jobs. But they will also create many new opportunities and capabilities—it is not as if there is a finite bank of all possible jobs from which to choose. Stuart Russell, a British A.I. researcher based at UC Berkeley, has indicated that ChatGPT-like tools have the distinct potential to deliver most scholastic material up until the end of high school. That would surely be transformative. If we get it right, that style of personalized teaching could quite possibly enrich education. We are still operating our education systems with an industrial-revolution mindset, preparing students for “the workforce.” Already, at my son’s high school, each pupil has a personalized Chromebook on which they are encouraged to do their homework. As additional homework, we use the wonderful app Brilliant together. It’s visual, intuitive, interactive, and non-judgmental. In addition, OpenAI has just announced a partnership with the Khan Academy to pilot a virtual GPT-4 tutor. So, while I highly recommend employing these new and emerging technologies for the betterment of education and research, we do need to normalize thinking critically about their outputs. Ask questions, but have fun!
For the foreseeable future, A.I.s probably won’t be able to fully replace the care and compassion that come so naturally to humans. While they may be able to communicate valuable information and knowledge, i.e., to teach, they will not, at least, be able to replace the deep passion and inspiration of an effective teacher. However, we need only assume two things for even that status to be disrupted: 1) that intelligence is in fact substrate-independent (i.e., it can manifest in organic brains or in silico), and 2) that optimization improvements in artificial intelligence will continue at some rate, eventually arriving at super-intelligence. If we reach that level, all bets are off as to the future of humanity.