Man and Machine: A New Era
With the release of ChatGPT, OpenAI's latest feat, the world is waking up to new possibilities in AI. Its impact will be transformative. Let's buckle up, folks. Disruption is coming.
In 2018, when AI was at the top of Gartner's hype cycle, it was already on the agenda of the corporate elite at the World Economic Forum in Davos. “The technology will be better than fire” or it will “lead to World War III”: that is how far beliefs diverged. It was just around the time GPT-1 was released. 1 The mainstream was still far away.
Today, in a time when much of the tech industry seems to be down in the dumps, AI is again experiencing a golden age. Twitter is swamped with techies showing off their latest experiments with ChatGPT, a recently released natural language processing model (davinci-003) developed by OpenAI, specifically trained to facilitate conversation.
The Silicon Valley elite, such as Marc Andreessen, Packy McCormick, Paul Graham, and Elon Musk, are eagerly tweeting about their latest experiments with ChatGPT. McCormick, for example, had ChatGPT “build an app that links to essays and produces 10-bullet summaries using GPT-3”. It worked. Someone made it take an IQ test, on which it scored 83.
In 2022, investment and innovation in AI have exploded. Earlier this year, Google’s DeepMind was able to decipher the structure of virtually every protein known to science with a program called AlphaFold. The latest versions of AI image generation programs (e.g. Midjourney, Stability AI) are dazzling users with their abilities. I just recently started testing them myself and was blown away. You can check it out in this article.
“Crypto and the metaverse are out. Generative A.I. is in”, wrote the New York Times a few weeks ago. Just recently, Stability AI, the start-up behind the popular Stable Diffusion image-generating algorithm, raised $101 million and threw a party in the San Francisco Exploratorium “that felt a lot like a return to prepandemic exuberance.” Meanwhile, Jasper, a company that uses AI to generate written content, raised $125 million. The buzz is spreading like wildfire.
It seems that we’ve collectively underestimated the speed of progress in AI and are just about to realize what’s coming. Or maybe we were too busy thinking about things from the “old world”, like Trump, Covid-19, GameStop, Dogecoin or Elon Musk.
The rapid spread of powerful AI tools for everyday users, culminating with ChatGPT released last week, has certainly fueled the hype about the “new world”. 2
I think that we’re witnessing a point in history where several technologies are advancing exponentially all at once:
AI (in many forms: autonomous vehicles, robots, thousands of utilities)
AR (Apple will present its big bang AR experience powered by AI & Apple Silicon)3
Decentralized computing (Web3)
All of them may be fundamental for the “Metaverse” as we conceive it. Today, we’ll focus on the first one.
“Better than fire” or “World War III”?
Whilst some see tremendous opportunity ahead, others, such as Stephen Hawking, Bill Gates, Peter Thiel, and Elon Musk, warn about a future in which AI gains the upper hand, plunging humanity into a hopeless competition against machines. Musk describes AI as our “biggest existential threat” and warns:
And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.
In AI circles, the two camps are called the “Orthodox” and the “Reformists”. Here’s an excellent overview of the two philosophies. Debates revolve around:
Economic disruption and inequality through job replacement: This could lead to widespread unemployment and exacerbate existing economic inequalities. Some argue that we’re approaching a critical ‘tipping point’, one that is poised to make the world economy significantly less labour-intensive. Will AI complement and amplify our innate abilities, such as intuition, emotional intelligence, empathy, creativity, or contextual awareness? Or will it replace them? Our ability to create new jobs and find alternative working models will determine the real effects (as it always has with new technologies).4
Safety and security risks: AI systems have the potential to make decisions and take actions that could result in physical or psychological harm to humans, either intentionally or unintentionally. A real fear of an arms race for AI-powered autonomous weapons and “robo-wars” exists.
Ethical and moral challenges: AI raises complex ethical and moral questions, such as how to ensure that AI systems are fair and unbiased, and that they respect fundamental human rights and values (cf. AI alignment).5
Loss of control: When AI systems get more advanced, they could develop different goals and motivations than the humans who created them. This could lead to a loss of control over AI systems, and could potentially result in harmful outcomes.
Are the worries justified, or do we just need a healthy dose of techno-optimism?
Meanwhile, progress continues. Among the leading experts in AI, the median estimate is a 50% chance that high-level machine intelligence will be developed around 2040–2050, rising to a 90% chance by 2075. They estimate the chance is about 30% that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity. Ray Kurzweil, one of the leading scientists in the field, predicts that by 2045, we’ll be able to multiply our intelligence many millions fold.
And we’re right on track to fulfill Ray Kurzweil’s predictions.6
For the next five to ten years, we don’t need a crystal ball. We know that:
Progress is fast and exponential. The computational power of neural networks is doubling every 5.7 months.7 GPT-4 will ship next year.8 Just yesterday, Apple released optimizations for its Apple Silicon machines that cut the processing time of Stable Diffusion, one of the leading generative AI applications, in half. The ongoing consolidation of different fields within AI further speeds up progress. The tools that everyone is playing around with today will become 10x-100x more powerful in the next few years.9 (I recommend taking a look at the graphs in the footnotes; a quick back-of-the-envelope calculation of that pace follows this list.)
Within the next few years, intelligent machines will become powerful and capable enough to do most of the work we do today.
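To make that pace concrete, here is a tiny back-of-the-envelope calculation in Python based solely on the 5.7-month doubling figure cited above. The doubling time is the claim quoted in the footnotes; the rest is just arithmetic.

```python
# Back-of-the-envelope: how fast does compute stack up if it doubles
# every 5.7 months (the figure cited above)?
import math

doubling_months = 5.7

growth_per_year = 2 ** (12 / doubling_months)
months_to_10x = doubling_months * math.log2(10)
months_to_100x = doubling_months * math.log2(100)

print(f"Growth per year: ~{growth_per_year:.1f}x")   # ~4.3x
print(f"Months to 10x:   ~{months_to_10x:.0f}")      # ~19 months
print(f"Months to 100x:  ~{months_to_100x:.0f}")     # ~38 months, i.e. ~3 years
```

In other words, if the doubling rate holds, the 10x-100x range mentioned above corresponds to roughly one and a half to three years.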
Let's buckle up, folks. Disruption is coming.
A new dawn
If you haven’t realized how powerful this stuff is going to be, NOW is the time to wake up.
This is one of those rare moments in technology where you get a glimpse of how everything is going to be different going forward. I felt the same when I first saw an iPhone in 2008, or when I connected to the internet for the first time in 1996. In the future, we might compare it to the advent of the airplane or the car.
This technology will change the game for everything within the next few years. It already is.10 11
The philosophical and hypothetical question about our relationship with machines suddenly becomes brutally real. In the next few years, humans might see more change than we have ever experienced.
What if…
… students can write essays… engineers can code programs… lawyers can write contracts… investment bankers can build Excels… artists can create images… illustrators can create illustrations…
… in a few seconds instead of hours?
(Oh, and you can apply these examples to pretty much every modern white collar profession or “knowledge” or “service” worker.)
That’s almost possible today. Now extrapolate that by 10x or 100x.
Every engineer, writer, designer, coder, artist - and pretty much every other “knowledge worker” - will need to rethink how they work. Every new graduate may need to rethink their best career choice.
Even though we’re not yet talking about “superintelligence” or the “singularity”, we will soon have superpowers.12 13
Or, in the prosaic words of Scott Aaronson, American computer scientist:
I regard the fact that such systems will have transformative effects on civilization, comparable to or greater than those of the Internet itself, as “already baked in”—as just the mainstream position, not even a question anymore. That doesn’t mean that future AIs are going to convert the earth into paperclips, or give us eternal life in a simulated utopia. But their story will be a central part of the story of this century.
The time to start thinking about it is now.
The next 5 years
Let's look at the short-term impact of those technologies with a list of bold predictions that are incomplete but hopefully representative. Whether they're accurate is less important than whether they inspire action.
At work:
Sharp increase in the productivity of the existing workforce through automation or increased efficiency (more work done in the same time; fewer people needed to do the same amount of work): It may be that productivity starts to increase exponentially in line with the progress in AI, instead of linearly. I found no studies that offered clear evidence on this. ARK Invest estimates that by 2030, artificial intelligence is likely to boost the output of global knowledge workers by 9% at an annual rate, from $41 trillion in expected human labor output to roughly $97 trillion in AI + human output (see the quick compounding check after this list).
Proliferation of AI-powered tools that take over tasks or replace jobs entirely: every knowledge worker will be able to use AI to automate parts of their tasks. Many jobs will disappear. New jobs will be created in highly specific areas around AI. AI will enable business opportunities that people will work on.
Human traits, personal interactions, and strategic thinking will become more significant in some areas; in others they will become less relevant. Repetitive service work, low-level support, and administrative tasks will be done by AI. The perceived value of services provided by humans will rise (e.g. hotel check-in, sales). Personality, social and leadership skills, emotional intelligence, strategic thinking, creativity, intuition, and empathy will become more important across all “knowledge” jobs. A teacher, for example, will automate many existing tasks like working through essays and grading exams, while personal relationships in the classroom become more important. General rule: jobs that involve a lot of complex social relationships will still need a lot of human input.
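As a quick sanity check on the ARK Invest figures quoted above, here is a short Python calculation. The simple-annual-compounding assumption is mine, not ARK's stated methodology.

```python
# Does a jump from $41T to $97T match "9% at an annual rate"?
import math

baseline = 41e12      # expected human labor output
target = 97e12        # estimated AI + human output by 2030
annual_rate = 0.09    # ARK's cited annual boost

implied_years = math.log(target / baseline) / math.log(1 + annual_rate)
print(f"Implied compounding horizon: ~{implied_years:.1f} years")  # ~10 years
```

So the two dollar figures are consistent with roughly a decade of 9% annual compounding.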
In life:
Proliferation of “generic content” (text, images & videos): Flood of general knowledge and general opinions. You might read essays, blogposts, art or even books entirely created with AI. They will likely lack personal feeling, emotions, or empathy. Some of it will be indistinguishable from human creations. I don’t necessarily agree that AI won’t create any new ideas, as Paul Graham tweets below:
If AI turns mediocre writing containing no new ideas into a commodity, will that increase the "price" of good writing that does contain them? History offers some encouragement. Handmade things were appreciated more once it was no longer the default.

Brands and creators with authentic, ingenious, original, creative, unique, human content will succeed. Personal opinions, distinctive styles, and live formats will be valued highly. Influencers who talk about personal topics or share knowledge in a personal way will succeed over those who merely share information. Media brands and publications that merely aggregate information or write generically will become irrelevant. We will see lower value in knowledge and information aggregation (e.g. Google) and higher value in character, opinion and “craft”.
Need for curation: The more content there is, the more we’ll depend on curation. Knowledge-based curation is already done by ChatGPT, for example: while Google aggregates and curates, ChatGPT curates too, just more intelligently. It still lacks context-specific knowledge, though. People who curate beyond knowledge, in areas that are highly context- and culture-specific such as art, fashion, or literature, will be in high demand.
Need for originality in digital content: Since digital content can be easily copied or recreated in seconds, it will become more important to prove the content's originality and prevent plagiarism. NFTs will ensure ownership and allow creators to protect and monetize their content. We will have to deploy AI to fight AI plagiarism.
Stronger identity verification mechanisms: The value of real identities will rise. People will want to know whether they interact with a bot or a real person. Impersonations and fake identities will become more common. Social media platforms might require ID verification.
Proliferation of closed networks with known members and personal relationships: Low-level interactions (i.e. interactions that require low levels of social and emotional intelligence) will be outsourced to AI. The number of people who interact with AI systems, and the number of interactions with AI systems, will rise sharply. People will counterbalance this with a need for personal relationships and safe spaces that ensure human-to-human interactions (e.g. in Web2: WhatsApp Communities, Discord, etc.; Web3: Guild, Geneva, etc.)
Democratization of education: The Internet revolution democratized access to information. The AI revolution will democratize access to education. It will make it hyper-personal with immediate feedback, e.g. AI tutors for learning new languages.
The future is already here – it will just not be evenly distributed: AI will spread like wildfire, but it will spread unevenly. People with access to it will have an unfair advantage. 99% of people still have no idea AI is here.
What have I missed?
Taking action
If the metaverse turns out how we imagine it to be, it will combine AI, VR and decentralized computing. This will enable those immersive, authentic, personalized experiences that people will seek. 3D virtual worlds and an “individual sense of presence” will allow us to be more human and connected than ever before, while blockchains will ensure the “continuity of data”, such as identity, ownership and payments.
But that’s for another day. What we’re experiencing now is unprecedented and requires immediate action:
For now, I see the following action steps:
Everyone:
Regulation: We need clear regulation on where, by whom, and how AI can be applied. Considering its implications and strategic importance, it should be at the top of the political agenda. An EU Artificial Intelligence Act (AIA) was proposed by the European Commission in April 2021 and could become a global standard. In September 2021, Brazil’s Congress passed a bill that creates a legal framework for artificial intelligence. While the UK Government has not yet released a legal framework, it has laid out a 10-year National AI Strategy for becoming a “global AI superpower”. China introduced regulation in March this year. The US has no regulation in place. In Switzerland, the Federal Council advised the Federal Department of Foreign Affairs to look into the topic. No legislation has been passed yet.
Institutional reforms: We need to fundamentally rethink education and what value it will provide in the future. Skills for the future will change dramatically. Teaching “knowledge” will be a waste of money and time. Clarity is needed on how we prevent plagiarism.
AI alignment: Clarify how we ensure AI systems behave in ways that are beneficial to humans and aligned with our goals and values; clarify the ethics of AI systems.
Open sourcing & equal access: Ensure equal access to AI technology, for example through organizations like OpenAI.
Brands, creators & businesses:
Brands and creators will have to become more original, personal, and authentic; they can leverage immersive, digital experiences to connect with their fans in meaningful ways.
Businesses have to evaluate and restructure their task and job profiles; and ultimately automate or retrain.
Newspapers, media outlets, and aggregators will have to find ways to become more opinionated, distinct, curated, and human to provide value beyond the “generic”.
You:
Get up to speed with the latest AI developments; try a few tools yourself
Think about how you can integrate AI into your workflow already today
Nurture your creativity
Invest time in developing your soft skills and relationships
The forgotten gift
Thinking about our future relationship with machines, we’ll have to learn to be more human again. Albert Einstein saw it long ago:
“The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honours the servant and has forgotten the gift.”
Asked how this relationship will look, ChatGPT answers:
I can tell you that many experts believe that the relationship between man and machine will continue to evolve and become increasingly complex. As machines become more advanced and capable, they will likely play an increasingly important role in our lives. It is important for us to consider the potential consequences of this trend and to ensure that we are using technology in a way that is beneficial to society.
Thanks for the advice, buddy.
Let’s ask Immanuel Kant instead:
Treat humanity […] as an end and never simply as a means.
Now that’s cunning.
See you on the other side.
– Marc
PS: I wrote most of this essay while listening to the Interstellar original motion picture soundtrack. If this all sounds too futuristic for you, that’s my excuse.
Further links & readings:
ChatGPT: https://chat.openai.com/chat
Comprehensive list of Large Language Models: https://crfm.stanford.edu/helm/v1.0/
OpenAI Playground: https://beta.openai.com/playground
Lee, K.-fu. (2017, June 24). The real threat of Artificial Intelligence. The New York Times. Retrieved December 4, 2022, from https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html
Guardian News and Media. (2016, January 20). What does it mean to be human in the age of technology? The Guardian. Retrieved December 4, 2022, from https://www.theguardian.com/technology/2016/jan/20/humans-machines-technology-digital-age
Guardian News and Media. (2018, January 19). Post-work: The radical idea of a world without jobs. The Guardian. Retrieved December 4, 2022, from https://www.theguardian.com/news/2018/jan/19/post-work-the-radical-idea-of-a-world-without-jobs
https://www.ft.com/content/9943bee8-7a25-11e
Reform AI alignment. Shtetl-Optimized. (2022, November 22). Retrieved December 4, 2022, from https://scottaaronson.blog/?p=6821
Roser, M., Ritchie, H., & Mathieu, E. (2013, May 11). Technological change. Our World in Data. Retrieved December 4, 2022, from https://ourworldindata.org/technological-change
Huang, S., Grady, P., & GPT-3. (2022, December 6). Generative AI: A creative new world. Sequoia Capital US/Europe. Retrieved January 14, 2023, from https://www.sequoiacap.com/article/generative-ai-a-creative-new-world
The future of employment: How susceptible are jobs to computerisation? (2016, September 29). Technological Forecasting and Social Change. Retrieved December 4, 2022, from https://www.sciencedirect.com/science/article/abs/pii/S0040162516302244
Interview with OpenAI CEO Sam Altman:
ChatGPT is a natural language processing (NLP) model developed by OpenAI. It is a variant of the GPT-3 language model that has been specifically trained to facilitate conversation. This means that it is able to understand and respond to human input in a way that is more natural and conversational than other NLP models.
GPT-3 (Generative Pretrained Transformer 3) is a state-of-the-art language processing model developed by OpenAI. It is trained on a massive amount of data and is capable of generating human-like text in a variety of styles and formats. GPT-3 is a significant advancement in the field of natural language processing and has been used for a wide range of applications, such as language translation, question answering, and text summarization. It is one of the most powerful language models currently available. Further reading: https://medium.com/walmartglobaltech/the-journey-of-open-ai-gpt-models-32d95b7b7fb2
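For readers who want to poke at the GPT-3 family directly, here is a minimal, illustrative Python sketch using OpenAI's Python client (the v0.x openai package available at the time of writing) with the davinci-003 model mentioned above for text summarization. The prompt, parameters, and environment-variable setup are my own assumptions, not something prescribed by OpenAI.

```python
# Minimal sketch: text summarization with the OpenAI Completions API
# (openai Python library v0.x). Assumes an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

article = "..."  # paste any longer text you want condensed

response = openai.Completion.create(
    model="text-davinci-003",  # the davinci-003 model referenced above
    prompt=f"Summarize the following text in three bullet points:\n\n{article}",
    max_tokens=200,
    temperature=0.3,           # lower temperature -> more focused output
)

print(response["choices"][0]["text"].strip())
```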
On Wednesday, Apple released optimizations that allow the Stable Diffusion AI image generator to run on Apple Silicon using Core ML, Apple's proprietary framework for machine learning models. The optimizations will allow app developers to use Apple Neural Engine hardware to run Stable Diffusion about twice as fast as previous Mac-based methods.
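For comparison, here is a minimal Python sketch of running Stable Diffusion locally on an Apple Silicon Mac. Note that it uses the Hugging Face diffusers library on PyTorch's Metal (MPS) backend as an illustration of local generation; Apple's release described above instead wraps the model in Core ML to target the Neural Engine. The model ID and prompt are illustrative assumptions.

```python
# Minimal sketch: Stable Diffusion on Apple Silicon via PyTorch's MPS backend.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")  # run on the Mac's GPU through Metal Performance Shaders

prompt = "a lighthouse on a cliff at sunset, oil painting"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("lighthouse.png")
```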
Sources indicate the presentation could happen in January 2023, with a release in July.
Throughout history, tectonic technological advances have repeatedly shown that human society is astonishingly adept at adapting. When raw muscle power was replaced by oxen, donkeys and horses, our dexterity gained importance. After the invention of advanced production machines, our intellectual abilities kept us relevant. In general, old jobs were replaced by new jobs.
Also check out this analysis by McKinsey & Company:
Check out this study here for even more data on the risks of job replacement through computerization.
AI alignment refers to the problem of aligning the goals and objectives of an artificial intelligence (AI) system with the values and preferences of human users. This is considered a difficult and complex problem in the field of AI, because it requires finding a way to ensure that the AI system behaves in ways that are beneficial and desirable to humans, without introducing any conflicts or unintended consequences.
AI alignment is an active area of research and development in the field of AI with many different approaches. Some of these approaches include incorporating ethical and moral considerations into the design of AI systems, developing AI systems that can learn and adapt to the preferences of human users, and developing mechanisms for ensuring that AI systems remain transparent and accountable to human users.
One of his most notable predictions is that computing power will continue to increase at an exponential rate, following what he calls the "law of accelerating returns." This law states that the rate of technological progress increases exponentially over time, and as a result, the power of computers will increase exponentially as well.
Exponential progress in computer power:
Exponential progress in computing efficiency:
Exponential progress in computational capacity:
Exponential progress of neural networks by Ray Kurzweil:
In 1998, Yann LeCun's breakthrough neural network, LeNet, contained 60,000 parameters (a rough measure of a machine's capability to do useful things). Twenty years later, OpenAI produced a version of GPT with 110 million parameters. GPT-2 has 1.5 billion and GPT-3, now two years old, has 175 billion. More parameters mean better results. Multimodal networks, which combine modalities such as text, images and sound, are even more complex. The biggest are approaching 10 trillion parameters.
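A back-of-the-envelope calculation using only the figures above puts that growth into perspective:

```python
# Parameter growth from LeNet (1998) to GPT-3 (2020), using the counts cited above.
import math

lenet_params = 60_000            # 1998
gpt3_params = 175_000_000_000    # 2020
years = 2020 - 1998

growth = gpt3_params / lenet_params
doublings = math.log2(growth)

print(f"Total growth: ~{growth:,.0f}x")                 # ~2.9 million-fold
print(f"~{doublings:.0f} doublings over {years} years")  # roughly one per year
```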
Scaling down transistors in the 2D space of the plane of the silicon has been a smashing success: Transistor density in logic circuits has increased more than 600,000-fold since 1971. Reducing transistor size requires using shorter wavelengths of light, such as extreme ultraviolet, and other lithography tricks to shrink the space between transistor gates and between metal interconnects. Going forward, it’s the third dimension, where transistors will be built atop one another, that counts. This trend is more than a decade old in flash memory, but it’s still in the future for logic (see “Taking Moore’s Law to New Heights.”) Source: IEEE Spectrum
In a recent study, BCG found that 64% of workers derive at least moderate value from AI.
Great overview by Sequoia on the impact of generative AI:
“Superintelligence” refers to the hypothetical future development of AI that is vastly more intelligent than any human being. This could be achieved through advanced machine learning algorithms, neural networks, or other technologies that allow computers to process and understand information in ways that are currently beyond the capabilities of human intelligence.
“Singularity”, a term popularized by Ray Kurzweil, refers to a hypothetical future event in which technological progress reaches a point of acceleration that leads to rapid and exponential changes in society. This could be driven by the development of superintelligent AI, but it could also be the result of other technological advances, such as breakthroughs in biotechnology or nanotechnology.
Already today, human-machine collaboration is proving increasingly successful. For example, Garry Kasparov, chess grandmaster and former world champion, claims to play better when collaborating with a computer. In Siemens’ factories, humans already work alongside intelligent machines – a setup that McKinsey & Company sees as key to future growth. Professor Philipp Theisohn, science-fiction researcher and head of the Department of German Studies at the University of Zurich, says that a fusion of humans and machines would finally complete us, as it could counterbalance our emotional thinking.