Long Take: Now is the time to learn Generative AI, not after the knowledge worker layoffs
What is more magic: the human brain or the LLM algorithm?
Gm Fintech Architects,
We are still in conference mode, but want to share our latest learnings.
In this analysis, we open up several frameworks for the Generative AI revolution, look into the reportedly leaked Google memo on how open source will win the AI wars, and summarize the Stephen Wolfram coverage of the math behind ChatGPT. If IBM is planning to replace 7,800 jobs with AI, now is certainly the time to figure out why.
Tags: Google, OpenAI, Wolfram, IBM, ChatGPT, generative AI
If you got value from this article, let us know your thoughts! And to recommend new topics and companies for coverage, leave a note below.
Long Take
Maybe you have been sitting on the sidelines waiting for this Generative AI thing to blow over. In the everything-bubble of the forever-Internet, it is hard to know what is real and what is not worth the attention.
But we keep coming back to this topic. It will determine the next 10 years of economic development, and transform labor, money, and blockchains alike. To that end, today's writing is our continued exploration of the topic and of the state of the art. We think there is a desperate rush to understand the implications, and now is the time when all of us should be investing in figuring this out.
If you haven't yet read our definitive framework pieces, allow us to point you at the following:
Podcast: Powering the AI revolution from DeepDream to Dall-E, with Lambda Labs CEO Stephen Balaban
The Twittersphere is brimming with people trying to attention-farm AI content. Some of it is pretty good. You can pick up tips on how to use various AI tools and engineer prompts to go beyond the basics, which often boils down to having the AI model roleplay as a particular character. You can follow every twist and turn of the news, as the competition evolves from personal to corporate to national to global levels.
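The roleplay technique described above can be sketched as a chat-message template of the kind most LLM APIs accept. Everything here (the helper function name, the persona text, the task) is illustrative, not a specific tool or prompt from any of the threads mentioned:

```python
# A minimal sketch of the "roleplay" prompting pattern: a system message
# assigns the model a character, and the user message carries the task.
# The persona and task strings below are made-up examples.

def build_roleplay_prompt(persona: str, task: str) -> list[dict]:
    """Wrap a task in a system message asking the model to play a character."""
    return [
        {"role": "system", "content": f"You are {persona}. Stay in character."},
        {"role": "user", "content": task},
    ]

messages = build_roleplay_prompt(
    persona="a veteran fintech analyst",
    task="Summarize the risks of back-office automation in three bullet points.",
)
print(messages[0]["content"])
```

The message list would then be passed to whichever chat-completion endpoint you use; the point is only that the "go beyond the basics" trick is usually a system-level persona instruction rather than anything exotic.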
Sensational headlines are constant. IBM is planning to "replace" 7,800 jobs with AI, largely in the back office. We have said before that the combination of robotic process automation with machine-scale intelligence will be lethal for a large portion of white collar work.
Unlike many of the earlier automation revolutions, machine intelligence actually targets intelligence and creativity. While top human experts are, perhaps, more original than their computer simulacra, most people will not be able to tell the difference in output. The sounds of a piano synthesizer or an actual piano, for example, are similar to the ear of a lay person. The distinction between art and illustration is interesting to the curator, not someone being entertained.
To that end, knowledge work is going to be severely impacted. We have recently talked to the , a fantastic newsletter covering the space, about these principles as they have emerged in the visual arts. Here are the key bits:
I used to think of AI as a counterpart to a human brain. Once we have mapped an entire human brain, in an Accelerando fashion, then we can copy/paste that intelligence and scale up our processing. But it feels more like AI has been recreating human senses at the scale of the population, of humanity. We see how neural networks used to ingest some local data set about cats and that was sufficient to train that network to see cats. Now, the entire container of digitized human knowledge is pumped into a mystery box, which structures that information into abstractions we cannot touch or understand.
If anything, with earlier deep learning we could trace the abstractions from the beginning, as Shakespearean letters turned to words, then words to sentences, then sentences to paragraphs, then paragraphs into new books. But the new models have an order of magnitude more data, and the abstractions and clumps pass away from our intuition, several levels up to the clouds. So I think of it as interacting with a modeled sense, a disconnected digital feeling averaged out across all human experience.
How do you see the balance between open source vs. proprietary models when it comes to generative art? There are two dimensions I am worried about here: (1) the closing / opening of the model itself, and whether the manufacturers of the AI engine try to close down access to its use and re-use, and (2) the ability of people to own and transact around the outputs of the models in a way that advantages human dignity.
The generation of these models is a race, and the spoils will accrue to the fastest movers at scale. Once that race is over, the technology will be available to all, its protections will diminish, and the economic profits will be gone. To that end, while it may feel hyper-competitive in the moment, I think the long term outcome will be large open-source models that are tied to the evolution of humanity's data exhaust. The Internet will *think*.
The other problem seems more dangerous. If we again opt into infinite content at zero cost, nothing good will happen to society: dopamine addiction will continue to rise, people will opt into robot friendships and relationships over messy human ones, and so on. To that end, I hope we find economic models for these AI-produced digital goods that look more like functional market economies, and less like infinite streams.
Are we navigating towards a world in which we end up with a dozen foundation generative computer vision models that are used to create the vast majority of digital art in the world? What are the probability, risks, and potential alternatives to that future? I think we will end up with an oligopoly of AI conversational interfaces, which become deeply functional like operating systems. The OpenAI plug-in strategy is very powerful, and could kick off a race in terms of economic competition that largely benefits a single AI owner. I hope that the open source community is able to fork many of these benefits, and then create decentralized ownership and governance models that allow people to maintain their dignity (i.e., rights) as well as manageable financial models.
We have also recently shared some of these views with Real Vision, which you can see here, after the ETH staking bit:
A couple of things are starting to stand out.
First, personalization is starting to emerge. Having a digital twin of yourself that participates in your activities with granular access is not a pipe dream. Automated social media replies in a company style, or renderings of various professional services, are already being built. We expect further mass customization. Economic markets and the invisible hand of capitalism will select between those simulacra that are valuable to clone and persist, and those whose uses are narrow.