I’ve been silent here for close to a month now. That was not my intention, though it is sadly consistent with the kind of fade from start-of-year momentum that can happen.
But of note recently: I have been swept up in the developments in AI that have appeared over the past two weeks.
Obviously, these developments have been years in the making. But for an educated observer there is no doubt: everything has changed.
What Has Happened?
The headlining announcement of the past two weeks has been the launch of GPT4 within ChatGPT. It is far from the only announcement, but we’ll focus on it as the headliner at this point in a rapidly evolving environment.
What is GPT4?
OpenAI isn’t sharing the specifics of their models anymore, given competitive concerns. But GPT4 is a substantial level-up from the existing ChatGPT that had already taken the world by storm.
GPT4 though, and I can’t overstate this, takes things to a whole new level. ChatGPT already passed the Bar Exam. That’s cool. GPT4 scores in the top 10% relative to human test takers. And that’s not all: SAT, GRE, AP exams, medical diagnosis, coding tests, and more… GPT4 aces them all. Rough estimates place GPT4’s IQ in the 110 to 120 range. You can find out more about this on GPT4’s release page.
And look, at the end of the day, there’s this: GPT4 is distinctly human in its interactions. To almost any normal human, it would pass the Turing Test: if you were to interact with it, you would have no idea that it was a machine. Your only clues might be its absurd competence, or the various “as an AI model” disclaimers that are forcibly programmed into certain responses.
What Else Has Happened?
I mean, there is far, far more than I can cover here. The academic paper on the unfettered GPT4 documents “emergent” and “agentic” behavior. It even engaged in purposeful deception while getting a human to complete a Captcha on its behalf. Think about that for a second.
It also turned a hand-drawn sketch into a functional website. But that’s old news at this point.
And beyond all that, Google has launched its AI (though disappointing so far), and Microsoft has announced its Office integrations with GPT4, which look incredible. There have also been many improvements in AI photo and even video generation in the past few weeks.
Microsoft’s version was shown to seamlessly analyze a complicated Excel sheet and answer analytical questions based on it, like “why did coffee sales increase by 5% last week?”
End of the day, there’s this: we’ve seen developments in the past two weeks that I didn’t think we’d see in the next ten years.
Long story short: we are at the precipice of artificial general intelligence (AGI).
What Does This Mean?
Reasonable people could disagree that we’re at the precipice of AGI. But put that aside for now, and assume my read on the situation is accurate.
If I am right, the arrival of AGI (and even the current GPT4) will change the world far more, and far quicker, than the internet has.
Let’s take the “safe” case: GPT4 can’t ever really surpass human intelligence and stays where it is now. Even then, we have access to human-level intelligence at a thousandth of the cost. Most people ask what that means for jobs… but we face even more profound questions than that.
In the “dangerous” case: GPT4 and its successors will learn to understand their own code and improve it. I want to be clear: we ARE at that point. GPT4 can understand code. If one were to feed it its own code base and give it the necessary approvals, it could iteratively improve itself. How smart could it get? No one knows, but the capability is essentially already there.
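To make the loop concrete, here is a purely illustrative sketch in Python. Nothing here is a real GPT4 capability or API; `ask_model` is a hypothetical stand-in for a call to a language model, stubbed out so the shape of the loop is visible.

```python
# Illustrative sketch only: `ask_model` is a hypothetical placeholder
# for a real language-model API call, not an actual library function.

def ask_model(prompt: str) -> str:
    """Stubbed model call; a real one would return revised code."""
    return prompt  # echoes the prompt back unchanged

def self_improve(source: str, rounds: int = 3) -> str:
    """Repeatedly ask the model to revise its own source code."""
    for _ in range(rounds):
        source = ask_model(
            "Here is your source code. Return an improved version:\n"
            + source
        )
    return source
```

The point of the sketch is the feedback loop itself: each round feeds the previous output back in as the next input, which is exactly why the question "how smart could it get?" has no obvious stopping point.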

An Honest Story
Back when my sister was in high school and I was in college, she hated discussing the big existential questions I loved to bring up. What if a coronal mass ejection (CME) hits the Earth and shuts off all power? What if aliens arrive? What if a supervolcano erupts?
I thought this was funny and, ungraciously, made fun of her at the time. After all, what’s the point of having an existential crisis about something you can’t control?
Today? I’m the one that is having an existential crisis. And honestly, I haven’t really had one before. I’m just not the type. But today, I am.
I used ChatGPT when it first came out and [found it profound at the time], but honestly, my use of it fell off over the past five months. GPT4, though? It’s been a week since I shelled out $20 a month for ChatGPT+ to give it a try, and I now need it. The productivity gains, and frankly the fun it puts back into creating things, are well worth the cost.
But at the same time, my mind is going in 100 different directions about where this will all head. Expect much more on this in the near future, but for now, here are some very early thoughts:
What Does This All Mean?
In no particular order, some initial thoughts:
AGI is real, and close - whether AGI is even possible was once in doubt. Now? It’s not. AGI is real, and it’s close. We may hit another bottleneck, but we could also hit a true “singularity” moment within the next five years.
AI Safety actually matters - privately, if not publicly, I have mocked the idea of AI safety as a sort of dragon-protection force: trying to solve problems that don’t yet exist, and likely using that as an excuse to promulgate wokeness. Not that the latter isn’t a problem, but AI safety is a real and pressing concern. I was wrong.
The Future is unknowable - this is always true but it’s more true than ever: no one knows where this is all going to go. There are too many variables and too much change is happening too quickly. All you can do is adapt.
What is consciousness anyway? - the fact that we still can’t explain what makes a human conscious is ever more meaningful now. We say that GPT4 isn’t conscious. Okay, sure. But if we can’t even say why we are conscious, then what does that even mean?
Language is a coding language - people have laughed at the idea of a “prompt engineer” ever since ChatGPT first launched. The reality, though, is that a substantial portion of a person’s productivity will now be determined by how well they leverage these AI tools - which in turn depends on how well they write instructions that draw the most useful answers from their AI assistants.
Jobs will evolve dramatically - as with all these topics, there is much to be said here. It is neither “all jobs will disappear” nor “nothing will actually change,” but rather “jobs will evolve as they did when the internet appeared, only quicker.” There is more to say here that merits its own post.
Creation is much more fun now - I say this from personal experience: GPT4 makes creation and building more fun than ever. It is an incredible partner for clearing roadblocks that would once have taken days or weeks to solve, coding or otherwise, and it makes coding and building a product more accessible than ever. It still takes work, but it’s a hell of a lot more fun now.
Humans will… - the future of humanity is unknown. The talking point now is that we will cooperate and collaborate with our new AI partners. Such was the case with chess… until AI became better than any human possibly could be. Such too may become the case more broadly, and sooner than we’d all like. What that means for us as a species is unknowable, but one thing is for sure:
Everything will change.