Neo Vision’s Weekly Rant #34
Where we RANT about what goes exceptionally well and wildly wrong when you turn your company digital
Over the past 20 years, dear readers, we (you and me) have been fortunate enough to ride the most exhilarating wave of technological innovation (too bad I don’t surf). I don’t want to make this all about me, but believe it or not, I’m old enough to remember the Palm and even the pager.
From surfing the Internet and exchanging songs over infrared and Bluetooth to carrying a whole computer in our pockets and finally incorporating Artificial Intelligence into our daily lives after more than 60 years of research and development, we’ve come a long way. And guess what? This is just the beginning of a new era.

I don’t remember ever being so excited about the release of a new product, but OpenAI got me hooked from the first version of DALL-E, when we first got a glimpse into the future. As a model that generates images from textual descriptions, DALL-E has opened up a world of possibilities in applications such as art, design, and advertising.
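For the tinkerers among you, here is a minimal sketch of what that text-to-image magic looks like through the OpenAI Python library as it stands this week; the API key and prompt below are placeholders of my own, not anything official:

```python
# Minimal text-to-image sketch using the OpenAI Python library (v0.x era).
# The API key and prompt are placeholders; swap in your own.
import openai

openai.api_key = "sk-..."  # placeholder; set your own key

response = openai.Image.create(
    prompt="a watercolor painting of a surfer riding a wave of circuit boards",
    n=1,                # number of images to generate
    size="1024x1024",   # other supported sizes: 256x256, 512x512
)

print(response["data"][0]["url"])  # short-lived URL of the generated image
```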
Three days ago, with the release of GPT-4, OpenAI continued its commitment to pushing the boundaries of Artificial Intelligence. As a language model, GPT-4 expands on the capabilities of its predecessor, GPT-3.5, by offering more nuanced and accurate responses, broader contextual understanding, and greater interactivity.
As I did in the twentieth newsletter when ChatGPT was released to the public, I will dedicate this week’s newsletter entirely to GPT-4. Frankly, there is so much content about GPT-4 that it would take days to read through it all, but I’ll try not to overcomplicate an already very complex topic.
I guess the first question on everyone’s mind is how different, and how much better, GPT-4 is compared to GPT-3.5. Fair question. GPT-4 improves on its predecessor with a much larger context window, more nuanced and accurate responses, stronger reasoning and few-shot learning, newly announced image inputs, and further progress on mitigating biases and addressing ethical concerns.
The most efficient way to really understand the differences is to go straight to the source: openai.com/research/gpt-4
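And if you would rather poke at it than read about it, here is a minimal sketch of calling GPT-4 through the OpenAI Python library as of this week; the API key, prompts, and temperature are my own placeholder choices, not a recipe. Swap the model string for "gpt-3.5-turbo" and you get an instant side-by-side comparison with the exact same call:

```python
# Minimal GPT-4 chat sketch using the OpenAI Python library (v0.x era).
# The API key and messages are placeholders; swap in your own.
import openai

openai.api_key = "sk-..."  # placeholder; set your own key

response = openai.ChatCompletion.create(
    model="gpt-4",  # change to "gpt-3.5-turbo" to compare the two models
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one paragraph, what can GPT-4 do that GPT-3.5 cannot?"},
    ],
    temperature=0.7,  # sampling temperature; lower means more deterministic output
)

print(response["choices"][0]["message"]["content"])
```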
🏁 Google is not to be outdone: The next generation of AI for developers and Google Workspace

This thread almost made me send an emergency newsletter. Speed is the name of the game we are playing.
If you are searching for the perfect video about OpenAI’s GPT-4, look no further than this one. A true masterpiece.
A great takeaway from Matei’s post: the general rule of thumb is that when people cry wolf, they exaggerate. But when people dismiss something with skepticism, that’s when you need to watch out, because that thing might actually end up changing the world. Remember what Steve Ballmer said about the first iPhone.
In this GPT-4 newsletter, we’re also featuring the insightful, in-depth work of Alberto Romero. Don’t miss his unique perspective and deep research: “The world is multimodal (the modes of information go well beyond language and vision), and we humans owe a lot of our unmatched prowess and intelligence to our brain’s multisensory capabilities: if we want AI to understand the world as we do, language alone is insufficient.”