AI is old

Did you know that AI research began in 1943?


Everyone is talking about AI like it's new. But here's what most people don't realise: AI research started back in 1943.

In 1943, Warren McCulloch and Walter Pitts proposed a mathematical model of artificial neurons, laying the foundation for neural networks, the core technology behind today's AI.

I mean, this is so fascinating! Can you imagine the world back in 1943? There was no internet, no social media; the first transatlantic phone call had been made only 16 years prior! The world was completely consumed by World War II, and yet groundbreaking AI research was being done.

It's absolutely mind-blowing that McCulloch and Pitts were contemplating artificial neurons and computational thinking while the world was literally on fire around them! Their 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" laid the mathematical foundation for neural networks - truly visionary work during humanity's darkest hour.
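To make that concrete: a McCulloch-Pitts neuron simply sums weighted binary inputs and "fires" if the sum reaches a threshold. Here's a tiny Python sketch of the idea (the weights and thresholds are my own illustrative choices, not values from the 1943 paper):

```python
# A McCulloch-Pitts neuron: binary inputs, fixed weights, hard threshold.
# The weights and thresholds below are illustrative choices, not values
# taken from the 1943 paper.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights and threshold, a single unit computes logic gates:
AND = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

That's the whole trick: McCulloch and Pitts showed that networks of these simple threshold units can compute logical functions, which is why the paper is seen as the starting point of neural networks.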

This discovery prompted me to do a bit of research on AI history, and here is what I found:

💡
AI development can be grouped into three major eras

Symbolic AI & Early Foundations (1940s-1980s)

This era was all about logic, rules, and symbolic reasoning. After McCulloch-Pitts, we got Turing's famous 1950 paper proposing the "Turing Test," then the legendary Dartmouth Conference in 1956 where the term "artificial intelligence" was actually coined. The field was incredibly optimistic - researchers thought we'd have human-level AI within decades! This period gave us expert systems (computer programs that mimicked human experts in specific domains) and early neural networks like Rosenblatt's perceptron. But progress stalled, leading to two "AI winters" where funding dried up because the promises didn't materialise.
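For a flavour of what those "early neural networks" looked like, here's a minimal sketch of Rosenblatt's perceptron learning rule on a toy problem (the dataset, learning rate, and epoch count are my own illustrative choices):

```python
# Rosenblatt's perceptron learning rule on a toy, linearly separable problem
# (learning OR). The dataset, learning rate, and epoch count are illustrative.

def train_perceptron(data, lr=0.1, epochs=10):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred
            # Nudge weights toward the correct answer whenever we misclassify.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn the OR function from examples instead of hand-coding rules.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
for (x1, x2), target in or_data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred, "(target", target, ")")
```

A single perceptron can only learn linearly separable functions, a limitation famously pointed out by Minsky and Papert, which contributed to the disillusionment of the first AI winter.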

Machine Learning Revolution (1980s-2010s)

The field shifted from hand-coded rules to learning from data. The backpropagation algorithm was rediscovered and popularised in the 1980s, making multi-layer neural networks practical to train. Statistical approaches flourished - Support Vector Machines, Random Forests, and other algorithms that could find patterns in data. The internet explosion provided massive datasets to train on. Big milestone: IBM's Deep Blue defeated world chess champion Garry Kasparov in 1997, showing AI could master complex strategic games.
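To see why backpropagation was such a big deal, here's a minimal sketch of a tiny two-layer network learning XOR, something a single perceptron famously can't represent (the architecture, seed, learning rate, and epoch count are my own illustrative choices):

```python
import numpy as np

# Backpropagation on a tiny network learning XOR. All sizes, the seed,
# the learning rate, and the epoch count are illustrative choices.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: apply the chain rule layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```

The hidden layer is what the perceptron lacked, and backpropagation is what finally made hidden layers trainable at scale.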

Deep Learning & Generative AI Era (2010s-Present)

The current revolution started with AlexNet winning ImageNet in 2012, proving that deep neural networks + big data + powerful GPUs could blow past traditional approaches on specific tasks. Then came the Transformer architecture in 2017 ("Attention Is All You Need"), which enabled today's large language models. ChatGPT's launch in late 2022 brought AI to the masses and kicked off the current generative AI boom we're living through right now.
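And if you're curious what that Transformer paper boils down to at its core, here's a bare-bones NumPy sketch of scaled dot-product attention, the operation the paper's title refers to (the shapes and random inputs are my own illustrative choices):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The core operation from "Attention Is All You Need" (2017):
    softmax(Q K^T / sqrt(d_k)) V, computed for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how much each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of the values

# Toy example: 4 tokens with 8-dimensional embeddings (sizes are illustrative).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8): each token is now a context-aware mix of all tokens
```

Real Transformers stack many of these heads with learned projections for Q, K, and V, but the mechanism that lets every token look at every other token is right there in those few lines.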

Pretty interesting stuff, right? In my opinion, the greatest mass enabler of the GenAI Era was the chat interface popularised by OpenAI's ChatGPT. Most people won't care about or even understand what a Transformer architecture is. But when they can chat with a human-like system, magic happens.

Here's what this history teaches us as Technology Leaders and Innovators:

The pattern is clear - interface breakthroughs drive mass adoption, not just technological advances. Deep Blue was impressive but niche. ChatGPT made AI accessible to everyone through conversation.

In my 20+ years building digital ventures, I've seen this repeatedly: the technology that wins isn't always the most sophisticated, it's the one with the most intuitive interface. The companies we're building at Triniti always focus on this principle - making complex technology feel simple and natural.

The lesson for CTOs and tech leaders? Don't just chase the latest AI models. Ask yourself: How can we make this technology feel as natural as having a conversation? That's where the real venture opportunities lie.

What interface breakthrough do you think will drive the next wave of AI adoption?