I see an interesting tension happening right now…
The generative AI side
On one side, we have generative AI capturing almost every bit of the software development lifecycle and effectively becoming a new high-level abstraction, possibly the highest level we have ever seen in our industry.
There seems to be no stopping this phenomenon… just looking at the latest announcements from GitHub Universe, it’s clear that we will have to adopt AI in one way or another, and this is not necessarily a bad thing if it truly makes us all more productive and focused on generating business value.
If you don’t know what I am talking about, you should check out this video:
And if that’s not enough you should watch the full GitHub Universe 2023 keynote.
GitHub Universe 2023, Day 1. Thomas Dohmke on stage with a slide in the background saying “One more thing”, mimicking Steve Jobs’ launch of the iPad
The part that impressed me the most is that Copilot can now also help you lay out the structure of a project, breaking down requirements into potential tasks. Once you are happy with the result, it can start to work on the individual tasks and submit PRs. This is called Copilot Workspace, if you want to have a look.
It seems too good to be true, and it’s probably going to be far from perfect for a while, but there’s great potential for efficiency here, and I am sure GitHub (and other competitors) will keep investing in this kind of product. Maybe in a few years, we’ll be mostly reviewing and merging AI-generated PRs for the most common use cases.
If you think that ChatGPT was launched slightly less than a year ago, what will we be seeing 5 or 10 years from now?
The low-level side
On the other side, we have a wave of new low-level languages such as Go, Rust, Zig (and Carbon, and Nim, and Odin, and VLang, and Pony, and Hare, and Crystal, and Julia, and Mojo, and… I could keep going here… 🤷‍♀️).
OK, I really wanted to put the lovely Hare language mascot here. Hare is a systems programming language designed to be simple, stable, and robust. It uses a static type system, manual memory management, and a minimal runtime. It is well-suited to writing operating systems, system tools, compilers, networking software, and other low-level, high-performance tasks.
All these languages take slightly different trade-offs, but, at the end of the day, they are built on the premise that we need to go lower level and have more fine-grained control over how we use memory, CPU, GPU, and all the other resources available on the hardware. This is perceived as an important step to achieve better performance, lower production costs, and reach the dream of “greener” computing.
If you are curious to know why we should care about green computing, let’s have a quick look at this report: Data Centres Metered Electricity Consumption 2022 (Republic of Ireland).
The report finds that in 2022, in Ireland alone, data centres’ electricity consumption increased by 31%. This increase amounts to an additional 4,016 gigawatt-hours (GWh). To put that in perspective: assuming a typical ~10 W LED bulb, that is roughly 400 billion extra bulb-hours in a year, the equivalent of about 46 million LED light bulbs glowing nonstop, all day and night, for the entire year. Divided by the population of the Republic of Ireland, that’s about 80,000 extra bulb-hours per person per year, or roughly 9 additional bulbs per person burning around the clock! And this is just the increase from 2021 to 2022… How friggin’ crazy is that?! 🤯
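If you want to sanity-check that equivalence yourself, here’s a quick back-of-the-envelope calculation. Note that the ~10 W bulb wattage and the ~5 million population are my assumptions, not figures from the report:

```rust
// Back-of-the-envelope check of the LED-bulb equivalence above.
// Assumptions (mine, not the report's): one LED bulb draws ~10 W,
// and the Republic of Ireland has ~5 million inhabitants.
fn main() {
    let extra_gwh = 4_016.0_f64; // additional consumption, 2021 -> 2022
    let extra_wh = extra_gwh * 1e9; // GWh -> Wh

    let bulb_watts = 10.0; // one LED bulb
    let bulb_hours = extra_wh / bulb_watts; // ~4.016e11 bulb-hours

    let hours_per_year = 365.0 * 24.0;
    let bulbs_nonstop = bulb_hours / hours_per_year; // bulbs lit all year

    let population = 5.0e6;
    let bulbs_per_person = bulbs_nonstop / population;

    println!("{:.0} billion bulb-hours", bulb_hours / 1e9); // ~402
    println!("{:.0} million bulbs lit nonstop", bulbs_nonstop / 1e6); // ~46
    println!("~{:.0} bulbs per person, day and night", bulbs_per_person); // ~9
}
```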
Ok, now one could argue that we have had low-level languages pretty much since the software industry was invented. So why isn’t that the default, and why do we bother wasting energy on higher-level programming languages?
That’s actually quite simple: because coding in low-level programming languages such as C and C++ is hard! Like really, really hard! It’s also time-consuming and therefore expensive for companies! And I am not even going to mention the risk of security issues that come with these languages.
So why should this new wave of low-level programming languages change things?
Well, my answer is that they are trying to make low-level programming more accessible and safe. They are trying to create paradigms friendly enough to be used for general computing problems (not just low-level ones), which could potentially bring the benefits of performance and efficiency even to areas where, historically, we have used higher-level languages and accepted the tradeoff of faster development at the cost of sub-optimal performance.
Take Rust, for example. It was born to solve some of the hard problems that Mozilla faced while building Firefox. But now it’s being used in many other areas, including embedded systems, game development, and even web development. Not just on the backend, but also on the frontend via WebAssembly!
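To give a flavour of what “safer low-level” looks like in practice, here’s a tiny illustrative Rust snippet (my own example, not tied to any particular project). In C, indexing past the end of an array or using a value after it has been freed is undefined behaviour; Rust turns the first into an explicit, recoverable case and rejects the second at compile time:

```rust
fn consume(v: Vec<i32>) -> i32 {
    // Ownership of the vector moves into this function; its memory
    // is freed automatically when `v` goes out of scope.
    v.iter().sum()
}

fn main() {
    let values = vec![10, 20, 30];

    // Out-of-bounds access is not undefined behaviour: `.get` returns
    // an Option, so the missing case must be handled explicitly.
    match values.get(99) {
        Some(v) => println!("found {v}"),
        None => println!("index 99 is out of bounds, handled safely"),
    }

    let total = consume(values);
    println!("sum = {total}"); // prints "sum = 60"

    // println!("{:?}", values); // would NOT compile: `values` was moved,
    // so a use-after-free is impossible by construction.
}
```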
So there might be many cases where we will be able to use these new languages to achieve better performance and efficiency without having to pay a massive development price for using a low-level language.
And I would go as far as saying that these use cases exist in the industry today, and there’s a staggering lack of talent in these areas.
Why the tension?
So, is there really a tension here between generative AI-driven development and using low-level languages, or are these just two very disjoint things?
I would personally say yes, there’s a tension.
Again, generative AI is pushing us to care less about the details. We trade our time and attention for the ability to focus on business value and let the AI do the rest. This is a trend that has been going on for a while now, and it’s not going to stop anytime soon.
Investing in a low-level language goes in the opposite direction. It’s a bet that we can achieve better performance and efficiency by going lower level and deciding to be explicit about the minutiae of how best to use the hardware.
But, wait… Am I saying that AI is not going to be able to write efficient and hyper-optimised low-level code? 🤔
Maybe! Or, at least, my belief is that, as with any abstraction, there’s always a price to pay. And the price of using AI is that we are going to be less explicit about the details and therefore less efficient.
But I also expect this equation to change with time. As AI improves, it might be able to generate more efficient code. Possibly even better than code we would write manually, even with tons of expertise on our side.
What can we do as software developers
Where does that leave us?
As individual software engineers, we can’t expect to be able to change these trends. We can only try to understand them and adapt.
Investing in learning a new language is a multi-year effort, and although it might be fun (if you are a language nerd like me), it is time that you might be taking away from other activities that might be more rewarding in the long term or just more valuable to you. For instance, you could be learning more about generative AI, right? 🤓
My personal bet is to invest in both! I am currently learning Rust, and I am also trying to keep up with the latest developments in the AI space.
For instance, Eoin and I just released a new episode of AWS Bites where we explore Bedrock, AWS’s generative AI service… Check it out if you are curious to find out what we built with it!
I am not sure how much I will be able to keep up with both, but I am going to try my best.
I tend to be a generalist and it’s only natural for me to try to explore a wide space of possibilities rather than going super deep on one specific topic.
But I am also aware that this is not the best strategy for everyone. So, if you are a specialist, you might want to focus on one of these two areas and try to become an expert in it. It might come with a risk, but it might also come with a great reward.
I am also of the belief that the more we learn, the more we are capable of learning. So regardless of whether you decide to go wide or put all your eggs in one basket, the important thing is to always keep learning and keep an open mind.
If the future takes an unprecedented turn and we all end up writing code in a new language that is generated by AI, I am sure that the skills we have acquired in the past will still be valuable and will help us to adapt to the new paradigm.
What do you think?
So what’s your opinion and what’s your strategy for the future? I’d love for you to strongly disagree with me… or not?! Either way, let me know what you think here in the comments or on X, formerly Twitter.
See you around and happy coding! 🤓