A lot has been happening in the GPT world over the past couple of months. If you have not played around with ChatGPT, been awed by its generative capabilities, or, even worse, have not heard about it, you must be living on a different planet.
The Twitter trends around what people have been asking ChatGPT, and the answers it gives, are fascinating. For most people, the curiosity around GPT has been about asking the tool what jobs it can do and which jobs are at risk in the immediate future. Survival instincts!
‘Hey, nice to meet you, stranger. I see that you seem to have an answer for everything, and I hear you can do lots of things, but just checking, can you do my job better?’
‘Oh, so you are only good at these? Thank God for that.’
If that wasn’t enough, add to that the recent kids on the block – GPT-4 and the launch of Copilot from Microsoft. They write documents, create presentation slides, summarize meeting notes, and answer questions from data, among other things. At least, that’s what the cool demo videos claim.
Just Awesome.
In short, Copilot reminds us of what we as humans were once good at, and what we may have lost in a highly distracted, fast-paced world. It’s a case of lost arts and capabilities, of forgotten basics in a world of misplaced priorities.
Without question, artificial intelligence (AI) is going to have a huge impact on our lives, resulting in increased productivity in various fields, including programming. The many low-code and no-code platforms are examples of this. It no longer takes an army to produce great things these days. However, large enterprises still employ armies and spend a lot of money on modernization efforts.
If not tomorrow, then very soon, AI is going to have a huge impact on pure-play software jobs, and we need to be aware and evolve rapidly, or face irrelevance. There is no rule of nature that states that only humans can write code.
However, there are other factors that still make humans relevant. Simply throwing AI at problems may not always help. Consider Copilot’s ability to analyze data and provide insights as an example.
Ignorant world
As the renowned Hans Rosling argued decades ago, making numbers that much more fascinating with his famous TED talks, we are very much an ignorant world. You can attribute multiple reasons for it, from psychology to the various inherent biases we carry.
This is also very much evident in any large digital, data-driven enterprise. Furthermore, the open data initiatives across the globe have not made us that much more knowledgeable. Despite enormous investments in data gathering and interpretation technologies and capabilities, many organizations still struggle to consistently get the basic facts right. In fact, many of them are probably wrong about many things. Just ask any business team about the timely availability of simple data and insights for decision making. It’s not that we lack the technology to make us smarter today; we choose to ignore it and remain ignorant.
Quality of data
“Garbage in, garbage out” applies to humans and machines equally. With all my experience working with data teams in large enterprises, I would state unequivocally that most information problems in companies are data quality or metadata (read: understanding) problems. Adding more technology to the scheme of things in the name of digitization has only complicated matters.
This very much applies to any GPT model, and it is widely evident in the inherent bias of some of the responses that ChatGPT or any GPT model provides. ‘A machine getting biased already?’ Yes, if that’s what it was trained on. As they say, ‘There are lies, damned lies and statistics.’ Now we can add ‘GPT’ to that list.
Human gut (and perhaps stupidity)
It’s not that companies have not been spending enough on data and insights. Big Data and Analytics is apparently a $250B industry across the globe.
In my early days doing data science work, I always used to get excited about the output from forecasting and regression models, and with all fervour, I would run to share the results with my business folks, mostly to the reaction of ‘Hmm. I already know this. So what?’ Very rarely would my results lead to any ‘That’s interesting’ moments.
When they did occur, it was usually because technology made it possible to process a large volume of data, or data that humans simply did not have access to. Otherwise, all the analysis and insights merely confirmed what the business teams or senior management already knew, or reinforced their beliefs. I wouldn’t call it confirmation bias, but would rather believe that, with years of experience, the human gut knew better.
Or, on the flip side, it is not that SVB or Credit Suisse lacked the smartest quants in their risk management teams, or the technology to build models showing what was bound to happen. Still, things (maybe not unforeseen) happen.
So, what am I saying?
Ever since the days when computers first arrived on the scene to take away many human jobs, technological evolution has created new opportunities for humans to stay relevant and focus on higher value-added work. New paradigms will continue to emerge, and we will continue to evolve.
Yes, GPT is exciting. We are always fascinated by anything that is “generative”. It is almost like magic, pulling something out of thin air. The first wave of conversational AI, primarily driven by chatbots, left its own share of unpleasant user experiences, with machine-like, transactional responses. A more descriptive response is indeed fascinating. There is no doubt that AI will deliver Nx productivity gains in many of the tasks humans do today. Content generation could be one of the first casualties of ChatGPT. However, there are still areas that are heavily human-centric and will continue to be driven purely by intuition and creativity.