
More impacts of AI in 2024

Since last semester, I have asked my MBA students to write essays on a management topic using ChatGPT. Several essays were excellent, while most others made for dull reading. Many students were able to submit excellent essays despite limited proficiency in English: ChatGPT was clearly helping them write business reports and analyses even though English was their second language.

In 2023, I met many entrepreneurs applying AI models to business uses such as sales, marketing, and stock investment decisions.

Many articles and analyses predict that AI will have a greater impact in many areas and will upend specific sectors in developed economies and in developing countries in Asia and Africa.

The London Telegraph, in an article by James Titcomb published on December 28th, 2023, noted that employees at OpenAI did not expect much on November 30, 2022, when the company unveiled a “low-key research preview” called ChatGPT.

Greg Brockman, OpenAI’s president, told staff that it wouldn’t have much of an impact on day-to-day business, confidently forecasting that it would only get noticed in a few nerdy corners of Twitter.

It quickly became obvious that this was a wild underestimate. Millions of users signed up within days and ChatGPT was dubbed the most important technology in a decade, leading to a worldwide fervor about artificial intelligence.

Employees could be forgiven for failing to predict its popularity, though. ChatGPT, with its ability to conjure up essays and arguments, may have astonished its early users, but to its developers, it was positively medieval. 

The underlying AI system it was based on, known as GPT-3.5, was almost a year old. The company had already developed its successor, GPT-4, and was preparing to release it to the public.

OpenAI described it as being 10 times more advanced, saying it could understand not only text but also images, and could pass legal examinations.

Now, just over a year later, the company is taking its first steps toward a vastly more powerful system.

OpenAI chief executive Sam Altman has warned of AI’s existential risk to humanity. Those who worry that AI is an existential risk fret that new systems are being developed before we have got our heads around the existing ones.

Either way, the release of GPT-5 is expected to be the AI event of 2024.

Developing computer software is typically a case of tweaking previous versions to eke out small improvements.

Creating new AI systems – known as large language models – is often a case of starting again. An unprecedentedly vast amount of data is thrown at an unprecedentedly powerful system of next generation microchips, resulting in a model several times more powerful than what came before.

GPT-1, the primordial model created in 2018, had 117 million parameters – the internal settings that determine a model’s output. GPT-3 required more than one thousand times that, at 175 billion, and GPT-4 was another 10-fold increase, at a reported 1.7 trillion.

The computing requirements have increased too. GPT-4 reportedly required 16,000 high-end Nvidia A100 chips, against 1,024 for the previous generation. Little is known about the next wave of models, but they are certain to be trained on Nvidia’s new H100 chips, a vastly more powerful successor that is the first to be specifically designed for training AI models.

“The history of computer science and AI has been that increased scale results in substantial improvements,” says Oren Etzioni, the former chief executive of the Allen Institute for AI.

“The step up from GPT-3 to GPT-4 was so dramatic, that you would be a fool not to try it again.”

Google, which unveiled its new model Gemini in December, is preparing to release the more powerful Gemini Ultra in the new year. Anthropic, the Amazon-backed AI lab, may also launch a new system.

Scientists are divided, though, on exactly what more powerful will mean. Today’s large language models are approaching the upper limits on certain tasks. Google’s Gemini already outperforms humans on a widely used language comprehension test and on computer programming exams.  

That does not make it any less prone to common criticisms of today’s AI models: that they lack creativity, only regurgitating what they have been fed; and that they have a poor understanding of truth, making them prone to “hallucinating” facts.

Experts such as Nathan Benaich, the founder of investment firm Air Street Capital and co-author of the annual State of AI report, say the next generation of systems will be “multimodal” – capable of understanding text, images, videos and audio. That, he says, will bring them closer to understanding the world.

Demis Hassabis, the head of Google DeepMind, has said this could come to include sensations such as touch, which could lead to the systems being embedded in robots that can understand the world.

The next wave of models could display capabilities akin to reasoning and planning – qualities that we might associate with human intelligence.

AI that can switch from one task to another would be a step towards autonomous “agents” – systems that can carry out tasks on people’s behalf, such as booking a holiday or reading and answering emails.

The consequences of that could be profound. While today’s AI systems have threatened to take jobs in areas like copywriting and design, they must typically be chaperoned through the writing or illustrating process. Those that can turn their words into action – a customer service bot that can book flights, for example – would be more threatening.

These predictions are largely guesses, however. And even today’s AI models are too complex to completely understand.

This is one of the reasons the next wave of models will face increasing government scrutiny. Nine companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, Mistral, OpenAI and Elon Musk’s xAI – have agreed to have their systems tested by the UK government’s AI Safety Institute before they are released.

Companies have signed up to similar commitments with the White House in the US. The most advanced version of Google’s Gemini model is believed to be going through screening by officials ahead of its upcoming release.

Equally, the next wave of AI systems could prove to be a bust. Sceptics believe that most of the low-hanging fruit has already been picked, and that improvements from this point will be marginal no matter how much computing power is deployed.

But if the capabilities of next year’s models remain unknown for now, it seems certain that existing AI technologies will become more widely used.

In 2023, AI may have captured the popular imagination, but it might not be until 2024 that its impact really starts to be felt.

AI models are likely to provide solutions to problems that small- and medium-sized companies typically face every day: high staff turnover, a lack of skills and available manpower, sales staffing, accounting, and compliance.

In 2024, my company, Bison Consulting, will be working with AI partners to offer services using AI models.

We could learn from my wife’s “steno moment”. In the 1980s, many young women in small towns learned shorthand to become stenographers. When the Wang word processor was introduced, demand for stenographers disappeared, and many shorthand schools closed. Today, there is no position called stenographer in firms.