ChatGPT and the future

Is ChatGPT a friend or a foe when it comes to knowledge work in law firms? Alex Smith, global product management lead at iManage, examines both sides of the coin.

Alex Smith

iManage offers leading-edge document management, data security and AI solutions

To find out more, visit https://imanage.com/

A recent Bloomberg report found that 30% of professionals are experimenting with the capabilities of ChatGPT. Its simple interface and cost-free nature have made it popular across many professions, whether for drafting emails or generating code at speed. With its familiar chat-box input and cursor that scrolls across the page, it’s almost like there’s a human at the other end (or is that a bot?).

One way or another, ChatGPT will have an impact on knowledge work; indeed, we may already have consumed its generated content without knowing it. Large language models have made huge advances in stringing words together into complex sentences that give the impression of intelligence, like a typeahead on steroids. But models like ChatGPT do often get things wrong, and when they do, they go ‘all in’ on those alternate facts and realities. In an entertaining piece of marketing, the teams behind them have labeled these bugs “hallucinations.”

A token effort in stopping burnout

Ask ChatGPT to draft a fully bulletproof contract for a Sydney real estate purchase and it’s like watching a lawyer on fast forward after four double espressos. When ChatGPT generates its output, it isn’t intentionally mimicking human typing (unless it’s programmed to slip in the odd mistake to make the text appear more human). It’s actually processing chunks at a time using a token system: text is broken down into multiple “tokens”, and the model repeatedly predicts which token it should output next.
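
To make that concrete, here is a minimal Python sketch of tokenization using OpenAI’s open-source tiktoken library. The sample sentence and the choice of encoding are illustrative only; exact splits vary by model.

# Illustration of how text is split into tokens, using OpenAI's
# open-source tiktoken library (pip install tiktoken).
import tiktoken

# cl100k_base is the encoding used by GPT-3.5/GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Draft a contract for a Sydney real estate purchase."
token_ids = enc.encode(text)
print(f"{len(token_ids)} tokens")

# Decode each token individually to see where the text was split.
for token_id in token_ids:
    print(token_id, repr(enc.decode([token_id])))

Each of those token IDs is what the model actually predicts, one at a time; the cursor-like, streaming output in the chat window is simply each predicted token being decoded back into text as it arrives.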

If we see hesitation from a colleague on Slack or Microsoft Teams, we may think that they are unsure of their response, or thinking it through. It’s impossible to identify hesitance or lack of confidence in generated answers from ChatGPT, because its language model is incapable of doubting its abilities. Human workers are another matter: a recent survey of 9,615 global knowledge workers found that seven in ten have experienced imposter syndrome, and that 42% had experienced both imposter syndrome and burnout.

In an industry founded on the interpretation of very solid pieces of truth, like laws, case law, and citable material, there’s a real danger that ChatGPT could create vastly more work than it helps complete. Although it’s capable of sorting through huge amounts of data to filter out irrelevant information, generative AI is prone to errors, and those errors need to be caught by human intervention. That could really add to the burnout statistics.

The human component

For a new generation of knowledge workers, ChatGPT could be one of many tools in the toolbox, helping to fill the collaboration gaps left by remote working during Covid. But for knowledge work overall, humans will remain very much part of the equation, working alongside any supporting technologies to make sure legal professionals have all the resources they need to do their jobs effectively.

ChatGPT won’t be replacing, and isn’t capable of replacing, human lawyers’ skills in providing advice and solving problems any time soon. Generative AI may serve as a useful starting point, but when it comes to generating highly accurate, fact-based documentation, in the words of A Few Good Men, ChatGPT “can’t handle the truth!” Not yet, anyway.