A team of researchers from University College Maastricht recently published a study examining the use of GPT-3 as an email manager. As someone with an inbox that can only be described as ridiculous, I am intrigued.
Can GPT-3 help me?
The big idea: We spend hours a day reading and answering emails. What if an AI could automate both processes?
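To make the idea concrete, here is a minimal sketch of what "answering email with GPT-3" might look like in practice: assembling an incoming message into a prompt for a text-completion model. The function name and prompt wording are my own illustration, not something from the paper, and the actual model call is omitted.

```python
def build_reply_prompt(sender: str, subject: str, body: str) -> str:
    """Assemble a prompt asking a text-completion model
    (such as GPT-3) to draft a reply to an incoming email."""
    return (
        "Write a brief, polite reply to the email below.\n\n"
        f"From: {sender}\n"
        f"Subject: {subject}\n"
        f"Body: {body}\n\n"
        "Reply:"
    )

prompt = build_reply_prompt(
    "alice@example.com",
    "Meeting reschedule",
    "Can we move Thursday's call to Friday morning?",
)
print(prompt)
```

The completion the model appends after "Reply:" would then be the draft response, which, as discussed below, someone would still have to proofread.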
The Maastricht team explored the idea of letting GPT-3 loose on our email systems from a pragmatic point of view. Rather than focusing narrowly on how well GPT-3 can respond to particular emails, the team assessed whether it would make sense to try at all.
Their paper (read here) breaks down the potential effectiveness of GPT-3 as an email secretary by examining how useful it is compared to fine-tuned models, how financially viable it is compared to human workers, and how harmful machine-generated errors could be to both sender and recipient.
Background: The quest for a better email client is endless, but this idea ultimately comes down to getting GPT-3 to respond to incoming emails. According to the researchers:
Our research shows that there is a market for GPT-3-based email rationalization across several economic sectors, only a few of which we will examine. In all sectors, the harm of a small wording error appears to be minor as the content generally does not involve large amounts of money or human security.
The authors describe use cases in the fields of insurance, energy and public administration.
Objection: First of all, it should be pointed out that this is a preprint. Often that means the science is sound but the paper itself is still under revision. This particular paper is a little messy right now. For example, three separate sections contain the same information, making it difficult to grasp the point of the study.
The paper seems to suggest that it would save us time and money if GPT-3 could be applied to the task of replying to our business emails. But that's a gigantic "if".
GPT-3 lives in a black box. A person would have to proofread every email it sends, as there is no way to be certain it isn't saying something that invites litigation. Aside from fears that the machine might generate offensive or incorrect text, there is also the problem of figuring out how good a general-knowledge bot would be at the job.
GPT-3 was trained on the internet, so it might be able to tell you the wingspan of an albatross or the winner of the 1967 World Series, but it certainly can't decide whether you want to sign a birthday card for a co-worker or whether you're interested in chairing a new subcommittee.
The point is, GPT-3 is likely to be worse at responding to general emails than a simple chatbot trained to pick a pre-generated response.
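For contrast, the kind of simple chatbot meant here can be sketched in a few lines: score each canned reply's trigger keywords against the incoming email and pick the best match. The keywords and replies below are invented for illustration.

```python
# Map trigger keywords to a pre-generated reply.
CANNED = {
    ("invoice", "payment", "billing"): "Thanks, we've forwarded this to billing.",
    ("meeting", "schedule", "call"): "Happy to meet. Please suggest a time.",
    ("password", "login", "access"): "Please use the self-service reset link.",
}
FALLBACK = "Thanks for your email. We'll get back to you shortly."

def pick_reply(email_text: str) -> str:
    """Return the canned reply whose triggers best match the email."""
    words = set(email_text.lower().split())
    best, best_hits = FALLBACK, 0
    for triggers, reply in CANNED.items():
        hits = len(words & set(triggers))
        if hits > best_hits:
            best, best_hits = reply, hits
    return best

print(pick_reply("when can we schedule the next call"))
# → "Happy to meet. Please suggest a time."
```

Unlike a generative model, this responder can never produce offensive or legally risky text, because every possible output was written by a human in advance; the trade-off is that it can only handle the cases someone anticipated.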
Quick take: A little googling tells me that the landline phone didn't become ubiquitous in the US until 1998. And now, just a few decades later, only a tiny fraction of US households still have one.
I wonder whether email will remain the standard for communication much longer – especially if the latest wave of innovation is focused on finding ways to keep us out of our own inboxes. Who knows how far we could be from a hypothetical version of OpenAI's GPT that is trustworthy enough to be worth using at any commercial level.
The research here is commendable and the paper makes interesting reading, but ultimately the usefulness of GPT-3 as an email responder is purely academic. There are better solutions for inbox filtering and automated response than a brute force text generator.
Published on February 8, 2021 – 20:17 UTC