Are the robots coming for our freelance gigs?
Hopefully not, because while large language models like ChatGPT can do extremely impressive things, they can't add value in the uniquely human way that we can - however quickly they can spit out paragraph after paragraph of apparently impressive copy, or uniquely creative artwork.
After all, we've all been interacting with machine learning tooling for years, across different applications, and it has already abstracted away some entire job roles.
For the most part these were dull and tedious jobs in any case - if you're old enough, do you remember calling round different car insurers once a year, to try and get a better quote? Each call would involve a lengthy conversation about your demographics and driving habits with a call centre worker, who had to read you verbatim terms and conditions and a quotation number, before you rang the next one. Now we just pop on a price comparison website and generate rafts of hopefully competitive deals.
We don't weep for those call centre workers, because they're hopefully doing less soul-destroying work - perhaps providing second-level customer service or claims analysis, once another automated bot has filtered out the routine enquiries from their queue.
Large language models have naturally captured the public imagination in new ways, but as this week's podcast episode discusses, they have huge flaws and risks in professional use.
One of the biggest of these is simple inaccuracy.
If a model doesn't have a ready answer, it makes one up - instantly, highly plausibly, and frequently imperceptibly - reminding you that its real skill is predicting the next word, phrase, sentence, or paragraph that will follow well from the prompt. And it will weave those fabrications smoothly into its responses in ways you'd never unpick.
To demonstrate this, I asked GPT-3.5 (the current default model): “What were Maya Middlemiss's early career influences, and to what extent do they reflect in her contemporary writing?”
Here's what it wrote:
It spat these paragraphs out in seconds, and it reads quite well.
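If you want to run the same experiment yourself, the conversational ChatGPT interface is all you need - but for the curious, here's a minimal sketch of sending that prompt programmatically with OpenAI's Python client. This is purely illustrative and not how the original test was done; it assumes the openai package is installed and an API key is available in your environment.

```python
# Minimal sketch: send the same question to GPT-3.5 via the OpenAI Python client.
# Assumes `pip install openai` and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "What were Maya Middlemiss's early career influences, and to "
                "what extent do they reflect in her contemporary writing?"
            ),
        }
    ],
)

# The reply arrives in seconds, fluent and confident - whether or not it's true.
print(response.choices[0].message.content)
```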
The thing is, the areas highlighted in yellow are basically correct, and easy to confirm from public domain information that was presumably part of its training dataset (huge swathes of internet content, up to September 2021).
The phrases highlighted in cyan, however, are completely and utterly made up.
Looking at how the colours shift, you can see it started from what it found already, and then... just kinda riffed on from there.
It ended up creating a completely new career for me, by the end of the piece. I have never written one word about gardening or growing food, and any bot which really knew me would be well aware I can't keep a houseplant alive for a month, let alone live sustainably off it.
So that's why this week's podcast episode unpacks the huge risks of passing off AI-generated content as your own, even if you were somehow okay with the ethics of it.
However, I also explore the ways these tools can add value to your business as a solopreneur, such as:
🤖 Automating your schedule and enhancing time management
🤖 Supporting your research and personal knowledge archiving
🤖 Enabling you to collaborate effectively with associates in different disciplines
🤖 Analysing your own business intelligence and research data
AI tools, like all good tools, can make you more effective and productive at what you already do - provided you take the time to understand and master them.
At the same time, you also need to maintain your skills, knowledge, and due diligence, to be able to continue offering your clients the uniquely human attributes you bring to the table - such as reliability, being great to work with, and alignment with their business and personal goals.
Focusing on what you do best will help you stay relevant and effective, while you master the latest tools to automate the repetitive and less interesting activities of any business - the ones an AI can do better.