How one bad AI prompt could send you bankrupt


It’s easy to think artificial intelligence tools, like ChatGPT or Google Gemini, are just another part of your business toolkit, much like Excel, PowerPoint or Xero. And in some ways they are, but in other ways, AI is something completely different.


With Excel, there’s a direct correlation between your inputs and outputs, and so long as your data is correct, you can rely on what the spreadsheet tells you.

AI isn’t like Excel. It makes things up. It can potentially infringe copyright, and it can tell your customers things that are completely wrong. And because of the speed at which AI generates responses, and the believable tone it wraps those responses in, errors can propagate at scale.

It’s not a stretch to say one bad prompt could have significant financial implications for your business. It might even send you bankrupt.

AI is not an intern. It’s worse

SME owners and managers are time-poor. They’re expected to wear many hats, and with the rise of GenAI, an AI expert is one of the hats you’re going to have to wear.


“We don’t have the same relationship with AI we have with other apps or even other kinds of technology,” says Anna Harrison, founder and CEO of RAMMP, a digital consulting firm specialising in AI.

The line between human and machine, Harrison says, is blurry, and while AI can create something that’s 95% ready to ship, it’s the last 5% where business owners get tripped up.

“I think the temptation is to take this very powerful tool and give it to an intern who’s earning $20 an hour and tell them to go create your social media comms for the year, and the problem is an intern just doesn’t know what good looks like.”

And this is exactly where the many-hats conundrum comes into play. The business owner hands the comms over to the intern. The intern uses AI and the response is off, but the business owner, busy and time-poor, doesn’t vet the output before the comms are programmatically distributed.

Cue a heart attack in the MD’s office.


Steven Bahbah is the managing director of Service First Plumbing, a Sydney-based plumbing firm specialising in commercial and residential work. Bahbah and his team make extensive use of AI to generate advertising and ad copy, and to create other business efficiencies. He’s had direct experience of the downsides of treating AI output as gospel.

“We use AI every day to help with scheduling, ad copywriting, and customer responses,” Bahbah says. After a recent experience with incorrect AI-generated ad copy, he and his firm are now extremely careful to have humans vet everything created by AI for accuracy and truthfulness.

The problem was caused by a poorly written prompt, he says, which generated an ad exaggerating Service First’s offerings.

“This ad could’ve been interpreted as misleading, and we were lucky because we caught it before publication, but it also highlighted how one vague instruction could easily lead to legal or reputational damage,” Bahbah says.

The mistake many SME owners and managers make, he says, is equating AI’s automation with accuracy.

“The reality is AI can generate false or biased information. The safeguard is human verification,” he adds, “and to treat AI’s work as a draft, not a decision”.

Ultimately you, the business owner, are responsible for AI’s outputs

No one reads software licences, terms or conditions. Who has time to read multiple pages of dense legalese when there are more important things to do?

But with software, understanding where the toolmaker’s responsibilities end and yours start is something every SME owner or manager should be across, especially when it comes to AI.

As an aside, it’s also worth understanding your cloud service provider’s T&Cs as well, which more or less comes down to “all care, no responsibility.” If something goes wrong, it’s you, the customer, who’s left cleaning up the mess.

The same goes for AI tools. If your AI platform of choice generates an output getting you into legal or reputational hot water, then the service provider will echo Patsy, from Absolutely Fabulous, whose response when someone caught her with drugs was: “not my baggie”.

Those AI terms and conditions mean it’s up to you to ensure the outputs are correct. No one else, and especially not the platform provider, will hold the baggie for you.

But don’t take Patsy’s word for it. Dan Pearce is a general counsel at national law firm Holding Redlich, and his word probably carries more weight than that of a fictional PR executive from a 90s British sitcom.

“If your business is utilising AI tools to help it deal with customer enquiries, you should be alert to the potential liabilities that may arise from the response generated by the AI tool,” Pearce says.


“Ultimately, you are likely to be held responsible if the response gives rise to a claim, for instance, because it is inaccurate or misleading, or defamatory, or infringes another person’s copyright.”

Pearce also reiterates that toolmakers take an “all care, no responsibility” approach, noting “most providers of such tools will exclude any liability for the outputs the tool generates”.

In short, he says, SME owners and managers should always undertake a risk assessment of any AI tool used in their organisation. If AI-generated content on your website defames someone or infringes copyright, simply saying AI did it is not a complete defence.

“If the tool generated a response about your products that was misleading, consumers may have rights under the Australian Consumer Law after buying the products and seeing they didn’t match the description.”
 
Pearce says the financial implications of any of these types of claims could be significant, and in the worst case, could affect the future of your business.

“Content that appears prominently on your public-facing website or social media page will not be immune from consequences just because an AI tool created it, and even using disclaimers will not guarantee full protection.”

