Employers warned to guard against “Botsh*t” from “hallucinating” chatbots

Workers using chatbot-generated content that is factually inaccurate are likely to produce “botsh*t” that poses a risk to their organisation’s output and work, Bayes Dean Professor Andre Spicer has warned.

Bayes Dean Professor Andre Spicer and Canadian counterparts have developed a framework aimed at managing the workplace risk of errors produced by “hallucinating” chatbots that are primed to “predict” rather than “know”.

Writing in the journal Business Horizons, they warn that workers and managers are using content generated by ChatGPT and other chatbots without considering how important accuracy is in their industry.

This risks workers using unverified AI-generated content to produce what the academics call “botsh*t”.

They draw a distinction between such content and “bullsh*t” – with the latter being inaccurate content created and shared by humans.

Co-author Professor Tim Hannigan, from the University of Alberta, said: “The word ‘bullsh*t’ is now accepted in management theory for understanding how we can comprehend, recognise, act on and prevent acts of communication that have no basis in truth…Workplace decision-making based on uncritical acceptance of AI ‘hallucinations’ will also inadvertently result in errors – hence our suggestion of the term ‘botsh*t’.”

As well as assessing the risk of staff relying on potentially inaccurate information, the academics argue, employers should consider how easily staff can verify the accuracy of the AI-generated content they use.

Predicting, not knowing

Co-author Professor Andre Spicer, Dean of Bayes, said: “Generative chatbots can ‘hallucinate’ untruths because they are designed to predict responses rather than understand the meaning of these responses. Like a parrot, these chatbots excel at regurgitating learned content without comprehending context or significance.”

The academics suggest such content is best seen as “provisional knowledge” that should be confirmed before it is used in the workplace.

Professor Hannigan said: “Organisations need to appreciate that these tools are fallible – which is where our concepts such as hallucinations and ‘botsh*t’ come in. On the flip side, there are processes and guardrails that can be implemented to use generative AI in a productive manner. This technology is not inherently good or bad: it is a tool and needs to be used in a careful manner. Our paper helps guide managers to find these more productive uses of AI tools in practice.”

Corresponding author Professor Ian McCarthy, from Simon Fraser University in Vancouver, said: “Employers should have a code of conduct that clearly sets out how chatbots should be used in the workplace – with an emphasis on accuracy and integrity. Staff need to be trained on the capabilities and limitations of chatbots and encouraged to fact-check and deploy critical thinking. We need a workplace culture where chatbot users have the courage and responsibility to speak up and question the veracity of the chatbot responses they are working with.”

A framework approach

Drawing on risk management research, the academics developed a framework that identifies different types of chatbot work, each of which carries a “botsh*t”-related risk. The risks include:
* Ignorance of the technology’s potentially useful and harmful outputs
* Workers systematically trusting or distrusting chatbot responses, meaning they either do not seek to authenticate them or, conversely, dismiss them as unreliable
* Bored workers engaged in repetitive work with chatbots “falling asleep at the wheel”.

In the paper, the trio point to high-profile examples of professionals coming unstuck after relying on chatbot outputs. These include two lawyers fined by a federal district court in New York for submitting a legal brief containing fictitious cases and citations generated by ChatGPT. An Australian parliamentary committee also published a submission containing AI-generated content that falsely accused several companies of involvement in non-existent scandals.