What is AI?
Wikipedia defines “Artificial Intelligence” as “the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making”.
The term can cover a wide range of technologies developed over the last few decades: calculators, chess-playing programs and the “paperclip helper” in Microsoft Word, for example. These are all types of “intelligence”, even if they have a more “machiney” feel to them (as opposed to appearing more “human”). All of them arose from the development of transistors and computer chips.
However, the term “AI” has recently become associated with newer technologies which, while still requiring computer chips to work, aim to mimic human ability: neural networks, machine learning software and Large Language Models (LLMs), for example. These typically involve a computer model that has been trained on an enormous amount of data and can then be queried to help find a solution to a problem.
AI in the Workplace
Many workplaces are now being radically affected by the adoption of AI tools. Despite all the hype about their “amazing value”, these have not been universally welcomed by employees. The imposition of AI mandates within companies is creating a backlash, with many people feeling demoralized and devalued by fundamental changes which they are being asked to support.
This issue has come to the fore in roles where creativity and problem-solving play a major part, such as graphic design, copywriting and computer programming. Due to the imposition of AI, some employees are looking to switch companies or move into a different field of work.
The use of AI tools doesn’t necessarily speed things up either, and they all come with their own problems and limitations.
Those of us who enjoy our work and who oppose the autocratic imposition of AI need to join together to fight back. We need to do what we can to promote a sensible approach that respects both our autonomy and our talents.
What is a Prompt Monkey?
A Prompt Monkey is a computer user who has turned from an intelligent, thinking, creative and resourceful problem solver into a mere “manager” for an AI engine (such as ChatGPT, to use an obvious example). Whatever they want done, they feed their instructions into a prompt box, wait for the response, then cut and paste the output where it needs to go – sometimes without even checking it.
Agentic AI
An AI Agent is a software system that interacts with an AI engine. It submits a prompt, then, based on the engine’s response, decides what to do next: either submit another prompt (including information from the conversation history if required) or, if certain criteria are met, provide the final output to the user. It may also perform other tasks along the way: for example, searching the internet, examining a website or sending an email.
Using an AI agent is a way of asking a computer system to undertake a series of tasks, which usually include the agent interacting in some way with an AI engine. Although it offers the user the opportunity to automate a task that they would otherwise do themselves, the agent has to be constructed before it can attempt what’s required. It may need further user input along the way, and of course the output has to be checked once it has been generated.
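The decision loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a real agent: the engine call and the side task are stubbed with canned functions, and all the names (query_engine, run_tool, run_agent) are invented for the example.

```python
def query_engine(prompt, history):
    """Stand-in for a call to a real AI engine (e.g. over an HTTP API)."""
    if "search:" in prompt:
        # Pretend the engine asked for a tool to be run.
        return {"action": "tool", "tool": "search",
                "arg": prompt.split("search:")[1].strip()}
    return {"action": "final", "text": f"Answer based on {len(history)} prior turns."}

def run_tool(name, arg):
    """Stand-in for a side task such as an internet search or sending an email."""
    return f"[results of {name} for '{arg}']"

def run_agent(task, max_turns=5):
    """Submit prompts in a loop until the stopping criterion is met."""
    history = []
    prompt = task
    for _ in range(max_turns):
        response = query_engine(prompt, history)
        history.append((prompt, response))
        if response["action"] == "final":   # criterion met: hand back the output
            return response["text"]
        # Otherwise perform the requested task and feed the result back in.
        prompt = run_tool(response["tool"], response["arg"])
    return "Gave up after max_turns."       # safety limit on the loop

result = run_agent("search: example query")
```

The loop is the whole idea: prompt, inspect the response, act, and repeat until a stopping condition fires (or a turn limit is hit, so a confused agent can’t run forever).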
Performing an AI Test
Most people will have discovered by now that LLMs often produce faulty output. For example, go to ChatGPT and type in something like “tell me about w (person) who graduated from x (university) in y (year) in z (subject)”. Sometimes what you get back will be a mixture of information about two (or more) different people, some of it faulty, along with an incorrect photograph. To get accurate information you would have to do your own searching on Google and other websites. The LLM fails because it is patchy at linking up data properly – which isn’t surprising, since it can’t actually “think” at all.
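One way to keep such a test honest is to script both the probe and the fact-check rather than eyeballing the reply. A minimal sketch in Python: the function names (build_probe, check_reply) are invented, and the sample reply stands in for whatever text the engine actually returns.

```python
def build_probe(person, university, year, subject):
    """Build the test prompt described above from its four variables."""
    return (f"Tell me about {person}, who graduated from {university} "
            f"in {year} in {subject}.")

def check_reply(reply, verified_facts):
    """Return the independently verified facts the reply fails to mention."""
    return [fact for fact in verified_facts if fact.lower() not in reply.lower()]

prompt = build_probe("Jane Doe", "Example University", "2004", "physics")
# Paste the prompt into the engine; suppose it comes back with this:
reply = "Jane Doe graduated from Example University in 2001 in chemistry."
missing = check_reply(reply, ["2004", "physics"])
# 'missing' lists the verified facts the reply got wrong or omitted
```

The check is deliberately crude (substring matching), but it makes the point: the facts you test against must come from your own searching, not from the engine itself.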
Responding to AI
AI can be useful in certain circumstances. For example, automating routine tasks or helping to find solutions to problems in disciplines such as medicine or agriculture. One application of AI is to analyze thousands of images from a medical scan. This is obviously of great use, as a human would take a lot longer to analyze the same data.
The problem is not with such applications. It’s when company executives push the general use of AI into every aspect of daily work, to “make everything more efficient” – mandating the use of AI tools such as ChatGPT or Claude Code to do the majority of the work. The end goal of making everyone’s work “more efficient” is, of course, to increase output and therefore profits. It’s the same old story: bosses wanting to squeeze more and more out of employees. Furthermore, CEOs are being fed inaccurate claims by the tech “gurus” pushing AI. AI is not a magic potion that will rewrite an entire company’s software application (often developed over years or decades) in just a few months with a bit of human prompting.
The key issue is that the use of AI should never turn into a substitute for doing one’s own work or applying one’s own mind to a problem. By imposing AI, CEOs are doing their employees a disservice. Our minds were created to be used.
So if you are labouring under an AI mandate, resist turning into a prompt monkey. Instead, aim to do whatever is required in order to reclaim your self-worth.
