Articles

Artificial Intelligence Insights

What exactly do we mean by "AI"?

AI

Artificial Intelligence (or simply AI) has been the buzzword of the moment since around 2022, when ChatGPT and other chatbots made their way into common use. However, Artificial Intelligence originally referred to a computer's ability to simulate human-like intelligence, something that has existed within gaming for many years. Any programmed responses that are informed by and adapt to user input, such as the way a player interacts with a game, were known as AI. Famous examples include the FPS videogame F.E.A.R. (First Encounter Assault Recon), where the CPU-controlled enemies dynamically form strategies against the player, and Metal Gear Solid V: The Phantom Pain, where the CPU-controlled enemies adapt to your gameplay style if you use a familiar strategy repeatedly, e.g. the soldiers wear helmets if you score too many headshots.
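
This kind of input-driven adaptation can be sketched very simply. The snippet below is my own toy illustration, not code from any actual game engine, and the function and threshold are made up for the example:

```python
# Toy sketch of adaptive game "AI": enemies notice a repeated
# player tactic and change their loadout in response.

def adapt_enemy_loadout(recent_kills, headshot_threshold=3):
    """Return extra gear for enemies based on the player's recent habits."""
    headshots = sum(1 for kill in recent_kills if kill == "headshot")
    gear = []
    if headshots >= headshot_threshold:
        gear.append("helmet")  # counter a headshot-heavy playstyle
    return gear

# A player who keeps scoring headshots triggers the counter-measure.
print(adapt_enemy_loadout(["headshot", "body", "headshot", "headshot"]))
# → ['helmet']
```

Real games layer many such rules (and full planning systems, in F.E.A.R.'s case), but the principle is the same: the program's behaviour is a function of how you play.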

These days AI is used much more to refer to chatbots and any computer program that appears to simulate human-like responses, even if those responses are scripted and do not adapt to the way we interact with them. Chatbots such as ChatGPT, DeepSeek and Gemini are known as Large Language Models; they use machine learning algorithms to parse training data and produce human-like responses to our inputs on the fly. They require vast amounts of computing power, so all of your inputs, whether audio or typed commands, are sent to a remote server for processing and the result is sent back downstream to your device as an output. These servers use CPUs and GPUs working in tandem in massive server farms to perform their calculations, and these farms gobble up excessive amounts of energy and water, with negative impacts on the environment.
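
That round trip is worth seeing concretely: your prompt never runs on your device; it is packaged as a request, sent to a remote server, and only the finished text comes back. Here is a minimal sketch of the request-building step, where the field names and model name are hypothetical, not any real provider's API:

```python
import json

def build_request(prompt, model="example-model"):
    """Package a user prompt as the JSON body an LLM service might expect."""
    return json.dumps({
        "model": model,  # which hosted model should answer
        "messages": [{"role": "user", "content": prompt}],
    })

# This JSON string is what would travel upstream to the server farm.
body = build_request("Summarise this article in one sentence.")
print(body)
```

Everything after this point, the actual "thinking", happens on the provider's GPUs, which is exactly why the energy and water costs land on their datacentres rather than on your laptop or phone.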

Despite this, since 2022 these tools have become services available for everyday use in the following areas:

  • Content Creation: Drafting emails, reports, marketing copy, and creative writing.
  • Information Retrieval: Summarizing complex documents, answering questions, and researching topics.
  • Coding Assistance: Generating code snippets, debugging, and explaining programming concepts.
  • Language Translation: Real-time translation and localization of content.
  • Customer Support: Powering chatbots and virtual assistants for instant responses.

The Rise of Large Language Models

The most popular AI tools in common use are Large Language Models (LLMs), such as OpenAI's ChatGPT and Google Gemini, which allow users to interact using natural language. LLMs have rapidly transformed from theoretical concepts into powerful, accessible tools. Driven by advancements in neural networks and vast datasets, they can now understand, generate, and process human-like text with unprecedented fluency. This evolution has democratised access to advanced AI capabilities, empowering individuals and businesses across various sectors.
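
At their core, these models do one thing: predict the next word (token) from the words seen so far, using statistics learned from training text. The toy below is my own drastically simplified illustration, a word-pair (bigram) counter rather than a neural network, but the "predict what comes next" idea is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word` in the training data."""
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # → cat  ("cat" follows "the" most often)
```

A real LLM replaces these raw counts with billions of learned parameters and a context far longer than one word, which is where the fluency (and the computing cost) comes from.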

I've put together a simple guide to the top 6 popular models, with prompting tips, here

Top AI Companies by Market Capitalization as of 2025
Company     Market Cap (USD)    Primary AI Focus
Microsoft   $3.1T               Cloud AI, GPT
NVIDIA      $1.5T               AI Hardware
Alphabet    $1.7T               AI Research
Meta        $1.2T               AI Applications
Amazon      $1.8T               Cloud AI

Note: The GPU manufacturers, such as NVIDIA, are the only profitable companies in the AI space. While the companies above are the biggest in AI development, their business models currently and historically have relied on other services for their market capitalisation. Due to their massive asset base, they are almost entirely propping up the AI space financially, allowing companies like OpenAI and Anthropic to focus on building their LLMs and training data. However, DeepSeek is a worthy competitor coming out of China.

Image & Video Generation

While there are image-generating models such as DALL-E and Midjourney, I will never recommend using these tools due to ethical and copyright concerns. As a creative, I also have a personal contention with duplicating and imitating another artist's style without putting in the technical groundwork to actually reproduce it, or the respect of crediting the originators. I have also found these AI tools lacking in quality: AI-generated content is currently very easy to spot, which breaks immersion and any belief that human skill went into its creation.

LLM Limitations

While powerful, free LLM tools come with inherent limitations, from usage caps to occasional inaccuracies when responding to prompts. Understanding these helps manage expectations and use the tools effectively. Personally, aside from the text-based limitation, I have found easy giveaways in the creative outputs LLM tools produce. Most documents, articles and non-administrative content that require a human voice are easily identifiable as AI-written, and this typically turns the audience off your work. For example, this document was not written by AI at all, and you can tell from the way it has been written. The more we use AI, the more valuable flaws and the human touch become, as people crave a real connection that can be felt in creative work.

Usage Limits & Hallucination Frequency

Understanding the technical trade-offs:

Tool                      Typical Free Usage Limit       Approx. Hallucination Frequency (%)
Gemini (Free)             Daily messages/turns           5-10%
Claude (Free)             Limited messages/hour          5-10%
ChatGPT (GPT-3.5)         High daily messages            10-15%
Llama 3 (Free Hosting)    Varies by host/local setup     8-12%
Perplexity AI (Free)      Limited "Pro" queries/day      3-7%
Manus IM                  Varies by plan/usage           5-10%

Note: Hallucination frequency is an approximation and can vary greatly based on prompt complexity and topic. Always verify critical information.


Security Concerns & Ethical Considerations

Using AI tools across devices introduces security and privacy considerations that users should be aware of in order to protect their data. Many companies are in a stage of discovery, realising that employees using AI tools, especially LLMs, are prone to inputting sensitive data, which poses significant risk in any case of breach, hack or unauthorised device access.

Privacy & Security
  • Data Privacy: Be cautious about inputting sensitive personal, financial, or confidential company information. Free models may use your input data for training.
  • Malware & Phishing: AI-generated content can be used to create highly convincing phishing emails or malicious code. Always verify sources.
  • Deepfakes & Misinformation: AI can generate realistic but fake audio, video, or images, leading to scams or spread of false information.
  • Account Security: Use strong, unique passwords and enable Two-Factor Authentication (2FA) for all AI service accounts.
  • Device Vulnerabilities: Ensure your devices (laptops, mobile phones) are updated with the latest security patches and have reputable antivirus software.
  • Public Wi-Fi Risks: Avoid accessing sensitive AI tools or inputting confidential data over unsecured public Wi-Fi networks.
  • App Permissions: Review permissions requested by AI apps on your mobile devices; grant only what is necessary.
  • Copyright & IP: Be mindful that AI-generated content may inadvertently infringe on existing copyrights or intellectual property.

Morality

AI tools have been integrated into many big-data models and used to parse information with important intelligence and military applications. This is a concern, as many governments have been known to support data-mining companies that leverage AI tools to capture the private and sensitive data of their own citizens, using it to determine whether someone has been involved in illicit activities, mark people as terrorists, and otherwise invade people's personal lives, even across country borders and jurisdictions. They typically cite these efforts as matters of national security in an attempt to assign a moral right to them; companies like Palantir are examples of this.

Environmental

There are various environmental concerns around the use of AI tools in general, not limited to just LLMs. The backbone of AI tools is extreme computational power, serviced primarily by GPUs that crunch through the prompts and reasoning and deliver the results to your device. The server farms that house these GPUs are extremely environmentally unfriendly, requiring large amounts of electricity and water to run constantly. The amount of water used for cooling datacentres is estimated at up to 9 litres per kilowatt-hour of energy consumed. Given that we have been running these AI systems at scale since around 2022, estimates predict this will exceed 4 billion cubic metres per year by 2027, which is likely to exacerbate existing water scarcity issues.
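
As a back-of-envelope sanity check on those two figures, we can ask how much annual energy consumption would correspond to 4 billion cubic metres of water at roughly 9 litres per kWh. This is purely illustrative arithmetic using the numbers above, not an independent estimate:

```python
# Back-of-envelope: energy implied by 4 billion m^3/year of cooling
# water at ~9 litres per kWh.

LITRES_PER_KWH = 9           # upper-end cooling water estimate, per kWh
TARGET_M3 = 4e9              # 4 billion cubic metres per year
LITRES_PER_M3 = 1000

energy_kwh = TARGET_M3 * LITRES_PER_M3 / LITRES_PER_KWH
energy_twh = energy_kwh / 1e9   # 1 TWh = 1e9 kWh
print(round(energy_twh))        # → 444 (TWh per year)
```

Roughly 444 TWh per year is on the order of a mid-sized country's entire electricity consumption, which puts the scale of the problem into perspective.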

Conclusion

While it's an exciting time to be alive and witness the onset of such technologies, we have to be hyper-aware of the shortsighted assumption that these tools are changing our lives for the better. They are indeed convenient when used to produce boilerplate code and email responses and to remove a layer of tedious admin, but we do have to think about both the human and environmental costs. One final thing to mention as we wrap up: so far, AI companies are running at a deficit. The GPU companies have effectively capitalised heavily on the need for compute, but the setup and running costs of this expensive server infrastructure have not yet produced significant profits for the five biggest companies in the AI space.