*** UPDATE 12/07/2025 ***
Changed Bard to Gemini and removed references to Claude v2.
I started writing this post by taking a quick look at some of the guides online. I couldn’t find anything that listed out the LLMs that I use, or how to use them. It seems like most people stumble upon the tools themselves through an internet search – at least, that’s how I first started using ChatGPT. I wanted to see what all the fuss was about.
Using the free version of GPT-3.5 wasn’t great. It spat information out fast, but much of it was wrong. For example, it would fumble a maths problem, or accept that a wrong answer was correct when I pushed back. It wasn’t a great start to using LLMs for me.
What it is, and what it isn't
LLMs aren’t AI in the sense that people generally think of AI. If I were to describe them in simple terms, I’d say they’re autocomplete on steroids. They’ve been trained on huge amounts of human writing, and manually corrected by humans when they went off track, for example by showing bias. To generate amazing output for us, they need amazing input. Prompt engineering, as it’s come to be known, is the skill of crafting LLM prompts that generate exactly what we’re after.
It’s quite rare to get something usable from a single prompt, but an LLM can provide ideas and frameworks where your brain might otherwise be stuck. This makes it a useful work co-pilot.
Data data data!
When you’re using these systems, you’ve got to be careful. Never put privileged or confidential data into them, because in most cases your data can be used to train the model (read those Ts and Cs carefully). So for people using them for work: never put customer-confidential or commercially sensitive data into the tools. To do so would almost certainly be a security breach. However, this mainly applies to the public-facing models with chat history enabled, not to the enterprise offerings, which do exist and generally don’t train on your data.
ChatGPT
To opt out of the ‘history’ feature, go into your ChatGPT account settings and disable ‘Chat history and training’.

Using ChatGPT with this option disabled means your chats won’t be used for training, so you can work with more sensitive information; however, you’ll lose access to your chat history and to the GPT Builder function.
I suggest you switch history on and off as needed – use the history feature for things that aren’t sensitive, which lets ChatGPT learn from previous chats and give you more informed answers. Of course, you’re consenting to share a lot of information, so bear that in mind.
Claude
Claude is a lot better when it comes to respecting user information – Anthropic does not use your prompts to train its models, and that’s the default setting. Essentially, it behaves like ChatGPT with the ‘opt out’ setting enabled, so there’s no need to do anything.
Gemini
Gemini, like ChatGPT, collects information by default. You need to opt out in the settings of your account.

Again, chats can still be reviewed if Google’s systems flag them (e.g., for nefarious use), but your data won’t be used to train the model.
Different LLMs - which one should I use?
I’ve already covered the main ones I use above, but there are more. You can even run some of them locally on your machine if you want to – be warned though, they’re slow.
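If you’re curious what running a model locally looks like, here’s a minimal sketch using the Hugging Face transformers library. The model name is just an example of a small open model, not a recommendation; swap in whatever your hardware can handle.

```python
# Minimal sketch: run a small open model locally with Hugging Face transformers.
# The model name below is only an example; any local text-generation model works.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small (~1B parameter) model
)

result = generator(
    "Explain what a large language model is in one sentence.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```

Even a small model like this will feel sluggish on a typical laptop compared with the hosted services, which is exactly the trade-off mentioned above.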
My preferred LLMs, in order, are:
- Claude by Anthropic – ~$20 a month, though there’s a free tier and a top tier at $200 a month
- ChatGPT by OpenAI – ~$25 a month, though there’s a free tier, as well as a Pro tier at $200 a month
- Google Gemini – free for basic use; the API is chargeable (there’s a quick sketch of it below this list), as is Google One, which bundles in a load of Google services – very useful, to be fair
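Since the API gets a mention above, here’s a rough sketch of what calling Gemini from Python looks like, assuming you’ve generated an API key in Google AI Studio. The key and model name are placeholders, so check Google’s current docs before copying this.

```python
# Rough sketch: calling the Gemini API with Google's google-generativeai package.
# The API key and model name are placeholders - check Google's current docs.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key

model = genai.GenerativeModel("gemini-pro")  # example model name
response = model.generate_content("Give me three ideas for a blog post about LLMs.")
print(response.text)
```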
Claude and ChatGPT offer premium plans – in fact, you need to pay for ChatGPT to access GPT-4, which is a must in my opinion. I won’t dwell on GPT-4 in Bing – it’s more of a search engine tool than an actual LLM, though it can be useful for generating images. Its answers are a lot shorter, and I would never rely on it to help me solve a problem.
I prefer Claude because it offered much larger token limits and document uploads before ChatGPT caught up. Its handling of user data also seems better, as covered above.
I pay for both Claude and ChatGPT, and I actually use both a lot. I tend to start with ChatGPT when tackling a problem, or to ask a quick question that requires a bit of logic. ChatGPT’s reasoning still seems superior to Claude’s, but they aren’t far apart. How can I make such a claim? I can’t! It isn’t really measurable. It’s easy to test for yourself, though: plug in a challenging maths problem, tell the model its answer is wrong, and see what it comes back with.
*** UPDATE 12/07/2025 ***
I have stopped using Claude altogether; the usage limits are a joke, whereas ChatGPT never limits me unless I’m running the latest model (4.5 as of this edit).