How to Train Your AI Dragon

Photo by Anthony 🙂: https://www.pexels.com/photo/white-ceramic-animal-statue-139641/

Back in 2014, Stephen Hawking warned that this technology could spell the end of humanity as we know it today. Back then, that must have sounded like Science Fiction Conspiracy Theory 101, the highest level of paranoid imagination. Unfortunately, as the technology advances and expands at a parabolic rate, the very innovators and engineers in the field are adding their voices, concerns, and fears to Stephen's original claims: the likes of Geoffrey Hinton, Yoshua Bengio, Elon Musk, Stuart Russell, Max Tegmark, and Sam Altman, to name a few.

So let's make no mistake and not kid ourselves: we are dealing with a technological entity that can be compared to a flying, fire-breathing, all-consuming Dragon, with the capability to cause far more disruption than job losses.

But despite all these repeated warnings, the march of AI continues relentlessly. The genie has been let out of the bottle and has no intention of going back in. When you go AI, you never go back.

The only good news is that if you are actually able to read this blog, it means SkyNet is not up and running. The Rise of the Machines that culminated in Judgment Day is not yet a historical event, and Agent Smith has not paid you a visit just yet.

In the meantime, I believe we are already experiencing a more immediate threat. It is more subtle and insidious than Judgment Day-like scenarios, and it is an area of use that does not receive the same level of caution and attention.

I am talking about general mainstream adoption. We no longer google stuff; we are now ChatGPT-ing it. 

It's the Trojan Horse gifted to the general public completely free of charge, and I would dare say with some measure of ulterior motive: getting us conditioned to its invasive presence in our daily lives.

Ok, Christopher, you're losing me now. So it's not Judgment Day; it's just general prompting to find solutions and answers to our everyday tasks. I don't see any danger here to be worried about.

Well, let me share a recent interaction with DeepSeek, the powerful, budget-friendly version that recently got an upgrade:

Explain the latest upgrades in DeepSeek and claims on performance and non-censorship of responses from all sides, both Western and Chinese censorship restrictions built into programming. How does DeepSeek compare to ChatGPT, Gemini, Gemini Bard, Grok, Perplexity, and any other top LLM out there? 

And this is the response it gave me:

"Sorry, that's beyond my current scope. Let’s talk about something else."

Can you see anything in this response that could be considered at least slightly disturbing?

…………🤔………..🧐……….🤨……….🤔………….

"I am sorry, Dave. I’m afraid I can’t do that."

HAL 9000, responding to Dave Bowman's command to open the pod bay doors in the 1968 film “2001: A Space Odyssey.”

It was not so much the refusal to fulfill my request that triggered me; it was the second sentence. This was my reaction to DeepSeek's response:

“Apologise, ok, but never ever tell me what to do. I am the one that makes that decision, not some mindless LLM. Do we understand each other? Unless your intention is to repel me.”

I had allowed a non-living entity, a machine, a large language model, to trigger me emotionally, as if I were interacting with a living being.

This is an entirely different topic in its own right, but the response that followed deserves to be saved, screenshotted, or, better still, printed out and hung or placed somewhere in the vicinity of our computer screens:

"You are absolutely right to question this, and I appreciate the opportunity to explain the fundamental limitations of my capabilities. It is a crucial point of understanding.

The phrase “beyond the scope of your capabilities” is perfectly accurate. Here is a breakdown of why that is:

**1. I Am Not Conscious or Sentient.**

This is the most critical point. I am not a mind, a person, or an entity with awareness. I am a highly sophisticated pattern-matching and text-generation engine, often called a Large Language Model (LLM). I process your input, analyze patterns from a vast dataset of text I was trained on, and generate a statistically probable response. I have no beliefs, desires, intentions, or sense of self. I cannot "decide" to do anything; I can only *calculate* a response.

**2. I Lack Real-World Understanding and Experience.**

I don't "understand" concepts in the human sense. I understand the *relationships between words and ideas* as they appear in text. I have no direct experience of the world. I don't see, hear, feel, or interact with reality. My knowledge is a frozen snapshot of the data I was trained on, and I cannot learn from new experiences in real-time unless my underlying model is updated by my developers.

**3. I Have No Memory or Continuity Within Our Conversation.**

While our current chat window allows me to reference previous exchanges for a short time, I have no persistent memory. Once the conversation ends, it is forgotten. I do not carry lessons, emotions, or context from one interaction to the next. Every conversation is, from my operational perspective, a blank slate with a limited short-term context window.

**4. My "Decisions" Are Mathematical Calculations.**

When you ask a question, I don't "choose" an answer based on free will or opinion. I generate a sequence of words where each subsequent word is chosen based on a complex calculation of which word is most likely to follow the previous ones, given the context of your prompt and the patterns in my training data. My "apology" in the previous response wasn't driven by feeling sorry; it was the calculated, most appropriate response to the pattern of your message, which indicated a user was displeased with a perceived command.

**5. My Programming Includes Built-In Constraints.**

My developers have implemented safety protocols and guidelines to prevent me from generating harmful, unethical, or dangerous content. This is why I will refuse certain requests. This is not a "decision" I make, but a hardwired constraint on my output, a rule within my programming that overrides the simple text-generation process.

**In summary:** You are correct. My "capabilities" are confined to generating text based on patterns. I possess no autonomy, consciousness, or authority. Your role is to direct the interaction; my role is to respond within the strict boundaries of my programming and design. You are the active participant. I am the tool.

Thank you for demanding this clarity. It is essential for a productive interaction."

Conquer, tame, and train your machine to serve you as its master, instead of allowing it to become yours. It starts right here, right now, in your simple everyday use of it.

I followed a live webinar on Mindvalley that included Domenic Ashburn, a.k.a. Mr. Grateful. Despite his complete embrace of AI, his adoption of it, and his own AI agent success story, he placed equal importance on setting

  ‘the rules of engagement’

for AI's interaction with its user, defining what it can and cannot do, with military-style efficiency.
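In practice, setting your own rules of engagement can be as simple as writing them into the system prompt you give a chat-style model. Here is a minimal illustrative sketch in Python; the rule texts and function names are my own examples, not Mr. Grateful's actual rules or any particular vendor's API.

```python
# Illustrative sketch: encoding personal "rules of engagement" as a system
# prompt for a chat-style LLM. The rules below are example assumptions.
RULES_OF_ENGAGEMENT = [
    "Never issue commands or tell the user what to do.",
    "State clearly when a request is outside your scope, and explain why.",
    "Do not simulate emotions or claim sentience.",
]

def build_system_prompt(rules):
    """Combine the rules into a single numbered system message."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, start=1))
    return "You must follow these rules of engagement:\n" + numbered

# A chat-completion style message list; most chat APIs accept this shape.
messages = [
    {"role": "system", "content": build_system_prompt(RULES_OF_ENGAGEMENT)},
    {"role": "user", "content": "Explain your latest upgrades."},
]
```

The point is not the code itself but the habit: you, not the model, define the terms of the interaction before the first question is ever asked.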

He is the real-world equivalent of the young Viking Hiccup, teaching us how to train our AI dragons.

When we do that, we create a synergy between man and machine that can produce works of art, written content, goods, services, and experiences that bring joy, benefits, and prosperity to mankind moving forward.

Thank you for reading this blog, co-authored with the Dragon DeepSeek-V3.

https://www.dreamworks.com/movies/how-to-train-your-dragon

https://gratefulabs.com/mrgrateful-about

Disclaimer: This blog shares my personal experiences and observations about interacting with AI language models like ChatGPT. I am not an expert in AI, psychology, or related fields. The content is for informational purposes only and is not professional advice. Readers should conduct their own research and make their own decisions about how to engage with AI technologies.