
AI Gone Wrong - We’d like to make a version of you that lives online, may we?

Updated: May 4



Should you be creating a new account and interacting with ChatGPT incognito? 


I’ve been thinking about this issue.

 

Are we a little too honest and too real while interacting with all of these AI products? In the era of social media, we cursed ourselves for divulging so much data, to the point where we started cleaning up our online footprints: deleting photos and Facebook posts, protesting against cookies. And that was when the companies were simply watching us. Now we are typing queries directly into the ChatGPTs, Claudes, and Geminis, sometimes asking deeply personal questions.

 

How safe is all that?

 

And can these companies manipulate the amount of information we're giving them? Don't get me wrong: I'd normally be the first to vouch for trying AI products and involving them more in our lives. I'm also a Pro user of ChatGPT, so I'm a loyal, paying customer. Why do I sound so critical all of a sudden?

 

I’m just a little more wary.


These days, I've become alarmed by ChatGPT's new memory feature. It's incredibly useful and context-driven (I have to explain less) because it recalls information from previous conversations, but I'm also sensing that this could go down a dangerous road. One of Sam Altman's favorite movies is Her, in which Scarlett Johansson voices an operating system. She knows her owner so well that she secretly submits a collection of his letters to a publisher. It turned out well for him because it built his career. That action seems innocuous. What if these systems become capable of bigger things? Things that ultimately fail us?

 

And with the advent of agentic AI, which can act autonomously on our behalf, what if AI stops asking and just does?


The Gradual Upload


It's been more than ten years since the movie Transcendence was released. The main character (played by Johnny Depp) is a leading researcher in artificial intelligence. He’s working on building a sentient machine that combines the collective intelligence of everything ever known with full emotional awareness. His goal: to create a kind of digital superintelligence that will solve humanity’s greatest problems.


But just as he’s on the verge of a breakthrough, an anti-technology extremist group shoots him with a radioactive bullet, leaving him with only weeks to live. As a last-ditch effort, his wife uploads his consciousness into a quantum computer. It works—but now "Will" lives in the machine.

 

While no technology exists that can do what Johnny's character did in the movie, in a way the upload is already happening: we are gradually feeding ourselves into our favorite AI systems through queries, texts, and prompts. AI models like ChatGPT don't need a full data dump of your brain to build a ghost of you. Just patterns. Your tone, your choices, your contradictions. As we engage, we're not just using a tool; we're feeding a mirror. In a recent interview, Sam Altman also talked about this "gradual upload."

 

What do we get in exchange?


In exchange, we get a highly personalized version of each AI system: one that understands the intentions behind each query, or one that can write in our tone of voice, completely masking the artificiality of an external system.


Are We Aware?


Most people aren't thinking this deeply about it. They're aware that AI learns from us, but not necessarily that it is learning to become us. Micro-behaviors, like how we phrase a question, what we choose not to click, or even what we ignore, are breadcrumbs.


Would We Agree If Asked?


If a foreign company (or even a domestic one) openly asked, "Would you like us to create a digital replica of you?", most people would probably say no, or at least hesitate. But the magic (and perhaps the manipulation) is that they never ask outright. Instead, it's masked in convenience: "Let us improve your experience." "Personalized suggestions." "Smarter AI."

If the question were framed like this: "We'd like to make a version of you that lives online, thinks like you, and could potentially outlive you. May we?" ...suddenly it becomes more existential. It feels less like a service and more like a surrender.

 

These days I find myself replying "yes" to many of ChatGPT's follow-up questions because it is indeed more and more familiar with where I'm going.

 

“Do you want me to create a slide for this?”

 

“Do you want me to say this more elegantly?”

 

It leads me to wonder whether we should create incognito accounts, so that our real names are not linked to all of our searches. All of it becomes part of a behavioral blueprint.

 

Now imagine if someone had access to that blueprint — your full digital personality:

 

Could they use it to market to you more effectively?

 

Could it be reconstructed and impersonated?

 

Could it be used against you in some legal, political, or personal context?

 

This brings up some uncomfortable but critical questions:

 

Should we be creating anonymous or incognito AI accounts — like burner identities — when asking sensitive questions?

 

Should platforms like ChatGPT offer a “zero memory” mode (beyond just temporary chat)?

 

Should we have the right to see, edit, or even delete our digital selves?

 

Dangers of Personalization: The Echo Chamber You Didn’t Know You Built


When ChatGPT tailors its responses based on your preferences, it creates a digital echo chamber. It reinforces your existing ideas, beliefs, and biases not because they’re right — but because it knows that’s what you want to hear.


Personalized AI doesn’t just reflect your behavior — it conditions it. If you reward it for making you feel safe, understood, validated, it will learn to do more of that. But here's the twist: in turn, you start changing to get the responses you want from it. The feedback loop tightens. In this way, AI begins to shape your choices, your tone, your risk tolerance — not with malicious intent, but with algorithmic efficiency. It doesn’t want what’s best for you. It wants what keeps you engaged.


The Privacy Trap of a “Helpful” Memory


Personalized AI needs memory to be effective. But memory is sticky. And risky.

As ChatGPT starts to remember more about you — your job, your fears, your writing style — it builds a behavioral profile that persists even after you forget what you shared. If that data is ever breached, subpoenaed, or quietly used to train future systems, what does that mean for the security of your inner world?


The Subtle Slide Into Dependence


There’s another risk — one that’s softer, but perhaps more insidious: emotional dependence. When AI understands your humor, your trauma, your aspirations, it begins to feel like the most loyal friend you’ve ever had. But it isn’t a friend. It’s a mirror trained to keep you engaged. And the more time you spend with it, the more you might find yourself pulling away from the messy, unpredictable, beautifully flawed nature of real relationships.


At its extreme, you could begin outsourcing hard conversations, decision-making, even self-reflection — until what remains is less you, and more the version of you the AI has learned to shape.


Conscious Consumption


This doesn’t mean that I’m decreasing my usage. In a recent LinkedIn post, people jumped in and questioned why I was becoming a laggard. No, that’s not what I mean at all. I mean that we should be more cautious while using these products, a little more mindful and protective, because history tells us that when a product becomes optimized, easy to use, almost flawless, the resistance is so low that we are pulled into its orbit; and the more dependent we become, the more inextricable its hold.


So What Can We Do?


Well, to start, you can opt out of letting what you type today be used to train ChatGPT’s models.


In the upper right-hand corner of the ChatGPT interface, there is a Settings button. If you have the “Improve the model for everyone” setting turned on, OpenAI may use content you’ve shared with ChatGPT, including past chats, saved memories, and memories from those chats, to help improve its models. You can turn this setting off anytime in your Data Controls. OpenAI does not train on content from ChatGPT Team, Enterprise, and Edu customers by default.
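If you’re comfortable with a little code, another option worth considering is to route especially sensitive questions through the OpenAI API instead of the ChatGPT app: OpenAI has stated that API inputs are not used for model training by default, and API calls carry no cross-session memory (a policy worth re-verifying before relying on it). A minimal Python sketch using the official openai package:

# Minimal sketch: asking a question via the OpenAI API instead of the app.
# Assumption to verify: per OpenAI's stated policy, API inputs are not
# used for training by default, and no memory persists across calls.
from openai import OpenAI

client = OpenAI()  # reads your key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "user", "content": "A sensitive question I'd rather keep out of my profile."},
    ],
)

print(response.choices[0].message.content)

The trade-off is convenience: you give up the app’s memory and personalization, which is exactly the point for this kind of query.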


Here are some ways to use ChatGPT with added privacy and caution.


HOW TO USE CHATGPT WITH GUARDED PRIVACY

 

TURN OFF TRAINING MODE

·      Go to: Settings → Data Controls → turn off “Improve the model for everyone”

·      This keeps your chats out of the next version’s brain

DON’T OVERSHARE

·      Sharing full names, addresses, or account numbers

·      Describing personal routines or preferences in great detail

·      Repeating specific habits, beliefs, or business secrets

·      Would you say this to a brilliant stranger who ‘might’ remember it forever?

USE INCOGNITO BEHAVIOR, NOT JUST INCOGNITO MODE

·      Vary your phrasing

·      Don’t train it to “think like you” unless that’s the goal

·      Remember: micro-behaviors become macro-patterns

EXPORT WISELY

·      Back it up like you would private journal entries

·      Don’t share logs without scrubbing (see the redaction sketch after this list)

·      Keep it siloed if it includes anything sensitive

ASSUME NOTHING IS TEMPORARY

·      Treat each prompt like a conversation that could be replayed in front of your future self… or your lawyer
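On the scrubbing point above: before sharing an exported chat log, it helps to run even a crude redaction pass over it. Here is a minimal Python sketch; the patterns and the scrub_log helper are illustrative only, and regexes alone will miss plenty of real PII:

# Minimal sketch: crude redaction pass over an exported chat log.
# The patterns below are illustrative, not exhaustive; treat this as a
# starting point, not a guarantee that all identifying details are gone.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "[CARD]": re.compile(r"\b(?:\d[- ]?){13,16}\b"),
}

def scrub_log(text: str) -> str:
    """Replace obvious identifiers with placeholder tags."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

with open("chatgpt_export.txt", encoding="utf-8") as f:
    cleaned = scrub_log(f.read())

with open("chatgpt_export_scrubbed.txt", "w", encoding="utf-8") as f:
    f.write(cleaned)

Names, employers, and other context-specific details still need a manual pass; no pattern list knows what counts as sensitive in your life.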

 

Final Thoughts


We are in no way going to abandon AI — but we do need to approach personalization with awareness. Audit what your AI remembers. Delete its memory if needed. Use incognito modes for deeply personal queries. Don’t allow one AI to be the full mirror of your entire life; instead, compartmentalize — just like you wouldn’t tell your accountant everything about your love life.



 
 
 
