What Doctors Should Never Do in ChatGPT

AI is everywhere right now. ChatGPT, Gemini, Perplexity, you name it. These tools are coming fast and changing how we work, think, and document. And let’s be honest… It’s tempting to use them for everything.

And sure, it feels like we’ve entered a new era. But here’s the catch: while the tech has taken off, the rules haven’t changed much.

USC researchers recently found that many doctors using ChatGPT were unknowingly violating HIPAA, often by pasting in what felt like “anonymized” patient data, but without stripping all 18 identifiers. And a recent JAMA study showed that when ChatGPT was used for clinical recommendations, its responses were often incomplete or inaccurate, raising major concerns around safety and reliability.

Now, we know most clinicians are already doing the right thing. Being careful. Being thoughtful. Absolutely wanting to protect their patients’ privacy and their own peace of mind. So if you’re already playing it safe, amazing. We’re just here to make that a little easier.

Let’s walk through 10 specific practices to avoid when using ChatGPT (or any AI platform), with helpful links and context to make sense of the risks.


Disclaimer: While these are general suggestions, it’s important to conduct thorough research and due diligence when selecting AI tools. We do not endorse or promote any specific AI tools mentioned here. This article is for educational and informational purposes only. It is not intended to provide legal, financial, or clinical advice. Always comply with HIPAA and institutional policies. For any decisions that impact patient care or finances, consult a qualified professional.

Top 10 Absolute Don’ts for Doctors Using ChatGPT

1. Don’t put patient info into ChatGPT without a BAA

Even if ChatGPT is encrypted, that doesn’t mean it’s authorized to handle protected health information (PHI). Under HIPAA, any vendor that handles PHI must have a Business Associate Agreement (BAA) in place. The standard consumer version of ChatGPT does not offer one.

If you want to use AI with patient data, only use institution-approved tools that meet HIPAA standards, or fully de-identify first.

2. Don’t assume encryption makes a tool HIPAA-compliant

It’s easy to assume that encryption is enough, but even encrypted PHI is still PHI. According to the HIPAA Security Rule, encryption doesn’t eliminate your legal obligation to protect patient data or your need for a BAA.

Ensure any AI platform handling clinical information is not only encrypted but also legally authorized to receive it.

3. Don’t paste full patient charts into ChatGPT

Sharing an entire patient record with an AI tool often violates HIPAA’s Minimum Necessary Rule, which requires that disclosures be limited to the smallest amount of information needed.

Instead, extract only what’s essential, or summarize first and ask AI for help on language or structure, not content.

4. Don’t rely on quick redactions and call it de-identified

HIPAA outlines two methods for de-identification: expert determination or Safe Harbor, which requires removal of 18 specific identifiers. Most quick redactions fall short. According to HHS guidance, simply removing names or dates is not enough.

Use proper tools for de-identification, or avoid entering PHI altogether.
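To make the gap concrete, here’s a minimal, hypothetical Python sketch of the kind of “quick redaction” script people often rely on. The patterns, function name, and sample note are all illustrative, not a real de-identification tool; the point is that it only catches a few obvious formats and ignores most of the 18 Safe Harbor categories.

```python
import re

# A naive "quick redaction" pass. This illustrates why such scripts fall
# short of HIPAA Safe Harbor; it is NOT a de-identification tool.
# Safe Harbor requires removing all 18 identifier categories (names,
# geographic subdivisions smaller than a state, all elements of dates,
# device IDs, biometric identifiers, and more); this only touches a few.

PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def quick_redact(text: str) -> str:
    """Blank out a handful of obvious identifier patterns."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

note = "Pt John Smith, MRN 445512, seen 03/14/2024 at Mercy Hospital, Springfield."
print(quick_redact(note))
# The patient's name, the facility, and the city all slip straight through,
# so the "redacted" output still contains PHI.
```

Running it makes the failure obvious: the MRN and date get scrubbed, but the name and location survive, which is exactly the kind of residue that keeps a note from being de-identified under Safe Harbor.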

5. Don’t use ChatGPT to make clinical decisions you can’t verify

If a clinician can’t independently review the basis for an AI tool’s recommendations, the FDA’s FAQs on clinical decision support software suggest the tool may be regulated as a medical device.

ChatGPT is best used for non-clinical tasks: summaries, drafts, educational content… not direct clinical decision-making.

6. Don’t prescribe or manage meds through ChatGPT

Prescribing meds, especially controlled substances, requires secure, certified systems. The DEA’s rules on electronic prescriptions for controlled substances (EPCS) lay out all the safeguards, and ChatGPT isn’t compliant.

Use trusted, secure platforms built for prescribing, like DrFirst or Surescripts.

7. Don’t use AI to copy-paste or exaggerate documentation for billing

The OIG and CMS have flagged the practice of cloning or “copy-pasting” notes as a serious compliance issue. In one high-profile case, Somerset Cardiology Group paid over $422,000 after the OIG found it had cloned patient progress notes and improperly billed Medicare based on falsified documentation.

Let AI help you outline or format, but make sure the final note reflects actual care provided and your personal medical judgment.

8. Don’t use AI in ways that cross state licensure boundaries

Even when AI is involved, delivering clinical care to a patient located in another state still triggers that state’s licensure requirements. According to the Center for Connected Health Policy (CCHP), care via telehealth is always considered rendered at the patient’s physical location, which typically means the provider must be licensed there unless exceptions apply.

If you’re using AI to support care, make sure you’re practicing within your licensed jurisdictions, or keep the output strictly educational and non-clinical.

9. Don’t blur boundaries with patients through AI

Even online, professional responsibilities remain the same. Annals of Internal Medicine reminds us that boundaries between personal and professional realms can easily blur online, and that physicians should actively work to keep them separate to maintain trust and ethical standards in the patient–physician relationship.

Avoid using AI in casual patient chats or DMs. Stick to secure, formal communication platforms.

10. Don’t make misleading AI-powered marketing claims

The FTC is stepping up against deceptive AI claims. In a 2025 enforcement effort, the agency fined DoNotPay for marketing itself as “the world’s first robot lawyer,” even though its AI lacked adequate training to deliver accurate legal advice.

While that case involved legal services, the message carries over to healthcare, where the stakes are even higher. Avoid vague or inflated claims. Use honest terms like “AI-assisted” or “AI-enhanced,” and clearly explain what AI does and what it doesn’t.


Unlock the Full Power of ChatGPT With This Copy-and-Paste Prompt Formula!

Download the Complete ChatGPT Cheat Sheet! Your go-to guide to writing better, faster prompts in seconds. Whether you’re crafting emails, social posts, or presentations, just follow the formula to get results instantly.

Save time. Get clarity. Create smarter.


Final Thoughts: Do Your Due Diligence

AI is moving fast. The tools are powerful, accessible, and honestly… kind of fun to use. For many of us, it feels like we’re standing on the edge of something game-changing in healthcare. And we are.

But with that opportunity comes responsibility.

It’s easy to get caught up in what AI can do and forget to pause and ask what it should do, especially in sensitive environments.

So this isn’t about fear or rigid rules. It’s about awareness. It’s about taking a beat to double-check and to lean on the resources around us when we’re unsure. None of us is expected to have every answer. That’s why legal and compliance exist. They’re on our side.

By the way, this also isn’t legal advice or a substitute for your institution’s policies. It’s just a helpful nudge, a shared reminder as we all try to navigate this tech thoughtfully and responsibly. Do your own due diligence, as always.

Here’s a quick checklist we’ve found useful to keep close:

Default to de-identify or don’t share.
Prefer institution-approved AI with a BAA and proper admin/technical safeguards.
Always apply clinician oversight and document your judgment.
When in doubt, check with privacy, compliance, or legal.

We’re learning together. So let’s keep asking the right questions, challenging assumptions, and building habits that keep us protected.

If you want to learn more about AI and other cool AI tools, make sure to subscribe to our newsletter! We also have a free AI resource page where we share the latest tips, tricks, and news to help you make the most of technology.

To go deeper, check out PIMDCON 2025 — The Physician Real Estate & Entrepreneurship Conference. You’ll gain real-world strategies from doctors who are successfully integrating AI and business for massive results.

See you again next time! As always, make it happen.

Disclaimer: The information provided here is based on available public data and may not be entirely accurate or up-to-date. It’s recommended to contact the respective companies/individuals for detailed information on features, pricing, and availability. This article is for educational and informational purposes only. It is not intended to provide legal, financial, or clinical advice. Always comply with HIPAA and institutional policies. For any decisions that impact patient care or finances, consult a qualified professional.

IF YOU WANT MORE CONTENT LIKE THIS, MAKE SURE YOU SUBSCRIBE TO OUR NEWSLETTER TO GET UPDATES ON THE LATEST TRENDS FOR AI, TECH, AND SO MUCH MORE.

Peter Kim, MD is the founder of Passive Income MD, the creator of Passive Real Estate Academy, and offers weekly education through his Monday podcast, the Passive Income MD Podcast. Join our community at the Passive Income Doc Facebook Group.

Further Reading