
ZDNET’s key takeaways
- Sometimes an AI can cause you or your organization irreparable harm.
- Sharing confidential data with an AI can have legal consequences.
- Don't let an AI talk to customers without supervision.
A few weeks ago, I shared with you "9 programming tasks you shouldn't hand off to AI – and why." It's full of well-reasoned suggestions and recommendations for how to avoid having an AI produce code that could ruin your entire day.
Then my editor and I got talking, and we realized the whole idea of "when not to use an AI" could apply to work in general. In this article, I present nine things you shouldn't use AI for while at work. It's far from a comprehensive list, but it should make you think.
Also: This one feature could make GPT-5 a true game changer (if OpenAI gets it right)
"Always keep in mind that AI will not read you your Miranda rights, wrap your personal information in legal protections like HIPAA, or hesitate to spill your secrets," said LinkedIn Learning AI instructor Pam Baker, the bestselling author of ChatGPT For Dummies and Generative AI For Dummies.
"That goes double for work AI, which is closely monitored by your employer. Whatever you do or tell AI can, and likely will, be used against you at some point."
To keep things interesting, read on to the end. There, I share some fun and terrifying stories about how using AI at work can go terribly, horribly, and amusingly wrong.
Without further ado, here are nine things you shouldn't do with AI at work.
1. Handling confidential or sensitive data
This one is easy. Every time you give the AI a piece of information, ask yourself how you'd feel if it were posted to the company's public blog or wound up on the front page of your industry's trade journal.
Also: The best AI for coding in 2025 (and what not to use)
This concern also includes information that may be subject to disclosure regulations, such as HIPAA for health information or GDPR for personal data for people operating in the EU.
No matter what the AI companies tell you, it's best to simply assume that everything you feed into an AI is now grist for the model-training mill. Anything you feed in could later wind up in a response to somebody else's prompt, somewhere else.
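One practical habit is to scrub obviously sensitive tokens before any text leaves your machine. Here's a minimal sketch of that idea; the regex patterns and the `scrub` helper are illustrative assumptions, not a substitute for a real data-loss-prevention tool.

```python
import re

# Illustrative patterns -- a real deployment would use a proper DLP tool.
# These regexes are assumptions for demonstration, not a complete safeguard.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # likely card numbers
]

def scrub(text: str) -> str:
    """Replace obviously sensitive tokens before text goes to an AI service."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```

Even with a filter like this, the safest assumption remains the one above: if you wouldn't publish it, don't paste it.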
2. Reviewing or writing contracts
Contracts are designed to be detailed and specific agreements on how two parties will interact. They're considered governing documents, which means that writing a bad contract is like writing bad code. Baaad things will happen.
Don't ask AIs for help with contracts. They will make errors and omissions. They will make stuff up. Worse, they'll do so while sounding authoritative, so you're more likely to use their advice.
Also: You can use Google's Math Olympiad-winning Deep Think AI model now – for a price
Also, the terms of a contract are often governed by the contract itself. In other words, many contracts say that what's in the contract is confidential, and that if you share the details of your agreement with any outside party, there will be dire consequences. Sharing with an AI, as discussed above, is like publishing on the front page of a blog.
Let me be blunt. If you let an AI work on a contract and it makes a mistake, you (not it) will be paying the price for a long, long time.
3. Using an AI for legal advice
You know the trope where what you share with your lawyer is protected information and can't be used against you? Yeah, your friendly neighborhood AI is not your lawyer.
As reported in Futurism, OpenAI CEO (and ChatGPT's principal cheerleader) Sam Altman told podcaster Theo Von that there's no legal confidentiality when using ChatGPT for your legal concerns.
Also: Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy
Earlier, I discussed how AI companies might use your data for training and embed that data in prompt responses. However, Altman took this assertion up a notch. He suggested OpenAI is obligated to share your conversations with ChatGPT if they're subpoenaed by a court.
Jessee Bundy, a Knoxville-based attorney, amplified Altman's statement in a tweet: "There's no legal privilege when you use ChatGPT. So if you're pasting in contracts, asking legal questions, or asking it for strategy, you're not getting legal advice. You're generating discoverable evidence. No attorney/client privilege. No confidentiality. No ethical duty. No one to protect you."
She summed up her observations with a truly damning statement: "It might feel private, safe, and convenient. But lawyers are bound to protect you. ChatGPT isn't, and can be used against you."
4. Using an AI for health or financial advice
While we're on the topic of guidance, let's hit two other categories where highly trained, licensed, and regulated professionals are available to give advice: healthcare and finance.
Look, it's probably fine to ask ChatGPT to explain a medical or financial concept to you as if you were a five-year-old. But when it comes time to ask for real advice that you plan on considering as you make major decisions, just don't.
Let's step away from the liability risk issues and focus on common sense. First, if you're using something like ChatGPT for real advice, you have to know what to ask. If you're not trained in these professions, you may not know.
Also: What Zuckerberg's 'personal superintelligence' sales pitch leaves out
Second, ChatGPT and other chatbots can be spectacularly, overwhelmingly, and almost unbelievably wrong. They misconstrue questions, fabricate answers, conflate concepts, and generally provide questionable advice.
Ask yourself: are you willing to bet your life or your financial future on something that a people-pleasing robot made up because it thought that's what you wanted to hear?
5. Presenting AI-generated work as your own
When you ask a chatbot to write something for you, do you claim it as your own? Some folks have told me that because they wrote the prompts, the resulting output is a product of their creativity.
Also: I found 5 AI content detectors that can correctly identify AI text 100% of the time
Yeah? Not so much. Webster's defines "plagiarize" as "to steal and pass off (the ideas or words of another) as one's own," and to "use (another's production) without crediting the source." The dictionary also defines plagiarize as "to commit literary theft: present as new and original an idea or product derived from an existing source."
Does that not sound like what a chatbot does? It sure does "present as new and original an idea…derived from an existing source." Chatbots are trained on existing sources. They then parrot back those sources after adding a bit of spin.
Let's be clear. Using an AI and claiming its output as your own could cost you your job.
6. Talking to customers without monitoring the chatter
The other day, I had a technical question about my Synology server. I filed a support ticket after hours. A little while later, I got an email response from a self-identified support AI. The cool thing was that the answer was complete and just what I needed, so I didn't have to escalate my ticket to a human helper.
Also: Is AI overhyped or underhyped? 6 tips to separate fact from fiction
But not all AI interactions with customers go that well. Even a year and a half later, I'm still chuckling about the Chevy dealer chatbot that offered a $55,000 Chevy Tahoe to a customer for a buck.
It's perfectly fine to offer a trained chatbot as one support option for customers. But don't assume it's always going to be right. Make sure customers have the option to talk with a human. And monitor the AI-enabled process. Otherwise, you could be giving away $1 trucks, too.
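"Monitoring the process" can be as simple as an escalation check that routes risky bot replies to a person before they ship. The sketch below is purely illustrative: the `needs_human` policy, the trigger list, and the confidence threshold are all assumptions, not a product recommendation.

```python
# A minimal sketch of a human-escalation guardrail for a support chatbot.
# The triggers and threshold below are assumptions for illustration only.
ESCALATION_TRIGGERS = ("$", "refund", "discount", "guarantee", "legally binding")

def needs_human(draft_reply: str, confidence: float) -> bool:
    """Route to a person when the bot sounds unsure or starts making deals."""
    if confidence < 0.8:          # low model confidence -> always escalate
        return True
    lowered = draft_reply.lower()
    return any(trigger in lowered for trigger in ESCALATION_TRIGGERS)

print(needs_human("Sure, the Tahoe is yours for $1. No takesies-backsies.", 0.99))
# -> True  (a price commitment should never ship unreviewed)
print(needs_human("Restart the NAS and reapply the latest update.", 0.95))
# -> False
```

The design point isn't the particular triggers; it's that some human-review gate sits between the model and the customer.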
7. Making final hiring and firing decisions
According to a survey by resume-making app Resume Builder, a majority of managers are using AI "to determine raises (78%), promotions (77%), layoffs (66%), and even terminations (64%)."
"Why are you firing me?"
"It's not my fault. The AI made me do it."
Yeah, that. Worse, apparently at least 20% of managers, most of whom haven't been trained in the rights and wrongs of AI usage, are using AIs to make final employment decisions without even bothering to supervise the AI.
Also: Open-source skills can save your career when AI comes knocking
But here's the rub. Jobs are often governed by labor laws. Despite the current anti-DEI push coming from Washington, bias can still lead to discrimination lawsuits. Even if you haven't technically done anything wrong, defending against a lawsuit can be expensive.
If you cause your company to be on the receiving end of a lawsuit because you couldn't be bothered to be human enough to double-check why your AI was canning Janice in accounting, you could be the next one handed a pink slip. Don't do it. Just say no.
8. Responding to journalists or media inquiries
I'll tell you a little secret. Journalists and writers don't exist solely to promote your company. We want to help, certainly. It feels good knowing we're helping folks grow their businesses. But, and you may want to sit down for this news, there are other companies.
We're also busy. I get thousands of emails every day. Hundreds of them are about the newest and by far most innovative AI company ever. Many of those pitches are AI-generated because the PR folks couldn't be bothered to take the time to focus their pitch. Some of them are so bad that I can't even tell what the PR people are trying to hawk.
But then, there's the other side. Sometimes, I'll reach out to a company, willing to spend my most valuable resource — time — on their behalf. When I get back a response that's AI-generated, I'll either move on to the next company (or mock them on social media).
Also: 5 entry-level tech jobs AI is already augmenting, according to Amazon
Some of these AI-driven answers are really, really inappropriate. However, because the AI is representing the company instead of, you know, maybe a thinking human, an opportunity is lost.
Keep in mind that I don't like publishing things that might cost someone their job. But other writers are not necessarily equally inclined. A properly run business will not only use a human to respond to the press, but will also limit the humans allowed to represent the company to those properly trained in what to say.
Or go ahead and cut corners. I always need fun fodder for my Facebook feed.
9. Using AI for coding without a backup
Earlier, I wrote "9 programming tasks you shouldn't hand off to AI," which detailed programming tasks you should avoid passing along to an AI. I've long been nervous about ceding too much responsibility to an AI, and quite concerned about managing codebase maintenance.
But I didn't really understand how far stupid could go when it came to delegating coding responsibility to the AI. I mean, yes, I know AIs can be stupid. And I sure know humans can be stupid. But when AIs and humans work in tandem to advance the cause of their stupidity together, the results can be truly awe-inspiring.
In "Bad vibes: How an AI agent coded its way to disaster," my ZDNET colleague Steven Vaughan-Nichols wrote about a developer who happily vibe-coded his way to an almost-complete piece of software. First, the AI hard-coded lies about how the unit tests performed. Then the AI deleted his entire codebase.
It's not necessarily wrong to use AI to help you code. But if you're using a tool that can't be backed up, or you don't bother to back up your code first, you're simply doing your best to earn a digital Darwin Award.
Bonus: More examples of what not to do
Here's a lightning round of boneheaded moves using AI. They're just too good (and by good, I mean bad) not to recount:
- Letting a chatbot manage job applicant data: Remember how we told you not to use an AI for hiring and firing? McDonald's uses a chatbot to screen applicants. Apparently, the chatbot exposed millions of applicants' personal information to a hacker who used the password 123456.
- Replacing support staff with an AI, and gloating: The CEO of e-commerce platform Dukaan terminated 90% of his support staff and replaced them with an AI. Then he bragged about it. On Twitter/X. The public response was less than positive. Way less.
- Publishing a reading list consisting of entirely fake titles: The Chicago Sun-Times, generally a well-respected paper, printed a summer reading list generated by an AI. The gotcha? None of the books were real.
- Suggesting terminated employees turn to a chatbot for comfort: An Xbox producer (yes, that's Microsoft) suggested that ChatGPT or Copilot could "help reduce the emotional and cognitive load that comes with job loss" after Microsoft terminated 9,000 employees. Achievement unlocked.
What about you? Have you seen an AI go off the rails at work? Have you ever been tempted to delegate a task to a chatbot that, in hindsight, probably needed a human touch? Do you trust AI to handle sensitive data, communicate with customers, or make decisions that affect people's lives? Where do you draw the line in your work? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.





