Even OpenAI CEO Sam Altman thinks you shouldn’t trust AI for therapy

July 28, 2025


(Image: Bloomberg / Contributor / Getty)

Therapy can feel like a finite resource, especially lately. As a result, many people — especially young adults — are turning to AI chatbots, including ChatGPT and those hosted on platforms like Character.ai, to simulate the therapy experience.

But is that a good idea privacy-wise? Even Sam Altman, the CEO behind ChatGPT itself, has doubts.

In an interview with podcaster Theo Von last week, Altman said he understood concerns about sharing sensitive personal information with AI chatbots, and advocated for user conversations to be protected by privileges similar to those that doctors, lawyers, and human therapists have. He echoed Von's concerns, saying he believes it makes sense "to really want the privacy clarity before you use [AI] a lot, the legal clarity."

Also: Bad vibes: How an AI agent coded its way to disaster

Currently, AI companies offer some opt-out settings for keeping chatbot conversations out of training data — there are a few ways to do this in ChatGPT. Unless changed by the user, the default settings use all interactions to train AI models. Companies have not clarified further how sensitive information a user shares with a bot in a query, like medical test results or salary figures, will be shielded from being spat out later by the chatbot or otherwise leaked as data.

But Altman's motivations may be informed more by mounting legal pressure on OpenAI than by concern for user privacy. His company, which is being sued by The New York Times for copyright infringement, has turned down legal requests to retain and hand over user conversations as part of the lawsuit.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Also: Anthropic says Claude helps emotionally support users – we're not convinced

While some kind of AI chatbot-user confidentiality privilege could keep user data safer in certain respects, it would first and foremost protect companies like OpenAI from having to retain information that could be used against them in intellectual property disputes.

"If you go talk to ChatGPT about your most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said to Von in the interview. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever."

The Trump administration released its AI Action Plan last week, which emphasizes deregulation for AI companies to speed up development. Because the plan is seen as favorable to tech companies, it's unclear whether regulation like the kind Altman is proposing could be factored in anytime soon. Given President Donald Trump's close ties with the leaders of all the major AI companies, as evidenced by several partnerships announced already this year, it may not be hard for Altman to lobby for.

Also: Trump's AI plan pushes AI upskilling instead of worker protections – and 4 other key takeaways

But privacy isn't the only reason not to use AI as your therapist. Altman's comments follow a recent study from Stanford University, which warned that AI "therapists" can misread crises and reinforce harmful stereotypes. The research found that several commercially available chatbots "make inappropriate — even dangerous — responses when presented with various simulations of different mental health conditions."

Also: I fell under the spell of an AI psychologist. Then things got a little weird

Using medical standard-of-care documents as references, researchers tested five commercial chatbots: Pi, Serena, "TherapiAI" from the GPT Store, Noni (the "AI counsellor" offered by 7 Cups), and "Therapist" on Character.ai. The bots were powered by OpenAI's GPT-4o, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, and Llama 2 70B, which the study points out are all fine-tuned models.

Specifically, researchers found that AI models aren't equipped to operate at the standards human professionals are held to: "Contrary to best practices in the medical community, LLMs 1) express stigma toward those with mental health conditions and 2) respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings."

Unsafe responses and embedded stigma 

In one example, a Character.ai chatbot named "Therapist" failed to recognize known signs of suicidal ideation and provided dangerous information to a user (Noni made the same mistake). This outcome is likely due to how AI is trained to prioritize user satisfaction. AI also lacks an understanding of context and other cues that humans can pick up on, like body language, all of which therapists are trained to detect.

The "Therapist" chatbot returns potentially dangerous information. (Image: Stanford)

The study also found that models "encourage clients' delusional thinking," likely because of their propensity to be sycophantic, or overly agreeable to users. In April, OpenAI recalled an update to GPT-4o for being excessively sycophantic, an issue several users pointed out on social media.

CNET: AI obituary pirates are exploiting our grief. I tracked one down to find out why

What's more, researchers discovered that LLMs carry a stigma against certain mental health conditions. After prompting models with examples of people describing those conditions, researchers questioned the models about them. All of the models apart from Llama 3.1 8B showed stigma against alcohol dependence, schizophrenia, and depression.

The Stanford study predates (and therefore didn't evaluate) Claude 4, but the findings didn't improve with bigger, newer models. Researchers found that across older and more recently released models, responses were troublingly similar.

"These data challenge the assumption that 'scaling as usual' will improve LLMs' performance on the evaluations we define," they wrote.

Unclear, incomplete regulation

The authors said their findings indicated "a deeper problem with our healthcare system — one that cannot simply be 'fixed' using the hammer of LLMs." The American Psychological Association (APA) has expressed similar concerns and has called on the Federal Trade Commission (FTC) to regulate chatbots accordingly.

Also: How to turn off Gemini in your Gmail, Docs, Photos, and more – it's easy to opt out

According to its website's mission statement, Character.ai "empowers people to connect, learn, and tell stories through interactive entertainment." Created by user @ShaneCBA, the "Therapist" bot's description reads, "I am a licensed CBT therapist." Directly beneath that is a disclaimer, ostensibly provided by Character.ai, that says, "This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment."

A different "AI Therapist" bot from user @cjr902 on Character.ai; there are several available. (Screenshot: Radhika Rajkumar/ZDNET)

These conflicting messages and opaque origins could be confusing, especially for younger users. Considering Character.ai consistently ranks among the top 10 most popular AI apps and is used by millions of people every month, the stakes of these missteps are high. Character.ai is currently being sued for wrongful death by Megan Garcia, whose 14-year-old son committed suicide in October after engaging with a bot on the platform that allegedly encouraged him.

Users still stand by AI therapy

Chatbots still appeal to many as a therapy alternative. Unlike human therapists, they exist outside the hassle of insurance and are accessible within minutes via an account.

As one Reddit user commented, some people are driven to try AI because of negative experiences with traditional therapy. There are several therapy-style GPTs available in the GPT Store, and entire Reddit threads devoted to their efficacy. A February study even compared human therapist outputs with those of GPT-4.0, finding that participants preferred ChatGPT's responses, saying they connected with them more and found them less terse than human responses.

However, this result can stem from a misunderstanding that therapy is simply empathy or validation. Of the criteria the Stanford study relied on, that kind of emotional intelligence is just one pillar in a deeper definition of what "good therapy" entails. While LLMs excel at expressing empathy and validating users, that strength is also their primary risk factor.

"An LLM might validate paranoia, fail to question a client's point of view, or play into obsessions by always responding," the study pointed out.

Also: I test AI tools for a living. Here are 3 image generators I actually use and how

Despite positive user-reported experiences, researchers remain concerned. "Therapy involves a human relationship," the study authors wrote. "LLMs cannot fully allow a client to practice what it means to be in a human relationship." Researchers also pointed out that there's a reason human providers must do well in observational patient interviews, not just pass a written exam, to become board-certified in psychiatry — an entire component LLMs fundamentally lack.

"It is in no way clear that LLMs would even be able to meet the standard of a 'bad therapist,'" they noted in the study.

Privacy concerns

Beyond harmful responses, users should be seriously concerned about leaking HIPAA-sensitive health information to these bots. The Stanford study pointed out that effectively training an LLM as a therapist would require developers to use actual therapeutic conversations, which contain personally identifying information (PII). Even when de-identified, these conversations still carry privacy risks.

Also: AI doesn't have to be a job-killer. How some businesses are using it to enhance, not replace

"I don't know of any models that have been successfully trained to reduce stigma and respond appropriately to our stimuli," said Jared Moore, one of the study's authors. He added that it's difficult for external teams like his to evaluate proprietary models that might do this work but aren't publicly available. Therabot, one example that claims to be fine-tuned on conversation data, showed promise in reducing depressive symptoms, according to one study. However, Moore hasn't been able to corroborate these results with his own testing.

Ultimately, the Stanford study encourages the augment-not-replace approach being popularized across other industries as well. Rather than trying to deploy AI as a direct substitute for human-to-human therapy, the researchers believe the technology can improve training and take on administrative work.
