Hugging Face, the GitHub of AI, hosted code that backdoored user devices

From: lol <lol@apple.com>
Subject: Hugging Face, the GitHub of AI, hosted code that backdoored user devices
Message-ID: <faa1b4aff424eabe7e7d85f5abb49014@dizum.com>
Date: 2024-03-05 22:27 +0100
Newsgroups: comp.ai.neural-nets, comp.ai.philosophy, misc.phone.mobile.iphone, talk.politics.guns, talk.politics.misc
Organization: dizum.com - The Internet Problem Provider

Code uploaded to AI developer platform Hugging Face covertly installed 
backdoors and other types of malware on end-user machines, researchers 
from security firm JFrog said Thursday in a report that’s a likely 
harbinger of what’s to come.

In all, JFrog researchers said, they found roughly 100 submissions that 
performed hidden and unwanted actions when they were downloaded and loaded 
onto an end-user device. Most of the flagged machine learning models—all 
of which went undetected by Hugging Face—appeared to be benign proofs of 
concept uploaded by researchers or curious users. JFrog researchers said 
in an email that 10 of them were “truly malicious,” performing actions 
that compromised users’ security when loaded.
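
The article does not reproduce the payloads, but attacks of this kind 
typically abuse Python's pickle format, which many PyTorch checkpoints 
use: an object's __reduce__ hook lets a file specify an arbitrary 
callable for the unpickler to invoke on load. A minimal, harmless 
sketch of the mechanism (the payload here is just a print call, not 
the real malware):

    import pickle

    class NotAModel:
        # __reduce__ tells pickle how to rebuild this object; a
        # malicious file can return any callable plus its arguments.
        def __reduce__(self):
            return (print, ("arbitrary code ran during unpickling",))

    blob = pickle.dumps(NotAModel())
    pickle.loads(blob)  # the print fires merely by loading the blob

A real payload swaps the print for an os.system call or a socket 
connection, which is the reverse-shell behavior described next.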

Full control of user devices
One model drew particular concern because it opened a reverse shell that 
gave a remote device on the Internet full control of the end user’s 
device. When JFrog researchers loaded the model into a lab machine, the 
submission indeed opened a reverse shell but took no further action.
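
One way to vet a submission like this without detonating it is to 
disassemble the pickle stream and look at what it would import. A 
rough sketch using the standard-library pickletools module; the 
suspect-module list is illustrative, not exhaustive:

    import pickletools

    SUSPECT = {"os", "posix", "nt", "subprocess", "socket", "builtins"}

    def scan(path):
        # Walk the opcodes without executing anything; GLOBAL and
        # STACK_GLOBAL are how a pickle names things to import.
        hits, strings = [], []
        with open(path, "rb") as f:
            for opcode, arg, _pos in pickletools.genops(f):
                if isinstance(arg, str):
                    strings.append(arg)      # STACK_GLOBAL reads these
                if opcode.name == "GLOBAL":  # arg looks like "os system"
                    if arg.split()[0].split(".")[0] in SUSPECT:
                        hits.append(arg)
                elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                    if strings[-2].split(".")[0] in SUSPECT:
                        hits.append(strings[-2] + " " + strings[-1])
        return hits

Scans like this are heuristic, since obfuscated payloads can stage 
their imports indirectly, so isolated dynamic analysis of the kind 
JFrog performed remains necessary.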

That restraint, together with the IP address of the remote device and 
the existence of identical shells connecting elsewhere, raised the 
possibility that the submission was also the work of researchers. An 
exploit that opens a device to such
tampering, however, is a major breach of researcher ethics and 
demonstrates that, just like code submitted to GitHub and other developer 
platforms, models available on AI sites can pose serious risks if not 
carefully vetted first.
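
Vetting can also happen at load time. A defensive sketch, assuming a 
PyTorch workflow and placeholder filenames: prefer the safetensors 
format, which stores only tensor data, and pass weights_only=True 
when a pickle checkpoint is unavoidable:

    import torch
    from safetensors.torch import load_file

    # A safetensors file is raw tensors plus a JSON header; loading
    # it has no code-execution path at all.
    weights = load_file("model.safetensors")

    # weights_only=True swaps in a restricted unpickler that rejects
    # arbitrary callables, so a __reduce__ payload raises an error
    # instead of running.
    state = torch.load("model.pt", weights_only=True)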

“The model’s payload grants the attacker a shell on the compromised 
machine, enabling them to gain full control over victims’ machines through 
what is commonly referred to as a ‘backdoor,’” JFrog Senior Researcher 
David Cohen wrote. “This silent infiltration could potentially grant 
access to critical internal systems and pave the way for large-scale data 
breaches or even corporate espionage, impacting not just individual users 
but potentially entire organizations across the globe, all while leaving 
victims utterly unaware of their compromised state.”

https://arstechnica.com/security/2024/03/hugging-face-the-github-of-ai-hosted-code-that-backdoored-user-devices/
