As the standoff between the United States government and Minnesota continues over immigration enforcement operations that have essentially occupied the Twin Cities and other parts of the state, a federal judge this week delayed a ruling and ordered additional briefing on whether the Department of Homeland Security is using armed raids to pressure Minnesota into abandoning its sanctuary policies for immigrants.
Meanwhile, within minutes of a federal immigration officer shooting and killing 37-year-old Alex Pretti in Minneapolis last Saturday, Trump administration officials and right-wing influencers had mounted a smear campaign, calling Pretti a “terrorist” and a “lunatic.”
As part of its surveillance dragnet, Immigration and Customs Enforcement has been using an AI-powered Palantir system since last spring to summarize tips sent to its tip line, according to a newly released Homeland Security document. DHS immigration agents have also been using the now notorious face recognition app Mobile Fortify to scan the faces of countless people in the US, including many citizens. A new ICE filing, meanwhile, offers insight into how the government is increasingly eyeing commercial tools, including ad tech and big-data analysis platforms, for law enforcement and surveillance. And an active-duty military officer broke down federal immigration enforcement actions in Minneapolis and around the US for WIRED, concluding that ICE is masquerading as a military force while relying on immature tactics that would get real soldiers killed.
WIRED this week published an extensive look inside the workings of a scam compound in the Golden Triangle region of Laos, after a human trafficking victim calling himself Red Bull spent months communicating with a WIRED reporter and leaked a massive trove of internal documents from the compound where he was being held. Crucially, WIRED also chronicled Red Bull’s own experiences as a forced laborer in the compound and his attempts to escape.
“Nudify” tools and other technology for producing sexual deepfakes are getting increasingly sophisticated, capable, and easy to access, posing growing risks to the millions of people abused with the technology. Plus, researchers found this week that an AI stuffed animal toy from Bondu had a web console that was almost entirely unprotected, exposing 50,000 logs of kids’ chats with the toy to anyone with a Gmail account.
And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
According to a document released by the Department of Justice on Friday, an informant told the FBI in 2017 that Jeffrey Epstein had a “personal hacker.” The document, first reported by TechCrunch, was released as part of a large trove of material the DOJ is legally required to make public related to its investigation of the late sex offender. The document does not identify the alleged hacker, but it includes some details: They were allegedly born in Calabria, in southern Italy, and their hacking focused on finding vulnerabilities in Apple’s iOS mobile operating system, BlackBerry devices, and the Firefox browser. The informant told the FBI that the hacker “was very good at finding vulnerabilities.”
The hacker allegedly developed offensive hacking tools, including exploits for unknown or unpatched vulnerabilities, and sold them to several governments, including those of an unnamed central African country, the UK, and the US. The informant even reported to the FBI that the hacker sold an exploit to Hezbollah and received “a trunk of cash” as payment. It is unclear whether the informant’s account is accurate or whether the FBI verified it.
The viral AI assistant OpenClaw, previously called Clawdbot and then, briefly, Moltbot, has taken Silicon Valley by storm this week. Technologists are handing the assistant control of their digital lives, connecting it to online accounts and letting it complete tasks on their behalf. The assistant, as WIRED reported, runs on a personal computer, connects to other AI models, and can be given permission to access your Gmail, Amazon, and scores of other accounts. “I could basically automate anything. It was magical,” one entrepreneur told WIRED.
They haven’t been the only ones intrigued by the capable AI assistant. OpenClaw’s creators say more than 2 million people have visited the project in the last week. However, its agentic abilities come with security and privacy trade-offs, starting with the need to hand over access to online accounts, that likely make it impractical for many people to operate safely. As OpenClaw has grown in popularity, security researchers have identified “hundreds” of instances in which users exposed their installations to the open web, the Register reported; several had no authentication at all, granting full access to the user’s system.