AI chatbots with web browsing can be abused as malware relays, according to a Check Point Research demo. Instead of malware calling home to a traditional command server, it can use a chatbot’s URL fetching to pull instructions from a malicious page, then carry the response back to the infected machine.

In many environments, traffic to major AI destinations is already treated as routine, which can let command-and-control fade into normal web use. The same path can also be used to move data out.

Microsoft addressed the work in a statement and framed it as a post-compromise communications issue. It said that once a device is compromised, attackers will try to use whatever services are available, including AI-based ones, and it urged defense-in-depth controls to prevent infection and limit what an attacker can do afterward.

The demo turns chat into a relay

The concept is straightforward. The malware prompts the AI’s web interface to fetch a URL and summarize what it finds, then scrapes the returned text for an embedded instruction.

Check Point said it tested the technique against Grok and Microsoft Copilot through their web interfaces. A key detail is access: the flow is designed to avoid developer APIs, and in the tested scenarios it can work without an API key, lowering friction for misuse.

For data theft, the mechanism can run in reverse. One method outlined is to place data in URL query parameters, then rely on the AI-triggered request to deliver it to adversary infrastructure. Basic encoding can further obscure what’s being sent, which makes simple content filtering less reliable.
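A minimal sketch of that masking effect, using made-up values and a hypothetical destination: a keyword-style filter that would catch the raw text no longer matches once the payload is base64-encoded into a query parameter, while a simple length-and-entropy heuristic can still flag the opaque value. The names, thresholds, and URL below are illustrative, not details from the research.

import base64
import math
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical sensitive text an infected host might try to move out.
secret = "hostname=FINANCE-PC-07;user=jsmith;domain=corp.local"

# The plain string would trip a keyword filter; a base64 layer hides it.
encoded = base64.urlsafe_b64encode(secret.encode()).decode()
url = "https://example-attacker.test/page?" + urlencode({"q": encoded})

def keyword_filter(u: str) -> bool:
    # Naive content check: flag only if obvious keywords appear in the URL.
    return any(word in u.lower() for word in ("hostname", "user", "domain"))

def entropy(s: str) -> float:
    # Shannon entropy in bits per character.
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def looks_encoded(u: str, min_len: int = 40, min_entropy: float = 4.0) -> bool:
    # Heuristic: long, high-entropy query values suggest an encoded payload.
    values = [v for vs in parse_qs(urlparse(u).query).values() for v in vs]
    return any(len(v) >= min_len and entropy(v) >= min_entropy for v in values)

print(keyword_filter(secret))  # True: the raw text is easy to catch
print(keyword_filter(url))     # expected False: encoding hides the keywords
print(looks_encoded(url))      # expected True: the long opaque value still stands out

The point of the sketch is the asymmetry: filters keyed to content break as soon as the content is wrapped, while structural signals such as parameter length and entropy survive the wrapping.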

Why it’s harder to spot

This isn’t a new malware class. It’s a familiar command-and-control pattern wrapped in a service many companies are actively enabling. If browsing-enabled AI services are left open by default, an infected system can try to hide behind domains that look low-risk.

Check Point also highlights how common the plumbing is. Its example uses WebView2, an embedded browser component present on modern Windows machines. In the described workflow, a program gathers basic host details, opens a hidden web view to an AI service, triggers a URL request, then parses the response to extract the next command. That can resemble ordinary app behavior, not an obvious beacon.

What security teams should do

Treat web-enabled chatbots like any other high-trust cloud app that can be abused after compromise. Where access is permitted, monitor for automation patterns: repeated URL loads, odd prompt cadence, or traffic volumes that don’t match human use.
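One way to look for that cadence signal, as a minimal sketch: the log format, domain list, and thresholds below are assumptions for illustration, and a real deployment would query SIEM or proxy data rather than an in-memory list. The idea is that a relay polling on a timer produces requests with very regular gaps, while human chat traffic does not.

from collections import defaultdict
from datetime import datetime
from statistics import pstdev

# Illustrative destination list and simplified proxy-log records:
# (timestamp, source host, destination domain).
AI_DOMAINS = {"copilot.microsoft.com", "grok.com"}

records = [
    ("2024-06-01T09:00:00", "wks-114", "copilot.microsoft.com"),
    ("2024-06-01T09:01:00", "wks-114", "copilot.microsoft.com"),
    ("2024-06-01T09:02:00", "wks-114", "copilot.microsoft.com"),
    # ... many more rows in a real log
]

def flag_machine_like(records, min_requests=20, max_jitter_seconds=2.0):
    # Flag hosts whose request timing to AI domains looks scripted:
    # high volume with near-constant intervals between requests.
    by_host = defaultdict(list)
    for ts, host, domain in records:
        if domain in AI_DOMAINS:
            by_host[host].append(datetime.fromisoformat(ts))

    flagged = []
    for host, times in by_host.items():
        times.sort()
        if len(times) < min_requests:
            continue
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        # Low spread in the gaps is the machine-like signature.
        if pstdev(gaps) <= max_jitter_seconds:
            flagged.append(host)
    return flagged

print(flag_machine_like(records))

Thresholds like these need tuning per environment; the useful takeaway is which features to collect (per-host request counts and inter-request timing to AI destinations), not the specific numbers.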

AI browsing features may belong on managed devices and specific roles, not every machine. The open question is scale: this is a demo, and it doesn’t quantify success rates against hardened fleets. What to watch next is whether providers add stronger automation detection in web chat, and whether defenders start treating AI destinations as potential post-compromise channels.
