
ChatGPT can take users to malicious websites

Because generative AI can hallucinate, producing inaccurate or misleading information, users are encouraged to check its sources. As it turns out, that can also lead to trouble. According to a report Futurism published Monday, ChatGPT can provide users with links to websites that host malware.

The discovery came during a test of ChatGPT’s knowledge of current events. When asked about William Goines, a Bronze Star recipient and the first Black member of the Navy SEALs, who recently passed away, ChatGPT’s response included a link to a “scam website,” Futurism reports.

Specifically, ChatGPT (running the GPT-4o model) suggested visiting a website called “County Local News” for more information about Goines. However, the site immediately generated fake pop-up warnings that, if clicked, would infect the user’s computer with malware. Similar sites were suggested for other topics as well.

When Futurism’s Goines prompt was tested again, ChatGPT’s response did not include a link to any website.

AI developers have invested heavily in combating hallucinations and the malicious use of chatbots, but linking out to other sites introduces an additional risk: a linked site may be legitimate and safe when the AI company collects its data, only to be compromised or hijacked by scammers later.

Outgoing links need to be constantly checked, according to Jacob Kalvo, co-founder and CEO of data and privacy provider Internet Live Proxy.

Kalvo said:

“Developers can ensure that appropriate filtering mechanisms are in place to prevent chatbots from linking to malicious websites. This can be complemented by advanced natural language processing (NLP) algorithms, through which a chatbot can be trained to identify URLs against known patterns of malicious URLs. Furthermore, it is important to maintain a blacklist of websites that is constantly updated and monitored for new threats.”
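
To illustrate the kind of filtering Kalvo describes, here is a minimal sketch of a link filter that checks every URL in a chatbot reply against a blocklist of domains and known-bad patterns before the reply is shown. The domains, patterns, and function names are hypothetical examples, not any vendor's actual blocklist or OpenAI's implementation.

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist. In practice this would be fed by a continuously
# updated threat-intelligence source, as Kalvo describes.
BLOCKED_DOMAINS = {"county-local-news.example", "malware-site.example"}
SUSPICIOUS_PATTERNS = [
    re.compile(r"-local-news\."),  # illustrative pattern only
]

URL_RE = re.compile(r"https?://[^\s\"'<>)]+")

def find_links(text: str) -> list[str]:
    """Pull every http(s) URL out of a chatbot response."""
    return URL_RE.findall(text)

def is_blocked(url: str) -> bool:
    """Return True if the URL's domain is blocklisted or matches a known-bad pattern."""
    domain = urlparse(url).netloc.lower()
    if domain in BLOCKED_DOMAINS:
        return True
    return any(p.search(domain) for p in SUSPICIOUS_PATTERNS)

def filter_response(response: str) -> str:
    """Replace any blocklisted link with a placeholder before showing the reply."""
    for url in find_links(response):
        if is_blocked(url):
            response = response.replace(url, "[link removed: failed safety check]")
    return response

if __name__ == "__main__":
    reply = "Read more at https://county-local-news.example/article"
    print(filter_response(reply))
```

A static list like this is only a starting point; the approach Kalvo outlines pairs it with pattern-based detection and ongoing monitoring so newly compromised domains are caught between list updates.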

Kalvo also recommends verifying website links and domain reputation, along with real-time monitoring to quickly identify and address any suspicious activity. He adds:

“This requires continuous collaboration with cybersecurity experts to stay ahead of new threats as they emerge. Only by combining AI with human capabilities can developers create a much safer environment for users.”

Kalvo also emphasized the need to carefully curate the data used to train AI models so that harmful content is not ingested and reproduced, as well as the need for regular audits and updates to maintain data integrity.

When contacted about the report, OpenAI gave a response similar to the one it gave Futurism, saying that it is working with news publishing partners to incorporate “conversational capabilities into their breaking news content, ensuring appropriate attribution,” but that the feature is not yet available.

Minh Anh

According to Decrypt
