AI tool or scammers’ playground? ChatGPT exploited for fraudulent activities: Expert


Despite ChatGPT’s potential to benefit daily life, including helping users craft well-written texts and letters, some of its users are exploiting it to target internet users for personal information and monetary gain.

Since its launch in November of last year, ChatGPT has rapidly gained widespread popularity among users across the world. However, several reports have emerged in recent weeks suggesting that scammers are misusing the language-based AI platform.


Criminals can feed examples of text from entities or people they seek to emulate, and ChatGPT is equipped to create convincing messages based on them.

ChatGPT is being used by scammers to create phishing attacks and write convincing scam emails, leading to a rise in concerns among the public and cybersecurity experts about identifying these scams before people fall prey to them.

But this concept is not a new one, said Maher Yamout, Senior Security Researcher at Kaspersky.

AI has been used by scammers for some time now, especially through deepfakes: synthetic media generated using AI techniques such as machine learning and neural networks to create video or audio clips that appear to show a person saying or doing something they never actually said or did.

“Scammers, and bad actors in general, like to use technologies to give them an advantage when it comes to creating things that may take time or that may require a long process that is more complex, making things a bit easier for them,” Yamout told Al Arabiya English on the sidelines of the GISEC cybersecurity conference in Dubai this week.

“Now, that doesn’t mean that ChatGPT can create a virus for a phone or a bulletproof phishing campaign on its own. It has its limitations.”

If a scammer or hacker is trying to create an algorithm or a virus, they would first need to know what they are aiming to do. Using ChatGPT requires human intervention because it is a query-based platform.

“ChatGPT will give them the advantage of time, making them more efficient by expediting the process of making complex stuff, but it cannot create things on its own, it needs a human on top of it to make sanity checks and to make sure that the process of what they are doing is bringing about the results they are looking for.”

One of the features of ChatGPT that has been exploited by criminals is its coding capabilities. It can adapt to different programming languages and generate pieces of code that can be used in apps. This means that amateur programmers can generate code for malicious tasks without even learning how to code.

Using AI to identify AI-generated text

As ChatGPT’s popularity grows, it is important to be cautious when receiving texts or emails that appear unusually well-written.

Traditionally, people have been advised to look out for poorly written texts and emails, but the emergence of AI tools like ChatGPT has changed the game. To avoid phishing attacks, it is advisable to delete any unsolicited correspondence and verify any communication from companies or government agencies by calling them directly.

One way to identify these scams is to use ChatGPT itself to identify AI-generated text, said Yamout.

“You can use ChatGPT or AI itself to detect AI-based text. Apparently, it does work, so you can use it to detect AI-based text in terms of context and language. But at the end of the day, we need to know that language-based models, such as ChatGPT, have limitations in the sense that it’s a query-based system, so you ask them a question and they reply back to you. It’s not like they can interact with the real world.”

When asked whether it was aware that it may be helping scammers without realizing it, ChatGPT responded: “Yes, I am aware that scammers can use my responses to try to manipulate or deceive people. As an AI language model, I do not have the ability to distinguish between genuine and fraudulent inquiries or intentions. It is important for users to be cautious and use critical thinking when interacting with information online, including information provided by AI language models like myself.”

It also went on to suggest ways to identify “red flags.”

“There are certain red flags that users can look out for to identify potential scams or fraudulent activities, such as unsolicited messages asking for personal or financial information, requests for payment or wire transfers, or promises of large sums of money in exchange for a small investment. If you suspect that you are being targeted by a scam, it is important to report it to the appropriate authorities and take steps to protect your personal information and finances,” ChatGPT said in a conversation.

OpenAI, the developer of ChatGPT, has expressed its commitment to preventing misuse of the tool. It has created the Classifier tool, which helps differentiate between AI-written and human-written text.

However, as AI technology advances, it is imperative to stay up-to-date on its capabilities and potential risks.

The deepfake threat

Yamout said that he believes deepfakes are still the most prominent threat when it comes to AI-assisted scams.

“The problem is with AI-based systems that are able to mimic or impersonate the voice or video of a person; that can be a bit tricky to identify.”

However, Yamout added that some key anomalies can be identified in deepfakes.

“AI usually has issues with eyes, ears and fingers so you can detect anomalies in the picture and you can even detect anomalies in the videos, so it can be like someone trying to repeat themselves in sound or behavior.”

Scammers can use deepfakes to mimic someone’s voice, call their bank and eventually steal money.

“I think the trickiest part will be the voice and that is because you cannot see or sense it, so then it will be difficult catching the mistakes on the phone.”

Read more:

AI Breakthrough: ChatGPT can almost pass US Medical Licensing Exam, study finds

ChatGPT app update more ‘human-like’: Company

OpenAI’s ChatGPT Plus now available in UAE for $20 per month
