
Digital Impersonation Fraud: A Growing Challenge for Brands – Source: securityboulevard.com


Source: securityboulevard.com – Author: Nathan Eddy

Website impersonation scams are on the rise, and many companies are struggling to counter them effectively, according to a Memcyco and Global Surveyz Research survey.

The majority (53%) of companies said their current cybersecurity measures do not stop these scams, and 41% said their protection is only partial, according to the report. Just 6% of respondents said they believe their organization has an effective solution, despite 87% recognizing website impersonation as a significant issue and 69% experiencing such attacks.

The report revealed that while nearly three-quarters (72%) of companies monitor for fake websites, 66% rely on customer reports to detect these attacks and 37% find out through social media complaints.

Beyond Scan and Takedown Tactics

Current solutions are usually of the “scan and takedown” variety, explained Israel Mazin, CEO of Memcyco. They scan the web periodically for lookalike domains. When they detect such a site, they issue a takedown request to the relevant registrar.
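As a rough illustration of the “scan” half of that model (not Memcyco’s implementation, and far simpler than commercial permutation engines), a minimal sketch in Python might generate a few common lookalike permutations of a brand name and check whether they currently resolve. The brand name, TLD list, and permutation rules below are assumptions made for the example.

```python
# Minimal sketch of the "scan" step in a scan-and-takedown workflow
# (illustrative only; real services use richer permutation engines,
# certificate-transparency feeds, and registrar takedown workflows).
import socket

BRAND = "example"            # hypothetical brand name
TLDS = ["com", "net", "co"]  # a few TLDs to probe

def lookalike_candidates(brand: str) -> list[str]:
    """Generate a handful of common typo/homoglyph permutations."""
    swaps = {"o": "0", "l": "1", "e": "3", "i": "1"}
    candidates = {brand + brand[-1]}  # doubled last letter, e.g. "examplee"
    for src, dst in swaps.items():
        if src in brand:
            candidates.add(brand.replace(src, dst, 1))
    return sorted(candidates)

def resolves(domain: str) -> bool:
    """Return True if the domain currently resolves to an IP address."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    for name in lookalike_candidates(BRAND):
        for tld in TLDS:
            domain = f"{name}.{tld}"
            if resolves(domain):
                # In a real pipeline this hit would be reviewed and a takedown
                # request sent to the registrar; here we just print it.
                print("possible lookalike:", domain)
```

Because such scans run periodically rather than continuously, any site registered between scans stays live until the next pass detects it and the registrar acts, which is the gap Mazin describes next.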

Because they don’t operate in real time, these services don’t protect the company’s customers from falling into impersonation traps before the site is taken down. Nor do they give the organization any visibility into attacks and victims. “They don’t know when a digital impersonation attack took place, what its magnitude was, and which customers were actually hit,” Mazin said.


In addition, customers are not protected in any way when they visit an impersonating site. This “window of exposure” lasts from the moment the offending site goes up until it is taken down – and it continues even after takedown, while stolen data still lurks on the darknet, waiting to be used in future attacks.

Mazin recommended organizations adopt digital impersonation protection platforms with real-time detection capabilities and alerts when a customer visits a fake site. “These solutions should have active customer protection methods that prevent customers from falling into phishing traps in the first place and protect their personal information in case they inadvertently fall into such a trap,” Mazin said.

Using AI to Fight Impersonation Fraud

AI-based techniques could enable a “fight fire with fire” strategy against bad actors, Mazin asserted. The bad actors are themselves using AI to launch more sophisticated attacks, which are harder to detect and act upon.

From Mazin’s perspective, AI will help organizations fight threats by helping them recognize fake emails, text messages, and voice calls for what they are. It will also allow organizations to analyze the huge volume of security alerts they receive, infer from them what exactly happened, and prioritize remediation tasks.

The technology could improve the ability to detect anomalies by comparing new incoming data to large existing data sets. This would allow for faster, earlier, and more accurate decisions about potential threats. “The potential impact of these attacks is not only on customer trust and brand reputation, but also—and some would say more importantly—on financial metrics,” Mazin said.
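A minimal sketch of the kind of baseline comparison described here, assuming a simple z-score over historical event counts (illustrative only, not any vendor’s detection model; the password-reset numbers are hypothetical):

```python
# Illustrative anomaly check: flag a new observation that deviates strongly
# from a historical baseline (a simple z-score, not any vendor's model).
from statistics import mean, stdev

def is_anomalous(history: list[float], new_value: float, threshold: float = 3.0) -> bool:
    """Return True if new_value lies more than `threshold` standard
    deviations away from the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Example: hourly counts of password-reset requests (hypothetical numbers)
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_anomalous(baseline, 95))  # True  -> worth investigating
print(is_anomalous(baseline, 15))  # False -> within normal variation
```

Production systems would of course use far larger data sets and learned models rather than a fixed threshold, but the principle of scoring new data against an established baseline is the same.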

For example, one could look at the increase in budgets set aside for customer reimbursement, as well as at the growing cost of large incident-handling teams (customer support, fraud, SOC). “Last but not least, in addition to customer trust, brand reputation and financial issues, companies also face the danger of regulatory non-compliance,” Mazin added.

An Evolving Regulatory Landscape

The report also found nearly half (48%) of companies are aware of potential regulations that may require them to reimburse customers.

As Mazin noted, the regulatory landscape is already changing, with the goal of giving more power to customers rather than organizations in cases of digital impersonation fraud. The UK has already approved regulation that holds businesses accountable for reimbursing customers affected by this type of fraud, he pointed out, and similar legislation is under way in other countries, including the U.S.

“These developments can cost organizations a lot of money – unless they implement solutions that help combat digital impersonation fraud in real-time,” Mazin said.

The future of digital impersonation will include increased use of AI and deepfakes to create highly convincing impersonations of individuals and brands – realistic videos, voice clones, and text that mimics legitimate communications, Mazin predicted. Other evolutions include enhanced social engineering techniques, which use personalized information gathered from social media and other sources to craft targeted and convincing scams.

Another threat is the proliferation of instant fake websites and phishing campaigns, which can be set up and dismantled quickly, making them difficult to detect and combat. “To stay ahead of such threats, organizations should adopt advanced solutions that allow detection, protection and response in real-time,” Mazin said.

Photo credit: Hartono on Unsplash


Original Post URL: https://securityboulevard.com/2024/05/digital-impersonation-fraud-a-growing-challenge-for-brands/

Category & Tags: Cyberlaw,Data Privacy,Featured,Governance, Risk & Compliance,News,Security Boulevard (Original),Social – Facebook,Social – X,Threats & Breaches,AI,brand marketing,digital impersonation,Phishing,website impersonation – Cyberlaw,Data Privacy,Featured,Governance, Risk & Compliance,News,Security Boulevard (Original),Social – Facebook,Social – X,Threats & Breaches,AI,brand marketing,digital impersonation,Phishing,website impersonation


