Executive impersonation on social media is at an all-time high as threat actors take advantage of AI to improve and scale their attacks. In Q3, accounts impersonating high-ranking executives climbed to more than 54% of total impersonation volume on social media, surpassing brand attacks for the first time since Fortra began tracking this data. The volume and composition of these attacks strongly indicate they are crafted using generative AI.

AI and Social Media
In the second half of 2023, impersonation attacks as a whole grew to represent the number one threat type on social media, according to Fortra’s Social Media Protection solutions. These threats manifest as fraudulent stores, brand pages, and executive profiles, with fake executive profiles making up the majority of the volume (see graph above). Fake accounts are easily created via AI, with tools capable of generating the entire lifecycle of a scam.
Modern social impersonations are becoming more sophisticated and convincing. Executive threats are especially concerning, as today’s consumers often turn to social platforms to research organizations and engage directly with individuals seen as the “face” of the brand.
AI is adding another layer of risk. While advanced AI isn’t always financially feasible for scammers to deploy end-to-end, widely available tools make it easy to automate activity at scale. This leaves bad actors free to focus on creating accounts and conducting one-to-one interactions, amplifying the reach and credibility of their campaigns.
By automating much of the attack lifecycle, criminals can execute large-scale fraud with the same or greater efficiency than manual methods allow.
Executive Impersonation on Social Media
Threat actors exploit the trust people place in high-profile figures, using AI to generate realistic images or videos that promote offers, giveaways, or other enticements. These posts often include instructions to click a link or engage directly with the fake executive via direct message, manipulating victims into taking actions that compromise their security.
Below is an example of a fake executive account impersonating the CEO of a global financial institution. The profile contains a professional photo and “about” information that is a near-exact description of the executive.

Because AI can mimic imagery and speech, malicious pages and posts are often indistinguishable from those of the legitimate executive. This leaves brands scrambling to mitigate the fallout of an attack, which can vary widely given the broad user base and rapid pace of communication characteristic of social platforms.
The most critical components of executive impersonation attacks are the posts. If the actor’s ultimate intent is to communicate with the victim, any comments will be answered either by AI or by the actor directly. Each reply further legitimizes the account, and the user is then encouraged to move the conversation into direct messages.
Below is a fake executive account impersonating the CEO of a global financial institution. The profile contains links to bitcoin sites, intended to lure unsuspecting users into visiting the provided destinations on the recommendation of the fake executive.

The window for a social scam to succeed is short, and once the legitimacy of a post or direct message is questioned, the entire account can be discredited. To maximize impact, threat actors craft content that looks established and validated by others.
They often start with private accounts, allowing the profile to “age” without public scrutiny. AI is then used to generate additional fake accounts that comment on and interact with the original posts. By creating realistic copies of real people, these networks of accounts can mislead both users and automated security systems, amplifying the perceived credibility of the scam.
Executive Impersonation by Industry
Executive impersonation pages look different depending on the industry that is targeted. For instance, attacks on financial institutions focus on stealing credit card information and account data or taking money directly from victims’ accounts. Retail is a prime target for counterfeiting, as threat actors recognize that consumers increasingly research brands and product lines through the lens of influencers on social platforms. Other popular targets include cryptocurrency, computer software, and ecommerce.
Situational Risks for Executive Impersonation
Security teams need to track whether executives maintain an active social presence while simultaneously monitoring for impersonation attempts. Certain scenarios create opportunities for bad actors, often without the executive or brand realizing it:
Former executives with active old accounts: Profiles tied to previous roles can still display outdated logos and brand information, lending credibility to imposters who exploit the connection.
Executives without active profiles: When no legitimate profile exists, consumers have no reliable source for information, leaving a vacuum that impersonators can easily fill.
Proactive management of executive social presence—both current and former—can reduce confusion and limit opportunities for impersonation.
Identifying AI-Generated Content
AI-generated impersonations are hard to spot because AI mimics the subtle nuances of human behavior. Detection tools exist but are limited, especially for deepfake videos, and social platforms rarely catch them. For now, identifying these threats relies on human experts who understand both AI quirks and human communication.
AI identifiers include:
1. URLs for accounts with misspelled or manipulated executive names, such as the following (see the handle-matching sketch after this list):
- https://www.instagram.com/xxx_efra137/
- https://www.instagram.com/xxx_efra485/
- https://www.instagram.com/xxxx_fasrer_7878/
- https://www.instagram.com/xxxx_fr14/
- https://www.instagram.com/xxxx_fra116/
- https://www.instagram.com/xxxx_fra16/
2. Multiple accounts using the same photo of the executive:

3. A canned phrase in the profile of the account that is repeated in other accounts:

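To illustrate how a monitoring team might triage handles like those above, here is a minimal sketch, using only Python’s standard library, that scores how closely a normalized handle resembles an executive’s name. The executive name, example URLs, and threshold are hypothetical, and production detection would combine many additional signals such as profile photos and bios.

```python
import difflib
import re

def handle_similarity(profile_url: str, executive_name: str) -> float:
    """Score (0-1) how closely a profile handle resembles an executive's name."""
    # Keep only the account-name portion of the URL.
    handle = profile_url.rstrip("/").rsplit("/", 1)[-1]
    # Impersonators typically pad a recognizable name fragment with digits
    # and separators (e.g. "_fra116"), so strip those before comparing.
    normalized = re.sub(r"[\d_.\-]+", "", handle).lower()
    target = executive_name.replace(" ", "").lower()
    return difflib.SequenceMatcher(None, normalized, target).ratio()

# Hypothetical executive and candidate handles, for illustration only.
candidates = [
    "https://www.instagram.com/jane_d0e_ceo_77/",
    "https://www.instagram.com/janedoe.official.1/",
    "https://www.instagram.com/gardening_tips_daily/",
]
for url in candidates:
    score = handle_similarity(url, "Jane Doe")
    flag = "REVIEW" if score >= 0.6 else "ok"   # threshold tuned per program
    print(f"{flag:<6} {score:.2f}  {url}")
```
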
Other identifiers that security teams should look for:
Copy
- Posts and conversations that lack the emotion or empathy traditionally found in human speech
- English that is too perfect, with none of the errors or informalities typical of human writing
- Repetitiveness, such as giving the same response to different questions (more indicative of a bot, though AI is getting better at avoiding this)
- Inability to recognize or use common human idioms such as slang, shorthand, or buzzwords
- Plagiarism
Images/Videos
- Fuzziness or blurring around human figures
- Anomalies within the image
- Identifiers within the metadata, such as location and date/time created (see the metadata sketch after this list)
- Metadata showing that the image was edited by a program
- Repeating visual patterns or artistic styles that make images look too perfect
- Objects that appear too small or too large relative to the people in the image
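As a rough illustration of the metadata checks above, the following sketch uses the Pillow imaging library (a common third-party package, not part of the standard library) to pull a few EXIF fields from an image file. The file name is a placeholder, and many social platforms strip EXIF data on upload, so the presence or absence of metadata is only one signal among many.

```python
from PIL import Image          # third-party: pip install Pillow
from PIL.ExifTags import TAGS

FIELDS_OF_INTEREST = ("Software", "DateTime", "GPSInfo")

def inspect_exif(path: str) -> dict:
    """Return EXIF fields useful when vetting a suspicious image.

    'Software' is often set by editing programs, 'DateTime' records when the
    file was created or modified, and 'GPSInfo' indicates embedded location
    data. An empty result is common for AI-generated images and for photos
    re-uploaded through social platforms.
    """
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return {field: named[field] for field in FIELDS_OF_INTEREST if field in named}

# Hypothetical file name, for illustration only.
print(inspect_exif("suspect_profile_photo.jpg") or "no relevant EXIF metadata found")
```
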
Threat Creation
Threat actors can abuse legitimate AI tools to launch online threats just as easily as content creators can apply them to benign social media campaigns. Many AI-powered tools automate the generation of creatives and tasks on platforms such as Meta, LinkedIn, and X. These tools can connect social channels, turn content ideas into multiple posts, generate video and text, schedule campaigns, and more.
Examples of legitimate software that may be abused for malicious purposes:
- FeedHive
- Vista Social
- Buffer
- Flick
- Publer
AI chatbots are also broadly advertised across dark web forums as uncensored large language models capable of bypassing the restrictions imposed by services like ChatGPT. These unrestricted chatbots are offered for monthly, yearly, or lifetime subscription fees and, on occasion, for free. They can assist with many malicious activities, including:
- Generating malware
- Generating ransomware
- Writing copy for phishing emails
- Creating phishing pages
- Detecting vulnerabilities
According to Nick Oram, Operations Manager for Fortra’s Dark Web & Mobile App Monitoring Services, the claims made by these tools cannot be confirmed, nor can we determine with 100% certainty their effectiveness in creating working malicious tools or services.
“However, it is important for cyber security research teams to be cognizant of the threats posed by AI chatbots and the actors exploiting these tools for fraudulent endeavors,” adds Oram. “These tools will only continue to become more sophisticated.”
Below is an example of an AI chatbot advertised on a popular dark web forum.

Today, a strong brand isn’t just expected to be present on the major platforms (TikTok, Facebook, Instagram, X, and YouTube); it’s expected to actively engage consumers through both brand and executive pages. Criminals know this and target these channels for impersonation and deceptive interactions. The result ranges from consumer confusion at best to financial loss and lasting damage to brand reputation at worst.