LinkedIn Phishing Scams Hijack Public Comments, Using AI to Impersonate Platform Support
- Cyber Jack
- 22 minutes ago
- 3 min read
A wave of LinkedIn phishing attacks is exploiting the platform’s own public comment sections, blurring the line between legitimate support messages and outright fraud in a way that security researchers say marks a new phase in social engineering.
The campaign surfaced earlier this week when researchers and targeted users began warning that bot-like accounts were replying directly to posts while impersonating LinkedIn itself. The fake comments claim the recipient has violated platform policies and must act immediately to avoid suspension.
The scam works because the messages look and feel authentic. The attackers use LinkedIn-style language, familiar branding, and lnkd.in short links, LinkedIn's own URL shortener, which lends an official appearance while redirecting to any destination the attacker chooses. The fraudulent profiles, often operating under names such as “Linked Very,” post public comments telling users their accounts have been temporarily restricted for non-compliance and directing them to external appeal pages.
Those links lead to fake verification portals designed to harvest credentials. Researchers note that the phishing pages closely resemble real LinkedIn login screens, increasing the likelihood that users will enter their usernames and passwords without suspicion.
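The core trick is that the visible link looks official while the resolved destination does not. One defensive habit, once a shortened link has been expanded to its final destination, is to validate the hostname against a known-good list rather than eyeballing the URL. A minimal sketch in Python (the allowlist and helper name are illustrative, not taken from the report):

```python
from urllib.parse import urlparse

# Hostnames accepted as genuinely LinkedIn-owned (illustrative allowlist).
LEGIT_HOSTS = {"linkedin.com", "www.linkedin.com"}

def is_linkedin_host(url: str) -> bool:
    """Return True only if the URL's hostname is exactly a known
    LinkedIn domain or a subdomain of linkedin.com.

    Note: this checks the *final* destination. A lnkd.in short link
    must be expanded first, since the shortener itself is legitimate
    but can redirect anywhere.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in LEGIT_HOSTS or host.endswith(".linkedin.com")

# A lookalike domain fails even though it contains "linkedin.com":
print(is_linkedin_host("https://linkedin.com.verify-appeal.example/login"))  # False
print(is_linkedin_host("https://www.linkedin.com/help/"))                    # True
```

The key design choice is suffix matching on the registered domain rather than substring matching: phishing hosts routinely embed the brand name as a leading label (e.g. `linkedin.com.verify-appeal.example`), which a naive `"linkedin.com" in url` check would wrongly accept.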
Security experts say the scale and polish of the operation point to heavy use of automation and AI. Max Gannon, Cyber Intelligence Team Manager at Cofense, says attackers are now able to impersonate trusted brands on social platforms with unprecedented efficiency.
“Although LinkedIn’s process for creating a company page, especially one that appears to be LinkedIn itself, was not previously easy to abuse at scale, the proper application of AI now makes it possible,” Gannon says. He adds that when combined with automated contact methods, these tools allow threat actors to deploy large campaigns that spoof LinkedIn while abusing legitimate infrastructure.
The impact is already visible. One LinkedIn user reported encountering multiple fake “Linked Very” accounts targeting them over a single weekend. After reporting the posts, the user received a follow-up comment from a legitimate LinkedIn support account that looked strikingly similar in tone and structure to the phishing messages, underscoring how difficult it can be for users to tell the difference.
LinkedIn confirmed the original messages were fraudulent and said its teams were taking action, urging users to continue reporting suspicious content.
For defenders, the campaign highlights how social engineering is shifting away from private messages and emails toward more visible, in-platform tactics. Chance Caldwell, Senior Director of the Phishing Defense Center at Cofense, describes the trend as attackers embedding themselves directly inside trusted digital spaces.
“This new LinkedIn phishing campaign highlights a troubling evolution in social engineering tactics, where attackers embed themselves directly into trusted digital spaces and exploit user trust by mimicking legitimate communications,” Caldwell says. By posting comments that appear to come from LinkedIn and include official branding and real URL shorteners, attackers can quickly gain credibility and redirect victims to malicious sites.
Caldwell notes that LinkedIn is not alone. Similar comment-based phishing campaigns are spreading across major social networks, with Facebook frequently abused to funnel users toward credential-harvesting pages. The rise of AI, he says, has made it possible to post thousands of convincing fake comments in a short period of time.
The responsibility, experts argue, now falls heavily on platforms themselves. Stronger verification for brand accounts, improved monitoring of comments, and faster takedowns are increasingly critical to protecting users and maintaining trust. At the same time, individuals are being asked to adopt a more skeptical posture, even when warnings appear to come from official sources.
“As threat actors increasingly use AI and emerging automated methods, legitimate companies like LinkedIn will need stronger verification and validation controls to prevent abuse of their services and protect brand trust,” Gannon says.


