The impact of deepfakes on marketing
While researching AI experts, I came across a deepfake. It wasn’t obvious at first, given his seemingly legitimate profile and social media presence. But after seeing the same creepy AI-generated photo of Dr. Lance B. Eliot all over the web, it became clear that he was not a real person. So I followed the trail and uncovered his scam.
The Omnipresent Dr. Lance B. Eliot
Eliot has over 11,000 followers on LinkedIn, and we share two connections, each with thousands of followers and years of AI experience as an investor, analyst, keynote speaker, columnist, or CEO. LinkedIn members interact with Eliot even though all of his posts are repetitive threads leading to his many Forbes articles.
At Forbes, Eliot posts every one to three days under nearly identical headlines. After reading a few articles, it becomes obvious that the content is technical jargon generated by artificial intelligence. One of the biggest problems with Eliot’s extensive Forbes portfolio is that the site limits readers to five free stories a month before pushing a subscription at $6.99 a month or $74.99 a year. The situation is further complicated now that Forbes has officially put itself up for sale at a price tag of around $800 million.
Eliot’s content is also available behind a Medium paywall that costs $5 a month. And Eliot’s thin profile appears in Cision, Muckrack, and the Sam Whitmore Media Survey, expensive paid media services relied on by the vast majority of public relations professionals.
Then there is the sale of Eliot’s books on the Internet. He sells them through Amazon for just over $4 per title, although Walmart offers them for less. On Thriftbooks, Eliot’s Pearls of Wisdom sells for around $27, a bargain compared to the $28 price tag on Porchlight. It’s safe to say that fake reviews are driving book sales. Still, a few disappointed buyers gave the books low ratings, saying the content was repetitive.
Damage to big brands and individual identities
When I clicked the link to Eliot’s profile at Stanford University, then opened the real Stanford website in a different browser, a search for Eliot returned no results. A side-by-side comparison also showed that the signature Stanford red on Eliot’s page was not the same shade as on the original site.
A similar experience occurred with Cornell’s arXiv site. With a slight alteration to the Cornell logo, one of Eliot’s academic papers appeared there, filled with typos and low-quality AI-generated content presented in the standard format of an academic research paper. The document cites an extensive list of sources, including Oliver Wendell Holmes, who apparently published in the Harvard Law Review in 1897, three years after his death.
Those not interested in reading Eliot’s content can jump to his podcasts, where the bot spews nonsensical jargon. An excerpt from one listener’s review reads: “If you enjoy listening to someone read word for word a paper script, this podcast is for you.”
A URL posted next to Eliot’s podcasts promotes his self-driving car website, which initially led to a dead end. Later, the same link led to Techbrium, one of Eliot’s fake employer websites.
It’s amazing how Eliot manages to do all of this and still make time to speak at HMG Strategy’s senior executive summits. The fake events feature well-known tech companies listed as partners, with consultants and real executive biographies from Zoom, Adobe, SAP, ServiceNow, and the Boston Red Sox, among others.
Attendance at HMG events is free for senior technology executives upon registration. According to HMG’s terms and conditions: “If for any reason you are unable to attend and are unable to submit a report in your place, you will be charged a $100 no-show fee to cover the cost of food and service personnel.”
The cost of ignoring deepfakes
Digging deeper into Eliot, I found a two-year-old Reddit thread that called him out and quickly devolved into complex conspiracy theories. Eliot may not be an anagram or affiliated with the NSA, but he is one of the millions of deepfakes making money online, and they are getting harder to spot.
The financial implications of deepfakes raise the question of who is responsible when they generate income for themselves and their partners. That is to say nothing of the costs of malware downloads, fake lead targeting, and spammy affiliate marketing links.
Perhaps a keen eye can recognize a deepfake by its fuzzy or missing background, odd hair, strangely set eyes, and a mechanical voice out of sync with its mouth. But if spotting them were that easy, deepfakes would not be causing billions in losses as they spawn financial scams and impersonate real people.
AI has not solved all the problems that make deepfakes hard to detect, but it is actively working on them; deepfake content like Eliot’s is exactly the material that helps AI learn and improve. For now, that leaves the responsibility of detecting deepfakes to individuals, who must be vigilant about who they let into their networks and lives.
Kathy Keating is a real person and the founder of ProsInComms, a public relations consulting company.