While researching AI experts, I came across a deepfake. Given his seemingly legitimate profile and his engagement on social networks, this wasn't clear at first. But after seeing the same strange AI-generated image of Dr. Lance B. Eliot all over the web, it was obvious he wasn't a real person. So I followed the trail and learned the scope of his grift.
The ubiquitous Dr. Lance B. Eliot
Eliot has over 11,000 followers on LinkedIn, and we have two connections in common. Both are real people with thousands of LinkedIn followers of their own and decades of experience in AI, holding roles as investors, analysts, keynote speakers, columnists and CEOs. LinkedIn members continue to engage with Eliot even though all of his posts are repetitive thread-jackings that funnel readers to his many Forbes articles.
At Forbes, Eliot publishes every one to three days, under roughly interchangeable headlines. After reading a few articles, it's clear the content is AI-generated tech jargon. One of the bigger problems with Eliot's extensive Forbes portfolio is that the site limits readers to five free stories per month before directing them to purchase a $6.99-a-month or $74.99-a-year subscription. It gets more complicated now that Forbes has officially put itself up for sale, with a price tag in the neighborhood of $800 million.
Eliot's content also sits behind a Medium paywall, which charges $5 a month. And a slimmer Eliot profile appears on Cision, Muck Rack and the Sam Whitmore Media Survey, expensive paid media services relied upon by a large majority of PR professionals.
Then there are Eliot's books, sold online. He sells them through Amazon for a little more than $4 per title, even though Walmart offers them for less. At Thriftbooks, Eliot's pearls of wisdom sell for about $27, a bargain compared to the $28 price tag at Porchlight. A safe bet is that the books' sales are propped up by fake reviews. Still, a few disappointed humans bought the books and gave them low ratings, calling the content repetitive.
Damage to big brands and individual identities
After clicking the link to Eliot's Stanford University profile, I opened another browser and went to the real Stanford website, where a search for Eliot returned zero results. A side-by-side comparison showed that the branded red on Eliot's Stanford page was not the same shade as on the authentic page.
A similar experiment played out on Cornell's arXiv site. With only a slight change to the Cornell logo, Eliot's academic paper was posted, riddled with typos and full of low-quality AI-generated content presented in the format of a standard academic research paper. The paper cited an extensive list of sources, including an Oliver Wendell Holmes piece that supposedly appeared in an 1897 edition of the Harvard Law Review, three years after his death.
Those not interested in reading Eliot's content can head over to his podcast, where a bot spouts meaningless lingo. An excerpt from one listener review reads: "If you love hearing someone read word for word from a paper script, this is a great podcast for you."
A URL posted with Eliot's podcast promotes his website about self-driving cars, which was initially taken down. A refresh of the same link led to Techbrium, one of Eliot's fake employer websites.
It's amazing that Eliot can do all this and still find time to speak at executive leadership summits hosted by HMG Strategy. The fake events list big-name tech companies as partners, including Zoom, Adobe, SAP, ServiceNow and the Boston Red Sox, complete with real consultants and their bios.
Attendance at HMG events is free for senior technology executives, provided they register. According to HMG's terms and conditions, "If for any reason you are unable to attend, and are unable to send a direct report in your place, a $100 no-show fee will be charged to cover food and service staff costs."
The cost of ignoring deepfakes
Digging deeper into Eliot led to a two-year-old Reddit thread calling him out and chasing increasingly harsh conspiracy theories. Eliot may not be an anagram or connected to the NSA, but he is one of the millions of deepfakes making money online that are getting harder to track down.
Looking at the financial impact of deepfakes raises questions about who is responsible when they generate revenue for themselves and their partners. That's not to mention the costs of downloaded malware, of chasing fake prospects and of paying for spam affiliate marketing links.
Of course, a keen eye can spot a blurry or missing background, odd hair, weird eyes and robotic voices that don't match mouths. But if that were a universal truth, deepfakes wouldn't be racking up billions in losses as they run financial scams and impersonate real people.
AI hasn't yet solved the problems that make a deepfake's lack of authenticity hard to spot, but it is actively working on them. Articles like this one, outing a deepfake, help AI learn and improve. In the meantime, the onus remains on individuals to spot deepfakes and to stay vigilant about who they let into their networks and lives.
Cathy Keating is a real person and the founder of ProsInComms, a PR consultancy.
DataDecisionMakers
Welcome to the VentureBeat community!
DataDecisionMakers is a place where experts, including technical people working with data, can share data insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own!