Original Source: wsj.com
New text-to-image generators powered by artificial intelligence, including OpenAI's DALL-E 2 and Stability AI's DreamStudio, let you type in almost any phrase and get an image.
This article was written by a human, but AI is churning out articles, illustrations, fake product reviews and even videos.
You probably haven’t noticed, but there’s a good chance that some of what you’ve read on the internet was written by robots. And it’s likely to be a lot more soon.
Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can’t be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated.
“It is probably impossible that the majority of people who use the web on a day-to-day basis haven’t at some point run into AI-generated content,” says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash. Everyone in the professional search-engine optimization groups of which he’s a part uses this technology to some extent, he adds. Mr. Chronister’s customers include dozens of small and medium businesses, and for many of them he uses AI software custom-built to quickly generate articles that rank high in Google’s search results—a practice called content marketing—and so draw potential customers to these websites.
“Most of our customers don’t want it being out there that AI is writing their content,” says Alex Cardinell, chief executive of Glimpse.ai, which created Article Forge, one of the services Mr. Chronister uses. “Before applying for a small business loan, it’s important to research which type of loan you’re eligible to receive,” begins a 1,500-word article the company’s AI wrote when asked to pen one about small business loans. The company has many competitors, including SEO.ai, TextCortex AI and Neuroflash.
Google knows that the use of AI to generate content surfaced in search results is happening, and is fine with it, as long as the content produced by an AI is helpful to the humans who read it, says a company spokeswoman. Grammar checkers and smart suggestions—technologies Google itself offers in its tools—are of a piece with AI content generation, she adds.
“Our ranking team focuses on the usefulness of content, rather than how the content is produced,” says Danny Sullivan, public liaison for search at Google. “This allows us to create solutions that aim to reduce all types of unhelpful content in search, whether it’s produced by humans or through automated processes.”
AI content services are thriving. They make content creators more productive, and they can produce content that no one can tell was made by a machine. The same is often true of other kinds of AI-generated content, including images, video, audio and synthetic customer-service representatives.
Like other types of automation, having AI handle basic writing tasks offers many potential benefits, since such tasks are often mere drudgery for humans. That said, widespread and undetectable synthetic content also carries considerable dangers. For one, it risks replacing a vast and thriving ecosystem of human workers, as has happened in so many industries subject to automation before, with a shrinking number of big entities that will thereby have greater power to shape what people think. At its worst, it could give bad actors a powerful tool to spread deception in moments of crisis, like war.
The rise of AI-generated content is made possible by a phenomenon known variously as computational creativity, artificial creativity or generative AI. This field, which had only a handful of companies in it two or three years ago, has exploded to more than 180 startups at present, according to data gathered by entrepreneur Anne-Laure Le Cunff. These companies have collected hundreds of millions of dollars in investment in recent months even as the broader landscape for tech funding has become moribund.
A lot of the content we are currently encountering on the internet is auto-generated, says Peter van der Putten, an assistant professor at Leiden Institute of Advanced Computer Science at Leiden University in the Netherlands. And yet we are only at the beginning of the deployment of automatic content-generation systems. “The world will be quite different two to three years from now because people will be using these systems quite a lot,” he adds.
By 2025 or 2030, 90% of the content on the internet will be auto-generated, says Nina Schick, author of a 2020 book about generative AI and its pitfalls. It’s not that nine out of every 10 things we see will be auto-generated, but that automatic generation will hugely increase the volume of content available, she adds. Some of this could come in the form of personalization, such as marketing messages containing synthetic video or actors tuned to our individual tastes. In addition, a lot of it could just be auto-generated content shared on social media, like text or video clips people create with no more effort than what’s required to enter a text prompt into a content-generation service.
Here are a few examples of the coming bounty of synthetic media: Artists, marketers and game developers are already using services like Dall-E, Midjourney and Stable Diffusion to create richly detailed illustrations in the style of different artists, as well as photo-realistic flights of fancy. Researchers at the Meta AI division of Facebook parent Meta Platforms unveiled in September a system that can automatically generate videos from a text prompt, and Google unveiled what appears to be an even more sophisticated version of such a system in October.
Dr. van der Putten and his team have created a system that can write newspaper articles in the style of any paper fed into their software. (The Wall Street Journal has its own AI article-writing tool, created in collaboration with Narrativa, a “language generation AI system” which helps a human writer produce some market updates.)
Automatic text-generation systems are helping novelists speed up their writing process, powering customer-service chatbots, and driving Replika, a service that hundreds of thousands of people treat as an artificial boyfriend or girlfriend, and with whom many say they've fallen in love.
One downside of this type of artificial creativity is the potential erosion of trust. Take online reviews, where AI is exacerbating deceptive behavior. Algorithmically generated fake reviews are on the rise on Amazon and elsewhere, says Saoud Khalifah, CEO of Fakespot, which makes a browser plug-in that flags such forgeries. While most fraudulent reviews are still written by humans, about 20% are written by algorithms, a figure that is growing, according to his company's detection systems, he adds.