In mid-September, YouTube announced a collection of new artificial intelligence tools coming to the platform. The tools touch basically every part of the content creation process, from generating topics to editing and even generating video footage itself through the Dream Screen feature. But even as AI features have caused an uproar in so many other creative industries, the response to YouTube’s new suite of tools has been muted. Instead, YouTubers are sharing other concerns about the ways generative AI is already affecting the platform.
It’s been a watershed year as generative AI tools have made it easier to create images and text, all generated from internet scrapes of others’ art and writing. Artists and writers have typically pushed back, citing issues like copyright and their own work being undermined — in September, high-profile authors including George R.R. Martin and Jodi Picoult filed a lawsuit against OpenAI for scraping their books. And then there’s generative AI’s issues with hallucination and inaccuracies.
On the other side of the coin, many people have used these tools, either experimentally or professionally. AI art has won prizes, while some news sites have cut staff and published AI-generated articles. AI has also become a cornerstone of TikTok, particularly AI-powered filters. Creators use the Bold Glamour filter to apply makeup, a Ghibli filter to look like characters from the studio’s films, and even pay a fee for filters that generate themed avatars — like the hugely popular ’90s high school photo filter.
Maybe it’s the fact that YouTube’s tools aren’t available to the general public yet. But the quiet reception still seems to buck the trend. On the YouTube Creators account on X (formerly known as Twitter), the announcement only picked up a few hundred likes, performing about as well as engagement-bait tweets like “how do you make your audience feel seen and heard?” On the main YouTube account, it performed worse than a tweet reading “stars are kinda just sky rocks.”
On the platform itself, it’s difficult to find videos discussing the tools at all, despite a thriving community of YouTubers who explain how to use AI tools in making videos — just not the ones announced by YouTube. Instead, these videos focus on explaining existing tools to generate scripts and voice-overs, and to create and edit together images for the video visuals. YouTube’s new tools basically give creators an in-house option for much of this: Creators will be able to generate video prompts and script outlines, automatically edit clips together, and create AI-voiced dubs into other languages.
The main potential draw is that these AI tools would generate content based on creators’ own historical output. For example, YouTube says the “insights” tool will be personalized so that new video ideas will take into account what a creator’s audience is already watching, something that other text generators can’t do without access to YouTube’s data. YouTube also aims to recommend music for videos, including royalty-free music that should, in theory, help creators know what won’t get them troublesome copyright strikes.
But existing creators don’t seem particularly interested one way or the other. “No one’s heard of it yet,” says Jimmy McGee, a YouTuber who recently made a video titled “The AI Revolution is Rotten to the Core.” As the title might suggest, he’s not a huge fan of YouTube’s proposed tools, but he says it’s “strange” how they’ve been received.
He thinks it may be that these tools are mainly geared toward creators, and viewers may not notice if, for example, a video is edited with the help of AI. He doesn’t think the more obvious tools, like the melty generated visuals of Dream Screen, will take off in the long run. “People will get sick of those quick enough that it’s not really a problem,” he says. But the other tools might lead to longer-term issues in the creator space.
Viewers might not immediately notice if AI software is used to edit videos, but McGee worries that it will undermine those who actually use it. “It’s going to de-skill newer people on YouTube,” he says. Although he finds it unlikely that it will replace professional editors in its current form, it will prevent newer creators from growing their skills. YouTube is billing the feature as an easier way in for people who might not be as confident in their skills yet. It’s also aimed toward Shorts, YouTube’s vertical-video spinoff, so it might make things easier for those who only have their phones to edit on. But McGee thinks that relying on it may end up discouraging video creators in the long run as they struggle to grow creatively.
“I think the more decisions you can make in your video, the better the video can be,” says McGee. “Maybe it won’t be [at first], but the ceiling is higher. That’s what worries me. If someone goes in earnestly trying to use these tools, it’d be very sad to see them give up.”
That potential pitfall depends on whether YouTube’s tools stick around. Parent company Google has a habit of shuttering things — including features it has hyped up a lot more than this one. And generative AI is currently running at a loss for most companies. “We’re probably going to see a decline in its popularity pretty soon,” says media and fandom critic Sarah Z. “[In the meantime] I hope these tools are helpful to creators and serve as a way of empowering them to better execute videos that serve their visions rather than a way to undercut creators.”
But some creators already feel undercut by AI on the platform. Just before YouTube’s tool announcement, creator Abyssoft released a video about a potential case of plagiarism. In it, he detailed the similarities between a previous video he had put out and a video uploaded by a different channel and speculated on how AI could have been used to perform the theft, including using speech-to-text programs and AI voice-over software.
Contacted for comment, Abyssoft pointed out that this is already a widespread issue on the platform. In May, science communicator Kyle Hill spoke out against YouTube channels using AI to create unverified but attention-grabbing content on the site. These videos are often misleading and in some cases appear to copy topics that Hill himself had made videos on.
In his video, Abyssoft says that he isn’t sure what the solution to these issues is. But one thing he suggests is that YouTube should disclose when AI is being used in video creation. He’d also like to see “a punishment or strike system for people that fail to disclose and are proven to be using AI.”
This would be easier if it were YouTube’s own AI tools that were being used; the platform would already be aware. In response to a request for comment on whether Google was considering implementing this feature or any additional measures to avoid plagiarism and misinformation on the platform, Google policy communications manager Jack Malon stated that all content is subject to the existing community guidelines, and that these are “enforced consistently for all creators on our platform, regardless of whether their content is generated using artificial intelligence.”
Although Abyssoft considers some of the other generative AI tools potentially useful, such as the music tool that could help creators avoid copyright issues, he continues to fear what easy access to AI tools might do to YouTube creators. “AI facilitates plagiarism in a way we haven’t seen before, and with a bit of effort it will soon become undetectable,” he says. “Competing in a sea of faceless AI channels will be a tough challenge for creators who make a living this way, as their upload cadence will be greatly outpaced by the AI.”
However, he doesn’t think that AI will necessarily produce interesting videos. “I’m assuming the tool that suggests video topics is only going to suggest ideas that it thinks will do well in the algorithm,” he says. “Things will get incredibly formulaic if [it’s] relied on too much.”
He does acknowledge that channels with technical content, such as his own speedrunning history videos, have the advantage of research and understanding that can’t be carried out by AI. McGee similarly feels somewhat protected by his own style. “My videos are messy and I like them that way,” he says. “I can make all the melty, weird visuals myself and make something I’m actually proud of.”
But other channels might not be able to survive. “Someone that covers current news will see AI upload videos before their editing is finished, since it can just scrape whatever articles have been published for the day and render out a video and voice-over in less than an hour,” says Abyssoft.
YouTube’s tools haven’t yet launched beyond a few test countries, so it’ll be some time until we see the impact they’ll have on the platform. But while creators have concerns that they might add new issues for both existing and upcoming video makers, they also have prior concerns about the use of AI that they feel aren’t being addressed by the platform. It seems to be these that are holding creators’ attention, not any new announcements.