Creating an image with generative AI could use as much energy as charging a smartphone, according to a new study released Friday that measures the environmental impact of generative AI models for the first time. Popular image generators like OpenAI’s DALL-E and Midjourney may produce more carbon than driving four miles.

“People think that AI doesn’t have any environmental impacts, that it’s this abstract technological entity that lives on a ‘cloud’,” Dr. Sasha Luccioni, who led the study, told Gizmodo. “But every time we query an AI model, it comes with a cost to the planet, and it’s important to calculate that.”

The study, from Hugging Face and Carnegie Mellon University, found that image generation, turning a text prompt into a picture, took substantially more energy than any other generative AI task the researchers tested. They measured 88 models on 30 data sets and found that large, multipurpose models like ChatGPT are more energy-intensive than task-specific models. Dr. Luccioni said the study did not cover OpenAI’s models because the company doesn’t share data about them, which she called a big problem.

Dr. Luccioni, the climate lead at Hugging Face, said multipurpose generative AI models like ChatGPT are more user-friendly but also more energy-intensive. She cited a paradigm shift toward these models because they’re easier for consumers to work with: you can simply ask a chatbot to do anything for you, rather than hunting down the right model for each task.

OpenAI and Midjourney did not immediately respond to a request for comment.

“I think that for generative AI overall, we should be conscious of where and how we use it, comparing its cost and its benefits,” said Luccioni.

The study tested several AI image generation models, including Stability AI’s Stable Diffusion XL, which ranked among the worst for energy efficiency. Researchers also tested PromptHero’s OpenJourney, a free alternative to Midjourney. The study did not include OpenAI’s DALL-E or Midjourney, the most popular image generators on the market, even though both are larger and more widely used than the models that were tested.
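For a sense of how this kind of per-image measurement can be done, here is a minimal sketch that pairs Hugging Face’s diffusers library with the open-source CodeCarbon tracker. The model checkpoint, prompt, and settings below are illustrative assumptions, not a reproduction of the study’s exact methodology.

```python
# Minimal sketch: estimating the energy and carbon cost of generating one image.
# Assumes a CUDA GPU, the `diffusers` and `codecarbon` packages, and the public
# Stable Diffusion XL checkpoint; illustrative only, not the study's exact setup.
import torch
from codecarbon import EmissionsTracker
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

tracker = EmissionsTracker()  # samples hardware power draw while the code runs
tracker.start()
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-equivalent

image.save("lighthouse.png")
print(f"Estimated emissions for one image: {emissions_kg:.6f} kg CO2eq")
```

CodeCarbon measures the energy the hardware draws while the tracked block runs and converts it into an emissions estimate based on the carbon intensity of the local power grid.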

GPT-4 reportedly has 1.76 trillion parameters, which means a lot of computation every time someone sends ChatGPT a query. Dr. Luccioni sees the benefit of deploying multipurpose generative models in certain areas, but does “not see convincing evidence for the necessity of their deployment in contexts where tasks are well-defined.” She points to web search and navigation as areas that could rely on smaller models, given the large energy requirements of something like ChatGPT.
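As a rough back-of-envelope illustration only, assuming the reported 1.76-trillion-parameter figure and the common rule of thumb of roughly two floating-point operations per parameter per generated token, even a single medium-length reply implies an enormous amount of arithmetic:

```python
# Back-of-envelope sketch: rough compute for one ChatGPT-style reply.
# Assumes ~2 FLOPs per parameter per generated token (a common rule of thumb)
# and the reported 1.76-trillion-parameter figure; real serving costs differ
# (batching, caching, mixture-of-experts routing, and other optimizations).
params = 1.76e12            # reported parameter count
flops_per_token = 2 * params
tokens_in_reply = 500       # illustrative length of a single response
total_flops = flops_per_token * tokens_in_reply
print(f"~{total_flops:.2e} FLOPs for one reply")  # ~1.76e15 FLOPs
```

The exact numbers for a deployed system will differ, but the order of magnitude helps explain why Luccioni questions using such large models for routine, well-defined tasks.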
