Training is a huge power sink, but so is inference (i.e. generating the images). You are absolutely spinning up a bunch of silicon that's drawing hundreds of watts for each image that's output, on top of the impact of training the model.
- 0 Posts
- 3 Comments
Joined 7 months ago
Cake day: February 12th, 2025
Haskell mentioned λ 💪 λ 💪 λ 💪 λ 💪 λ
It depends on the model, but I've seen image generators range from 8.6 Wh per image to over 100 Wh per image. Parameter count and quantization make a huge difference there. Regardless, even at 10 Wh per image that's not nothing, especially given that most ML image generation workflows involve batch generation of 9 or 10 images. It's several orders of magnitude less energy intensive than training and fine-tuning, but it is not nothing by any means.
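The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The per-image figures (8.6 Wh to 100 Wh) and batch sizes (9–10) come from the comment; the function name and everything else here is just illustrative:

```python
# Back-of-the-envelope energy estimate for a batch of AI-generated images.
# Per-image energy figures and batch sizes are taken from the comment above;
# real numbers vary widely with model size, quantization, and hardware.

def batch_energy_wh(wh_per_image: float, batch_size: int) -> float:
    """Total energy for one batch of generated images, in watt-hours."""
    return wh_per_image * batch_size

low = batch_energy_wh(8.6, 9)    # efficient model, smaller batch
high = batch_energy_wh(100, 10)  # large model, bigger batch

print(f"{low:.1f} Wh to {high:.1f} Wh per batch")  # 77.4 Wh to 1000.0 Wh
```

So a single batch at the high end of the range is on the order of a kilowatt-hour, roughly what a microwave draws in an hour of continuous use.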