Yes I have.
With a model I fine-tuned myself and ran locally on my own hardware.
Suck it
Just curious: do you know, even as a rough estimate (maybe via the model card), how much energy was used to train the initial model, and if so, why do you believe it was done in an ecologically justifiable way?
Don’t know. Don’t really care honestly. I don’t pay for hydro, and whatever energy expenditures were involved in training the model I fine-tuned are more than offset by the fact that I don’t and never will drive.
Don’t know. Don’t really care honestly […] offset by the fact that I don’t and never will drive.
That’s some strange logic. Either you do know and can estimate that the offset will indeed “balance it out”, or you don’t, in which case you can’t say one way or the other.
Running a 500W GPU 24/7 for a full year uses less than a quarter of the energy consumed by the average automobile in the US (as of 2000). I don’t know how many GPUs this person has or how long the fine-tuning took, but it’s clearly not creating an ecological disaster. Please understand there is a huge difference between the power consumed by companies training cutting-edge models at massive scale and speed and that consumed by a locally deployed model doing only fine-tuning and inference.
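As a rough sketch of that comparison (the annual mileage, fuel economy, and per-gallon energy figures below are assumed ballpark values for circa 2000, not numbers taken from this thread):

```python
# Back-of-envelope: one 500 W GPU running 24/7 for a year
# versus an average US passenger car circa 2000.

GPU_WATTS = 500
HOURS_PER_YEAR = 24 * 365
gpu_kwh = GPU_WATTS / 1000 * HOURS_PER_YEAR      # ~4,380 kWh/year

MILES_PER_YEAR = 11_500   # assumed average annual mileage (~2000)
MPG = 22                  # assumed average fuel economy (~2000)
KWH_PER_GALLON = 33.7     # approximate energy content of a gallon of gasoline
car_kwh = MILES_PER_YEAR / MPG * KWH_PER_GALLON  # ~17,600 kWh/year

print(f"GPU: {gpu_kwh:,.0f} kWh/year")
print(f"Car: {car_kwh:,.0f} kWh/year")
print(f"Ratio: {gpu_kwh / car_kwh:.0%}")         # roughly a quarter
```

Under those assumptions the year-round GPU comes out at about 25% of the car’s annual energy use, which is where the “less than a quarter” figure comes from.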
I specifically asked about the training part, not the fine-tuning, but thanks for clarifying.
Edit: you might be interested in helping with https://lemmy.world/post/30563785/17397757, please.
The point is that OP (most probably) didn’t train it — they downloaded a pre-trained model and only did fine-tuning and inference.