• trungulox@lemm.ee
    25 days ago

    Yes I have.

    With a model I fine tuned myself and ran locally on my own hardware.

    Suck it

    • utopiah@lemmy.world
      25 days ago

      Just curious, do you know, even as a rough estimate (maybe via the model card), how much energy was used to train the initial model, and if so, do you believe it was trained in an ecologically justifiable way?

      • trungulox@lemm.ee
        25 days ago

        Don’t know. Don’t really care, honestly. I don’t pay for hydro, and whatever energy expenditure was involved in training the model I fine-tuned is more than offset by the fact that I don’t and never will drive.

        • utopiah@lemmy.world
          25 days ago

          Don’t know. Don’t really care, honestly […] offset by the fact that I don’t and never will drive.

          That’s some strange logic. Either you do know, and you can estimate that the offset will indeed “balance it out”, or you don’t, in which case you can’t say one way or the other.

          • jfrnz@lemm.ee
            25 days ago

            Running a 500W GPU 24/7 for a full year uses less than a quarter of the energy consumed by the average automobile in the US (as of 2000). I don’t know how many GPUs this person has or how long the fine-tune took, but it’s clearly not creating an ecological disaster. Please understand there is a huge difference between the power consumed by companies training cutting-edge models at massive scale and speed, and a locally deployed model doing only fine-tuning and inference.
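
            For anyone who wants to sanity-check that, here is a rough back-of-the-envelope sketch in Python. The 500 W GPU figure is from the comment above; the gasoline numbers (roughly 500 gallons per vehicle per year around 2000, and 33.7 kWh of energy per gallon, the EPA’s MPGe basis) are my own assumed inputs, not measurements.

            ```python
            # Back-of-the-envelope energy comparison (all inputs are assumptions,
            # not measurements): a single 500 W GPU running continuously for a
            # year versus an average US passenger vehicle circa 2000.

            GPU_POWER_KW = 0.5            # assumed 500 W GPU, as in the comment
            HOURS_PER_YEAR = 24 * 365     # running 24/7

            GALLONS_PER_YEAR = 500        # assumed average US vehicle, ca. 2000
            KWH_PER_GALLON = 33.7         # gasoline energy content (EPA MPGe basis)

            gpu_kwh = GPU_POWER_KW * HOURS_PER_YEAR       # ~4,380 kWh/year
            car_kwh = GALLONS_PER_YEAR * KWH_PER_GALLON   # ~16,850 kWh/year

            print(f"GPU: {gpu_kwh:,.0f} kWh/year")
            print(f"Car: {car_kwh:,.0f} kWh/year")
            print(f"GPU/Car: {gpu_kwh / car_kwh:.0%}")    # roughly a quarter
            ```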