• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • I know you are being willfully ignorant here, as AI data centers are projected to use more electricity than the entire nation of Japan by 2030.

    Your own self-hosted LLM is neither the problem nor the issue we are even discussing, and quite frankly it is a little insulting that you bring it up.

    I am no more anti-AI than I am anti any tool whose accuracy you can't determine and whose errors you can't correct. LLMs have a long way to go before they are even a fraction of what they claim to be.

    Another problem is that they do not cite where their answers come from. Without the ability to audit the answers you are given, you cannot know how accurate they are.

    I have listed several legitimate gripes about LLMs. I find your fanboyism misplaced, and I think you are just playing devil's advocate at this point. AI is a hype train and I am sick of it already.


  • It takes an enormous amount of energy and processing power to create these shitty snapshots, so in many ways it is doom, considering it will dramatically increase our energy usage.

    I get it, you are an AI supporter, but you fail to critically analyze it or even understand it. What other tool would you use when you can't correct its errors or even determine how it works? You are really operating on faith here that the black box you're getting an answer from is giving you the correct answer.

    Perhaps a code snippet works, but this is where it all falls apart: what if the snippet does not work, or causes a problem? The LLM has nothing to offer you here.


  • LLMs are poor snapshots of a search engine with no way to fix any erroneous data. If you search something on Stack, you get the page with several people providing snippets and debating the best approach. The LLM does not give you this. Furthermore, if the author goes back and fixes an error in their code, the search will find the fix, whereas the LLM will keep giving you the buggy code with no reasonable way to update it.

    LLMs have major issues and even bigger limitations. Pretending they are some panacea is going to disappoint.

  • There is a huge difference between an algorithm that uses real-world data to produce a score a panel of experts then uses to make a determination, and an LLM screening candidates. One has verifiable, reproducible results that can be checked and debated; the other does not.

    The final call does not matter if a computer program using an unknown, unreproducible algorithm screens you out before it is ever made. That is what we are facing: predetermined decisions for which no human being is held accountable.

    Is this happening right now? Yes, without a doubt. People are no longer making many of the healthcare decisions that determine insurance coverage; unaccountable computers are. You may still have some ability to disagree, but for how long?

    Soon there will be no way to reach a human about an insurance decision; in some cases this is already happening. People should be very anxious. Hearing that United Healthcare has been forging DNRs and denying things like stroke treatment for elderly patients is disgusting. We have major issues that are not going away, and we are blatantly ignoring them.