• 0 Posts
  • 15 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • I’d say that the details that vary tend not to vary within a language and ecosystem, so a fairly dumb correlative relationship is generally enough to be fine. There’s no way to use logic to infer that in language X you need to do mylist.join(string) while in language Y you need to do string.join(mylist), but it’s super easy to recognize tokens that suggest those operations and correlate them with the vocabulary that matches the context (see the sketch after this comment).

    Rinse and repeat for things like: do I need to specify a type, and what is the vocabulary for the best type for a numeric value; this variable that would make sense is missing a declaration; does this look like a genuinely new, distinct variable or just a typo of one that was already declared.

    But again, I’m mostly thinking about what can sort of work; my personal experience is that it’s wrong often enough to be annoying and to get in the way of more traditional completion behaviors that play it safe, though those offer less help, particularly for languages like Python or JavaScript.
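
    A minimal illustrative sketch of that kind of language-specific surface pattern (Python shown; the list contents and variable names are just placeholders, and the JavaScript contrast is only noted in the comments):

        # Python: the separator string owns join() and takes the iterable as its argument.
        words = ["spam", "eggs", "ham"]
        joined = ", ".join(words)   # -> "spam, eggs, ham"
        print(joined)
        # In JavaScript the receiver is reversed: words.join(", "), with the list
        # owning join(). The tokens involved are nearly identical either way, which
        # is why correlating them with the surrounding language's vocabulary usually
        # picks the right form without any actual reasoning.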



  • To your point, at least for third-party voters, only two states had enough third-party participation to even theoretically move the end result: Michigan and Wisconsin. So even if every person who voted third party had instead voted for Harris, she would have still lost 287:251 (though she would have won the symbolic victory of the popular vote).

    Of course, there’s more than just a single election in the country, so it’s more important to keep active in down-ballot races.

    The biggest potential complaint of consequence would be about non-voters/people who boycotted the election, but there’s no way of knowing anything about them.

    Still, it is utterly obnoxious when someone acts all high and mighty about the fact that they didn’t vote for the lesser of two evils.




  • GPTs which claim to use a stockfish API

    Then the actual chess isn’t the LLM. If you are using Stockfish, then the LLM doesn’t add anything; Stockfish is doing everything (see the sketch at the end of this comment).

    The whole point of the marketing rage is that LLMs can do all kinds of stuff, doubled down on with the branding of some approaches as “reasoning” models, which are roughly “similar to ‘pre-reasoning’, but forcing use of more tokens on disposable intermediate generation steps”. With this facet of LLM marketing, the promise would be that the LLM can “reason” its way through a chess game without particular enablement. In practice, people trying to feed gobs of chess data into an LLM end up with an LLM that doesn’t even comply with the rules of the game, let alone provide reasonable competitive responses to an opponent.
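
    A minimal sketch of that division of labor, assuming the python-chess library and a local Stockfish binary (the binary path and time limit below are placeholder values): every actual move decision happens inside the engine call, so all an LLM front end could contribute is translating the position in and phrasing the move back out.

        import chess
        import chess.engine

        def play_move(fen: str, stockfish_path: str = "/usr/bin/stockfish") -> str:
            """Return Stockfish's chosen move (UCI notation) for the given position."""
            board = chess.Board(fen)
            engine = chess.engine.SimpleEngine.popen_uci(stockfish_path)
            try:
                # All of the actual chess playing happens here, inside the engine.
                result = engine.play(board, chess.engine.Limit(time=0.5))
            finally:
                engine.quit()
            return result.move.uci()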