If you were running an LLM locally on Android through llama.cpp for use as a private personal assistant, what model would you use?
Thanks in advance for any recommendations.
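For context on what this setup looks like in practice, here's a rough sketch of building llama.cpp inside Termux and running a small quantized model. The model filename and path are placeholders, not a specific recommendation; any small GGUF model quantized to around Q4 tends to fit a phone's RAM budget:

```shell
# Inside Termux on Android (assumes Termux is already installed)
pkg install -y git cmake clang

# Build llama.cpp from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j

# Run a small quantized GGUF model in interactive conversation mode
# (placeholder filename -- pick a model roughly 1-4B parameters)
./build/bin/llama-cli -m ~/models/small-model.Q4_K_M.gguf \
    -p "You are a helpful personal assistant." -cnv
```

Builds on-device can take a while, and CPU inference speed will depend heavily on the phone's chipset and how much RAM the model's quantization needs.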
Running an LLM on a phone will absolutely destroy your battery life. It's also imperative that you understand that the comfort of AI is bought with the killing of innocents: by accelerating climate catastrophe and exploiting the planet and the poorest people on it.
I think using AI to experiment on a home server which already exists wouldn't be problematic IN A VACUUM, but you would still be normalizing a technology that is morally corrupt.
Not trying to instigate a fight, but if you make this argument, I hope you’re also vegan.
Oh yeah. So I can’t ask for one thing and not do another. Classic bad faith argument. Good try.
I’m not even trying to argue against you, I’m arguing for veganism. The same arguments that you used for why the use of AI is bad can be used for why not being vegan is bad. The production of animal products has an even bigger impact.