LLMs are big, so you either need a powerful PC to run them or have to use cloud services. Linux users tend not to be fans of either, so it’ll probably take a while before anything big happens.
Besides, for the things where an LLM actually makes sense (like a copilot-style code generator), there are already implementations.
I am a Debian user, and I can’t really say I’m not a fan of “big.” I have a laptop as my production machine, but I also have as big a file server as I can afford. I would not want an AI that is part of my OS unless it runs locally. I do use ChatGPT and Stable Diffusion, but only for non-critical tasks.