I certainly don’t want to run Windows on it :)
I’ve been running llama.cpp to keep my telemetry out of the hands of Microsoft/Google/"open"AI. I’m kind of shocked how much I can do locally with a half-assed video card, an offline model, and a hacked-up copy of searxng.
You can get a lot done currently with ARC. The mobile ARC versions share system memory, so if you get a mini PC with ARC and upgrade it to 96GB, you can share system RAM with the GPU and load decently large models. They’re a little slow, it not being VRAM and all, but still useful (and cheap). Rough sketch of the setup below the link.
https://www.youtube.com/watch?v=xyKEQjUzfAk
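If you’re using llama.cpp through the llama-cpp-python bindings, something like this is roughly what the setup looks like. This is a minimal sketch, assuming the library was built with the SYCL backend for ARC (e.g. CMAKE_ARGS="-DGGML_SYCL=on" pip install llama-cpp-python); the model path is just a placeholder for whatever GGUF you have on disk.

    # Offload a quantized GGUF model to the ARC GPU; with shared system
    # memory, "GPU layers" really live in system RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
        n_gpu_layers=-1,  # offload all layers to the GPU
        n_ctx=4096,       # bigger context windows eat more of that shared RAM
    )

    out = llm("Q: Why run models locally? A:", max_tokens=128)
    print(out["choices"][0]["text"])

The nice part is that n_gpu_layers lets you tune how much of the model sits on the GPU, so you can back off if a model doesn’t quite fit.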
I have it running on a Zenbook Duo with 32GB, so I can’t load the 70B models, but it works shockingly well.