Yeah, and Windows and OS X both do it as well.
Though there being no upper limit to the size is amusing.
Install it and use it?
Their PDS is self hosted, but it does still rely on the central relays (though you COULD host that yourself if you wanted to pay for it, I suppose?).
It’s very centralized, but it’s not that different from what you’d have to do to make Mastodon useful: a small or single-user instance will get zero content, even if you follow a lot of people, unless you also add several relays to work around some of the Mastodon team’s design decisions about replies and how federation handles them, and to populate hashtags, searches, and such.
Though really you shouldn’t do any of that, and just use a good platform for discussion, like a forum or a threadiverse platform. (No seriously, absolutely hate “microblog” shit because it’s designed to just be zingers and hot takes and not actual meaningful conversations.)
15 million Series A financing
Maybe shitty corporate search engines are failing me, but has there been a stated valuation for Bluesky? Googling "Bluesky valuation" or any combination thereof is a problem since that’s a generic business term, so lol, lmao, search engine worthless.
$8m seed + $15m Series A may have bought a shockingly small amount of equity, or it could be the whole damn company, but I’m just not seeing it actually posted anywhere.
That’s a wee bit revisionist: Zen/Zen+/Zen 2 were not especially performant, and Intel still ran circles around them with Coffee Lake chips, though in fairness that was probably because Zen forced Intel to stuff more cores onto them.
Zen3 and newer, though, yeah, Intel has been firmly in 2nd place or 1st place with asterisks.
But the last 18 months has them fucking up in such a way that if you told me that they were doing it on purpose, I wouldn’t really doubt it.
It’s not so much failing to execute well-conceived plans as it was shipping meltingly hot, sub-par-performing chips that turned out to self-immolate, combined with giving up on being their own fab, and THEN torching the relationship with TSMC before they’d launched the first products TSMC is fabbing for them.
You could write the story as a malicious evil CEO wanting to destroy the company and it’d read much the same as what’s actually happening (not that I think Patty G is doing that, mind you) right now.
Yeah but it’s priced the same as a cheap laptop and/or desktop, which of course doesn’t then require you to pay monthly to actually use the stupid thing.
It feels like another ‘Microsoft asked Microsoft what Microsoft management would buy, and came up with this’ product, and less one that actually has a substantial market, especially when you’re trying to sell a $350 box that costs you $x a month to actually use as a ‘business solution’.
This would probably be a cool product at $0 with-a-required-contract-with-Azure, but at $350… meh, I suspect it’s a hard sell given the VDI stuff on Azure isn’t cheap.
Amazing what happens when your primary competitor spends 18 months stepping on every rake they can find.
And then, having run out of rakes, they deeply invested in a rake factory so they can keep right on stepping on them.
This’ll probably be a lot more interesting a year from now, given that the product lines for the next ~9 months or so are out and uh, well…
Yeah, it doesn’t appear that PSSR (which I cannot help but pronounce with an added i) is the highest-quality upscaling out there, and console gamers haven’t experienced FSR/FSR2/FSR3’s, uh, specialness, so people are confused about why their faster console looks worse.
Hopefully Sony does something about the less-than-stellar quality in a PSSR2 or something relatively quickly, or they’re going to burn a lot of goodwill around the whole concept, much like how FSR is considered pretty much trash by PC gamers.
Yeah but all Google needs to do is back up a dump truck of cash to Mar A Lago, and he’ll forget all about whatever it was he didn’t like about Google and immediately start tweeting how he’s the bigliest fan of all the very good things Google is doing, so I’m going to skip the breath holding bit.
really effects performance that much
Depending on the exact flags, some workloads will be faster, some will be identical, and some will be slower. Compiler optimization is some dark magic that relies on a ton of factors, and you can’t just assume that going from, say, -O2 to -O3 will provide better performance, since what the optimizations actually do depends on the underlying code… which is why, for the most part, everyone suggests you stop at -O2, since you can start getting unexpected behavior the further up the curve you go.
And we’re talking low single digit performance improvements at best, not anything that anyone who is doing anything that’s not running benchmarks 24/7 would ever even notice in real world performance.
Disclaimer: there are workloads that are going to show different performance uplifts, but we’re talking Firefox and KDE and games here, per the OP’s comments.
Also they do default to a different scheduler, which is almost certainly why anyone using it will notice it feels “faster”, but it’s mainlined in the kernel so it’s not like you can’t use that anywhere else.
Two thoughts come to mind:
and
The master-omnibus image bundles all that into a single container and is MUCH simpler to deploy.
Literally just used the compose file they provide at https://github.com/AnalogJ/scrutiny/blob/master/docker/example.omnibus.docker-compose.yml, added in the device names, and was done.
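For reference, that compose file looks roughly like this (ports, volume paths, and device names here are illustrative; check the linked file for the current version):

```yaml
version: "3.5"
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus
    cap_add:
      - SYS_RAWIO          # needed to issue SMART commands to the drives
    ports:
      - "8080:8080"        # web UI
      - "8086:8086"        # embedded InfluxDB
    volumes:
      - /run/udev:/run/udev:ro
      - ./config:/opt/scrutiny/config
      - ./influxdb:/opt/scrutiny/influxdb
    devices:
      - /dev/sda           # example device names; replace with your drives
      - /dev/sdb
```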
I mean even if you could remove them, the problem is that beige is not always the same beige: same problem you had 30 years ago.
The colors would never, at any point, remotely match the rest of the beige, and it’d be nice if a premium product that exists only for the aesthetics had just, you know, done that last itty bitty little thing so it’d look right (from at least a distance) and be color matched.
Not that shocking? It’s pushing 45GB or so on mobile, and my PC install is like 85GB.
I can see how this version might end up slightly bigger if it carries more than one copy of each resource (one for the good console, one for the potato), since I’m assuming they’re doing universal downloads rather than packaging two versions?
It’s really too bad they couldn’t have made the bay covers either plain or actually look like a floppy drive.
So close, and yet, so far.
Absolutely.
2.0 was 100% not the same game, but it was vastly improved and perfectly playable well before then.
I played at launch, but on PC, and it was… fine. In that, unlike Starfield, it was a game with characters and a story that was interesting enough to carry the buggy world and somewhat less than fleshed out side-quest mechanics.
But, like, there were enough buildings and set pieces and people and stories to actually sit down and spend 200 hours exploring the world without seeing the same stupid PoIs over and over and over again, while trying to care about the least interesting NPC companions I’ve probably ever dealt with.
And Phantom Liberty is fucking fantastic, so they took a bit of a turd at launch and turned it into an amazing game.
Are uptimekuma and whatever you’re trying to monitor on the same physical hardware, or is it all different kit?
My first feeling is that some DNS/routing configuration is causing issues if you’re leaving your local network and then going through two layers before coming back in, especially if you have split-horizon DNS.
Perhaps it was just a little too simple?
Wonder if the hamburders are cold.
I assume the KDE implementation resizes to default when you stop shaking it.
I could totally see someone coding a function that increases the mouse pointer by x% every y mouse shakes, and then neglecting to put in a size cap.
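A sketch of how that bug could happen. To be clear, this is not KDE’s actual code; the function, growth rate, and sizes are all invented, and the only point is that forgetting a cap turns a cute feature into a runaway one.

```python
# Hypothetical sketch of "shake to grow the cursor" with the cap as an
# afterthought. All names and numbers are invented, not KDE's code.
CURSOR_BASE = 24   # base cursor size in px (invented)
GROWTH = 1.10      # +10% per growth step (invented)

def next_cursor_size(size, shakes, shakes_per_step=3, max_size=float("inf")):
    """Grow the cursor by GROWTH every `shakes_per_step` shakes.
    With the default max_size of infinity (the hypothetical bug),
    the cursor grows without bound."""
    if shakes > 0 and shakes % shakes_per_step == 0:
        size *= GROWTH
    return min(size, max_size)

size = CURSOR_BASE
for shake in range(1, 301):
    size = next_cursor_size(size, shake)
print(f"uncapped after 300 shakes: {size:.0f}px")  # absurdly large

size = CURSOR_BASE
for shake in range(1, 301):
    size = next_cursor_size(size, shake, max_size=128)
print(f"capped after 300 shakes: {size:.0f}px")
```

One `min()` is the whole difference between a fun easter egg and a cursor the size of your monitor.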