Please keep discussion to the pinned thread. There is a link in the sidebar if pins aren’t showing up on your client of choice.
my porn account
Report them and we’ll remove them. This community is starting to get a bit more active than I anticipated so I can’t really look at every post by myself like I could at the start.
From IzzyOnDroid or F-Droid proper? If the latter, consider switching. F-Droid by itself does (intentionally) take longer to update, whereas Izzy’s repo will pull the latest release directly from GitHub.
That’s strange, it seems to show up on my end. Maybe you’re using an outdated version of Jerboa?
I’ll see what I can do about it in the meanwhile.
Not gonna remove this or anything yet, but I’d appreciate if you kept things like this to the discussion thread in the future.
Not gonna remove this or anything yet, but I’d appreciate if y’all kept any future discussion to the stickied thread.
I mean, as long as you have consent (and considering you’re talking about yourself I think it’s fair to say you do) it should be fine.
I explicitly mention this in the sidebar (which is where the “canonical” version of the rules is), but since most people prefer generating celebrities and other people who definitely have not consented, I decided to phrase it a bit more broadly for this announcement.
You seem to have forgotten to mark this as NSFW. Just a small reminder.
Someone had actually DM’d me ahead of time asking about it and I never realized I actually said this publicly until recently.
ROCm is flaky on regular consumer GPUs at the best of times. I’m surprised you could even get that far on a Steam Deck.
Try the command line arg --opt-sdp-attention. You might also want to try out --medvram or --lowvram (4 GB is considered low when it comes to AI), although I have a feeling it’s just because of the custom nature of the deck’s APU.
Your best bet would be to search for builds of ROCm, PyTorch and Torchvision that are specifically made for the deck, if such things even exist.
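If it helps with picking between those flags, here’s roughly how I’d decide. The exact cutoffs are my own rule of thumb, not anything official:

```python
def vram_flag(vram_gb: float) -> str:
    """Pick a memory flag for the webui launch command.

    Rough heuristic: the exact thresholds are my assumption, not official
    guidance -- adjust to taste.
    """
    if vram_gb <= 4:
        return "--lowvram"   # aggressively offload model parts to system RAM
    if vram_gb <= 6:
        return "--medvram"   # moderate offloading, less of a speed hit
    return ""                # enough VRAM, no flag needed

# The deck's APU carves its VRAM out of shared RAM, so treat it as low
print(vram_flag(4))
```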
Hey, you seem to have forgotten to mark this as NSFW. Just reminding you.
Half my time here is spent looking for un-NSFW’d posts in my communities (specifically this one; nobody posts in the other) and telling people to NSFW their posts. At this point I don’t think I’ve actually fapped to anything on this entire instance yet.
The admins are trying their best to get the situation under control by modifying Lemmy, but as mods all we can do is remind posters when they forget. The Lemmy devs seem to have shot down the idea of giving mods edit rights; you can go through my comment history for the link to the GitHub issue.
For anybody confused: the Lemmy version of the image (the one with pictrs in the URL) won’t have it, since pictrs strips out the EXIF data (or wherever else the prompt is stored). You have to download the image from catbox (click on the title, not the thumbnail), which seems to keep the data.
While I went through the effort I may as well throw it in for future reference:
A blonde 25yo news anchor unexpectedly (riding a vibrator:1.1) to orgasm on live TV, (sweaty, spent, disheveled, clenching muscles:1.1), full body view
Negative prompt: child, childish, teen, easynegative, closeup, doll, penis
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 639212525, Size: 512x512, Model hash: c4625c7cd3, Model: qgoPromptingreal_qgoPromptingrealV1, Version: v1.3.0
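If anyone wants to pull that settings line apart programmatically, a quick-and-dirty parser could look like this. It’s just a sketch: it splits on “, ” and would break on any value that contains a comma.

```python
def parse_settings(line: str) -> dict:
    """Naively parse the flat 'Key: value, Key: value' settings line
    that the webui emits alongside the prompt."""
    out = {}
    for part in line.split(", "):
        key, _, value = part.partition(": ")
        out[key] = value
    return out

settings = parse_settings(
    "Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, "
    "Seed: 639212525, Size: 512x512, Model hash: c4625c7cd3, "
    "Model: qgoPromptingreal_qgoPromptingrealV1, Version: v1.3.0"
)
print(settings["Seed"])   # 639212525
```

For PNGs that keep their metadata, I believe Pillow exposes the whole block via Image.open(path).text.get("parameters"), though I haven’t checked that against a catbox download specifically.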
I don’t think that’s necessary. Not sure though.
It’s probably “Add python to environment variables.” They must have changed the wording on that at some point.
Not on Windows right now so I can’t confirm, but you probably forgot to pick “Add Python to PATH” or whatever the option is in the installer. Try running the Python installer again; maybe it’ll let you add it without needing to uninstall & reinstall.
Edit: If you’re on Nvidia, there seems to be a simpler install method now: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs (Method 1)
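To double-check whether Python actually ended up on PATH, you can ask Python itself. shutil.which searches PATH the same way your shell or cmd does:

```python
import shutil

# If every one of these comes back None, Python isn't on PATH and the
# webui launcher won't be able to find it either.
for name in ("python", "python3", "py"):
    print(name, "->", shutil.which(name))
```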
how much space was the download(s)?
On my end, it’s sitting at ~64GB (with btrfs compression shenanigans), though 60 of those are from all the models I have installed. The download would probably be ~2GB, even less if you disable downloading the “default” models with --no-download-sd-model and instead pick models off of Civit or wherever manually.
Edit: Should have mentioned. Most full models are between 2-4 GBs each. Some can be 5+ but they tend to be “full” versions intended for merging & such. LoRAs are generally smaller. Depending on how much they’re pruned they’ll be anywhere between 10-100 MBs each.
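If you ever want to see where the space went, something like this will tally it up. The extension list is just my guess at what counts as a model file, so extend it as needed:

```python
import os

MODEL_EXTS = {".safetensors", ".ckpt", ".pt"}  # common checkpoint/LoRA extensions

def model_sizes_mb(models_dir: str) -> dict:
    """Map each model file under models_dir to its size in MB."""
    sizes = {}
    for root, _dirs, files in os.walk(models_dir):
        for name in files:
            if os.path.splitext(name)[1].lower() in MODEL_EXTS:
                path = os.path.join(root, name)
                sizes[name] = os.path.getsize(path) / (1024 * 1024)
    return sizes
```

Point it at the webui’s models folder, e.g. model_sizes_mb("stable-diffusion-webui/models") if that’s where you cloned it.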
how confusing is the software to use?
There’s definitely a learning curve, yes. But there’s plenty of resources (and more importantly, examples) out there.
what kind of limitations does the software have? can i do multiple people? monsters? futa? etc.
As long as you have the correct models set up it can generate basically anything. At least with anime models, monsters and futa are a given. Your main issue will probably be multiple people, although there are solutions to that. (See the multidiffusion upscaler GitHub repo in the main post.)
“Web UI” means that the graphical part of it – where you write your prompt and hit generate – runs inside your browser rather than as a separate window or command line program. Everything stays on your own computer unless you explicitly tell it to open up remote access.
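If the “local-only” part sounds abstract: by default the server binds to the loopback address, which the OS only lets programs on the same machine talk to. I believe flags like --listen are what switch it to listening on all interfaces instead. A plain-socket sketch of the same idea:

```python
import socket

# Bind a throwaway socket to the loopback address, like the webui does by
# default: only programs on this same machine can ever connect to it.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))              # port 0 = let the OS pick a free port
host, port = s.getsockname()
print(f"a local-only server would live at http://{host}:{port}")
s.close()
```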
Yeah, with AI you really have to actively work towards telling it not to generate that kind of stuff, and some slips through the cracks regardless. Report the ones you want gone and I’ll remove ’em (even if it’s one I posted).
This would be impossible without directly modifying Lemmy itself.