MentalEdge@sopuli.xyz (OP) · 28 days ago

    The most likely tools being used are Krita AI or copainter.

    Krita AI is a frankly impressive plugin for Krita that allows for a “collaborative” drawing process, where you can manually iterate on every conceivable detail until essentially no AI brush strokes remain. And it runs locally.

    But it can also go from an empty canvas to a complete piece based on nothing but a prompt, or anything in between. How much the AI does is entirely up to whoever is using it. It actually looks like a potentially amazing way to learn, if you’re willing to turn down how much help you’re getting over time and not take credit for something you didn’t produce: developing your own illustrating ability until you eventually take over entirely.

    Copainter takes line-art and produces what looks like a shaded 3D model. There are potentially ethical ways to train such a model. If the dataset was created by turning a bunch of 3D models into line-art, which is easily done in various automated ways, you could use the resulting pairs to train a model to reverse the process, even on novel images.
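    As a rough illustration of that kind of automated pairing (not copainter’s actual pipeline, just a sketch that assumes shaded renders of 3D assets already exist in a `renders/` folder), line-art targets could be derived from the renders with plain edge detection:

    ```python
    # Hypothetical sketch: derive line-art from shaded renders of 3D models,
    # producing (line-art, render) training pairs. Directory names are
    # illustrative, not anything copainter actually uses.
    from pathlib import Path

    import cv2

    RENDERS_DIR = Path("renders")   # shaded renders of 3D models (assumed to exist)
    LINEART_DIR = Path("lineart")   # derived line-art targets
    LINEART_DIR.mkdir(exist_ok=True)

    for render_path in sorted(RENDERS_DIR.glob("*.png")):
        render = cv2.imread(str(render_path))
        gray = cv2.cvtColor(render, cv2.COLOR_BGR2GRAY)

        # Canny gives white edges on black; invert so it reads as ink on paper.
        edges = cv2.Canny(gray, threshold1=50, threshold2=150)
        lineart = cv2.bitwise_not(edges)

        cv2.imwrite(str(LINEART_DIR / render_path.name), lineart)

    # A model trained on these pairs learns to reverse the conversion,
    # i.e. to shade novel line-art it has never seen.
    ```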

    Even if it was done that way, I still find it unlikely that the 3D models used for it weren’t just scraped, though 3D assets are a bit more difficult to find than 2D ones.

    The more active user base seems to be around Krita AI. The artists I could find using it seem to be engaged in a wilful self-delusion that using their own art to “train” the model themselves circumvents the IP-theft aspect of AI training. But they’re not really modifying the model. They’re just fine-tuning it with a LoRA and pretending it’s more than it is.
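    To put a number on how little a LoRA actually changes, here is a minimal PyTorch sketch (a plain illustration, not the actual Krita AI or Stable Diffusion training code): the pretrained weights stay frozen, and only two small low-rank matrices are learned on top of them.

    ```python
    # Minimal LoRA illustration: the base weight W is frozen; only the small
    # low-rank matrices A and B are trained. The base model itself is never
    # modified; the "personal" part is just the tiny adapter.
    import torch
    import torch.nn as nn


    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)   # frozen pretrained weights
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)

            self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Output = frozen base projection + small learned low-rank correction.
            return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


    layer = LoRALinear(nn.Linear(768, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total}")  # only about 1% is "theirs" here
    ```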

    The results are still enabled by the large-scale theft of uncredited work used to train the base model (Stable Diffusion).

    So unfortunately, I can’t compare these tools to radiance fields, SwitchLight or EbSynth animation.