• candyman337@lemmy.world
    29 days ago

    My personal reason for disliking AI is the tremendous environmental cost: it consumes and taints fresh water, and it is for the most part powered by mega data centers running on carbon-emitting fuel sources.

    Additionally, all widely available AI models have been largely trained on work stolen from artists without their permission. That means even if this person is using AI “just for the shading,” it is still piggybacking off the stolen work of other creators and, depending on whether they run the generation locally or through some service, also contributing to the tainting of fresh water and to pollution via carbon emissions.

    In my opinion this still counts as generative AI: these line drawings are fed through the same AI generators as fully generated content, and the model is producing a large part of the art.

    Overall, I just don’t think this is something that should be promoted, because it’s ethically dubious at best, especially since the creator is seemingly not very open about the use of AI.

    • MentalEdge@sopuli.xyzOPM
      28 days ago

      The most likely tools being used are Krita AI or Copainter.

      Krita AI is a frankly impressive plugin for Krita that allows for a “collaborative” drawing process, where you can manually iterate on every conceivable detail until essentially no AI brush strokes remain. And it runs locally.

      But it can also go from an empty canvas to a complete piece based on nothing but a prompt, or anything in between; how much the AI does is entirely up to whoever is using it. It actually looks like a potentially amazing way to learn, if you’re willing to turn down how much help you’re getting over time, not take credit for something you didn’t produce, and develop your own illustrating ability until you eventually take over entirely.
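      The plugin’s backend isn’t named in this thread, so this is a hedged illustration only: the “anything in between” behaviour maps onto the standard Stable Diffusion img2img idea, where a single strength parameter decides how much of the result comes from your own canvas versus the prompt. A minimal sketch with the diffusers library (the model name and file paths are assumptions):

      ```python
      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from PIL import Image

      # Load a Stable Diffusion checkpoint (assumed model; a real plugin lets you pick)
      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          torch_dtype=torch.float16,
      ).to("cuda")

      canvas = Image.open("my_rough_sketch.png").convert("RGB")  # hypothetical input

      # strength near 0.0: the model barely touches your drawing.
      # strength near 1.0: the model effectively generates from the prompt alone.
      result = pipe(
          prompt="character portrait, soft lighting",
          image=canvas,
          strength=0.35,
      ).images[0]
      result.save("iteration_01.png")
      ```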

      Copainter takes line art and produces what looks like a shaded 3D model. There are potentially ethical ways to train such a model: if the dataset was created by essentially turning a bunch of 3D models into line art, which is easily done in various automated ways, you could use the resulting pairs to train a model to reverse the process, even on novel images.
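      If I had to guess at what that dataset-construction step would look like (an assumption about the approach, not Copainter’s actual pipeline), it’s roughly: take shaded renders of 3D models, derive line art from them automatically, and save (line art, shaded render) pairs that could later train a model to run in the opposite direction. A rough Python sketch using OpenCV edge detection as the “automated” step:

      ```python
      import cv2
      from pathlib import Path

      renders = Path("renders")   # hypothetical folder of shaded 3D renders
      out = Path("pairs")
      (out / "lineart").mkdir(parents=True, exist_ok=True)
      (out / "shaded").mkdir(parents=True, exist_ok=True)

      for img_path in renders.glob("*.png"):
          shaded = cv2.imread(str(img_path))
          gray = cv2.cvtColor(shaded, cv2.COLOR_BGR2GRAY)
          # Canny edge detection: one crude automated way to get line art from a render
          edges = cv2.Canny(gray, threshold1=50, threshold2=150)
          lineart = cv2.bitwise_not(edges)  # black lines on a white background
          cv2.imwrite(str(out / "lineart" / img_path.name), lineart)
          cv2.imwrite(str(out / "shaded" / img_path.name), shaded)
      ```

      A model trained on those pairs, with the line art as input and the shaded render as the target, would be attempting exactly the reversal described above.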

      Even if this was how it was done, I still find it unlikely that the 3D models used for it weren’t just scraped, though 3D assets are a bit more difficult to find than 2D assets.

      The more active user base seems to be around Krita AI. The artists I could find using it seem to be engaged in a wilful self-delusion that using their own art to “train” the model themselves circumvents the IP-theft aspect of AI training. But they’re not really modifying the model; they’re just fine-tuning it with a LoRA and pretending it’s more than it is.
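      To make that last point concrete: a LoRA leaves the base model’s weights frozen and only trains a pair of small low-rank matrices on top of them, so the original model, and everything scraped to train it, is still doing the heavy lifting. A minimal, illustrative PyTorch sketch (not anyone’s actual training setup):

      ```python
      import torch
      import torch.nn as nn

      class LoRALinear(nn.Module):
          def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
              super().__init__()
              self.base = base
              self.base.weight.requires_grad_(False)   # the base model is never modified
              if self.base.bias is not None:
                  self.base.bias.requires_grad_(False)
              # Only these two small matrices get trained on the artist's own work
              self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
              self.B = nn.Parameter(torch.zeros(base.out_features, rank))
              self.scale = alpha / rank

          def forward(self, x):
              # frozen base output + a tiny trainable correction
              return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

      layer = LoRALinear(nn.Linear(768, 768))
      trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
      total = sum(p.numel() for p in layer.parameters())
      print(f"trainable: {trainable} of {total} parameters")  # roughly 2% here
      ```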

      The results are still enabled by the large scale theft of uncredited work for training the base model (Stable Diffusion).

      So unfortunately, I can’t compare these tools to radiance fields, SwitchLight or EbSynth animation.