• kibiz0r@midwest.social · 3 days ago

    I’m anti-AI and pro-piracy.

    I object to paywalling access to culture and knowledge, because it degrades our society, cuts people out of participating in ongoing cultural conversations, and keeps people from enjoying the fruits of human creativity based solely on their income level.

    I object to AI for basically the same reasons.

      • kibiz0r@midwest.social · edited · 3 days ago

        Not really, but I guess it depends on whether you’re asking about my personal beliefs or my policy positions.

        My concerns about gen AI basically fall into these categories:

        • Environmental impact: water usage, energy usage
        • Harmful output: misinfo, disinfo, reinforced biases, scams, “chatbot psychosis”
        • Signal jamming: gen AI produces so much output from so little input that it really could cause a communication equivalent of Kessler Syndrome
        • Anticompetitive practices: using the works of creators to compete against them in the same market
        • Labor alienation: what Doctorow calls “chickenized reverse centaurs”
        • Undermining open access: see Molly White’s essay “No, Not Like That”

        FOSS addresses some of those, to some degree. But none of them completely.

        Should a technology be banned just because it’s not perfect? No. (And even if you decide a technology should be banned, you have to consider the practicality of actually enforcing it. It’s not like you can “uninvent” software.)

        My biggest worry is actually the signal jamming. And there’s not really much we can do about that except to just decide not to use AI.

        Edit: Btw, that was a good question and whoever downvoted you is a butt.

      • Xerxos@lemmy.ml · 3 days ago

        We don’t have any state-of-the-art open source LLMs; we have open-weights models. The reason is that a truly open source LLM would require opening up your training data (which exposes you to lawsuits from the people whose content you trained on) and your training techniques (which would let other developers copy them to advance their own models).

        The last truly open source model was probably GPT-2 or something of that level.

      • kibiz0r@midwest.social · 3 days ago

        I can understand why. They buy into AI vendors’ premise — that copyright is the only way to fight back.

        But that’s not going to work, because 1) they win either way, and more importantly 2) if you zoom out, this is kinda the big tech playbook in general, right?

        “Okay, define what constitutes a ‘taxi service’, so that I can compete against them while avoiding the regulations that apply to them.”

        “Define ‘employment’, so I can use people’s labor without respecting their labor rights.”

        “Define ‘purchase’, so I can charge money for access to something but take it away whenever I feel like it.”

        So when we say “Hey, you’re being a jerk by using people’s own work to compete against them and disconnect them from their audience”, they say “Okay, define that in objective, quantifiable terms, and we’ll stop doing anything that fits that exact definition… but we’ll still continue doing basically that, obviously.”