Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • kandoh@reddthat.com · 47 minutes ago

    My issue is that the C-levels and executives see it as a way of eliminating one of their biggest costs: labour.

    They want their educated labour force reduced by three quarters. They want me doing the jobs of 4 people with the help of AI, and they want to pay me less than they already do.

    What I would like is a universal basic income paid for by taxing the shit out of the rich.

  • Vanilla_PuddinFudge@infosec.pub · 23 minutes ago

    I’m beyond the idea that there could or would be any worldwide movement against AI, or much of anything really, if healthcare, welfare, and education reform are any comparison. People are tuned out and numb.

  • OTINOKTYAH@feddit.org · 2 hours ago

    Not destroying but being real about it.

    It’s flawed as hell and feels like a hype cycle meant to save big tech companies, while the end user gets a shitty product. Yet companies keep shoving it into apps and everything else, even when it degrades the user experience (like Duolingo).

    Also, yes, there need to be laws for that. I mean, if I download something illegally, I’ll be put behind bars and can kiss my life goodbye. If a megacorp does the same thing to train their LLM, “it’s for the greater good”. That’s bullshit.

  • daniskarma@lemmy.dbzer0.com · 3 hours ago

    I’m not against it as a technology. I use it for my personal use, as a toy, to have some fun or whatever.

    But what I despise is the forced introduction of it into everything: AI-written articles and forced AI assistants in many unrelated apps. That’s what I want to disappear, the way they force it into so many places.

  • Saleh@feddit.org · edited · 3 hours ago

    First of all, stop calling it AI. It is just large language models for the most part.

    Second: an immediate carbon tax on the energy consumption of datacenters, in line with current damage estimates for emissions. That would be around $400/tCO2, iirc.
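    As a rough back-of-envelope sketch of what that rate would mean for a single facility (all the datacenter and grid figures below are hypothetical illustrations, not real data):

    ```python
    # Back-of-envelope carbon tax on a datacenter's energy use.
    # Every input here is a hypothetical illustration.

    annual_energy_mwh = 100_000          # assumed yearly consumption of one datacenter
    grid_intensity_tco2_per_mwh = 0.4    # assumed carbon intensity of the local grid
    tax_rate_usd_per_tco2 = 400          # the ~$400/tCO2 rate proposed above

    emissions_tco2 = annual_energy_mwh * grid_intensity_tco2_per_mwh
    tax_usd = emissions_tco2 * tax_rate_usd_per_tco2
    print(f"{emissions_tco2:,.0f} tCO2 -> ${tax_usd:,.0f} in tax")
    # → 40,000 tCO2 -> $16,000,000 in tax
    ```

    Even with these made-up numbers, the point stands: at that rate the tax bill scales directly with energy use, which is exactly the incentive being argued for.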

    Third: make it obligatory by law to provide disclaimers about what the tool is actually doing. So if someone asks “is my partner cheating on me?”, the first message should be: “This tool does not understand what is real and what is false. It has no actual knowledge of anything, in particular not of your personal situation. It just puts together words that seem likely to belong together. It cannot give personal advice and cannot be used for knowledge gain. It is solely to be used for entertainment purposes. If you use its answers in any dangerous way, such as for designing machinery, operating machinery, financial decisions, or similar, you are liable for it yourself.”

    • Hello Hotel@lemmy.world · 1 hour ago

      “This tool is exclusively built to respond to your chats the way a person would. This includes claiming it knows things regardless of whether it actually does. Its knowledge is limited to its ‘training’ process.”

    • dimah@crazypeople.online · 2 hours ago

      It absolutely can be used for knowledge gain; it just depends what you’re trying to learn. For example, they excel at teaching languages. I speak 3 languages: my mother tongue Persian, English for business and most things, and Spanish because of where I live now. Using an LLM to teach myself French has been easier than Duolingo ever managed.

    • Psychadelligoat@lemmy.dbzer0.com · 2 hours ago

      “First of all stop calling it AI. It is just large language models for the most part.”

      Leave it to the anti-AI people to show their misunderstanding fast and early. LLMs are AI; they’re just not general AI.

  • boaratio@lemmy.world · edited · 3 hours ago

    For it to go away, just like Web 3.0 and NFTs did. Stop cramming it up our asses in every website and application. Make it opt-in instead of, if you’re lucky, opt-out. And also, stop burning down the planet with datacenter power and water usage. That’s all.

    Edit: Oh yeah, and get sued into oblivion for stealing every copyrighted work known to man. That too.

    Edit 2: And the tech press should be ashamed of how much they’ve been fawning over these slop generators. They gladly parrot press releases, claim it’s the next big thing, and generally just suckle at the teat of AI companies.

  • kittenzrulz123@lemmy.blahaj.zone · 4 hours ago

    I do not need AI and I do not want AI. I want to see it regulated to the point that it becomes severely unprofitable. The world is burning and we are heading face first toward a climate catastrophe (if we’re not already there); we DON’T need machines mass-producing slop.

  • Taleya@aussie.zone · edited · 4 hours ago

    What do I really want?

    Stop fucking jamming it up the arse of everything imaginable. If I had a genie wish, I’d make it illegal for it to be anything but opt-in.

    • blackn1ght@feddit.uk · 4 hours ago

      I think it’s just a matter of time before it starts being removed from places where it just isn’t useful. For now, companies are throwing it at everything to see what sticks. WhatsApp and JustEat added AI features, and I have no idea why, how they could be used in those services, or who would actually use them.

  • Detun3d@lemm.ee · 4 hours ago

    Gen AI should be an optional tool to help us improve our work and life, not an unavoidable subscription service that makes it all worse and makes us dumber in the process.

  • MochiGoesMeow@lemmy.zip · 2 hours ago

    I’m not a fan of AI because I think the premise of analyzing and absorbing work without consent from its creators is, at its core, bullshit.

    I also think that AI is another step toward more efficient government spying.

    Since AI learns from human content without consent, I think governments should figure out how to socialize the profits. (Probably will never happen.)

    They should also regulate how data is stored, and ensure videos are clearly labeled if made with AI.

    They also have to be careful to protect victims of revenge porn and similar content, and make sure people are held accountable.

    • 4am@lemm.ee · 3 hours ago
      • Trained on stolen ideas: ✅
      • Replacing humans who have little to no safety net while enriching an owner class: ✅
      • Disregard for resource allocation, use, and pollution in the pursuit of profit: ✅
      • Being forced into everything so as to become unavoidable and foster dependence: ✅

      Hey wow look at that, capitalism is the fucking problem again!

      God we are such pathetic gamblemonkeys, we cannot get it together.

  • helpImTrappedOnline@lemmy.world · 5 hours ago

    (Ignoring all the stolen work to train the models for a minute)

    It’s got its uses and potential: things like translation, writing prompts, or use as a research tool.

    But all these products force it into places that clearly do not need it, solving problems that could be handled with two or three steps of logic.

    The failed attempts at replacing jobs, screening resumes, or monitoring employees are terrible.

    Lastly, the AI relationships are not good.

  • DeathsEmbrace@lemm.ee · 5 hours ago

    Ruin the marketing. I want them to stop using the catch-all term “AI” and use the appropriate terminology: narrow AI. It needs input, so let’s stop making up fantasies about it; it’s bullshit, in truth.

  • sweemoof@lemmy.world · 6 hours ago

    The most popular models used online need to include citations for everything. AI can be used to automate some white-collar/knowledge work, but it needs to be scrutinized heavily by independent thinkers when used to try to predict trends and future events.

    As always, schools need to teach critical thinking, epistemology, and emotional intelligence way earlier than we currently do, and AI shows that rote subject matter is a dated way to learn.

    When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)

    Overall, I see AI as the next step in search engine synthesis; info just needs to be properly credited to the original researchers and verified against other sources by the user. No different from Google or Wikipedia.