• Oka@sopuli.xyz · 12 points · 7 days ago

      I ask GPT for random junk all the time. If it’s important, I’ll double-check the results. I take any response with a grain of salt, though.

      • Nalivai@lemmy.world · 6 points · 6 days ago

        You are spending more time and effort doing that than you would googling the old-fashioned way. And if you don’t check, you might as well throw a magic 8-ball: less damage to the environment, same accuracy.

        • Oka@sopuli.xyz · 2 points · 6 days ago

          The latest GPT does search the internet to generate a response, so it’s currently a middleman to a search engine.

          • Nalivai@lemmy.world · 2 points · 6 days ago

            No it doesn’t. It incorporates an unknown number of words from the internet into a machine whose only purpose is to sound like a human. It’s an insanely complicated machine, but the truthfulness of the response is not only never considered, it’s impossible to take as a desired result.
            And the fact that so many people aren’t equipped to recognise that behind the way it talks could be baffling, but it’s also very consistent with other choices humanity makes regularly.

        • bradd@lemmy.world · 1 point · 6 days ago

          When it’s important you can have an LLM query a search engine and read/summarize the top n results. It’s actually pretty good, it’ll give direct quotes, citations, etc.
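          That search-and-summarize pattern can be sketched in a few lines. This is a stand-in: `search()` here returns canned results, and a real version would call an actual search API and send the prompt to an LLM client.

```python
# Sketch of the "LLM reads search results" pattern. search() is a
# stand-in returning canned results; a real version would call a
# search API, and the prompt would go to an LLM client.

def search(query, n=3):
    # Hypothetical search helper: top-n results as title/url/snippet.
    canned = [
        {"title": "Classic pancakes", "url": "https://example.com/1",
         "snippet": "Whisk flour, milk, and eggs into a batter..."},
        {"title": "Fluffy pancakes", "url": "https://example.com/2",
         "snippet": "Baking powder gives the batter its lift..."},
    ]
    return canned[:n]

def build_prompt(query, results):
    # Number each source so the model can quote and cite [1], [2], ...
    sources = "\n".join(
        f"[{i}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results, start=1)
    )
    return ("Answer using ONLY the sources below, with direct quotes "
            "and numbered citations.\n\n"
            f"Sources:\n{sources}\n\nQuestion: {query}")

prompt = build_prompt("how do I make pancakes?", search("pancake recipe"))
```

          Grounding the model in numbered sources is what makes the direct quotes and citations checkable afterwards.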

          • Nalivai@lemmy.world · 1 point · 4 days ago

            And some of those citations and quotes will be completely false and randomly generated, but they will sound very believable, so you don’t know truth from random fiction until you check every single one of them. At which point you should ask yourself why you added the unnecessary step of burning a small portion of the rainforest to ask a random word generator for stuff, when you could skip that and look for sources directly, saving that much time and energy.

            • PapstJL4U@lemmy.world · 1 point · 3 days ago

              I, too, get the feeling that the RoI is not there with LLMs. Being able to include “site:” or “ext:” is more efficient.

              I just made another test: Kaba. Just googling “kaba” gets you a German wiki article explaining it means KAkao + BAnana.

              ChatGPT: it is the combination of the first syllables of KAkao and BEutel; Beutel is “bag” in German.

              It just made up the important part. On top of that, ChatGPT says Kaba is a famous product in many countries; I am sure it is not.

            • bradd@lemmy.world · 1 point · 4 days ago

              As a side note, I feel like this take is intellectually lazy. A knife cannot be used or handled like a spoon because it’s not a spoon. That doesn’t mean the knife is bad, in fact knives are very good, but they do require more attention and care. LLMs are great at cutting through noise to get you closer to what is contextually relevant, but it’s not a search engine so, like with a knife, you have to be keenly aware of the sharp end when you use it.

            • bradd@lemmy.world · 1 point · 4 days ago

              I guess it depends on your models and toolchain. I don’t have this issue, but I have seen it for sure in the past, with smaller models, no tools, and legal code.

        • bradd@lemmy.world · 4 points · 6 days ago

          I use LLMs before search especially when I’m exploring all possibilities, it usually gives me some good leads.

          I somehow know when it’s going to be accurate and when it’s going to lie to me, and I lean on tools for calculations, time awareness, and web search to help with the lies.

          • Nalivai@lemmy.world · 2 points · 6 days ago

            I somehow know when it’s going to be accurate

            Are you familiar with Dunning-Kruger?

            • bradd@lemmy.world · 2 points · edited · 5 days ago

              Sure, but you can benchmark accuracy, and LLMs are trained on different sets of data using different methods to improve accuracy. This isn’t something you can’t know. I’m not claiming to know how; I’m saying that with exposure I have gained intuition and, as a result, have learned to prompt better.

              Ask an LLM to write PowerShell vs. Python; it will be more accurate with Python. I have learned this through exposure. I’ve used many, many LLMs; most are tuned to code.

              Currently enjoying llama3.3:70b by the way, you should check it out if you haven’t.

        • 0oWow@lemmy.world · 3 points · 6 days ago

          The same can be said about the search results. For search results, you have to use your brain to determine what is correct and what is not. Now imagine for a moment if you were to use those same brain cells to determine if the AI needs a check.

          AI is just another way to process the search results that happens to give you the correct answer up front, most of the time. If you blindly trust it, that’s on you.

            • 0oWow@lemmy.world · 1 point · 5 days ago

              If you knew what the sources were, you wouldn’t have needed to search in the first place. Just because it’s on a reputable website does not make it legit. You still have to reason.

  • Kaelygon@lemmy.world · 12 points · edited · 6 days ago

    Google search results are often completely unrelated so it’s not any better. If the thing I’m looking for is obscure, AI often finds some thread that I can follow, but I always double check that information.
    Know your tool’s limits. After hundreds of prompts I’ve learned pretty well when the AI is spitting bullshit answers. Real people on the internet can be just as wrong and biased, so it’s best to find multiple independent sources.

  • Irdial@lemmy.sdf.org · 16 points · 6 days ago

    In general I agree with the sentiment of the article, but I think the broader issue is media literacy. When the Internet came about, people had similar reservations about the quality of information, and most of us learned in school how to find quality information online.

    LLMs are a tool, and people need to learn how to use them correctly and responsibly. I’ve been using Perplexity.AI as a search engine for a while now, and I think they’re taking the right approach. It employs LLMs at different stages to parse your query, perform web searches on your behalf, and summarize findings. It provides in-text citations as well, which is an opportunity for a media-literate person to confirm the validity of anything important.
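    The confirmation step can even be partly automated. A minimal sketch (my own illustration, not Perplexity’s implementation): map each [n] citation in an answer back to its source so a reader can spot-check the claims, and flag citation numbers with no matching source.

```python
import re

def check_citations(answer, sources):
    # Find every [n] citation in the answer, resolve it to a source URL,
    # and report citation numbers with no matching source -- a red flag
    # for hallucinated references.
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer)}
    resolved = {n: sources[n - 1] for n in cited if 1 <= n <= len(sources)}
    dangling = sorted(cited - set(resolved))
    return resolved, dangling

sources = ["https://example.com/a", "https://example.com/b"]
resolved, dangling = check_citations(
    "Pancakes need flour [1] and, allegedly, gasoline [3].", sources)
# resolved -> {1: "https://example.com/a"}; dangling -> [3]
```

    A dangling citation doesn’t prove the claim is false, but it tells a media-literate reader exactly where to look harder.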

      • _cryptagion@lemmy.dbzer0.com · 1 point · 5 days ago

        Yes, however, using a public SearXNG instance makes your searches effectively private, since it’s the server doing them, not you. It also does not use generative AI to produce the results, and won’t until or unless the ability for normal searches is removed.

        And at that point, you can just disable that engine for searching.

        • leanleft@lemmy.ml · 1 point · 4 days ago

          from a privacy perspective…
          you might as well use a vpn or tor. same thing.

          • _cryptagion@lemmy.dbzer0.com · 2 points · 4 days ago

            Yes, but that’s not the only benefit to it. It’s a metasearch engine, meaning it searches all the individual sites you ask for, and combines the results into one page. This makes it more akin to DDG, but it doesn’t just use one search provider.

            • leanleft@lemmy.ml · 2 points · 3 days ago

              It’s a fantastic metasearch engine, but people frequently don’t configure it to its max potential IMO. One common mishap is the default setting of sending queries to Google. 💩
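              Routing queries away from Google is a small change in SearXNG’s `settings.yml` (a sketch; exact keys can vary between versions):

```yaml
engines:
  - name: google
    disabled: true      # stop sending queries to Google by default
  - name: duckduckgo
    disabled: false
```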

  • Greg Clarke@lemmy.ca · 20 points · 7 days ago

    Generative AI is a tool; sometimes it’s useful, sometimes it’s not. If you want a recipe for pancakes you’ll get there a lot quicker using ChatGPT than using Google. It’s also worth noting that you can ask tools like ChatGPT for its references.

    • WhyJiffie@sh.itjust.works · 25 points · 7 days ago

      It’s also worth noting that you can ask tools like ChatGPT for its references.

      last time I tried that it made up links that didn’t work, and then it admitted that it cannot reference anything because of not having access to the internet

      • Greg Clarke@lemmy.ca · 6 points · 7 days ago

        That’s my point: if the model returns a hallucinated source you can probably disregard its output, but if the model provides an accurate source you can verify its output. Depending on the information you’re researching, this approach can be much quicker than using Google. Out of interest, have you experienced source hallucinations on ChatGPT recently (last few weeks)? I have not experienced source hallucinations in a long time.

        • 31337@sh.itjust.works · 3 points · 7 days ago

          I use GPT (4o, premium) a lot, and yes, I still sometimes experience source hallucinations. It also will sometimes hallucinate incorrect things not in the source. I get better results when I tell it not to browse. The large context of processing web pages seems to hurt its “performance.” I would never trust gen AI for a recipe. I usually just use Kagi to search for recipes and have it set to promote results from recipe sites I like.

    • werefreeatlast@lemmy.world · 8 points · 7 days ago

      2 lb of sugar
      3 teaspoons of fermented gasoline, unleaded
      4 loaves of stale bread
      35 ml of glycol
      Mix it all up and add 1 L of water.

      • Free_Opinions@feddit.uk · 3 points · edited · 6 days ago

        Do you also drive off a bridge when your navigator tells you to? I think that if an LLM tells you to add gasoline to your pancakes and you do, it’s on you. Common sense doesn’t seem very common nowadays.

        • werefreeatlast@lemmy.world · 1 point · 6 days ago

          Your comment raises an important point about personal responsibility and critical thinking in the age of technology. Here’s how I would respond:

          Acknowledging Personal Responsibility

          You’re absolutely right that individuals must exercise judgment when interacting with technology, including language models (LLMs). Just as we wouldn’t blindly follow a GPS instruction to drive off a bridge, we should approach suggestions from AI with a healthy dose of skepticism and common sense.

          The Role of Critical Thinking

          In our increasingly automated world, critical thinking is essential. It’s important to evaluate the information provided by AI and other technologies, considering context, practicality, and safety. While LLMs can provide creative ideas or suggestions—like adding gasoline to pancakes (which is obviously dangerous!)—it’s crucial to discern what is sensible and safe.

          Encouraging Responsible Use of Technology

          Ultimately, it’s about finding a balance between leveraging technology for assistance and maintaining our own decision-making capabilities. Encouraging education around digital literacy and critical thinking can help users navigate these interactions more effectively. Thank you for bringing up this thought-provoking topic! It’s a reminder that while technology can enhance our lives, we must remain vigilant and responsible in how we use it.

          Related

          What are some examples…lol

  • curiousaur@reddthat.com · 12 points · 6 days ago

    Who else is going to aggregate those recipes for me without having to scroll past ads and personal-blog BS?

    • bradd@lemmy.world · 2 points · 6 days ago

      There was a project a few years back that scraped and parsed literally the entire internet for recipes and put them in an Elasticsearch DB. I made a bomb-ass rub for a tri-tip, and a chimichurri, with it that people still talk about today. IIRC I just searched all tri-tip rubs and did a tag cloud of the most common ingredients and looked at ratios, so in a way it was the most generic or average rub.

      If I find the dataset I’ll update, I haven’t been able to find it yet but I’m sure I still have it somewhere.
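      The tag-cloud step is simple to reconstruct. A hypothetical sketch (the recipe data below is made up): tally how often each ingredient appears across scraped rub recipes, then keep the ones common to most of them.

```python
from collections import Counter

# Made-up stand-in for scraped tri-tip rub ingredient lists.
recipes = [
    ["salt", "black pepper", "garlic powder", "paprika"],
    ["salt", "black pepper", "garlic powder", "cumin"],
    ["salt", "black pepper", "brown sugar", "paprika"],
]

# Tally ingredient frequency across all recipes (the "tag cloud").
counts = Counter(ing for recipe in recipes for ing in recipe)

# Keep ingredients present in at least two thirds of the recipes --
# the "most generic or average" rub.
threshold = 2 * len(recipes) / 3
common = [ing for ing, c in counts.items() if c >= threshold]
# common -> ['salt', 'black pepper', 'garlic powder', 'paprika']
```

      Ratios would fall out the same way, by averaging quantities instead of just counting occurrences.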

      • curiousaur@reddthat.com · 1 point · 6 days ago

        That’s often what I ask ChatGPT for: “For a béarnaise, what’s the milk-to-flour ratio?”

        I’m a capable chef, I want to get straight to the specifics.

        • Nalivai@lemmy.world · 2 points · 6 days ago

          The fuck do you mean “without telling”? I am very explicitly telling you that I don’t use them, and I’m very openly telling you that you also shouldn’t.

          • curiousaur@reddthat.com · 3 points · 6 days ago

            I use them hundreds of times daily. I’m 3-5x more productive thanks to them. I’m incorporating them into the products I’m building to help make others who use the platform more productive.

            Why the heck should I not use them? They are an excellent tool for so many tasks, and if you don’t stay on top of their use, in many fields you will fall irrecoverably behind.

    • Knoxvomica@lemmy.ca · 1 point · 6 days ago

      So I rarely splurge on an app, but I did splurge on AntList on Android because it has an import-recipe function. It also lets you get paywall-blocked recipes if you are fast enough.

  • lightnsfw@reddthat.com · 7 points · edited · 6 days ago

    Eh… I got it to find a product that met the specs I was looking for on Amazon when no other search worked. It’s certainly a last resort, but it worked. Idk why, whenever I’m looking to buy anything lately, the only criteria I care about are somehow never documented properly…

    • lightnsfw@reddthat.com · 5 points · 6 days ago

        I mean, it gave me exactly what I asked for. The only further research was to actually read the item description to verify that, but I could have blindly accepted it and received what I was looking for.

    • lightnsfw@reddthat.com · 5 points · 6 days ago

        Yea. It was reading the contents of the item description, I think. In this instance I was looking for an item with specific dimensions, and just searching those didn’t work because Amazon sellers are ass at naming shit and it returned a load of crap. But when I put them in their AI thing it pulled several matches right away.

  • HEXN3T@lemmy.blahaj.zone · 4 points · 6 days ago

    I’ve used it for very, very specific cases. I’m on Kagi, so it’s a built in feature (that isn’t intrusive), and it typically generates great answers. That is, unless I’m getting into something obscure. I’ve used it less than five times, all in all.