• TropicalDingdong@lemmy.world
    8 months ago

    They posted their methodology and to me, as an unqualified lay person (…)

    So like, if you know the above statement to be true, that’s exactly where you should stop in your reasoning. This is something I find Americans guilty of constantly: having the humility to understand that they shouldn’t have an opinion, and then proceeding to arrogantly hold the opinion they just acknowledged they shouldn’t have. I think it’s a deeply human thing; we evolved to deal with missing information, so our brains fill in gaps and hand us convincing narratives. But you have to resist that tendency when you know you really don’t know, and even more so when your beliefs go against what the data says.

    If you can find me some sources of data on special elections, I’ll happily analyze them for you. I think it would be interesting, if nothing else, to see the offset. I’m not on my desktop machine, but since you asked, I’ll give you some sources for data.

    • mozz@mbin.grits.dev
      8 months ago

      Surely, as a qualified non-lay person, you’ll be able to do a detailed takedown of all the criticism of the poll’s methodology that I arrived at from like 2 minutes of looking, instead of just making the broad assertion that if the polling was wrong by a certain amount in a previous year, we should add that amount to this year’s polling to arrive at reality, and that that’s all that’s needed and this year’s corrected poll will always be accurate.

      Because to me, that sounds initially plausible, but when you look at it a little longer you go, oh wait, hang on: if that were all that was needed, the professional pollsters could just do that, and their answers would always be right. And you wouldn’t need to look closely at the methodology at all, just trust that “it’s a poll” means it’s automatically equal to every other poll (once you apply the magic correction factor).
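      Spelled out with toy numbers (this is my reading of the claim, not anyone’s actual model), that logic amounts to something like this:

      ```python
      # Toy illustration (made-up numbers) of the "add last cycle's miss" logic
      # described above -- not anyone's actual model.
      last_cycle_poll_margin = 48.0   # what the polls said last cycle
      last_cycle_result = 52.0        # what actually happened
      correction = last_cycle_result - last_cycle_poll_margin  # the "magic" +4

      this_cycle_poll_margin = 47.0
      corrected_estimate = this_cycle_poll_margin + correction

      print(corrected_estimate)  # 51.0, treated as if it were guaranteed reality
      ```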

      To me that sounds, on close scientific examination, like a bunch of crap once you think about it for a little bit. But what do I know? I’m unqualified. I’ll wait for you to educate me.

      • TropicalDingdong@lemmy.world
        8 months ago

        I think the right answer is to do what you described, but in the aggregate. Don’t do it on a pollster-by-pollster basis; do it at the state level, across all polls. You don’t do this as a pollster because that isn’t really what you are trying to model with a poll, and polls being wrong or uncertain is just part of the game.

        So it’s important to not conflate polling with the meta-analysis of polling.

        I’m not so much interested in polls or polling as in being able to use them as a source of data to model outcomes that, individually, they may not be able to predict. Ultimately, a poll needs to be based on the data it samples from to be valid. If there is something fundamentally flawed in the assumptions that form the basis of this, there isn’t that much you can do to fix it with updates to methods.

        The -4, 8 spread is the prior I’m walking into this election year with. That, in spite of pollsters’ best efforts to come up with an unbiased sample, they can’t predict the election outcome is fine; we can deal with that in the aggregate. This is very similar to Nate Silver’s approach.
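        Roughly what I mean, as a minimal sketch with made-up poll numbers and my hypothetical -4 to +8 error prior (not Silver’s actual model):

        ```python
        # Minimal sketch (hypothetical numbers): treat historical polling error as
        # a prior and apply it to a state-level aggregate, not to any one pollster.
        import numpy as np

        # Hypothetical margins (Dem minus Rep, in points) from several polls of one state.
        state_polls = np.array([1.5, -0.5, 2.0, 0.5])
        poll_average = state_polls.mean()

        # Hypothetical prior from past cycles: the true margin could sit anywhere
        # from 4 points below to 8 points above what the polls show.
        prior_low, prior_high = -4.0, 8.0

        # Propagate that uncertainty through the aggregate instead of "fixing" each poll.
        rng = np.random.default_rng(0)
        simulated_margin = poll_average + rng.uniform(prior_low, prior_high, 10_000)

        print(f"raw polling average: {poll_average:+.1f}")
        print(f"5th-95th percentile after the prior: "
              f"{np.percentile(simulated_margin, 5):+.1f} to "
              f"{np.percentile(simulated_margin, 95):+.1f}")
        print(f"simulated win probability: {(simulated_margin > 0).mean():.0%}")
        ```

        Point being, the correction lives in the aggregation layer, not inside any individual poll.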

        • mozz@mbin.grits.dev
          8 months ago

          If there is something fundamentally flawed in the assumptions that form the basis of this, there isn’t that much you can do to fix it with updates to methods.

          On this, we 100% agree.