MistyTiger wrote: Those papers are based on results of studies. Do you know how studies start? The researchers ask for participation from residents in the surrounding areas and they try to get participation that is representative of the general population which is not easy to do. They use a sample of responses and write the paper based on responses. Are you calling those respondents liars? If so, you are calling a lot of people liars.
They don't.
Study 1, referred to in this thread, had 18 undergrad students. That's it.
Study 2, purporting to confirm study 1, reanalyzed data from a somewhat larger study: 45 respondents "from the community" (recruited in DMV offices, with an overrepresentation of Hispanics in the source study) and 31 cops recruited from Denver's PD. There was no random selection of people from either group (they were only randomly assigned AFTER showing up, which is not the same thing) and no sampling frame to draw from in the study these cases came from. Participation in the original study was obviously voluntary. I have no idea how the subsample from the previous study was selected, why they didn't reuse all of it, or how their treatment was defined.
The paper can be read here:
https://www.researchgate.net/publicatio ... n_to_shoot
The original paper they drew their sample for study 2 from is here:
https://www.apa.org/pubs/journals/relea ... 261006.pdf
It is a well-known and by now accepted fact that these social psychology experiments generally fail to replicate when larger samples are used.
Wiki wrote: In psychology
Despite issues with replicability being pervasive across scientific fields, several factors have combined to put psychology at the center of the conversation.[25][26] Some areas of psychology once considered solid, such as social priming, have come under increased scrutiny due to failed replications.[27] Much of the focus has been on the area of social psychology,[28] although other areas of psychology such as clinical psychology,[29][30][31] developmental psychology,[32][33][34] and educational research have also been implicated.[35][36][37][38][39]
In August 2015, the first open empirical study of reproducibility in psychology was published, called The Reproducibility Project: Psychology. Coordinated by psychologist Brian Nosek, researchers redid 100 studies in psychological science from three high-ranking psychology journals (Journal of Personality and Social Psychology, Journal of Experimental Psychology: Learning, Memory, and Cognition, and Psychological Science). 97 of the original studies had significant effects, but of those 97, only 36% of the replications yielded significant findings (p value below 0.05).[11] The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies. The same paper examined the reproducibility rates and effect sizes by journal and discipline. Study replication rates were 23% for the Journal of Personality and Social Psychology, 48% for Journal of Experimental Psychology: Learning, Memory, and Cognition, and 38% for Psychological Science. Studies in the field of cognitive psychology had a higher replication rate (50%) than studies in the field of social psychology (25%).[40]
A study published in 2018 in Nature Human Behaviour replicated 21 social and behavioral science papers from Nature and Science, finding that only about 62% could successfully reproduce original results.[41][42]
Similarly, in a study conducted under the auspices of the Center for Open Science, a team of 186 researchers from 60 different laboratories (representing 36 different nationalities from six different continents) conducted replications of 28 classic and contemporary findings in psychology.[43][44] The study's focus was not only whether the original papers' findings replicated but also the extent to which findings varied as a function of variations in samples and contexts. Overall, 50% of the 28 findings failed to replicate despite massive sample sizes. But if a finding replicated, then it replicated in most samples. If a finding was not replicated, then it failed to replicate with little variation across samples and contexts. This evidence is inconsistent with a proposed explanation that failures to replicate in psychology are likely due to changes in the sample between the original and replication study.[44]
Results of a 2022 study suggest that many earlier brain–phenotype studies ("brain-wide association studies" (BWAS)) produced invalid conclusions as the replication of such studies requires samples from thousands of individuals due to small effect sizes.[45][46]
As I mentioned, this whole thing exploded when a social psychology paper purported to have shown, across several experiments, that people could have premonitions about the future. Needless to say, the experiments did not replicate when they were redone with larger samples (much larger than the 90-120 subjects originally used, which are themselves larger than the samples in the paper at hand and in the ballpark of those in the original study that study 2's data came from).
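The mechanics behind those failed replications are easy to demonstrate. Here's a minimal simulation (my own sketch, with made-up numbers, not taken from any of the papers above): assume a small true effect and original studies with 20 subjects per group, keep only the originals that happened to reach p < .05, and see how often an exact same-size replication is also significant.

```python
import random
import statistics

random.seed(1)

def two_group_study(d, n):
    """Run one two-group study (unit-variance normal data, known variance)
    and return True if a two-sided z-test comes out significant at p < .05."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(d, 1.0) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = (2.0 / n) ** 0.5  # standard error of the difference, variance known
    return abs(diff / se) > 1.96

# Collect 500 "original findings": small true effect d = 0.2, n = 20 per group.
significant_originals = 0
replicated = 0
while significant_originals < 500:
    if two_group_study(0.2, 20):          # original study got "a result"
        significant_originals += 1
        if two_group_study(0.2, 20):      # same-size replication attempt
            replicated += 1

# With n = 20 per group the study's power is only about 10%, so even though
# every "original" here was significant, most same-size replications fail.
print(replicated / significant_originals)
```

The point of the sketch: getting a significant result once in an underpowered study says very little, because the replication rate is capped by the study's power, and power at these sample sizes is tiny even when the effect is real.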
Don't blame me for not believing the results of experiments with so few subjects simply because they were published in journals whose editors and referees are keen on that type of research. In their defense, there is probably nowhere near enough funding to get the sample sizes they need for all the studies they want: if they pay people $20 for participating, they could easily need $20,000 in funding just for paying the subjects, not counting recruitment costs like posting ads, the cost of running the actual experiments, their own salaries and their research assistants' salaries, etc., and there are hundreds of teams working on this kind of thing nationally.
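The funding arithmetic checks out. Using the standard normal-approximation formula for a two-group design, n per group ≈ 2(z_α/2 + z_β)² / d², a study powered at 80% to detect a small effect (d = 0.2, the kind of effect size these literatures often report after correction) needs several hundred subjects per group. The d and $20 figures here are illustrative assumptions, not taken from the papers:

```python
from math import ceil

z_alpha = 1.96   # two-sided alpha = .05
z_beta = 0.84    # 80% power
d = 0.2          # assumed small standardized effect
cost_per_subject = 20  # assumed $20 payment per participant

# Normal-approximation sample size for a two-group comparison
n_per_group = ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)
total_subjects = 2 * n_per_group
subject_cost = total_subjects * cost_per_subject

print(n_per_group, total_subjects, subject_cost)  # 392 784 15680
```

So a single adequately powered study runs close to $16,000 in participant payments alone, which is why so many of these experiments end up with a few dozen undergrads instead.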