Research should be important to you. If it isn’t, we as scientists are not doing it right. And sadly we’re not. Most research articles receive fewer than 10 citations—even from their own community of scholarly peers. So rather than assume that we, a bunch of academics, know what you, a unique reader, want to see researched, we ask. In fact, we dedicated an entire research study to it. You’ll see opportunities to not only give us feedback, but to suggest your own research studies too.
Methods:
Research should also be accurate and reliable. That’s why we primarily use randomized controlled trials (i.e., experiments) to study the topics that are important to you. Such experiments are considered to be the “gold standard” in research. But we know that even gold standards fall short.
Recent studies in top academic journals find that approximately 50% of their research cannot be replicated or is greatly exaggerated (Bohannon, 2015; Camerer et al., 2018). Although some of these top journals have encouraged countermeasures, their adoption and effectiveness have been overshadowed by long-standing incentive problems (Barrett, 2019; Chevassus-au-Louis, 2019) and biases in the publication process (Bonjean & Hullum, 1978; Strang & Siler, 2015).
To address these issues, we’ve adopted several research best practices:
Report the results of all studies to you, regardless of the outcomes
Provide you with all of the research materials
Provide you with all of the data
Use large samples to help detect smaller effect sizes (see the power-analysis sketch after this list)
Attempt to replicate some of our own studies
Be clear about the limitations of our research
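To make the large-samples point concrete, here is a minimal power-analysis sketch in Python. The design (a two-sample t-test) and the specific numbers (Cohen's d, alpha = 0.05, 80% power) are illustrative assumptions for this sketch, not parameters of any particular study:

```python
# Minimal power-analysis sketch. The two-sample design and the effect-size,
# alpha, and power values below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per group are needed to detect a small effect
# (Cohen's d = 0.2) at alpha = 0.05 with 80% power?
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80)
print(f"Required per group: {n_per_group:.0f}")  # roughly 394

# Conversely, with ~200 per group (400 participants total), what is the
# smallest effect size detectable at the same alpha and power?
detectable_d = analysis.solve_power(nobs1=200, alpha=0.05, power=0.80)
print(f"Smallest detectable d: {detectable_d:.2f}")  # roughly 0.28
```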
Our studies are conducted with about 400 people on Amazon Mechanical Turk (MTurk), a crowdsourcing platform where people complete human intelligence tasks for pay. Evidence shows that MTurk is a reliable platform for research like ours (Berinsky et al., 2012; Paolacci et al., 2010). Participation is limited to MTurkers with at least 1,000 tasks completed and an approval rating of 99%-100%. Participants are paid $0.50 for a 5-minute batch of 5 studies, ordered to prevent spillover effects. Studies with related variables are split into entirely separate batches.
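For readers curious how those participation filters look in practice, here is a sketch using the AWS boto3 MTurk client. This is not documentation of our actual pipeline; the HIT title, description, timing values, and question file are placeholders, and only the two qualification thresholds come from the description above:

```python
# Sketch of MTurk participation filters via the AWS boto3 client.
# Title, description, and timing values are placeholders, not a real
# production configuration. The QualificationTypeIds are MTurk's
# built-in (system) qualifications.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

qualification_requirements = [
    {
        # Worker has at least 1,000 approved tasks.
        "QualificationTypeId": "00000000000000000040",  # NumberHITsApproved
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [1000],
    },
    {
        # Worker's lifetime approval rating is 99%-100%.
        "QualificationTypeId": "000000000000000000L0",  # PercentAssignmentsApproved
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [99],
    },
]

mturk.create_hit(
    Title="A 5-minute batch of 5 short studies",       # placeholder
    Description="Complete 5 short research studies.",  # placeholder
    Keywords="survey, research",
    Reward="0.50",                 # USD per batch, as described above
    MaxAssignments=400,            # about 400 participants per batch
    AssignmentDurationInSeconds=30 * 60,
    LifetimeInSeconds=7 * 24 * 60 * 60,
    Question=open("batch_question.xml").read(),  # placeholder question XML
    QualificationRequirements=qualification_requirements,
)
```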
After analysis, the results are reviewed by 2-3 people before being published on the AB Labs website. Where appropriate, studies are later updated to reflect subsequent research findings and comments from our audience.
Costs:
Last but not least, research should be freely accessible to the public. Publishers of academic research have profited heavily. Universities not only pay for the research, they also pay for publication of the results and then pay again to read them, purchasing access to the publishers' databases with public tax dollars and alumni donations (Zhang, 2019). These paywalls block out the general public too, denying everyday readers access unless they pay $10-$49 per article (Fox & Brainard, 2019).
We address this issue by making all of our research available to everyone. For free.
Nearly as unsettling is how much the research itself costs. Professor salaries, data licenses, analysis software, and overhead are all unfortunately expensive, and those expenses multiply when a paper languishes for years in the publication process. A professor salaried at $100,000 per year who spends just 3 months on a research paper accounts for $25,000 in salary alone. And that one paper might be read by fewer than 10 people.
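For those who like to check the math, that figure is simply a prorated salary:

```python
# Prorated salary cost of one paper, using the figures above.
annual_salary = 100_000   # USD per year
months_on_paper = 3
cost = annual_salary * months_on_paper / 12
print(f"${cost:,.0f}")    # $25,000 in salary alone
```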
We address this issue by being economical with our research. By using modern tools like online experiments, we can provide you with rigorous research for a fraction of the cost. No corporate sponsors. No journals. No fees. Just science.
References:
Barrett, L. F. (2019). The publication arms race. Observer, 32(7).
Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk. Political Analysis, 20(3), 351-368.
Bohannon, J. (2015). Many psychology papers fail replication test. Science, 349(6251), 910-911.
Bonjean, C., & Hullum, J. (1978). Reasons for journal rejection: An analysis of 600 manuscripts. PS: Political Science & Politics, 11(4), 480-483.
Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., … Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2(9), 637-644.
Chevassus-au-Louis, N. (2019). Fraud in the lab: The high stakes of scientific research. Cambridge, MA: Harvard University Press.
Fox, A., & Brainard, J. (2019, February 28). University of California boycotts publishing giant Elsevier over journal costs and open access. Science.
Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411-419.
Strang, D., & Siler, K. (2015). Revising as reframing: Original submissions versus published papers in Administrative Science Quarterly, 2005 to 2009. Sociological Theory, 33(1), 71-96.
Zhang, S. (2019, March 4). The real cost of knowledge. The Atlantic.