This post discusses the importance of incentivizing research participants in large-scale surveys, which have become an important part of academic research by enabling small research teams, and even individual researchers, to perform large-scale studies that just a few years ago would have been out of reach for all but a small number of well-funded scholars.
Quantitative surveys with large, statistically significant populations have historically been time-consuming, expensive and laborious to perform, often requiring thousands of phone calls, distributing survey forms by costly postal mail, or approaching respondents face-to-face during Saturday shopping (Callegaro, Manfreda and Vehovar, 2015; Dziuban et al., 2015).
Large-scale studies based on a manual process of recruiting participants have therefore only been accessible to established researchers with access to significant funds from the academic or corporate world. Even with funds available, recruiting thousands of participants by phone or post who are willing to offer perhaps an hour of their precious time to answer a survey is in many cases simply not practically possible. This also explains the high number of studies in the social sciences with obvious deficiencies in the statistical validity of their research populations, among them Hofstede’s theory of cultural dimensions (Hofstede, 2017) and Schwartz’s seven-dimension model (Schwartz, 2006), two of the most influential models of culture in the fields of cross-cultural psychology, international management, and cross-cultural communication.
Hofstede’s theory (2017), as pointed out by McSweeney (2002), originates from a survey with more than 117,000 respondents from 66 countries (McSweeney 2002, p.94), but in only six of those countries does the number of respondents exceed 1,000; in fifteen countries there are fewer than 200 respondents; in Pakistan only 37; and the samples from Hong Kong and Singapore number 71 and 59 respectively (Hofstede 1980, p.73; Hofstede 1980, p.411). Consequently, many of the country-level samples are far too small to support statistically meaningful conclusions, as also pointed out by Schmitz and Weber (2014), who conclude that Hofstede’s research lacks validity and that his theory should “neither to be used as a standard of cross-national comparisons, nor as the basis for general description about countries as whole” (Schmitz and Weber 2014, p. 21). Still, this theory is one of the most widely accepted in the social sciences, cited more than 72,000 times according to Google Scholar (2017).
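To give a rough sense of why samples of this size are problematic, consider a back-of-the-envelope sketch (assuming a simple random sample and a proportion near 50%, which is of course not how Hofstede's indices are actually computed): the 95% margin of error for the Pakistani sample of 37 respondents would be on the order of

$$ \text{ME}_{95\%} \approx 1.96 \sqrt{\frac{0.5 \times 0.5}{n}} = 1.96 \sqrt{\frac{0.25}{37}} \approx 0.16 $$

that is, roughly plus or minus sixteen percentage points, an uncertainty large enough to blur many of the between-country differences such a survey is meant to detect.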
Schwartz’s theory illustrates another common deficiency, one that is surprisingly often acknowledged in the limitations section of academic research reports: sample selection bias, with participants recruited only from academia.
The high number of academic studies performed exclusively with participants from academia is understandable: it is comparatively simple for an academic team to reach out to students at their own institution, many students are on a tight budget and happily become lab rats for a few dollars or a movie ticket, and most students have almost unlimited time available for academic studies compared with working professionals who have children to pick up from school, dinner to cook and homework reading to oversee. None of this, however, hides the fact that a research population consisting only of students and teachers is by no means representative of the general population.
The web and its influence on research
The introduction of the web in the 1990s was a paradigm shift in research methodology, making it possible for researchers to send out thousands or even millions of invitations and survey forms with a single click.
However, as noted by Callegaro, Manfreda and Vehovar (2015), the ease with which studies can now be performed has also meant that the number of surveys people are asked to take part in has grown every year since the millennium. Most people active on the Internet today are flooded with requests to take part in surveys by email, in pop-ups on their favourite websites and on social network platforms such as Facebook, resulting in a steady decline in both response rates and the quality of responses (Nonresponse in Social Science Surveys, 2013). Because of this, a growing industry of companies that help researchers recruit significant and unbiased survey populations has become an important part of academic research, enabling small research teams and even individual researchers to perform large-scale studies that just a few years ago would have been out of reach for all but a small number of well-funded scholars.
While the methodologies these companies use to recruit participants vary, it is now well established that not offering participants an incentive results in considerably fewer responses, to an extent that makes large-scale surveys of general populations practically impossible (Singer and Ye, 2012; Wetzels et al., 2008; Callegaro, Manfreda and Vehovar, 2015). This also explains why the recruitment process employed by these companies, with very few exceptions, is based on offering survey participants an incentive in exchange for their time.
Not all universities agree to incentivizing research participants
To identify, screen and invite thousands of participants in the short window of time available to many research projects, the only viable alternative is often to outsource recruitment to a third party such as Prolific (2017), one of the leading companies recruiting participants for academic studies. Founded as an Oxford University incubator company, Prolific counts the world’s leading academic institutions among its clients, including Harvard, Yale and Cambridge, and bases its recruitment process on strict ethical guidelines, which include paying research participants fairly for their time (Prolific.ac, 2017).
Not all universities, however, accept the ethical research standards adopted by the world’s most prominent institutions, namely that research participants should be paid fairly for their time. In essence, this means that these universities do not allow studies requiring large research populations and an incentivized recruitment process.
Because of this, it is important that scholars carefully study the research standards of the academic institutions they apply to, to avoid a situation where research is declined due to policies against paying research participants fairly for their time.
Mike Andersson
References
- Callegaro, M., Manfreda, K. and Vehovar, V. (2015). Web survey methodology. Sage Publications.
- Dziuban, C., Picciano, A., Graham, C. and Moskal, P. (2015). Conducting research in online and blended learning environments. Routledge.
- Hofstede, G. (1980). Culture’s Consequences: International Differences in Work-Related Values. 2nd ed. SAGE Publications.
- Hofstede, G. (1997). Cultures and organizations: Software of the mind. 1st ed. London: McGraw-Hill USA.
- Hofstede, G. (2011). Dimensionalizing Cultures: The Hofstede Model in Context. Online Readings in Psychology and Culture, 2(1), pp.3-5.
- Hofstede, G. and Hofstede, J. (2005). Cultures and organizations. 1st ed. New York: McGraw-Hill.
- Hofstede, G. (2017). Cultural Dimensions – Geert Hofstede. [online] Geert-hofstede.com. Available at: https://geert-hofstede.com/cultural-dimensions.html [Accessed 29 Aug. 2017].
- McSweeney, B. (2002). Hofstede’s Model of National Cultural Differences and their Consequences: A Triumph of Faith – a Failure of Analysis. Human Relations, [online] 55(1), pp.89-118. Available at: http://journals.sagepub.com/doi/abs/10.1177/0018726702551004 [Accessed 2 May 2017].
- Nonresponse in Social Science Surveys. (2013). Washington: National Academies Press.
- Prolific. (2017). Prolific. [online] Available at: https://www.prolific.ac/ [Accessed 29 Aug. 2017].
- Prolific.ac. (2017). Ethical rewards. [online] Available at: https://www.prolific.ac/researchers [Accessed 29 Aug. 2017].
- Schwartz, S. (2006). A Theory of Cultural Value Orientations: Explication and Applications. Comparative Sociology, 5(2), pp.137-182.
- Schmitz, L. and Weber, W. (2014). Are Hofstede’s dimensions valid? A test for measurement invariance of uncertainty avoidance. interculture journal: Online-Zeitschrift für interkulturelle Studien, [online] 12(22), pp.11-26. Available at: http://www.ssoar.info/ssoar/handle/document/45472 [Accessed 2 May 2017].
- Singer, E. and Ye, C. (2012). The Use and Effects of Incentives in Surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), pp.112-141.
- Wetzels, W., Schmeets, H., Brakel, J. and Feskens, R. (2008). Impact of Prepaid Incentives in Face-to-Face Surveys: A Large-Scale Experiment with Postage Stamps. International Journal of Public Opinion Research, 20(4), pp.507-516.