Sunday, December 11, 2022

This Blog Has Moved!

Hello friends, I have moved this blog to Substack, which is much easier to use and makes managing subscriptions simpler. All of the posts from here have been migrated to the new blog, and new posts will only appear there. Check it out at https://getsyeducated.substack.com/. You can enter your email to subscribe and get new posts delivered directly to your inbox--free now, free always. Thanks for reading!....moin

Thursday, September 8, 2022

Do Registered Reports Take Longer to Publish Than Traditional Articles? The Importance of Identifying the Appropriate Counterfactual

Recently, I attended the annual advisory council meeting for an NSF-funded Ethical & Responsible Research (ER2) project focused on Registered Reports, led by Amanda Montoya and William Krenzer. The project seeks to facilitate uptake of Registered Reports among Early Career Researchers by understanding individual, relational, and institutional barriers to doing so. The first paper from the project has now been published (Montoya et al., 2021), with several more exciting ones on the way. This post is inspired by our conversations during the meeting, and thus I do not lay sole claim to the ideas presented here.

A quick primer on Registered Reports before getting to the point of this post (skip to the next paragraph if you are a know-it-all): Traditional[1] papers involve the process we are all familiar with, in which a research team develops an idea, conducts the study, analyzes the data, writes up the report, and then submits it for publication. We now have plenty of evidence that this process has not served our science well, as it created a system in which publication decisions are based on the nature of the findings of the study, which has led to widespread problems of p-hacking, HARKing, and publication bias (Munafò et al., 2017). Registered Reports are an intervention designed to address the problems created by the traditional publication process (see Chambers & Tzavella, 2021, for a detailed review). Rather than journals reviewing only the completed study, with the results in hand, Registered Reports break up the publication process into two stages. In Stage 1, researchers submit the Introduction, Method, and Planned Analysis sections—before the data have been collected and/or analyzed. This Stage 1 manuscript is reviewed just as other manuscript submissions are, with the ultimate positive outcome being an in-principle acceptance (IPA). The IPA is a commitment by the journal to publish the manuscript regardless of the results, so long as the authors follow the approved protocol and do so competently. Following the IPA, the researchers conduct the study, analyze the data, prepare the complete paper (the Stage 2 manuscript), and resubmit it to the journal for review to ensure adherence to the registered plan and high-quality execution. Whereas publication decisions for traditional articles are made based on the nature of the results, with Registered Reports publication decisions are based on the quality of the conceptualization and study design. This change removes the incentive for researchers to p-hack their results or file-drawer their papers (or for editors and reviewers to encourage such), as publication is not dependent on plunging below the magical p-value of .05. In my opinion, Registered Reports are the single most important and effective reform that journals can implement. So, naturally, it is the reform to which we see the greatest opposition within the scientific community[2].

A recurring topic of conversation at our meeting was the review time for Registered Reports, and how it compares to the time it takes to publish traditional papers. Traditional papers have a single review process, whereas with Registered Reports the review process is broken up into two stages. At first glance, then, it seems obvious that Registered Reports would take longer to publish because they include two stages of review rather than one, so it is no surprise that this is a major concern among researchers.

But is this true? As Amanda stated at our meeting, it is hard to know what the right counterfactual is. That is, the sequence and timing of events for Registered Reports are quite clear and structured, but what are the sequence and timing of events for traditional papers? Until she said that, I hadn’t quite thought about the issue in that way, but then I started thinking about it a lot and came to the conclusion that most people almost certainly have the wrong counterfactual in mind when thinking about Registered Reports.

Based on my conversations and observations, it seems that most people’s counterfactual resembles what is depicted in Figure 1. Their starting point for comparison is the point of initial manuscript submission. In my experience as an Editor, the review time for a Stage 1 submission, and the number and difficulty of revisions until the paper is issued an in-principle acceptance (IPA), are roughly equivalent to how long it takes for traditional papers to be accepted for publication.[3] Under this comparison, Registered Reports clearly take much longer to publish because following the IPA, researchers must still conduct the study and submit the Stage 2 manuscript for another round of (typically quicker) review, whereas the traditional article would have been put to bed.

A schematic representing how I believe people are making the comparison between traditional articles and registered reports. The top sequence, for traditional articles, shows the flow from manuscript submission, to revision cycle, to the article being accepted. The bottom sequence, for registered reports, shows the flow from Stage 1 submission, to the revision cycle, to Stage 1 in-principle acceptance, to conducting the study, to Stage 2 submission, to the article being accepted. Article acceptance for the traditional article and Stage 1 in-principle acceptance for registered report are occurring at roughly the same time, indicating that registered reports take longer to publish.

Figure 1. A commonly believed, but totally wrong, comparison between Registered Reports and traditional articles.

I have no data, but I am convinced that this is what most people are thinking when making the comparison, and it is astonishing because it is so incredibly wrong. Counting time from the point of submission makes no sense, because in one situation the study is completed and in the other it has not yet even begun. To specify the proper counterfactual, we need to account for all of the time that went into conducting and writing up the traditional paper, as well as the time it takes to actually get a journal to accept the paper for publication. And, oh boy, once we start doing that, things don’t look so good for the traditional papers.

In fact, that phase of the process is such a mess and is so variable that it is really not possible to know how much time to allocate. Sure, we could come up with some general estimates, but consider the following:

It is not uncommon to have to submit a manuscript to multiple journals before it is accepted for publication. This is often referred to as “shopping around” the manuscript until it “finds a home.” I know some labs will always start with what they perceive to be the “top-tier” journal in their field and then “work their way down” the prestige hierarchy. In my group we always try to target papers well on initial submission, and yet, just looking at my own papers, about a quarter were rejected from at least one journal prior to being accepted. This should all sound very familiar to researchers, and it is just plain misery.

It is not uncommon for manuscripts to be submitted, rejected, and then go nowhere at all. This problem is well known, as part of the file-drawer problem, where for a variety of reasons completed research never makes it to publication. Sometimes this follows the preceding process, where researchers send their paper to multiple journals, get rejected from all of them, and then give up. I had a paper that received a revise and resubmit at the first journal we submitted it to, but then it was ultimately rejected following the revision. We submitted to another journal, got another revise and resubmit, and then another rejection. This was, of course, extremely frustrating, and so I gave up on the paper. Many years later, one of my co-authors fired us up to submit it to a new journal, and it was accepted…..14 years after I first started working on it. That paper just as easily could have ended up in the file drawer.[4]

It is not uncommon for great research ideas to go nowhere at all. Ideas! We all have great ideas! I get excited about new ideas and new projects all the time. We start new projects all the time. We finish those projects….rarely. I estimate that we have published on less than a third of the projects we have ever started, which includes not only those that stalled out at the conceptualization and/or pilot phase, but also those for which we collected data, completed some data analysis, and maybe even drafted the paper. For some of these, we invested a huge amount of time and resources, but just could not finish them off. Things happen, life happens, priorities change, motivations wane. So it goes.

All of the above is perfectly normal and understandable within the normative context of conducting science that we have created. Accordingly, all of it needs to be considered in any discussion of comparing the timeliness of Registered Reports and traditional papers. Registered Reports do not completely eliminate all of the above maddening situations, but they severely, severely reduce their likelihood of occurrence. Manuscripts are less likely to be shopped around, less likely to be file drawered, and if you get an IPA on your great idea, chances are high you will follow through. We need to acknowledge that the true comparison between the two is not what is depicted in Figure 1, but more like Figure 2, where the timeline for Registered Reports is relatively fixed and known, whereas the timeline for traditional papers is an unknown hot mess.

 

A schematic representing a more accurate comparison between traditional articles and registered reports. The top sequence, for traditional articles, consists of five different pathways. First is the one shown previously, from manuscript submission, to revision cycle, to the article being accepted. Second highlights how the initial submission may be rejected and sent to a new journal, to start the process over again. Third shows the same process, with the author eventually giving up. Fourth shows the author giving up after conducting the study. Fifth shows the author giving up after starting the study. The bottom sequence, for registered reports, is the same as before, showing the flow from Stage 1 submission, to the revision cycle, to Stage 1 in-principle acceptance, to conducting the study, to Stage 2 submission, to the article being accepted.

Figure 2. A more accurate comparison between Registered Reports and traditional articles. Note that the timelines are not quite to scale.

(Edit: a couple of people have commented that the above figure is biased/misleading, because Registered Reports can also be rejected following review, submitted to multiple journals, etc. Of course this is the case, and I indicated previously that Registered Reports do not solve all of these issues. But "shopping around" a Stage 1 manuscript is very different from doing so with a traditional article, where way more work has already been put in. Adding those additional possibilities (of which there are many for both types of articles) does not change the main point that people are making the wrong comparisons when thinking about time to publish the two formats, and that Registered Reports allow you to better control the timeline. See this related post from Dorothy Bishop.) 

To be clear, I am not claiming that there are no limitations or problems with Registered Reports. What I am trying to bring attention to is the need to make appropriate comparisons when weighing Registered Reports against traditional articles. Doing so requires us to recognize the true state of affairs in our research and publishing process. The normative context of conducting science that we have created is a deeply dysfunctional one, and Registered Reports have the potential to bring some order to the chaos.



[1] I don’t know what to call these. “Traditional” seems to suggest some historical importance. Scheel et al. (2021) called them “standard reports,” which I do not like because they most certainly should not be standard (even if they are). Chambers & Tzavella (2021) used “regular articles,” which suffers from the same problem. Maybe “unregistered reports” would fit the bill.

[2] Over 300 journals have adopted Registered Reports, which sounds great until you hear that there are at least 20,000 journals.

[3] Here, I am making a within-Editor comparison: me handling Registered Reports vs. me handling traditional articles. There are of course wide variations in Editor and journal behavior that make comparisons difficult.

[4] The whole notion of the file drawer is antiquated in the era of preprint servers, but the reality is that preprints are still vastly underused.

Thursday, July 28, 2022

You’re so Vain, You Probably Think This Article Should Have Cited You

Have you ever been upset because an article didn’t cite you? I have.

When I was a doctoral student and new Assistant Professor, whenever I came across a new article in my research area (mostly racial/ethnic identity, in those days), I would immediately look at the reference list to see if they cited my work. I remember even doing this shortly after I published my first paper, when it was impossible that the paper could be cited any time remotely soon, given the glacial pace of publishing in psychology (this was well before preprints were used in the field). The vast majority of times when I checked if I was cited in an article, I was quite disappointed to find that I was not.

This was frustrating for me. Why weren’t other researchers citing my papers? Why was my work being overlooked, when it was clearly relevant? Was there some bias against me, and/or in favor of others?

Over time, I realized that my reactions were all wrong. Yes, my research was relevant and could have been cited, but I was far from the only person studying racial/ethnic identity. Authors certainly are not going to cite all published papers related to the topic. Even if that were possible—which it is not—it would lead to absurd articles and reference lists. So, authors obviously must be selective in who they cite. Why should they cite me instead of someone else who does related work? If we all believe we should be cited when relevant, that would mean that we believe authors should cite all relevant work. That is clearly a nonsensical position, but one that we are socialized into adopting within the bizarrely insecure world of academic publishing. Citations are currency in the academic world, and money can make us act in strange ways.

There is a phenomenon that I have observed (too often) on social media and at conferences that I refer to as “citation outrage,” or the act of publicly complaining about not being cited in a particular paper. This seems to stem from an inflated sense of our own relevance to others’ published work. Of course, your work could be cited in a whole host of papers, but did it need to be cited? Would the authors’ arguments, interpretations, or conclusions be any different if they had cited your paper? Chances are, the answer is no, and in such cases, you should probably just relax.

Now that said, it is not the case that all complaints about lack of citation are the same. Far from it.

Sometimes certain work should indisputably be cited. This can take a couple of different forms. It is nearly always advisable to cite the originator of a term or idea, especially if it is relatively recent, i.e., does not have a clear historical precedent and is not part of common knowledge. Additionally, if one’s work is not just related to the topic area, but directly related to the specific study, then yeah, it should almost certainly be included. To return to my early research, if a paper is focused on narratives of race/ethnicity-related experiences, and how those narratives are related to racial/ethnic identity processes, it would be a strange omission to not include articles I published on that exact topic. That is quite different, though, from expecting my work to be cited in any article related to racial/ethnic identity, which is a broad remit. Indeed, some have heard me complain about a time that our work was not cited when it should have been. I gave a talk on a topic that was relatively novel at the time, and had a subsequent discussion about it with a senior researcher who was in the room. They informed me that they were working on a paper that covered similar ground, so I sent them our published work on the topic. About a year later, I saw the paper published in a high-profile outlet with nary a citation to our previous work that was directly related and of which I know they were aware. That was both frustrating and academically dishonest: the authors knowingly omitted references to our papers to make their work appear novel.

There are additional structural factors around citations that must be considered. There has been quite a bit of attention recently to citation patterns and representation, particularly in regard to gender and race/ethnicity. Several lines of evidence indicate gender and racial citation disparities across a number of fields (e.g., Chakravartty et al., 2018; Chatterjee & Werner, 2021; Dworkin et al., 2020; Kozlowski et al., 2022), with generally more studies focused on gender than race. As with nearly all social science research, however, this literature is difficult to synthesize due to inconsistent analysis practices and lack of attention to confounds, such as working in different fields, seniority, institutional prestige, and disciplinary differences in authorship order (for a discussion of some of the issues, see Andersen et al., 2019; Dion & Mitchell, 2020; Kozlowski et al., 2022). I have not gone deep enough into all of the studies to arrive at a conclusion about the strength of evidence for these disparities, but I certainly have a strong prior that they are real given the racialized and gendered nature of science, opportunity structures, and whose work is seen as valued[1]. Additionally, we know that researchers can be lazy with their citations, relying on titles or other easily-accessible information rather than reading the papers (Bastian, 2019; Stang et al., 2018). This kind of “familiarity bias” will almost certainly reinforce inequities.

The recognition of these biases and disparities has led to pushes for corrective action, sometimes under the label of “citational justice” (Kwon, 2022) but more generally in terms of being more aware, transparent, and communicative about citation practices (Ahmed, 2017; Zurn et al., 2020). Various tools have been introduced, such as the Gender Balance Assessment Tool (Sumner, 2018) and the Citation Audit Template (Azpeitia et al., 2022), which provide data on the gender or racial background of the authors in a reference list, raising the awareness of authors’ citation patterns and giving them an opportunity to make changes.

To be blunt, I am not a big fan of these tools, what they imply, or the technology that underlies them. I agree that they can be very useful for raising awareness of our citation patterns, as I imagine few have a clear sense of their own citation behavior. I am less positive about the possible actions that will come from such tools. They reinforce thinking about diversity in terms of superficial quotas; so long as you cite a reasonably equal number of men and women, or some (unknown) distribution of racial groups, then you have done your deed towards reducing disparities. They also rely on automated methods of name analysis or intensive visual-search strategies that are both highly prone to error. For example, in a widely-discussed—and ultimately retracted—article of over 3 million mentor-mentee pairs, the gender of 48% of authors could not be classified (AlShebli et al., 2020). The challenges of automated classification become ever more difficult when moving beyond the gender binary or attempting to classify based on race or nationality. To be fair, the authors of such tools and those who advocate for them acknowledge the limitations and don’t claim that using them will solve all the problems, but that nuance is difficult for people “out there” to hold onto. These quota-based approaches are the typical kind of quick-fix, minimal-effort solutions to addressing disparities that researchers just love. They could be thought of as a form of “citation hacking,” or misuse of citations in service of some goal other than the scientific scope of the paper. They focus on representation—which is not a bad thing—but they don’t at all require that we engage with the substance of the work.
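To make the coverage problem concrete, here is a minimal sketch of the kind of name-based classification these tools rely on. The lookup table is invented purely for illustration (real tools query much larger name databases), but the structural limitation—that many names simply cannot be classified—is the same.

```python
# Hypothetical sketch of name-based classification; the table is invented
# for illustration, not taken from any real tool.
NAME_TABLE = {
    "emily": "woman",
    "james": "man",
    "maria": "woman",
}

def classify_author(first_name: str) -> str:
    """Return a guessed category, or 'unclassified' when the name is not in the table."""
    return NAME_TABLE.get(first_name.strip().lower(), "unclassified")

authors = ["Emily", "James", "Jordan", "Wei", "Ayodele", "Maria"]
labels = [classify_author(name) for name in authors]
coverage = 1 - labels.count("unclassified") / len(labels)
print(labels)
print(f"{coverage:.0%} of authors classified")  # 50% with this toy table
```

Even with a far larger table, ambiguous, non-Western, or simply absent names all fall into the unclassified bucket, which is the kind of gap behind the 48% figure noted above.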

Indeed, whereas of course citations are important and necessary within the academic economy, the larger issue is one of epistemic exclusion (Settles et al., 2021), the phenomenon of faculty of color’s scholarship being devalued by their White colleagues. The solution to this problem is not citation audits or citation quotas. The solution to this problem is to be more reflective about the work that you engage with, and how it influences your own work. And yes, this includes providing proper credit in the form of citation. The Cite Black Women movement, founded in 2017 by the Black feminist anthropologist Christen A. Smith, is an excellent model for focusing on our practice of reading, appreciating, and acknowledging contributions, rather than on the number or percentages of Black women cited in papers.

So how do we think about all of this together? To be honest, I had only planned to write about citation outrage, but then realized the discussion would be incomplete or confused without including citation justice. At first glance, it may seem like these are the same thing; that is, citation justice is just a more formal type of citation outrage. But this is wrong. Citation justice is seeking to bring attention to the systemic inequities around how we engage with, appreciate, and acknowledge work from marginalized populations within a society stratified by race and gender. Citation outrage is about the irrational sense of entitlement, importance, and relevance that is all too common among academics. I acknowledge that this distinction will be lost on some readers, but in short, one flows from a system of oppression, and the other simply doesn’t. 

So then, should that article have cited you? Maybe, maybe not. Probably not. Should you have cited other articles? You always could have, you probably should, and it definitely would be worthwhile to reflect on who you include and why. Again, citations are currency. What should matter more is the substance of the work, but citations impact who gets hired, promoted, awarded, funded, and so on, so it is worth being thoughtful about.

And now, for those of you just here for the Carly Simon: 



References

Ahmed, S. (2017). Living a feminist life. Duke University Press.

AlShebli, B., Makovi, K., & Rahwan, T. (2020). RETRACTED ARTICLE: The association between early career informal mentorship in academic collaborations and junior author performance. Nature Communications, 11(1), 1-8. https://doi.org/10.1038/s41467-020-19723-8

Andersen, J. P., Schneider, J. W., Jagsi, R., & Nielsen, M. W. (2019). Meta-Research: Gender variations in citation distributions in medicine are very small and due to self-citation and journal prestige. eLife, 8, e45374. https://doi.org/10.7554/eLife.45374

Azpeitia, J., Lombard, E., Pope, T., & Cheryan, S. (2022). Diversifying your references. SPSP 2022 Virtual Workshop; Disrupting Racism and Eurocentrism in Research Methods and Practices.

Bastian, H., (2019). Google Scholar Risks and Alternatives [Absolutely Maybe]. https://absolutelymaybe.plos.org/2019/09/27/google-scholar-risks-and-alternatives/

Chakravartty, P., Kuo, R., Grubbs, V., & McIlwain, C. (2018). #CommunicationSoWhite. Journal of Communication, 68(2), 254-266. https://doi.org/10.1093/joc/jqy003

Chatterjee, P., & Werner, R. M. (2021). Gender disparity in citations in high-impact journal articles. JAMA Network Open, 4(7), e2114509. https://doi.org/10.1001/jamanetworkopen.2021.14509

Dion, M. L., & Mitchell, S. M. (2020). How many citations to women is “enough”? Estimates of gender representation in political science. PS: Political Science & Politics, 53(1), 107-113. https://doi.org/10.1017/S1049096519001173

Dworkin, J. D., Linn, K. A., Teich, E. G., Zurn, P., Shinohara, R. T., & Bassett, D. S. (2020). The extent and drivers of gender imbalance in neuroscience reference lists. Nature Neuroscience, 23(8), 918-926. https://doi.org/10.1038/s41593-020-0658-y

King, M. M., Bergstrom, C. T., Correll, S. J., Jacquet, J., & West, J. D. (2017). Men set their own cites high: Gender and self-citation across fields and over time. Socius, 3, 1-22. https://doi.org/10.1177/2378023117738903

Kozlowski, D., Larivière, V., Sugimoto, C. R., & Monroe-White, T. (2022). Intersectional inequalities in science. Proceedings of the National Academy of Sciences, 119(2), e2113067119. https://doi.org/10.1073/pnas.2113067119

Kwon, D. (2022). The rise of citational justice: how scholars are making references fairer. Nature, 603, 568-571. https://doi.org/10.1038/d41586-022-00793-1

Settles, I. H., Jones, M. K., Buchanan, N. T., & Dotson, K. (2021). Epistemic exclusion: Scholar(ly) devaluation that marginalizes faculty of color. Journal of Diversity in Higher Education, 14(4), 493–507. https://doi.org/10.1037/dhe0000174

Stang, A., Jonas, S., & Poole, C. (2018). Case study in major quotation errors: a critical commentary on the Newcastle–Ottawa scale. European Journal of Epidemiology, 33(11), 1025-1031. https://doi.org/10.1007/s10654-018-0443-3

Sumner, J. L. (2018). The Gender Balance Assessment Tool (GBAT): a web-based tool for estimating gender balance in syllabi and bibliographies. PS: Political Science & Politics, 51(2), 396-400. https://doi.org/10.1017/S1049096517002074

Zurn, P., Bassett, D. S., & Rust, N. C. (2020). The citation diversity statement: a practice of transparency, a way of life. Trends in Cognitive Sciences, 24(9), 669-672. https://doi.org/10.1016/j.tics.2020.06.009



[1] I have a paper in which I discuss this, but given the evidence for higher self-citation among men (King et al., 2017), I will sit this one out. 

Tuesday, May 24, 2022

Knowing When to Collaborate….and Knowing When to Run Away

As a graduate student, I once went out to lunch with a new post-doc in our department who had similar research interests to mine. We were having a nice chat about personal and professional topics, and at one point I said, “we should think about writing something together.” This clearly made them uncomfortable, and they said something to the effect of, “let’s wait and see if something relevant comes up.” I was a bit confused at the time, because I thought this was what academics did. I thought that “we should collaborate” is academic-ese for “we should be friends.” After some time, I realized how mistaken I was, and how wise they were to be cautious about entering into an unspecified collaboration with someone they barely knew. Over the years, I have now learned this lesson many times over. The purpose of this post is to share some of those lessons on why we should all be cautious about scientific collaboration.

Collaboration and “team science” are all the rage in psychology these days, a field that has traditionally valued a singular, “do it all yourself” kind of academic persona. When I was in graduate school, it was clear that the single-authored paper was the ultimate sign of academic greatness. Plenty of people still think that way, but change is certainly afoot, and there are many excellent articles on the benefits and practicalities of collaborative team science (e.g., Forscher et al., 2020; Frassl et al., 2018; Ledgerwood et al., in press; Moshontz et al., 2021).

Amidst the many discussions about the benefits of team science, there is relatively less coverage of potential pitfalls—what to watch out for as you think about collaborating with new people. How do you know whether to engage in a particular collaboration? How can you ensure that the experience is a positive one? A recent column by Carsten Lund Pedersen on How to pick a great scientific collaborator outlines a framework consisting of three traits to ensure success: choose collaborators who are fun to work with, contribute to the work, and have the same ambition. This is a useful and accurate framework, albeit incomplete (e.g., trust is a key aspect of collaborations, especially within cultural and ethnic minority research; see Rivas-Drake et al., 2016), but sometimes you do not have sufficient information about these traits of your collaborators until it is already too late. It is critical to attend to possible warning signs in the earliest phases of a collaboration.

That brings me to why I am writing this entry today: knowing when to run away from a potential collaboration. The following examples come from my personal experience, so they do not constitute an exhaustive list, but they do capture the kinds of things one should watch out for when establishing new collaborations.

When you receive vague invitations to collaborate. Successful collaborations are nearly always either a) specific to an existing or planned project or b) an extension of an existing collegial relationship. It is not uncommon for people to propose a potential collaboration, via email or in person at conferences, with no additional details about what the collaborative project might be. These are invitations to collaborate on some unknown future project with someone you don’t really know. This is the type of invitation I described making at the outset of this post; it is generally a bad idea to initiate or accept them, and a good idea to run away.

When you observe inklings of anti-social behavior. Not long ago, I was asked to be part of a project by someone who I like and respect a great deal, on a topic I am enthusiastic about. So far, so good. This person, who was the lead on the paper, shared a 500-word abstract with the authorship group to be submitted for a special issue. Another person on the team, who I did not know at all, responded with an extremely long and detailed email (2608 words, to be precise) that heavily centered their own work. I wrote back privately to the lead, essentially saying, “count me out of this business.” To me, this was anti-social behavior, but I acknowledge others would have no qualms with it whatsoever. There is no objective standard for what constitutes inappropriate academic behavior of this kind, but if you don’t feel good about it, if something seems off to you, better to jump ship early and save yourself further trouble. The team went on to write a fantastic paper, and when it was published I had a brief tinge of regret, but I know I made the right decision to run away based on my initial feelings.

When you do not want to work with one of the other collaborators. Similar to the previous story, not long ago, I was asked to be part of a project by someone who I like and respect a great deal on a topic I am enthusiastic about. I immediately agreed to be part of the team. When the follow-up email was sent to the full authorship team, however, I saw that one of the other collaborators was someone with a poor history of collaboration, mentorship, and collegiality. I was simply not willing to work with this person. I wrote back to the lead, and regretfully rescinded my involvement, explaining my reasons why. This experience highlighted how you should always find out who else will be involved with a project before agreeing to participate. As I wrote in my email, “I treasure my collaborations and always seek happiness and positivity from the work that I do, and part of that is knowing when something is a bad idea.” If you fear that the collaboration will not bring you happiness, it is best to run away.

When your views are not being respected. Collaborations can be extremely difficult, because we do not all see the world or our disciplines in the same way, and some collaborations involve multiple people who are accustomed to being “in charge.” It can sometimes be impossible to adequately represent everyone’s views. A paper I contributed to involved bringing together multiple groups of people who each had some experience with others on the team, but not everyone had previously worked together. There was a clash of styles in the approach to writing the paper, and one of the authors did not feel that their views and contributions were being respected by the lead author. Accordingly, the author who was not feeling respected decided it was best to cease the collaboration and be removed from the paper. This can be a difficult decision, but it is almost always the correct one. There are many opportunities out there, and if you are not enjoying what you are doing, not feeling respected by your coauthors, and not feeling like you can maintain your integrity through the collaboration, then it is best to just run away.

When you cannot be a good collaborator. My previous warnings focused on other people and their behavior, but sometimes the problem is you. Sometimes, you are just not in a good position to be a productive collaborator. The major culprit here is time, and our tendency to over-extend and take on too much. In recent years, I have taken to thinking really hard about whether I have the time and energy to engage in the collaboration, and try to do so in a realistic way. That is, I no longer fall prey to the fallacy that I will have more time in the future than I do now. That is always false. So, now I frequently decline invitations, or do not pursue opportunities, because I know that I will be a bad collaborator: I won’t respond to emails, I won’t provide comments, I won’t make any of the deadlines. For some projects that I do agree to, I am still clear about my capacity and what I can actually contribute. If you feel that you can participate in a project, but only contribute in a minor capacity, say so up front! That will save a lot of heartache down the road. But, as always, sometimes the wise move is to just run away.

When you want to say what you want to say. I have been involved in a couple of relatively large, big-ego type collaborations that resulted in some published position papers. These collaborations were extremely valuable and constitute some of the major highlights of my career. But the papers we produced were not very good. Team science and diversity of authorship teams have many, many benefits, but it is also difficult to avoid gravitating to the median, centrist view (see Forscher et al., 2020). The result is that the views become too watered-down in order to appease the other co-authors. If you want to argue for something radical, that will often be difficult to do with ten co-authors who also have strong opinions. Sometimes, you just need to go at it on your own, or with a small group of like-minded folks. To be clear, I am not saying that all big collaborations lead to conservative outputs. That is clearly not the case. But it is a risk, and you should assess whether you will be happy with that outcome, or if you should run away.

When you realize there are few things better than lovely collaborators. Ok, this is not a warning sign at all, quite the opposite! I do not want readers to take this post as anti-collaboration. Rather, it is a plea for engaging in highly selective collaborations. I do not want to engage in collaborations that do not bring me happiness. I need to have fun. I need to love the work that I am doing, and I need to love the people I am doing it with. I am fortunate to have three continuous, life-long collaborators in Linda Juang, Kate McLean, and the Gothenburg Group for Research in Developmental Psychology led by Ann FrisĂ©n. Working with these folks, and many others—especially current and former students—is among the great joys of my work. Indeed, collaboration can be the highlight of our academic lives, but only if it is done thoughtfully.

There are certainly plenty of other red flags to watch out for or reasons to not collaborate. This is not an exhaustive list, but a few lessons from my own experience. Please share any additional experiences that you have, and perhaps I will update this post, giving you credit of course (hey, a potential collaboration!).

References

Forscher, P. S., Wagenmakers, E., Coles, N. A., Silan, M. A., Dutra, N. B., Basnight-Brown, D., & IJzerman, H. (2020, May 20). The benefits, barriers, and risks of big team science. PsyArXiv. https://doi.org/10.31234/osf.io/2mdxh 

Frassl, M. A., Hamilton, D. P., Denfeld, B. A., de Eyto, E., Hampton, S. E., Keller, P. S., ... & Catalán, N. (2018). Ten simple rules for collaboratively writing a multi-authored paper. PLOS Computational Biology, 14(11), e1006508. https://doi.org/10.1371/journal.pcbi.1006508 

Ledgerwood, A., Pickett, C., Navarro, D., Remedios, J. D., & Lewis, N. A., Jr. (in press). The unbearable limitations of solo science: Team science as a path for more rigorous and relevant research. Behavioral and Brain Sciences. https://doi.org/10.31234/osf.io/5yfmq

Moshontz, H., Ebersole, C. R., Weston, S. J., & Klein, R. A. (2021). A guide for many authors: Writing manuscripts in large collaborations. Social and Personality Psychology Compass, 15(4), e12590. https://doi.org/10.1111/spc3.12590 

Pedersen, C. L. (2022). How to pick a great scientific collaborator. Nature. https://doi.org/10.1038/d41586-022-01323-9

Rivas-Drake, D., Camacho, T. C., & Guillaume, C. (2016). Just good developmental science: Trust, identity, and responsibility in ethnic minority recruitment and retention. Advances in Child Development and Behavior, 50, 161-188. https://doi.org/10.1016/bs.acdb.2015.11.002

Tuesday, September 7, 2021

Grieving the Only Way I Know How

I have been rather fortunate that tragedy, at least in the form of death, has not been a major force in my life. Of course, I have known people who have died, including those who did so much too young. My father died in 2005 when I was only 26, but his death came at the end of a very long journey with Alzheimer’s, so I had long made peace with him not being a part of my life—and although it may be uncouth to admit, his death was something of a relief.

On Friday, September 3, my good friend Will Dunlop killed himself. At the time of this writing, I have known this for four days, and I still cannot make any sense of it. To me, Will was among the happiest, most fun-loving people that I know. Everyone who knew him would agree.

We met in 2011 at a conference in a seedy Daytona Beach hotel. He was a graduate student at the time, and was told by Kate McLean (who was on his dissertation committee) to go to the conference to talk to me about identity, narrative, and culture. My first memory of Will is of him sitting on the floor of the lobby next to a dusty fake tree, in small shorts, tank top, and bare feet, working on his laptop (an image that I would come to know as quintessentially Will). As I walked by, he called out, “Hey, are you Moin? Kate sent me here to talk to you.” We went on to have deep conversations and many laughs, not only that day, but ever since.

I have many wonderful memories of Will, but those are for another time and audience. What I set out to write about here is how we deal with this kind of loss. What do I do with my incredible sadness and confusion? How am I supposed to handle seeing people on the street who look the way Will might have looked in 20 years? How am I supposed to continue going about my life when so much of it is wrapped up with Will’s? Three days after his death, I was invited to review one of his papers for a journal. My most looming deadline is to write a paper that he specifically asked me to write after I visited his class last fall, to be included in a special issue on “the good life” that he is co-editing. How can I possibly write about “the good life” when his has ended? Compartmentalizing my personal and professional lives in order to “move on” is simply not an option.

The standard response in American culture for how to handle this kind of thing is to “talk to someone about it.” In the last few days, I have received numerous emails and texts with offers to “talk.” Of course I appreciate these offers in the abstract and know they come from a warm and thoughtful place, and I know that I am supposed to talk to people. But I don’t want to talk. I don’t personally find that to be helpful, especially in a large group setting. Some find comfort by immediately jumping into action, setting up tributes and the like. That is great and I understand why they want to do so—I imagine it provides feelings of control and utility—but that is not me. I need to sit with my feelings and fully understand them, but most centrally, I need to write. In recent years, I have come to understand my identity not as a psychologist, researcher, or teacher, but as a writer. It is as a writer that I am the most effective communicator of my thoughts and feelings, both personal and professional. Writing is how I understand my thoughts and feelings, it is how I can begin to make peace with that which I cannot comprehend, and for me it works in a way that talking to people simply does not. I talk plenty in my life—too much, some would say—but spoken words do not flow easily when I am sad, and especially not when I am expected to produce sad thoughts. I am a terrible comfort for people in grief for this very reason, as I always feel an unmet demand to soothe with words that I simply cannot find. But the written word comes so naturally. I have spent the last few days writing in my head, as I always do, and now I can sit down and let the words flow right out. I have decided that I need this to be ok, for others, yes, but especially for myself. This is how I manage my grief. Perhaps some of you share these feelings, and if you do, I hope that my writing them out helps you as well.

The day after Will’s death, an Instagram post informed me that it is National Suicide Prevention Month. Of course, this made me think even more about what I had already been thinking: what could I have done to prevent this? I know that is a fool’s game, but that does not stop me from playing it. Will’s research focused heavily on how people craft redemptive stories—turning negative life experiences into sources of growth and meaning—and what constitutes “the good life.” Was there something in his life that he was trying to redeem? Did he feel the good life was eluding him? Were there clues in his work? A fool’s game that is nearly impossible to avoid.

I don’t think I even yet realize how much I will miss my friend. I know that I will never again see his goofy smile, never again get to make fun of him dressing like a Long Beach teenager, never again share texts about absurd observations, and never again meet him at the bar after a day of travel to share some beers and stories. What I do know is that I will continue to write, both about him and for him, and that doing so will slowly repair my broken heart.



Thursday, August 26, 2021

Secrets from the Editor’s Portal; Or, Everything You Didn’t Realize You Never Learned About Publishing

This is a risky post. As an editor, I feel a bit like the Masked Magician, betraying our craft by giving away all of this insider information. But I find it truly amazing: Submitting manuscripts for publication is central to scientific research, and yet most authors have little knowledge of how journals and editors operate. In an ideal world, this information would be part of a first-term professional development sequence for new graduate students, but few training programs offer such a thing. The reality is that it is not only students and early-career researchers who are in the dark, but so are many long-time faculty and researchers.

This post contains a jumble of insights that, based on my experience as an editor and online observer, I am keenly aware many people simply do not know. I expect that some of you are going to be all like, “not all journals” and “not all editors.” You are correct, so let me be clear: I am not making universal claims about all journals/editors. My experience comes from journals in psychology, and my comments here may very well be limited to that field, and may not even apply to all journals in psychology. The broader message, relevant to all, is that the system is not as rigid as it seems from the outside. Some know this and take advantage of it, which is a source of inequities in publishing. Many of my entries pertain to engaging in increased correspondence with editors[1], and I fully appreciate that those who hold more precarious positions in academia (e.g., women, racial/ethnic minorities) may be more reluctant to engage in these practices and may not reap the same benefits as their more secure colleagues. Additionally, I am not necessarily suggesting that these are all good practices. What I am presenting is the system as it currently functions, which is important to understand.

In no particular order:

You can appeal if your manuscript is rejected. This seems like one of the biggest secrets in journal publishing, but you can always write back to the action editor and request that they reconsider. Very few journals have formal policies for handling appeals (see this paper on biomedical journals), and some journals may not consider your appeal at all, but it is always possible to ask. If you plan to do this, I strongly suggest you wait at least a couple of days (if not more) before contacting the editor. Your initial response to the decision is seldom rational, and you want to make sure you actually have a solid case for an appeal before requesting it.

You can ask for extensions. Holy shit, you can ask for extensions! This has been one of my saddest experiences as an editor: authors writing apologetic and pleading emails to ask for extensions because they are undergoing chemotherapy, close family members died, they are getting married, moving to a new country, and so on. The truth is, I had no idea your paper was soon to be due—deadlines and reminders are auto-generated—and honestly, it does not really matter if you resubmit your manuscript today or next month. Now, there are some exceptions, such as with special issues that tend to follow tight timelines, editors who are stepping down from their position and trying to wrap up loose ends, or production deadlines if your paper is to appear in a specific issue. But generally speaking, extending deadlines is really no big deal.

You can safely ignore the 48-hour “deadline” for returning proofs. Who among us has not received one of these threatening emails on a Friday afternoon, ruining all of our weekend plans? Good news: these deadlines are totally fake. Journals want you to return the proofs quickly so that they can keep their production workflows clean, but there is no reason for you to disrupt your work or relaxation plans accordingly. Rather than completely ignoring them, write back and tell them when they should expect your corrections. Saying something like “within the next week” is usually fine.

You can check with the editor before submission. If you are not certain whether your paper would fit with the journal’s scope, you can always write to the editor, briefly describe the paper, and ask whether they perceive it to be a fit based on the provided information. Importantly, if the editor replies that it is within scope, that is not a guarantee that the paper will be accepted or even sent out for peer review. Doing this just ensures that your paper is generally within the realm of what the journal will consider, if you are not sure. Certainly not all editors agree, but personally it is a lot less work for me to respond with an elaborated version of “not really a good fit” than to check the paper in through the online system, do the pre-processing that I do, and then submit a desk-reject decision for poor fit.

You can email the editor about the status of your manuscript. If it has been some time since you have heard from the journal, then it is totally fine to check in with the editor for a status update. Brief, polite emails of inquiry are rarely a problem. The big question is what constitutes “some time” since you have heard. Generally speaking, it is fine to check in after 3-4 months. I once had an author write to me one week after submission, asking why they had not yet received a decision. Do not do that.

Sometimes papers actually do get lost. As an author, you would think it is not possible to lose a paper with an online tracking system, but then again, authors have all used those systems, so they know exactly how clunky they are. I have had a handful of cases where the paper just sort of fell through the cracks. This is one reason why checking in after 3-4 months can be a good idea (it is also the case that checking in gets the paper on the editor’s radar, squeaky wheel and all that).

You can write to clarify what the editor believes to be necessary for a revision. Some editors are really great at their job, expertly synthesizing reviewer comments to provide clear recommendations for a path towards publication. Ideally, they also make clear what revisions are non-negotiable. Other editors…..aren’t so good at it, either just summarizing the reviewer comments or writing “see reviewer comments below,” providing no guidance at all. If you are unclear about how to proceed, for example if there are conflicting reviewer comments, you can always write a brief email to the editor and ask for some guidance.

It is often better to contact the editor directly with questions. If you have a question about a manuscript, you will often get the most useful information if you email the editor directly at their institution account. Journal-specific email accounts can be inconsistently monitored and staffed, and sometimes those on the receiving end do not have the information you actually want. This is one tidbit that most editors probably do not want me to share, because who out there is really looking for more emails, but from the author side of things this is a smart approach.

You can (and should) ask to be on an editorial board. The biggest reward for completing timely, high-quality reviews is more review requests from the same journal. Most journals have rating systems that score reviewers on timeliness and substance. If you have completed a good number of reviews for a journal within a year (say 3-4), then you should certainly write to the editor and request to be considered for the board. Waiting to be invited is a mistake. It is easy for journals to overlook recurring quality reviewers, so if that is you, definitely let the editor know. In most cases, we would be thrilled to have someone like you on the board.

You can thank editors for their decision, but few actually do! I get this question a lot. Your paper is accepted, or thoughtfully rejected: should you respond to the editor? In my experience, very few do this, but you are always welcome to. As an editor, such emails are nice and appreciated, but I do not at all expect them. Sometimes the emails are not so nice….better to leave those in your drafts folder.

Suggesting reviewers is helpful, but be thoughtful about it. Many journals now solicit suggested reviewers as part of the submission process. As an editor, this is helpful for identifying potential reviewers that I might have otherwise missed. However, these suggestions can go wrong in at least two ways. First, it is not helpful to suggest the most well-known, senior person in the field. I handled a paper on language development once where the authors suggested Steven Pinker. He is not likely to review your paper, and if he did, the quality of the review would probably be very low. (That is not a comment on Pinker per se—I know nothing about his reviews—I have just observed that more senior researchers provide rather cursory reviews.) Second, do not suggest your close collaborators as reviewers. Any editor who is doing their job properly will not just invite suggested reviewers without doing a little background work, and coauthors are very easy to discover. So, make suggestions for potential reviewers, but do so thoughtfully.

Your paper did not have five reviewers because the editor hates you. Sometimes your papers have one reviewer, and sometimes they have seven. What gives? There can be good reasons for many reviewers on a paper, but much of this variation has nothing to do with your paper, per se. For example, when initially attempting to assign reviewers to a paper, I will send four or five invites at once. I do this because in the vast majority of cases, inviting four or five people will yield two who agree, which is generally what I want. Using this approach saves time, instead of inviting two, waiting for them to decline, then inviting another two, waiting for them, and so on. But it also means that sometimes they all agree and you end up with five reviewers. Sorry about that.
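If you want a feel for why editors batch their invitations, here is a minimal sketch of the arithmetic—my own illustration, not any journal’s actual workflow—assuming each invitee independently agrees about 40% of the time (an invented acceptance rate, purely for illustration):

```python
import random

def simulate_reviewers(invites: int, accept_prob: float = 0.4, trials: int = 100_000) -> dict:
    """Simulate how many invited reviewers agree, assuming each invitee
    accepts independently with probability accept_prob (illustrative value)."""
    counts = {}
    for _ in range(trials):
        agreed = sum(random.random() < accept_prob for _ in range(invites))
        counts[agreed] = counts.get(agreed, 0) + 1
    return {k: v / trials for k, v in sorted(counts.items())}

# Inviting five people at once: two acceptances is the most common outcome,
# but every so often all five agree and the paper ends up with five reviewers.
for n_agreed, share in simulate_reviewers(invites=5).items():
    print(f"{n_agreed} reviewer(s) agree: {share:.1%}")
```

Under those assumptions, two acceptances is the single most likely outcome, while all five agreeing happens roughly 1% of the time—rare, but often enough that the occasional five-reviewer paper is just luck of the draw, not editorial malice.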

Word/page limits are not always rigid. In fact, the limits expressed on the journal webpage might not even be real. Much like faculty webpages, journal webpages can often be out of date, with editors not even familiar with what is listed. Even if word/page limits are accurate, journals handle these differently. Some journals enforce strict limits and will not even conduct an initial evaluation of the paper unless it conforms to the standards. Others have soft limits, and will consider longer papers with sufficient justification. As with most things on this list, you can always email the editor to find out what is possible.

Cover letters for new submissions are often (but not always) useless (in psychology). Authors always have questions about the importance of cover letters, and what should be included within them. The answer is….it depends….a lot. In some fields, the cover letter consists of a “sales pitch” in which you attempt to convince the editor that your manuscript is novel, exciting, and worthy of publication. For example, an old editorial in Nature Immunology suggested that authors “present their cases in a one- to two-page (!!!!) cover letter that highlights the context of their experimental question and its relevance to the broader research community, the novelty of the new work, and the way that it advances our understanding beyond previous publications.” (incredulous exclamation marks added to communicate incredulity). This tweet describes a similar approach. In contrast, in many/most cover letters submitted to psychology journals, the authors provide a formal statement that amounts to “here it is, hoping for the best!” They may indicate their co-authors, that they followed APA ethical principles, and that the paper is not under consideration elsewhere, but that is about it. And personally, that’s all I want. I will judge the paper on its merits, not on the authors’ ability to persuade me of its value. This post from Retraction Watch and the associated comments highlight the variability across fields/journals. Accordingly, the only advice you should take about cover letters is to not take anyone’s advice. Look to see what it says on the journal webpage (which may not be accurate) and talk to colleagues who have experience with the journal.

Cover letters for revisions are super important. Cover letters for new submissions and cover letters for revised submissions are in totally different genres of cover letters. In fact, this is why some journals distinguish between the “cover letter” and the “response to reviewers.” I have an entire post on how to handle this process, A Workflow for Dealing with the Dread of Revising and Resubmitting Manuscripts.

That’s about it for now. What did I miss? What did I get wrong? I will update the post as I receive feedback. For those of you who are angry about the content of these items, especially with regard to the disparate opportunity/impact for minority scholars, please re-read the beginning of this post. My intention here was to describe a system that is central to our work, yet opaque to the majority. Changing these systems to make them more equitable is a topic for another day. 


[1] To all of the editors out there, you are welcome!

Thursday, June 10, 2021

WEIRD Times: Three Reasons to Stop Using a Silly Acronym

Those who know me will groan at the appearance of this post. WEIRD has become my personal dumping ground, with me taking any opportunity to tell people why I think they should stop using the term. I have embedded my criticisms in various papers on broader topics (such as open science or acronyms), but I reasoned that rather than pointing people to specific passages of long boring papers, or repeatedly typing out my reasons, I would just do one thorough post that I can link to when needed. Welcome!

Some of you are likely wondering what WEIRD is and why it is in all caps. WEIRD is an acronym, standing for Western, Educated, Industrialized, Rich, and Democratic, introduced by Henrich et al. (2010). The gist of their argument was a simple one with which I am in full agreement: much of the behavioral sciences relies on an extremely narrow population from which it generalizes to all humanity. This fact has been well known for a very long time (Arnett, 2008; Guthrie, 1976; Hartmann et al., 2013). Henrich et al. added, however, that this fact is particularly perverse because this group that is over-sampled is notably different from the majority of humans. This group, who tends to be Western, Educated, Industrialized, Rich, and Democratic, is itself weird in the context of humanity.

I think this continues to be an important observation, and one I will not quibble with (at least not for now). No, my problem is with the acronym. The acronym is so dang catchy that it has become part of psychological researchers’ everyday nomenclature: “that literature relies on WEIRD samples,” “we need more data from non-WEIRD populations,” “the field is doing nothing to solve the WEIRD people problem,” and so on. The word has become a scientific term itself, broadly signifying “diversity,” losing contact with its constituent parts. I can tell you that plenty of people who know and use the term WEIRD could not accurately list the five elements. That is….not good.

Dear readers, here I am, asking you to stop using this term, for three reasons:

1. It is a backronym at worst, a contrived acronym at best. I covered this directly in my paper decrying the absurdity of acronyms, so I will just offer this quote:

“It is rather remarkable, particularly given that the paper was published in a supposed “top-tier” outlet, that the authors do not describe how they identified these five dimensions as constituting the focal set. Are we to believe that five core dimensions just happened to spell WEIRD and that is coincidental with the fact that their primary argument was that studies that rely on samples from WEIRD societies are, in fact, weird in relation to the rest of the world? Of course not. Clearly WEIRD is a backronym, which is fine, except that it should not be taken to have any scientific value.”

Ok, so it probably was not actually a backronym, in which the acronym is determined first and then the letters are forced to fit, but it is extremely implausible that the letters just happened to work out that way. Such an acronym might have rhetorical value, so I do not blame the authors for that, but now that the term has outgrown its rhetorical role and taken on scientific value, it is time to step back and re-assess things.

2. WEIRD omits race/ethnicity (among other important dimensions of diversity). I often see people indicate that the “W” in WEIRD refers to White. It does not. In fact, the entire WEIRD paper is largely silent on the subject of race. This is ironic for a paper highlighting the problematic sampling bias in the behavioral sciences. Therefore, if you are using WEIRD or discussing the “WEIRD people problem,” you are contributing to the very problem the term is meant to address by continuing to ignore racial bias in the literature (see Clancy & Davis, 2019, for a detailed discussion of this issue).

And, of course, it is not just about race/ethnicity. The original paper leaves out all kinds of potentially informative dimensions of diversity. For example, why is religion not one of the dimensions? That seems pretty important. Rich is mostly redundant with Educated, so you could consolidate those two, swap in Religious, and maintain WEIRD. Doing so, however, would raise thorny issues because you have both the USA, a very religious country, and the secular countries of Northern/Western Europe as part of the same WEIRD group. Does not really work out after all. Perhaps what the acronym stands for matters![1]

Surely, you are thinking, there was a compelling rationale for why these five dimensions, in particular, are the ones worthy of emphasis. But I just indicated that was not the case! No rationale was provided for why these five dimensions, and not others, were included. Moreover, there was not even much rationale offered for some of the dimensions that were included. Quoting Rochat (2010) from an accompanying commentary:

“…catchy acronyms like “WEIRD” for a population sample are good mnemonics. However, they carry the danger of distracting us from deeper issues. The last letter, D, for example, stands for “Democratic.” What does this mean, given that many Eastern cultures would not consider themselves as non-democratic, having universally elected parliaments in their countries? In using such an acronym to characterize a population sample, the authors must have a theory about what democrats and a democracy mean. They must also have some intuition as to what kind of impact such a regime might have on its citizens, as opposed to another. The democratic criterion would deserve more articulated rationale.” (p. 108)

3. WEIRD lacks specificity. Not only is WEIRD not adequately comprehensive of relevant dimensions of cultural variability, but somehow this lack of breadth is also accompanied by insufficient depth (again, see Clancy & Davis, 2019). Which countries/cultures, exactly, are WEIRD? This is far from clear. As Rochat asked, what does “Democratic” mean? In a footnote on the lead dimension, “Western,” Henrich et al. state, “We recognize that there are important limitations and problems with this label, but we use it for convenience” (p. 83). I would extend that statement to WEIRD itself.

The lack of specificity of the term has led to its over-application. WEIRD has become a shorthand for “USA, Canada, and/or (maybe some parts of) Europe.” It would probably just be clearer to go with the latter, or better yet, to say exactly which populations you are referring to. A manuscript for which I was serving as editor stated that a limitation of the study was that it relied on WEIRD samples. But the samples were drawn from only two countries, which the authors did not name specifically. I see this kind of thing all the time. Wouldn’t it be preferable if we actually stated what we meant, with clarity, rather than adopt a vague acronym? From what I can tell from my colleagues, the answer is, sadly, “no.”

I will reiterate that the Henrich et al. paper is an important one, and it helped raise awareness of representational problems in our science more effectively than the many similar papers that came before it. Nevertheless, as Dutra (2021) commented, “[WEIRD] unfortunately carries less nuance than the original paper” (p. 271). Indeed, the raised awareness was not accompanied by the nuance of the original argument, nor by a critical evaluation of the term WEIRD and how, if at all, it should be used in a scientific context. Rather, it was yet another example of researchers uncritically endorsing a simplistic heuristic for an incredibly complex issue. We need to do better.

References

Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63(7), 602–614. https://doi.org/10.1037/0003-066X.63.7.602

Clancy, K. B. H., & Davis, J. L. (2019). Soylent is people, and WEIRD is white: Biological anthropology, whiteness, and the limits of the WEIRD. Annual Review of Anthropology, 48(1), 169–186. https://doi.org/10.1146/annurev-anthro-102218-011133

Dutra, N. B. (2021). Commentary on Apicella, Norenzayan, and Henrich (2020): Who is going to run the global laboratory of the future? Evolution and Human Behavior, 42(3), 271–273. https://doi.org/10.1016/j.evolhumbehav.2021.04.003

Guthrie, R. V. (1976). Even the rat was white: A historical view of psychology. Pearson Education.

Hartmann, W. E., Kim, E. S., Kim, J. H. J., Nguyen, T. U., Wendt, D. C., Nagata, D. K., & Gone, J. P. (2013). In search of cultural diversity, revisited: Recent publication trends in cross-cultural and ethnic minority psychology. Review of General Psychology, 17(3), 243–254. https://doi.org/10.1037/a0032260

Henrich, J. (2020). The WEIRDest people in the world: How the West became psychologically peculiar and particularly prosperous. Farrar, Straus and Giroux.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X

Lightner, A., Garfield, Z., & Hagen, E. (2021). Religion: The WEIRDest concept in the world? PsyArXiv. https://doi.org/10.31234/osf.io/58tgd 

Syed, M. (2020). Acronym absurdity constrains psychological science. PsyArXiv. https://psyarxiv.com/293wx

Syed, M., & Kathawalla, U. K. (in press). Cultural psychology, diversity, and representation in open science. In K. C. McLean (Ed.), Cultural methods in psychology: Describing and transforming cultures. New York: Oxford University Press. https://psyarxiv.com/t7hp2 

This post is essay no. 14 in the series, “I Got a Lot of Problems with Psychology.”


[1] Interestingly, Henrich’s new book on WEIRD focuses heavily on the role of religion, even though religion was not discussed meaningfully in the original paper. See Lightner et al.’s (2021) elaboration and critique of his analysis of religion.