Tuesday, September 7, 2021

Grieving the Only Way I Know How

I have been rather fortunate that tragedy, at least in the form of death, has not been a major force in my life. Of course, I have known people who have died, including those who did so much too young. My father died in 2005 when I was only 26, but his death came at the end of a very long journey with Alzheimer’s, so I had long made peace with him not being a part of my life—and although it may be uncouth to admit, his death was something of a relief.

On Friday, September 3, my good friend Will Dunlop killed himself. At the time of this writing, I have known this for four days, and I still cannot make any sense of it. To me, Will was among the happiest, most fun-loving people that I know. Everyone who knew him would agree.

We met in 2011 at a conference in a seedy Daytona Beach hotel. He was a graduate student at the time, and was told by Kate McLean (who was on his dissertation committee) to go to the conference to talk to me about identity, narrative, and culture. My first memory of Will is of him sitting on the floor of the lobby next to a dusty fake tree, in small shorts, tank top, and bare feet, working on his laptop (an image that I would come to know as quintessentially Will). As I walked by, he called out, “Hey, are you Moin? Kate sent me here to talk to you.” We went on to have deep conversations and many laughs, not only that day, but ever since.

I have many wonderful memories of Will, but those are for another time and audience. What I set out to write about here is how we deal with this kind of loss. What do I do with my incredible sadness and confusion? How am I supposed to handle seeing people on the street who look the way Will might have looked in 20 years? How am I supposed to continue going about my life when so much of it is wrapped up with Will’s? Three days after his death, I was invited to review one of his papers for a journal. My most looming deadline is to write a paper that he specifically asked me to write after I visited his class last fall, to be included in a special issue on “the good life” that he is co-editing. How can I possibly write about “the good life” when his has ended? Compartmentalizing my personal and professional lives in order to “move on” is simply not an option.

The standard response in American culture for how to handle this kind of thing is to “talk to someone about it.” In the last few days, I have received numerous emails and texts with offers to “talk.” Of course I appreciate these offers in the abstract and know they come from a warm and thoughtful place, and I know that I am supposed to talk to people. But I don’t want to talk. I don’t personally find that to be helpful, especially in a large group setting. Some find comfort by immediately jumping into action, setting up tributes and the like. That is great and I understand why they want to do so—I imagine it provides feelings of control and utility—but that is not me. I need to sit with my feelings and fully understand them, but most centrally, I need to write. In recent years, I have come to understand my identity not as a psychologist, researcher, or teacher, but as a writer. It is as a writer that I am the most effective communicator of my thoughts and feelings, both personal and professional. Writing is how I understand my thoughts and feelings, it is how I can begin to make peace with that which I cannot comprehend, and for me it works in a way that talking to people simply does not. I talk plenty in my life—too much, some would say—but spoken words do not flow easily when I am sad, and especially not when I am expected to produce sad thoughts. I am a terrible comfort for people in grief for this very reason, as I always feel an unmet demand to soothe with words that I simply cannot find. But the written word comes so naturally. I have spent the last few days writing in my head, as I always do, and now I can sit down and let the words flow right out. I have decided that I need this to be ok, for others, yes, but especially for myself. This is how I manage my grief. Perhaps some of you share these feelings, and if you do, I hope that my writing them out helps you as well.

The day after Will’s death, an Instagram post informed me that it is National Suicide Prevention Month. Of course, this made me think even more about what I had already been thinking: what could I have done to prevent this? I know that is a fool’s game, but that does not stop me from playing it. Will’s research focused heavily on how people craft redemptive stories—turning negative life experiences into sources of growth and meaning—and what constitutes “the good life.” Was there something in his life that he was trying to redeem? Did he feel the good life was eluding him? Were there clues in his work? A fool’s game that is nearly impossible to avoid.

I don’t think I even yet realize how much I will miss my friend. I know that I will never again see his goofy smile, never again get to make fun of him dressing like a Long Beach teenager, never again share texts about absurd observations, and never again meet him at the bar after a day of travel to share some beers and stories. What I do know is that I will continue to write, both about him and for him, and that doing so will slowly repair my broken heart.



Thursday, August 26, 2021

Secrets from the Editor’s Portal; Or, Everything You Didn’t Realize You Never Learned About Publishing

This is a risky post. As an editor, I feel a bit like the Masked Magician, betraying our craft by giving away all of this insider information. But I find it truly amazing: Submitting manuscripts for publication is central to scientific research, and yet most authors have little knowledge of how journals and editors operate. In an ideal world, this information would be part of a first-term professional development sequence for new graduate students, but few training programs offer such a thing. The reality is that it is not only students and early-career researchers who are in the dark, but so are many long-time faculty and researchers.

This post contains a jumble of insights that, based on my experience as an editor and online observer, I am keenly aware many people simply do not know. I expect that some of you are going to be all like, “not all journals” and “not all editors.” You are correct, so let me be clear: I am not making universal claims about all journals/editors. My experience comes from journals in psychology, and my comments here may very well be limited to that field, and may not even apply to all journals in psychology. The broader message, relevant to all, is that the system is not as rigid as it seems from the outside. Some know this and take advantage of it, which is a source of inequities in publishing. Many of my entries pertain to engaging in increased correspondence with editors[1], and I fully appreciate that those who have more precarious positions in academia (e.g., women, racial/ethnic minorities) may be more reluctant to engage in these practices and may not reap the same benefits as their more secure colleagues. Additionally, I am not necessarily suggesting that these are all good practices. What I am presenting is the system as it currently functions, which is important to understand.

In no particular order:

You can appeal if your manuscript is rejected. This seems like one of the biggest secrets in journal publishing, but you can always write back to the action editor and request that they reconsider. Very few journals have formal policies for handling appeals (see this paper on biomedical journals), and some journals may not consider your appeal at all, but it is always possible to ask. If you plan to do this, I strongly suggest you wait at least a couple of days (if not more) before contacting the editor. Your initial response to the decision is seldom rational, and you want to make sure you actually have a solid case for an appeal before requesting it.

You can ask for extensions. Holy shit, you can ask for extensions! This has been one of my saddest experiences as an editor: authors writing apologetic and pleading emails to ask for extensions because they are undergoing chemotherapy, a close family member has died, they are getting married, they are moving to a new country, and so on. The truth is, I had no idea your paper was soon to be due—deadlines and reminders are auto-generated—and honestly, it does not really matter if you resubmit your manuscript today or next month. Now, there are some exceptions, such as special issues that tend to follow tight timelines, editors who are stepping down from their position and trying to wrap up loose ends, or production deadlines if your paper is to appear in a specific issue. But generally speaking, extending deadlines is really no big deal.

You can safely ignore the 48-hour “deadline” for returning proofs. Who among us has not received one of these threatening emails on a Friday afternoon, ruining all of our weekend plans? Good news: these deadlines are totally fake. Journals want you to return the proofs quickly so that they can keep their production workflows clean, but there is no reason for you to disrupt your work or relaxation plans accordingly. Rather than completely ignoring them, write back and tell them when they should expect your corrections. Saying something like “within the next week” is usually fine.

You can check with the editor before submission. If you are not certain whether your paper would fit with the journal’s scope, you can always write to the editor, briefly describe the paper, and ask whether they perceive it to be a fit based on the provided information. Importantly, if the editor replies that it is within scope, that is not a guarantee that the paper will be accepted or even sent out for peer review. Doing this just ensures that your paper is generally within the realm of what the journal will consider. Certainly not all editors agree, but personally it is a lot less work for me to respond with an elaborated version of “not really a good fit” than to check in the paper through the online system, do the pre-processing that I do, and then submit a desk reject decision for poor fit.

You can email the editor about the status of your manuscript. If it has been some time since you have heard from the journal, then it is totally fine to check in with the editor for a status update. Brief, polite emails of inquiry are rarely a problem. The big question is what constitutes “some time” since you have heard. Generally speaking, it is fine to check in after 3-4 months. I once had an author write to me one week after submission, asking why they had not yet received a decision. Do not do that.

Sometimes papers actually do get lost. As an author, you would think it is not possible to lose a paper with an online tracking system, but then again, you have used those systems, so you know exactly how clunky they are. I have had a handful of cases where the paper just sort of fell through the cracks. This is one reason why checking in after 3-4 months can be a good idea (it is also the case that checking in gets the paper on the editor’s radar, squeaky wheel and all that).

You can write to clarify what the editor believes to be necessary for a revision. Some editors are really great at their job, expertly synthesizing reviewer comments to provide clear recommendations for a path towards publication. Ideally, they also make clear what revisions are non-negotiable. Other editors…aren’t so good at it, either just summarizing the reviewer comments or writing “see reviewer comments below,” providing no guidance at all. If you are unclear about how to proceed, for example if there are conflicting reviewer comments, you can always write a brief email to the editor and ask for some guidance.

It is often better to contact the editor directly with questions. If you have a question about a manuscript, you will often get the most useful information if you email the editor directly at their institution account. Journal-specific email accounts can be inconsistently monitored and staffed, and sometimes those on the receiving end do not have the information you actually want. This is one tidbit that most editors probably do not want me to share, because who out there is really looking for more emails, but from the author side of things this is a smart approach.

You can (and should) ask to be on an editorial board. The biggest reward for completing timely, high-quality reviews is more review requests from the same journal. Most journals have rating systems that score reviewers on timeliness and substance. If you have completed a good number of reviews for a journal within a year (say 3-4), then you should certainly write to the editor and request to be considered for the board. Waiting to be invited is a mistake. It is easy for journals to overlook recurring quality reviewers, so if that is you, definitely let the editor know. In most cases, we would be thrilled to have someone like you on the board.

You can thank editors for their decision, but few actually do! I get this question a lot: when your paper is accepted, or thoughtfully rejected, should you respond to the editor? In my experience, very few do this, but you are always welcome to. As an editor, such emails are nice and appreciated, but I do not at all expect them. Sometimes the emails are not so nice…better to leave those in your drafts folder.

Suggesting reviewers is helpful, but be thoughtful about it. Many journals now solicit suggested reviewers as part of the submission process. As an editor, this is helpful for identifying potential reviewers that I might have otherwise missed. However, these suggestions can go wrong in at least two ways. First, it is not helpful to suggest the most well-known, senior person in the field. I handled a paper on language development once where the authors suggested Steven Pinker. He is not likely to review your paper, and if he did, the quality of the review would probably be very low. (That is not a comment on Pinker per se—I know nothing about his reviews—I have just observed that more senior researchers provide rather cursory reviews.) Second, do not suggest your close collaborators as reviewers. Any editor who is doing their job properly will not just invite suggested reviewers without doing a little background work, and coauthors are very easy to discover. So, make suggestions for potential reviewers, but do so thoughtfully.

Your paper did not have five reviewers because the editor hates you. Sometimes your paper has one reviewer, and sometimes it has seven. What gives? There can be good reasons for having many reviewers on a paper, but much of this variation has nothing to do with your paper, per se. For example, when initially attempting to assign reviewers to a paper, I will send four or five invites at once. I do this because in the vast majority of cases, inviting four or five people will yield two who agree, which is generally what I want. This approach saves time compared to inviting two, waiting for them to decline, then inviting another two, waiting for them, and so on. But it also means that sometimes they all agree and you end up with five reviewers. Sorry about that.

Word/page limits are not always rigid. In fact, the limits expressed on the journal webpage might not even be real. Much like faculty webpages, journal webpages can often be out of date, with editors not even familiar with what is listed. Even if word/page limits are accurate, journals handle them differently. Some journals enforce strict limits and will not even conduct an initial evaluation of the paper unless it conforms to the standards. Others have soft limits and will consider longer papers with sufficient justification. As with most things on this list, you can always email the editor to find out what is possible.

Cover letters for new submissions are often (but not always) useless (in psychology). Authors always have questions about the importance of cover letters and what should be included within them. The answer is…it depends…a lot. In some fields, the cover letter consists of a “sales pitch” in which you attempt to convince the editor that your manuscript is novel, exciting, and worthy of publication. For example, an old editorial in Nature Immunology suggested that authors “present their cases in a one- to two-page (!!!!) cover letter that highlights the context of their experimental question and its relevance to the broader research community, the novelty of the new work, and the way that it advances our understanding beyond previous publications.” (incredulous exclamation marks added to communicate incredulity). This tweet describes a similar approach. In contrast, in many/most cover letters submitted to psychology journals, the authors provide a formal statement that amounts to “here it is, hoping for the best!” They may indicate their co-authors, that they followed APA ethical principles, and that the paper is not under consideration elsewhere, but that is about it. And personally, that’s all I want. I will judge the paper on its merits, not on the authors’ ability to persuade me of its value. This post from Retraction Watch and the associated comments highlight the variability across fields/journals. Accordingly, the only advice you should take about cover letters is to not take anyone’s advice. Look to see what it says on the journal webpage (which may not be accurate) and talk to colleagues who have experience with the journal.

Cover letters for revisions are super important. Cover letters for new submissions and cover letters for revised submissions are in totally different genres of cover letters. In fact, this is why some journals distinguish between the “cover letter” and the “response to reviewers.” I have an entire post on how to handle this process, A Workflow for Dealing with the Dread of Revising and Resubmitting Manuscripts.

That’s about it for now. What did I miss? What did I get wrong? I will update the post as I receive feedback. For those of you who are angry about the content of these items, especially with regard to the disparate opportunity/impact for minority scholars, please re-read the beginning of this post. My intention here was to describe a system that is central to our work, yet opaque to the majority. Changing these systems to make them more equitable is a topic for another day. 


[1] To all of the editors out there, you are welcome!

Thursday, June 10, 2021

WEIRD Times: Three Reasons to Stop Using a Silly Acronym

Those who know me will groan at the appearance of this post. WEIRD has become my personal dumping ground, with me taking any opportunity to tell people why I think they should stop using the term. I have embedded my criticisms in various papers on broader topics (such as open science or acronyms), but I reasoned that rather than pointing people to specific passages of long boring papers, or repeatedly typing out my reasons, I would just do one thorough post that I can link to when needed. Welcome!

Some of you are likely wondering what WEIRD is and why it is in all caps. WEIRD is an acronym, standing for Western, Educated, Industrialized, Rich, and Democratic, introduced by Henrich et al. (2010). The gist of their argument was a simple one with which I am in full agreement: much of the behavioral sciences relies on an extremely narrow population from which it generalizes to all humanity. This fact has been well known for a very long time (Arnett, 2008; Guthrie, 1976; Hartmann et al., 2013). Henrich et al. added, however, that this fact is particularly perverse because the over-sampled group is notably different from the majority of humans. This group, which tends to be Western, Educated, Industrialized, Rich, and Democratic, is itself weird in the context of humanity.

I think this continues to be an important observation, and one I will not quibble with (at least not for now). No, my problem is with the acronym. The acronym is so dang catchy that it has become part of psychological researchers’ everyday nomenclature: “that literature relies on WEIRD samples,” “we need more data from non-WEIRD populations,” “the field is doing nothing to solve the WEIRD people problem,” and so on. The word has become a scientific term itself, broadly signifying “diversity,” losing contact with its constituent parts. I can tell you that plenty of people who know and use the term WEIRD could not accurately list the five elements. That is…not good.

Dear readers, here I am, asking you to stop using this term, for three reasons:

1. It is a backronym at worst, a contrived acronym at best. I covered this directly in my paper decrying the absurdity of acronyms, so I will just offer this quote:

“It is rather remarkable, particularly given that the paper was published in a supposed “top-tier” outlet, that the authors do not describe how they identified these five dimensions as constituting the focal set. Are we to believe that five core dimensions just happened to spell WEIRD and that is coincidental with the fact that their primary argument was that studies that rely on samples from WEIRD societies are, in fact, weird in relation to the rest of the world? Of course not. Clearly WEIRD is a backronym, which is fine, except that it should not be taken to have any scientific value.”

Ok, so it probably was not actually a backronym, in which the acronym is determined first and then the letters are forced to fit, but it is extremely implausible that the letters just happened to work out that way. Such an acronym might have rhetorical value, so I do not blame the authors for that, but now that the term has jumped the shark to take on scientific value, it is time to step back and re-assess things.

2. WEIRD omits race/ethnicity (among other important dimensions of diversity). I often see people indicate that the “W” in WEIRD refers to White. It does not. In fact, the entire WEIRD paper is remarkably quiet on the subject of race. This is ironic for a paper highlighting the problematic sampling bias in the behavioral sciences. Therefore, if you are using WEIRD or discussing the “WEIRD people problem,” you are contributing to the very problem the term is meant to address by continuing to ignore racial bias in the literature (see Clancy & Davis, 2019, for a detailed discussion of this issue).

And, of course, it is not just about race/ethnicity. The original paper leaves out all kinds of potentially informative dimensions of diversity. For example, why is religion not one of the dimensions? That seems pretty important. Rich is mostly redundant with Educated, so you could consolidate those two, swap in Religious, and maintain WEIRD. Doing so, however, would raise thorny issues because you have both the USA, a very religious country, and the secular countries of Northern/Western Europe as part of the same WEIRD group. It does not really work out after all. Perhaps what the acronym stands for matters![1]

Surely, you are thinking, there was a compelling rationale for why these five dimensions, in particular, are the ones worthy of emphasis. But I just indicated that was not the case! There was no rationale provided for why these five dimensions, and not others, were included. Moreover, there was not even much rationale for why some of the focal dimensions were included. Quoting Rochat (2010) from an accompanying commentary:

“…catchy acronyms like “WEIRD” for a population sample are good mnemonics. However, they carry the danger of distracting us from deeper issues. The last letter, D, for example, stands for “Democratic.” What does this mean, given that many Eastern cultures would not consider themselves as non-democratic, having universally elected parliaments in their countries? In using such an acronym to characterize a population sample, the authors must have a theory about what democrats and a democracy mean. They must also have some intuition as to what kind of impact such a regime might have on its citizens, as opposed to another. The democratic criterion would deserve more articulated rationale.” (p. 108)

3. WEIRD lacks specificity. Not only is WEIRD not adequately comprehensive of relevant dimensions of cultural variability, but somehow this lack of breadth is also accompanied by insufficient depth (again, see Clancy & Davis, 2019). Which countries/cultures, exactly, are WEIRD? This is far from clear. As Rochat asked, what does “Democratic” mean? In a footnote on the lead dimension, “Western,” Henrich et al. state, “We recognize that there are important limitations and problems with this label, but we use it for convenience” (p. 83). I would extend that statement to WEIRD itself.

The lack of specificity of the term has led to its over-application. WEIRD has become a shorthand for “USA, Canada, and/or (maybe some parts of) Europe.” It would probably just be clearer to go with the latter, or better yet, say exactly which populations you are referring to. A manuscript for which I was serving as editor stated that a limitation of the study was that it relied on WEIRD samples. But the samples were drawn from only two countries, which the authors did not name specifically. I see this kind of thing all the time. Wouldn’t it be preferable if we actually stated what we meant, with clarity, rather than adopt a vague acronym? From what I can tell from my colleagues, the answer is, sadly, “no.”

I will reiterate that the Henrich et al. paper is an important one, and it helped raise awareness of representational problems in our science more effectively than the many similar papers that came before it. Nevertheless, as Dutra (2021) commented, “[WEIRD] unfortunately carries less nuance than the original paper” (p. 271). Indeed, the awareness was not accompanied by the nuance of the original argument, by a critical evaluation of the term WEIRD, or by any consideration of how, if at all, it should be used in a scientific context. Rather, it was yet another example of researchers uncritically endorsing a simplistic heuristic for an incredibly complex issue. We need to do better.

References

Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63(7), 602–614. https://doi.org/10.1037/0003-066X.63.7.602

Clancy, K. B. H., & Davis, J. L. (2019). Soylent Is People, and WEIRD Is White: Biological Anthropology, Whiteness, and the Limits of the WEIRD. Annual Review of Anthropology, 48(1), 169–186. https://doi.org/10.1146/annurev-anthro-102218-011133

Dutra, N. B. (2021). Commentary on Apicella, Norenzayan, and Henrich (2020): Who is going to run the global laboratory of the future? Evolution and Human Behavior, 42(3), 271–273. https://doi.org/10.1016/j.evolhumbehav.2021.04.003

Guthrie, R. V. (1976). Even the rat was white: A historical view of psychology. Pearson Education.

Hartmann, W. E., Kim, E. S., Kim, J. H. J., Nguyen, T. U., Wendt, D. C., Nagata, D. K., & Gone, J. P. (2013). In search of cultural diversity, revisited: Recent publication trends in cross-cultural and ethnic minority psychology. Review of General Psychology, 17(3), 243–254. https://doi.org/10.1037/a0032260

Henrich, J. (2020). The WEIRDest people in the world: How the West became psychologically peculiar and particularly prosperous. Farrar, Straus and Giroux.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X

Lightner, A., Garfield, Z., & Hagen, E. (2021). Religion: The WEIRDest concept in the world? PsyArXiv. https://doi.org/10.31234/osf.io/58tgd 

Syed, M. (2020). Acronym absurdity constrains psychological science. PsyArXiv. https://psyarxiv.com/293wx

Syed, M., & Kathawalla, U. K. (in press). Cultural psychology, diversity, and representation in open science. In K. C. McLean (Ed.), Cultural methods in psychology: Describing and transforming cultures. New York: Oxford University Press. https://psyarxiv.com/t7hp2 

This post is essay no. 14 in the series, “I Got a Lot of Problems with Psychology.”


[1] Interestingly, Henrich’s new book on WEIRD focuses heavily on the role of religion, but it was not really discussed meaningfully in the original paper. See Lightner et al.’s (2021) elaboration and critique of his analysis of religion.

Monday, February 15, 2021

The Time for Criticism is Now

We recently read Hardeman & Karbeah (2020), Examining racism in health services research: A disciplinary self-critique, in our weekly Diversity Science Reading Group. The attendees, who are primarily graduate students and post-docs, expressed appreciation for the self-reflective and self-critical approach, feeling that we need more of that in psychology. The conversation, however, quickly turned to the question of when in one’s career it is acceptable to engage in work that is critical of the field. It is clear that some of them have received the message (implicitly or explicitly) that criticism of ideas and practices is to be conducted later in one’s career, after becoming “established.” I quickly told them that this is nonsense. The time for criticism is now.

Those who hold power in a discipline benefit from the idea that one must wait to engage in serious critique. Graduate students and those otherwise new to the field often have the ability to see things as they truly are. They have not yet been socialized to accept standard ways of thinking about or doing science. Practices that don’t make sense are seen as exactly that, but the system makes them believe that they just don’t yet understand, and that with more mentoring and experience they will learn the benefits of doing things in the accepted way. Moreover, as many others have pointed out, the system is set up as a treadmill of waiting, always moving the goalposts for when critique can start to the next milestone ahead. I will act once I finish my Ph.D., once I get a job, once I get tenure, once I make full professor. By the time you make full professor, your career has been built on the existing system and you have no incentive or motivation to change. This is how the system maintains itself.

The time for criticism is now. We desperately need these new voices and perspectives. The way we are doing our science is not ok, and even if we thought it was, we would still need self-critique. We always need it. I recently gave a talk at the meeting of the Society for Personality and Social Psychology in which I heavily critiqued the strong focus on experiments in the field [video] [preprint]. Several people, publicly and privately, told me I was “brave” for giving such a critical talk to the core audience that needed to hear it. Of course I understand where this sentiment is coming from, but it really shouldn’t be this way. It should not be brave to be critical of our methods and theories. It should be our standard practice. The way we do our science is not natural or predetermined; it is a choice among many alternatives, and thus that choice should always be under scrutiny.

At this point I know exactly what you are thinking: graduate students and early career researchers who are critical will face retaliation from senior people in the field. As Hilda Bastian wrote about retaliation (in the context of signing peer reviews), “We shouldn’t let this be the end of a discussion. It should be the start of several more.” I agree strongly with Dr. Bastian that we cannot all live in fear of retaliation from those protecting themselves and the system. Retaliation is anti-social and is scientific misconduct. Those engaging in it should live in fear of the rest of us. If you hear about retaliation, call it out, and let it be known who is engaging in this abhorrent behavior. Constantly pointing to the possibility of retaliation as a reason not to engage in a scientific practice is de facto acceptance of the retaliatory behavior. [Edited 2/16/21 to add: None of the above is to suggest that retaliation does not occur or is not harmful. It does, and it is, and it can be especially damaging to people from marginalized groups. My point is that we can both be aware of this aspect of the system and seek to change it.]

How to approach being critical is complex, to be sure. I can be flippant, and at times even disrespectful. Some might see this as inappropriate—especially in academia amidst so many fragile egos—but I err on that side because it is often what people need to shake them out of their rut and see that the way they do things is not necessarily right. You need to find your own critical voice, your own approach to how you want to be critical, but you need to do it. The time for criticism is now.